Yann Neuhaus


Major PostgreSQL version upgrade in a Patroni cluster

Wed, 2022-06-22 09:03
Introduction

One of my customers recently asked me to upgrade their PostgreSQL instance from version 9.6 to version 14.3. The infrastructure is composed of 4 servers:

– DB02-04 – Ubuntu 20.04 – supporting PostgreSQL, Patroni, HAProxy and etcd
– DB02-05 – Ubuntu 20.04 – supporting PostgreSQL, Patroni, HAProxy and etcd
– DB02-06 – Ubuntu 20.04 – supporting PostgreSQL, Patroni, HAProxy and etcd
– DB02-07 – Ubuntu 20.04 – pgBackRest server

I also had to upgrade Patroni from version 2.0.2 to version 2.1.4 for compatibility reasons, and I took the opportunity to upgrade pgBackRest from version 2.24 to version 2.39.

             Source version   Target version
PostgreSQL   9.6.18           14.3
Patroni      2.0.2            2.1.4
pgBackRest   2.24             2.39


The purpose of this blog post is to explain all the required steps to achieve this.

PostgreSQL 14.3 installation

Before installing PostgreSQL 14.3, it is necessary to install some required packages.

[postgres@db02-04 ~] $ sudo apt install llvm clang pkg-config liblz4-dev libllvm7 llvm-7-runtime libkrb5-dev libossp-uuid-dev

I usually compile and install PostgreSQL from the source code. Once the archive is downloaded and transferred to the server, we have to extract its content.

[postgres@db02-04 upgrade] $ tar -xzf postgresql-14.3.tar.gz

[postgres@db02-04 upgrade] $ ll postgresql-14.3
total 768
-rw-r--r--  1 postgres postgres    445 May  9 21:14 aclocal.m4
drwxr-xr-x  2 postgres postgres   4096 May  9 21:24 config
-rwxr-xr-x  1 postgres postgres 587897 May  9 21:14 configure
-rw-r--r--  1 postgres postgres  85458 May  9 21:14 configure.ac
drwxr-xr-x 58 postgres postgres   4096 May  9 21:24 contrib
-rw-r--r--  1 postgres postgres   1192 May  9 21:14 COPYRIGHT
drwxr-xr-x  3 postgres postgres   4096 May  9 21:24 doc
-rw-r--r--  1 postgres postgres   4259 May  9 21:14 GNUmakefile.in
-rw-r--r--  1 postgres postgres    277 May  9 21:14 HISTORY
-rw-r--r--  1 postgres postgres  63944 May  9 21:25 INSTALL
-rw-r--r--  1 postgres postgres   1665 May  9 21:14 Makefile
-rw-r--r--  1 postgres postgres   1213 May  9 21:14 README
drwxr-xr-x 16 postgres postgres   4096 May  9 21:25 src
[postgres@db02-04 upgrade] $

Then the directory where the binaries will be installed must be created.

[postgres@db02-04 upgrade] $ mkdir -p /u01/app/postgres/product/14/db_3

Our standard when installing PostgreSQL from source is to create and execute a shell script which automatically compiles and installs the binaries.

[postgres@db02-04 postgresql-14.3] $ cat compile_from_source.sh
#!/bin/bash

PGHOME=/u01/app/postgres/product/14/db_3
SEGSIZE=2
BLOCKSIZE=8

./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-llvm \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-systemd \
            --with-gssapi \
            --with-icu \
            --with-lz4 \
            --with-uuid=ossp \
            --with-system-tzdata=/usr/share/zoneinfo \
            --with-extra-version=" dbi services build"

make -j $(nproc) all
make install
cd contrib
make -j $(nproc) install
[postgres@db02-04 postgresql-14.3] $
[postgres@db02-04 postgresql-14.3] $ chmod +x compile_from_source.sh
[postgres@db02-04 postgresql-14.3] $ ./compile_from_source.sh
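
Once the compilation completes, a quick sanity check of the new binaries can be done. This is a minimal example using the paths from the configure options above; thanks to the --with-extra-version flag, it should report something like:

[postgres@db02-04 postgresql-14.3] $ /u01/app/postgres/product/14/db_3/bin/postgres --version
postgres (PostgreSQL) 14.3 dbi services build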

Obviously the steps described above have to be performed on all nodes.

Patroni upgrade

Before upgrading Patroni to the latest version, it is important to upgrade pip and setuptools.

[postgres@db02-04 ~] $ python3 -m pip install --upgrade pip
[postgres@db02-04 ~] $ python3 -m pip install --upgrade setuptools

Then we can upgrade Patroni.

[postgres@db02-04 ~] $ patronictl version
patronictl version 2.0.2
[postgres@db02-04 ~] $

[postgres@db02-04 ~] $ python3 -m pip install --upgrade --user patroni[etcd]
Requirement already satisfied: patroni[etcd] in ./.local/lib/python3.8/site-packages (2.0.2)
Collecting patroni[etcd]
  Downloading patroni-2.1.4-py3-none-any.whl (225 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 225.0/225.0 kB 5.9 MB/s eta 0:00:00
Requirement already satisfied: python-dateutil in ./.local/lib/python3.8/site-packages (from patroni[etcd]) (2.8.1)
Requirement already satisfied: urllib3!=1.21,>=1.19.1 in /usr/lib/python3/dist-packages (from patroni[etcd]) (1.25.8)
Requirement already satisfied: prettytable>=0.7 in ./.local/lib/python3.8/site-packages (from patroni[etcd]) (2.1.0)
Requirement already satisfied: ydiff>=1.2.0 in ./.local/lib/python3.8/site-packages (from patroni[etcd]) (1.2)
Requirement already satisfied: click>=4.1 in /usr/lib/python3/dist-packages (from patroni[etcd]) (7.0)
Requirement already satisfied: psutil>=2.0.0 in ./.local/lib/python3.8/site-packages (from patroni[etcd]) (5.8.0)
Requirement already satisfied: PyYAML in /usr/lib/python3/dist-packages (from patroni[etcd]) (5.3.1)
Requirement already satisfied: six>=1.7 in /usr/lib/python3/dist-packages (from patroni[etcd]) (1.14.0)
Requirement already satisfied: python-etcd<0.5,>=0.4.3 in ./.local/lib/python3.8/site-packages (from patroni[etcd]) (0.4.5)
Requirement already satisfied: wcwidth in ./.local/lib/python3.8/site-packages (from prettytable>=0.7->patroni[etcd]) (0.2.5)
Requirement already satisfied: dnspython>=1.13.0 in ./.local/lib/python3.8/site-packages (from python-etcd<0.5,>=0.4.3->patroni[etcd]) (2.1.0)
Installing collected packages: patroni
  Attempting uninstall: patroni
    Found existing installation: patroni 2.0.2
    Uninstalling patroni-2.0.2:
      Successfully uninstalled patroni-2.0.2
Successfully installed patroni-2.1.4
[postgres@db02-04 ~] $

[postgres@db02-04 ~] $ patronictl version
patronictl version 2.1.4
[postgres@db02-04 ~] $ patroni --version
patroni version 2.1.4

[postgres@db02-04 ~] $

Again, we have to do this on all nodes.

New cluster creation

The steps below have to be performed on the Leader node only. The following command can be used to check which node is the Leader.

[postgres@db02-04 ~] $ patronictl list
+ Cluster: DEMO (6938400030986650439) -----------------------+
| Member  | Host        | Role    | State   | TL | Lag in MB |
+---------+-------------+---------+---------+----+-----------+
| db02-04 | 10.0.148.31 | Leader  | running | 15 |           |
| db02-05 | 10.0.148.32 | Replica | running | 15 |         0 |
| db02-06 | 10.0.148.33 | Replica | running | 15 |         0 |
+---------+-------------+---------+---------+----+-----------+
[postgres@db02-04 ~] $

The new 14.3 cluster will be created in the following directory.

[postgres@db02-04] $ mkdir -p /u02/pgdata/14/PROD

To create it, we use the initdb utility provided by the new PostgreSQL binaries.

[postgres@db02-04 ~] $ /u01/app/postgres/product/14/db_3/bin/initdb -D /u02/pgdata/14/PROD
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u02/pgdata/14/PROD ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /u01/app/postgres/product/14/db_3/bin/pg_ctl -D /u02/pgdata/14/PROD -l logfile start

[postgres@db02-04 ~] $

The following files have to be copied from the old cluster to the new one.

[postgres@db02-04 ~] $ cp /u02/pgdata/96/PROD/pg_hba.conf /u02/pgdata/14/PROD/ 
[postgres@db02-04 ~] $ cp /u02/pgdata/96/PROD/patroni.dynamic.json /u02/pgdata/14/PROD/

The file patroni.dynamic.json contains a dump of the DCS options. It will be read during a later stage.
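
If you want to review what will be loaded, the file can simply be pretty-printed (output not shown here; it typically contains the DCS settings such as ttl, loop_wait, retry_timeout and the postgresql parameters section):

[postgres@db02-04 ~] $ python3 -m json.tool /u02/pgdata/14/PROD/patroni.dynamic.json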

In order to apply our best practices for PostgreSQL 14, the following instance parameters are applied to the new cluster.

[postgres@db02-04 ~] $ cat /u02/pgdata/14/PROD/postgresql.conf
listen_addresses = '10.0.148.31'
port=5432
logging_collector = 'on'
log_truncate_on_rotation = 'on'
log_filename = 'postgresql-%a.log'
log_rotation_age = '1440'
log_line_prefix = '%m - %l - %p - %h - %u@%d - %x'
log_directory = 'pg_log'
log_min_messages = 'WARNING'
log_autovacuum_min_duration = '60s'
log_min_error_statement = 'NOTICE'
log_min_duration_statement = '30s'
log_checkpoints = 'on'
log_statement = 'ddl'
log_lock_waits = 'on'
log_temp_files = '0'
log_timezone = 'Europe/Zurich'
log_connections=off
log_disconnections=off
log_duration=off
checkpoint_completion_target=0.9
checkpoint_timeout='5min'
client_min_messages = 'WARNING'
wal_level = 'replica'
hot_standby_feedback = 'on'
max_wal_senders = '10'
cluster_name = 'PROD'
max_replication_slots = '10'
shared_buffers=128MB
work_mem=8MB
effective_cache_size=512MB
maintenance_work_mem=64MB
wal_compression=on
shared_preload_libraries='pg_stat_statements'
autovacuum_max_workers=6
autovacuum_vacuum_scale_factor=0.1
autovacuum_vacuum_threshold=50
autovacuum_vacuum_cost_limit=3000
archive_mode='on'
archive_command='pgbackrest --stanza=PROD archive-push %p'
wal_log_hints='on'
password_encryption='scram-sha-256'
default_toast_compression='lz4'
[postgres@db02-04 ~] $
Upgrade

Due to corruption on some data files (invalid page checksums), I was not able to use pg_upgrade to perform the upgrade from 9.6 to 14.3. Therefore, I had no choice but to use pg_dumpall to move the data.

[postgres@db02-04 ~] $ pg_dumpall -p 5432 -U postgres -l postgres -f /home/postgres/upgrade/dump/prod.dmp
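
For reference only: on a healthy cluster, pg_upgrade (ideally run with --check first) would have been the faster option. A minimal sketch, assuming the old 9.6 binaries live under /u01/app/postgres/product/96/db_x (hypothetical path, adapt it to your environment):

[postgres@db02-04 ~] $ /u01/app/postgres/product/14/db_3/bin/pg_upgrade \
      -b /u01/app/postgres/product/96/db_x/bin \
      -B /u01/app/postgres/product/14/db_3/bin \
      -d /u02/pgdata/96/PROD \
      -D /u02/pgdata/14/PROD \
      --check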

Once done, Patroni can be stopped on all nodes.

[postgres@db02-04 ~] $ sudo systemctl stop patroni
[postgres@db02-05 ~] $ sudo systemctl stop patroni
[postgres@db02-06 ~] $ sudo systemctl stop patroni

Bonus: we use the following systemd service definition for Patroni.

[postgres@db02-04 ~] $ cat /etc/systemd/system/patroni.service
#
# systemd integration for patroni
# Put this file under /etc/systemd/system/patroni.service
#     then: systemctl daemon-reload
#     then: systemctl list-unit-files | grep patroni
#     then: systemctl enable patroni.service
#

[Unit]
Description=dbi services patroni service
After=etcd.service syslog.target network.target

[Service]
User=postgres
Group=postgres
Type=simple
ExecStartPre=/usr/bin/sudo /sbin/modprobe softdog
ExecStartPre=/usr/bin/sudo /bin/chown postgres /dev/watchdog
ExecStart=/u01/app/postgres/local/dmk/bin/patroni /u01/app/postgres/local/dmk/etc/patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=process
Restart=no
TimeoutSec=30

[Install]
WantedBy=multi-user.target

[postgres@db02-04 ~] $

(Of course, the ExecStart parameter must be adapted to your environment.)

It’s now time to start the new cluster and to import the dump.

[postgres@db02-04 ~] $ /u01/app/postgres/product/14/db_3/bin/pg_ctl -D /u02/pgdata/14/PROD -l logfile start

[postgres@db02-04 ~] $ /u01/app/postgres/product/14/db_3/bin/psql postgres < /home/postgres/upgrade/dump/prod.dmp
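
Once the import completes, a quick check of the result can be done, for example by listing the databases and roles on the new instance and comparing them with the source:

[postgres@db02-04 ~] $ /u01/app/postgres/product/14/db_3/bin/psql -c "\l"
[postgres@db02-04 ~] $ /u01/app/postgres/product/14/db_3/bin/psql -c "\du"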

Once the import is done, we must change the data_dir and bin_dir parameters of the Patroni configuration file in order to match the new cluster.

[postgres@db02-04 ~] $ cat /u01/app/postgres/local/dmk/etc/patroni.yml
...
...
...
postgresql:
  listen: 10.0.148.31:5432
  connect_address: 10.0.148.31:5432
  data_dir: /u02/pgdata/14/PROD/
  bin_dir: /u01/app/postgres/product/14/db_3/bin
#  config_dir:
  pgpass: /u01/app/postgres/local/dmk/etc/pgpass0
  authentication:
    replication:
      username: replicator
      password: *****
    superuser:
      username: postgres
      password: *****
  parameters:
    unix_socket_directories: '/tmp'
...
...
...

[postgres@db02-04 ~] $

Before restarting Patroni, the previous configuration information must be removed from the DCS.

[postgres@db02-04 ~] $ patronictl remove PROD
+ Cluster: PROD (6946441255879209913) ----------+
| Member | Host | Role | State | TL | Lag in MB |
+--------+------+------+-------+----+-----------+
+--------+------+------+-------+----+-----------+
Please confirm the cluster name to remove: PROD
You are about to remove all information in DCS for PROD, please type: "Yes I am aware": Yes I am aware
[postgres@db02-04 ~] 

Then, Patroni can be restarted on all nodes and the replicas will be built automatically on db02-05 and db02-06.

[postgres@db02-04 ~] $ sudo systemctl start patroni
[postgres@db02-05 ~] $ sudo systemctl start patroni
[postgres@db02-06 ~] $ sudo systemctl start patroni

[postgres@db02-04 ~] $ patronictl list
+ Cluster: PROD (7109360479587211872) ------+----+-----------+
| Member  | Host        | Role    | State   | TL | Lag in MB |
+---------+-------------+---------+---------+----+-----------+
| db02-04 | 10.0.148.31 | Leader  | running |  2 |           |
| db02-05 | 10.0.148.32 | Replica | running |  2 |         0 |
| db02-06 | 10.0.148.33 | Replica | running |  2 |         0 |
+---------+-------------+---------+---------+----+-----------+
[postgres@db02-04 ~] $

That's it! The PostgreSQL cluster and Patroni have been successfully upgraded.

Switchover test

An important thing to do on the Patroni side is to test the switchover of the new cluster.

[postgres@db02-04 ~] $ patronictl switchover
Master [db02-04]: db02-04
Candidate ['db02-05', 'db02-06'] []: db02-05
When should the switchover take place (e.g. 2022-06-15T14:23 )  [now]: now
Current cluster topology
+ Cluster: PROD (7109360479587211872) -----------+-----------+-----------------+
| Member  | Host        | Role    | State   | TL | Lag in MB | Pending restart |
+---------+-------------+---------+---------+----+-----------+-----------------+
| db02-04 | 10.0.148.31 | Leader  | running |  2 |           | *               |
| db02-05 | 10.0.148.32 | Replica | running |  2 |         0 | *               |
| db02-06 | 10.0.148.33 | Replica | running |  2 |         0 | *               |
+---------+-------------+---------+---------+----+-----------+-----------------+
Are you sure you want to switchover cluster PROD, demoting current master db02-04? [y/N]: y
2022-06-15 13:23:28.83611 Successfully switched over to "db02-05"

+ Cluster: PROD (7109360479587211872) ------------+-----------+-----------------+
| Member  | Host        | Role    | State    | TL | Lag in MB | Pending restart |
+---------+-------------+---------+----------+----+-----------+-----------------+
| db02-04 | 10.0.148.31 | Replica | stopping |    |   unknown | *               |
| db02-05 | 10.0.148.32 | Leader  | running  |  2 |           | *               |
| db02-06 | 10.0.148.33 | Replica | running  |  2 |         0 | *               |
+---------+-------------+---------+----------+----+-----------+-----------------+
[postgres@db02-04 ~] $

[postgres@db02-04 ~] $ patronictl list
+ Cluster: PROD (7109360479587211872) -----------+-----------+-----------------+
| Member  | Host        | Role    | State   | TL | Lag in MB | Pending restart |
+---------+-------------+---------+---------+----+-----------+-----------------+
| db02-04 | 10.0.148.31 | Replica | running |  3 |         0 |                 |
| db02-05 | 10.0.148.32 | Leader  | running |  3 |           | *               |
| db02-06 | 10.0.148.33 | Replica | running |  3 |         0 | *               |
+---------+-------------+---------+---------+----+-----------+-----------------+
[postgres@db02-04 ~] 
pgBackRest upgrade

The following packages are mandatory before upgrading pgBackRest to version 2.39.

[postgres@db02-04 ~] $ sudo apt install libpq-dev libyaml-dev libbz2-dev

I have compiled and installed pgBackRest from the source code.
Once the archive is downloaded and transferred to the server, we have to extract its content.

[postgres@db02-04 upgrade] $ unzip -q pgbackrest-release-2.39.zip

[postgres@db02-04 upgrade] $ ll pgbackrest-release-2.39
total 80
-rw-r--r--  1 postgres postgres 10374 May 16 12:46 CODING.md
-rw-r--r--  1 postgres postgres 37765 May 16 12:46 CONTRIBUTING.md
drwxr-xr-x  6 postgres postgres  4096 May 16 12:46 doc
-rw-r--r--  1 postgres postgres  1168 May 16 12:46 LICENSE
-rw-r--r--  1 postgres postgres  9607 May 16 12:46 README.md
drwxr-xr-x 11 postgres postgres  4096 May 16 12:46 src
drwxr-xr-x  7 postgres postgres  4096 May 16 12:46 test
[postgres@db02-04 upgrade] $

And then the installation can be started.

[postgres@db02-04 upgrade] $ cd pgbackrest-release-2.39/src/

[postgres@db02-04 src] $ ./configure && make

[postgres@db02-04 src] $ sudo mv /usr/bin/pgbackrest /usr/bin/pgbackrest_old
[postgres@db02-04 src] $ sudo cp pgbackrest /usr/bin/
[postgres@db02-04 src] $ pgbackrest version
pgBackRest 2.39
[postgres@db02-04 src] $

The pg1-path parameter of the pgBackRest Stanza configuration must be adapted on each node in order to perform the backups against the new cluster.

[postgres@db02-04 ~] $ cat /etc/pgbackrest.conf 
[global]
backup-host=DB02-07
backup-user=postgres
log-level-file=detail

[PROD]
pg1-path=/u02/pgdata/14/PROD
pg1-socket-path=/tmp
pg1-user=postgres
[postgres@db02-04 ~] $

The configuration file of the pgBackRest server must be adapted as well.

[postgres@db02-07 ~] $ cat /etc/pgbackrest.conf
[global]
repo1-path=/networkshare/pgbackrest
repo1-cipher-pass=IUlCfTExDg1x7WBTsl83rrwINn7eCKRMDyi5SsPHUjj+ywULThyRtCWMd5GVZXR4
repo1-cipher-type=aes-256-cbc
log-level-console=info
log-level-file=debug
compress-level=3
repo1-retention-full=2
repo1-retention-diff=7
repo1-type=cifs
archive-timeout=10000

[PROD]
pg1-path=/u02/pgdata/14/PROD
pg1-port=5432
pg1-host=DB02-04
pg1-socket-path=/tmp
pg1-host-user=postgres
pg1-user=postgres
pg2-path=/u02/pgdata/14/PROD
pg2-port=5432
pg2-host=DB02-05
pg2-socket-path=/tmp
pg2-host-user=postgres
pg2-user=postgres
pg3-path=/u02/pgdata/14/PROD
pg3-port=5432
pg3-host=DB02-06
pg3-socket-path=/tmp
pg3-host-user=postgres
pg3-user=postgres
[postgres@db02-07 ~] $

Finally, the Stanza must be upgraded.

[postgres@db02-07 ~] $ pgbackrest stanza-upgrade --stanza=PROD

2022-06-15 08:21:35.308 P00   INFO: stanza-upgrade command begin 2.39: --exec-id=34516-c2b709b7 --log-level-console=info --log-level-file=debug --pg1-host=DB02-04 --pg2-host=DB02-05 --pg3-host=DB02-06 --pg1-host-user=postgres --pg2-host-user=postgres --pg3-host-user=postgres --pg1-path=/u02/pgdata/14/PROD --pg2-path=/u02/pgdata/14/PROD --pg3-path=/u02/pgdata/14/PROD --pg1-port=5432 --pg2-port=5432 --pg3-port=5432 --pg1-socket-path=/tmp --pg2-socket-path=/tmp --pg3-socket-path=/tmp --pg1-user=postgres --pg2-user=postgres --pg3-user=postgres --repo1-cipher-pass=<redacted> --repo1-cipher-type=aes-256-cbc --repo1-path=/networkshare/pgbackrest --repo1-type=cifs --stanza=PROD
2022-06-15 08:21:38.903 P00   INFO: stanza-upgrade for stanza 'PROD' on repo1
2022-06-15 08:21:39.501 P00   INFO: stanza-upgrade command end: completed successfully (4194ms)
[postgres@db02-07 ~] $
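
A sensible follow-up is to verify the archiving of the upgraded cluster and to take a fresh full backup from the pgBackRest server, for example:

[postgres@db02-07 ~] $ pgbackrest --stanza=PROD check
[postgres@db02-07 ~] $ pgbackrest --stanza=PROD --type=full backup
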
Hope it helps!


Ansible: Imagination is the Limit

Wed, 2022-06-22 07:36

As you might know, I discovered Ansible one year ago. Since then, I have not been using it for its main purpose, which Wikipedia describes as follows:

Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code.

Instead, I have developed a few playbooks for other objectives.

I will present two of them:

  • Search in logs
  • Count client connections
Search in Logs

At one customer, an application is deployed over a 12-node cluster. For day-to-day operations, I receive user tickets containing an error message, but without knowing which server the users were connected to when they faced the problem (there is a load balancer in front of the clients). Unfortunately, centralized log management is not ready yet, so I had to think of another solution. This is where Ansible could help.

The advantage of Ansible over bash scripting in this situation is that all user credentials are already managed by the Ansible environment that was developed:

  - name: Include common tasks
    tags: [ always ]
    include_role:
      name: common
      apply:
        tags: always

This common role transparently manages:

  • Service user name and associated credentials.
  • Access to server with admin account (login and password).
What are we Looking for?

Let's focus on the main feature: the search. To do that, the first thing is to know what we are looking for:

  - name: Prompt pattern
    block:
    - name: Prompt pattern
      register: pattern_input
      ansible.builtin.pause:
        prompt: |
          Enter searched pattern
    - name: Set pattern fact
      set_fact:
        pattern: "{{ pattern_input.user_input }}"

    when: pattern is not defined
    delegate_to: localhost
    run_once: True

What I did is to interactively request the pattern we are looking for if it was not already provided as a playbook parameter (i.e. not defined). The block will not be executed if the pattern is already set.

To avoid requesting the same pattern as many times as there are servers in the cluster, this block is run only once (run_once: True), and it is not tied to a specific host of the inventory, so I kept it local (delegate_to: localhost).
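
As an illustration, the playbook (here named search_logs.yml, a name chosen for this example, as are the limit and the pattern) can be launched either interactively or with the pattern passed as an extra variable, which skips the prompt. The value given to -l also becomes ansible_limit, which is used later to name the output file:

# the pattern will be prompted interactively
ansible-playbook search_logs.yml -l app_cluster

# the pattern is passed as a parameter, no prompt
ansible-playbook search_logs.yml -l app_cluster -e pattern='NullPointerException'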

Searching

Now, I am ready to do the actual search:

  - name: Search {{ pattern }} in log
    find:
      paths: /opt/,/u02/app/weblogic/config/domains/{{ weblogic_domain }}/servers/
      file_type: file
      exclude: '*.gz'
      contains: '.*{{ pattern }}.*'
      recurse: true
      age: -5d
    register: findings
    become: true
    become_user: weblogic

I am using the find module with a regex. This regex requires “.*”, meaning any character any number of times, to be added at the beginning and at the end. Otherwise, it would only find files whose lines contain exactly the pattern, nothing more, nothing less. The result will be stored (i.e. registered) in the findings variable. Note that I searched only for files not older than 5 days (age: -5d) and excluded archived logs (exclude: '*.gz') for faster results.

Then, my idea was to provide a list of files containing the pattern:

  - name: output the path of the files
    set_fact:
     path: "{{ findings.files | map(attribute='path') | join('\n - ')  }}"

path will be a temporary variable which will be written to a file local to the Ansible controller server.

Finally, writing the file:

  - name: Remove {{ ansible_limit }} file
    ansible.builtin.file:
      path: "{{ ansible_limit }}.out"
      state: absent
    delegate_to: localhost
    run_once: True

  - name: Copy list of files in {{ ansible_limit }}
    ansible.builtin.lineinfile:
      path: "{{ ansible_limit }}.out"
      line: "{{inventory_hostname}}:\n - {{ path }}"
      create: yes
      mode: 0666
    delegate_to: localhost
    throttle: 1

In the first task, I am removing the file and, in the second one, I am writing the result to it. Initially, the results were not ordered, as the order depends on the completion time of the task on each node. To avoid that, I added a “throttle: 1” option which ensures that only one task runs at a time. “order: sorted” is also added at the beginning of the playbook to ensure the ordering.
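
With the lineinfile format above, the resulting file, named after the value of ansible_limit, looks like this (host names and paths below are illustrative only):

node01:
 - /u02/app/weblogic/config/domains/mydomain/servers/server01/logs/server01.log
 - /opt/myapp/logs/application.log
node02:
 - /u02/app/weblogic/config/domains/mydomain/servers/server02/logs/server02.log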

Count Client Connections

This second playbook gets the number of clients connected to each server to confirm that they are correctly load balanced across all nodes.

The first task is to get the process ID with a “shell” task:

  - name: Getting process ID
    shell: ps aux | grep '{{ pattern }}' | grep -v grep | tr -s ' '| cut -d' ' -f2
    register: ps_output

“pattern” is a string which helps to find the PID.

Then, I used netstat to find all established connections to that process (pid_string = “{{ ps_output.stdout }}/java”):

  - name: netstat
    shell: netstat -anpt 2>/dev/null | grep '{{ pid_string }}' | grep ESTABLI | grep -v 1521
    register: conn_list
    become: true
    become_user: weblogic

I filtered out connections to the Oracle Database (port 1521) as this process has connections to it as well.

The number of lines in the “conn_list” variable will be the number of connections:

  - name: Set conn_count
    set_fact:
      conn_count: "{{ conn_list.stdout_lines | length }}"

In the same way as in the previous playbook, I am creating a file local to the Ansible controller where I write a line for each node with its number of connections:

      - name: Copy result in {{ result_file }}
        ansible.builtin.lineinfile:
          path: "{{ result_file }}"
          line: "{{inventory_hostname}};{{ pattern }};{{ pid_string }};{{ conn_count }}
          create: yes
          mode: 0666
        throttle: 1

I have also included the pattern used and the PID of the process on each host. Keep in mind that all tasks related to the local file are delegated to localhost in one block.

Finally, I thought I could add a total of the connections for all nodes. This was the difficult part. Initially, I used a sed on the file to do it, but then I thought “There is nothing that Ansible can't do!”. So I persevered and found this solution:

      - name: Calculate totals in {{ result_file }}
        set_fact:
          TotalConnLines: "{{ansible_play_hosts_all | map('extract', hostvars, 'conn_count') | map('int') | sum }}"
        run_once: True

Let's detail that Jinja template:

  1. ansible_play_hosts_all
  2. map(‘extract’, hostvars, ‘conn_count’)
  3. map(‘int’)
  4. sum

Part 1 gets the list of all hosts on which the play is run. Then, in part 2, I extract from “hostvars” the variable “conn_count” for each host. This is now a list of counts. I could simply pipe it to “sum”, but this fails because the elements of the list are strings. So I had to apply “int” to them with the help of map (part 3). Finally, the counts are summed up in part 4. For example, with three nodes reporting conn_count values of “87”, “92” and “90”, the expression yields 269.

Then, I write the total line to the resulting file:

      - name: Add total line in {{ result_file }}
        ansible.builtin.lineinfile:
          path: "{{ result_file }}"
          line: ";{{ TotalConnLines }}"
          insertbefore: EOF
        run_once: True

This is quite a complex Jinja template for such a task, but it shows that nothing is impossible.

And Yours?

And you, what are you using Ansible for that is not its main purpose?


Facial recognition, between advanced biometrics and needs for privacy… – Part 2 of 4

Tue, 2022-06-21 01:54

Continuation of our series on facial recognition. After defining facial recognition among the biometric processes (see here), let’s see the techniques used for its application.

State of the art of the techniques used for its application

The technology on this topic is constantly evolving, driven in particular by the web giants, who directly publish their theoretical discoveries in the areas of AI and image recognition, to advance the state of the art as quickly as possible.

Main steps of the process

The facial recognition process can be performed from photos or videos. [6]

It takes place in five major steps:

  1. Face detection: the system will isolate the faces present in the image from the rest of the image, to prepare them for processing.
  2. Preparation of the images to align them to a precise standard: the goal is to make variables such as the position of the head, the size of the image and photographic qualities such as lighting or gray level as little influential as possible on the measurements that will follow.
  3. Facial data extraction. Once the image is prepared, all the facial data that the AI will use to compare the information is extracted from the image.
  4. A model, called a “template”, which represents the biometric characteristics of the face appearing in the image (or video), is created.
  5. The values of this template are then compared with templates calculated in real time from the stored biometric data.

For authentication, this template is made from the stored data for the identity that the person claims to be.
For identification, the template made in step 4 is compared with the templates of the different people present in the database, and the AI selects the closest match, provided that the similarity score is above a predetermined threshold. [6]

The different techniques that accompany the stages of facial recognition

Algorithms with “feature-based” approach

The first of the two main approaches for face recognition algorithms is to identify the different features of a face by extracting them from the image. The algorithms will retrieve from the image the different values associated with the criteria listed in the section above (parameters used for comparison).

This approach is also called “geometric”.

The criteria most often used by algorithms with geometric approach are:

  • Eye distance
  • Nose bridge distance
  • Commissures of the lips
  • Ears
  • Chin
  • Face shape
  • Shape of the jaw

Algorithms with a “holistic” approach
Holistic approach algorithms aim at normalizing a gallery of face images, compressing the face data, and saving only the part of the data that is useful for facial recognition. This compressed representation of a face gives a template, and the different templates are then used for comparison. [7]

This approach is also called “photometric”.

Photometric algorithms include:

  • Eigenfaces (oldest algorithm developed, 1991) [8]
  • Fisherfaces
  • Elastic bunch graph matching [9]
  • Linear discriminant analysis [10]
  • Hidden markov model [11]
  • Local Binary Patterns Histograms (LBPH), which is one of the most popular today.
Remote human identification

To enable automatic human identification at a distance (HID), and thus at low resolution for the person shown in the photograph, the initial low resolution is enhanced using a process called “face hallucination”. [12]

This process then precedes the traditional face recognition steps, to prepare the image. It uses either:

  • A machine learning AI, trained by face examples. The AI will trace the face in more detail to improve the resolution of the image based on these face examples to determine what that part of the face would probably look like in a more accurate photo.
  • A k-nearest neighbor distribution, which is a statistical mathematical function often used in the AI world. This approach aims to mathematically deduce what the neighboring pixels of already known pixels “most likely” look like.

These processes can be enhanced by incorporating information about face characteristics based on age and different demographics into the AI, to help it make the right choice.
This is particularly useful on:

  • Images from traditional video surveillance cameras, where the resolution is usually much too low for the image to be used as is for facial recognition.
  • In the case of using facial recognition algorithms that require particularly high resolutions, the face hallucination process is also used to achieve a sufficiently high resolution for more standard resolution images, and thus widen the usable database.
  • In case of hidden or partially hidden faces. This is one of the methods to recalculate the masked part of the face (by glasses for example).
3D recognition

The use of 3D sensors allows for a more accurate capture of information about the shape of the face. This method of capture has several advantages:

  • It is not affected by changes in ambient lighting.
  • It can identify more easily photographs taken in profile.
  • Using 3D data gives much better performance for facial recognition AI.

Some facial recognition systems already in place use a 3-camera system to capture faces in 3D.
Note that the use of 3D faces makes facial recognition very sensitive to facial expressions that distort it; it becomes necessary to pre-process the image to compensate for this influence and allow good results.

Use of thermal imaging cameras

In order to completely bypass any attempt to hide one’s face, facial recognition techniques using thermal cameras have been developed. These methods are not very effective on their own for several reasons.

  1. The databases of faces taken with thermal imaging cameras are very limited. Where other facial recognition methods can for example feed their databases via an aspiration of content found on the internet, infrared camera databases almost always have to be built from scratch.
  2. This method does not currently work for photos taken outdoors. It needs a stable temperature environment.

On the other hand, researchers at the ARL (US Army Research Laboratory) have developed a method that allows images taken in infrared to be compared with images taken by normal cameras. This solves the problem described in point 1.

Other techniques and associated technologies

Some other techniques are used in facial recognition to tailor it to specific needs.

  • Match on card: biometric data is contained on a card in our possession rather than stored in a database. They can then be compared without any data leaving the card.
  • Multi-modal biometrics and other authentication factors: when more reliability is needed than facial recognition alone, systems use multiple authentication factors.
  • Data Anonymization: A biometric database can avoid linking data to an identity, and instead link it to a string of characters.

________________________________________________________________________

[6] Official site of the CNIL, Facial Recognition: https://www.cnil.fr/fr/definition/reconnaissance-faciale
[7] Wikipedia, Facial recognition system: https://en.wikipedia.org/wiki/Facial_recognition_system
[8] Ravi S., 2013, A study on Face Recognition Technique based on Eigenface
[9] Wiskott L., 1997, Face Recognition by elastic bunch graph matching
[10] Etemad K. and Chellappa R., 1997, Discriminant analysis for recognition of human face images: https://www.face-rec.org/algorithms/LDA/discriminant-analysis-for-recognition.pdf
[11] Nefian A. V. and Hayes M. H. III, 1998, Face detection and recognition using hidden markov models: http://www.anefian.com/research/nefian98_face.pdf
[12] Tang X., 2015, Hallucinating Face by Eigentransformation: https://www.researchgate.net/publication/3421633_Hallucinating_Face_by_Eigentransformation


Will Exa@CC change or kill your DBA job?

Mon, 2022-06-20 09:31
Introduction

Exadata Cloud@Customer (Exa@CC) from Oracle is a hybrid solution for customers who want a Cloud-like platform without actually being in the public Cloud. Behind the cloud concept is high-level management of complex tasks, like provisioning homes and databases, patching, and so on. This has made classic deployments rather obsolete. Exa@CC brings these Cloud features to a kind of on-premises solution.

What is Exadata?

Exadata is the server behind Exa@CC. Actually, it is not a single server: it's a set of servers and network equipment inside a full rack. There are two kinds of servers inside an Exadata: compute nodes (at least 2) for running databases, and storage cells (at least 3) for running ASM storage. The main difference compared to any other solution is that the storage is aware of database queries and can offload part of the work from the compute nodes to the cells.

Exadata is considered to be the highest-end solution for very demanding and highly critical Oracle databases. As you may guess, this is not for you if you only run a couple of instances.

What is Exadata Cloud@Customer?

Key points of the Exa@CC:

  • A “classic” Exadata installed by Oracle in your data center
  • Paid as a cloud subscription (monthly fee) for a number of years
  • Included license: Oracle Enterprise Edition Extreme performance (understand with all options)
  • Manageable with the OCI console (Oracle’s public cloud portal)
  • Manageable with the OCI REST APIs
  • Fully integrated to your network as if it were yours
  • Run databases inside Virtual Machines clusters
  • Configured to your needs with very few limits (root access to VMs)
  • Nothing resides in a public Cloud
Promises of Exa@CC

This solution is Oracle's masterpiece when it comes to databases. The promises are:

  • First class performance
  • Easy setup
  • Easy patching
  • Easy provisioning
  • Easy Data Guard setup
  • Easy backup
  • Pay-as-you-run cost

That sounds great. Exadata made easy, this is what some of us always expected! For sure, this solution comes at a price point that may not fit your purse. But if you work with hundreds of databases and if Oracle Database is one of the most important components of your IT infrastructure, it's definitely a solution to consider.

Reality: what is really easy?

Once Exadata is in your data center and linked to OCI with the correct privileges, the first step is to provision the VM clusters. Exadata is a powerful machine, and you will split it into multiple VM clusters for production, test, certification, … Note that most of the time, you will need at least 2 Exadatas, as Disaster Recovery cannot be addressed by simply using another VM cluster on the same hardware.

Provisioning VM clusters is quite easy and is not really a DBA task, as it consists of providing CPU sizing, memory, disk space and network configuration. The virtual servers will then be provisioned, and for the RAC setup, because Exadata means RAC, each component is configured automatically during VM cluster provisioning. You can compare this to provisioning an HA Oracle Database Appliance on which you don’t do the RAC setup yourself.

Provisioning DB homes and databases is also very easy once VM clusters are available, straight from the OCI console. You don't actually need any DBA knowledge for that: just choose your version, database name and character set and it's done.

Provisioning PDBs is also included in OCI, this is quite a new feature.

Data Guard configuration is also very easy. Creating a Data Guard configuration consists of choosing the target VM cluster for the standby and the protection mode for your configuration, and everything is done automatically. You can later switch over or fail over from OCI. Goodbye broker and command line interface.

Regarding backup, each database can be configured for automatic backup: just provide your usual NFS share and your target backup retention and it will be configured and scheduled automatically. Restore is also done from the OCI console, which is really nice.

Regarding these points, obviously there is less work for you. Or more precisely, these tasks will take you less time to complete.

Reality: limits of the easy stuff

Everything would be easy if Oracle and Exadata together were less complex. The underlying complexity of these technologies is still there.

In case of a problem, because problems can happen on this platform too, troubleshooting will be needed. And you will need knowledge of RAC, Data Guard, Linux, and so on.

Another point is that some easy-to-implement stuff may not fit your needs.

For example, if a specific configuration is needed for Data Guard (basically if some databases need 2 standbys), you will need to configure it yourself, with classic tools like dgmgrl.

The same goes for backups: if you need to build a complex backup strategy based on a mix of disk backups and tape backups, automatic backup will not allow you to do that, at least not yet. You will then go back to RMAN and shell scripting, and you will need a scheduler.

Fortunately, your Exa@CC will benefit from new features quite regularly. I've only been working with this platform for 1 year and I have already seen some great improvements. Improvements are mainly new features brought to the OCI console: you can still manage everything manually if you want to.

DBA tasks on Exa@CC

There is still a lot of work for a DBA on this platform:

  • Provisioning DB homes, databases and PDBs
  • Defining resources between PDB, CDB and VM clusters
  • Tuning the databases
  • Planning and applying patches, because you will decide when you apply them once they are available
  • Monitoring the platform, databases, backups
  • Doing the migrations from your current environment
  • Leveraging the potential of the platform (using available options, optimizing offload to the cells)
  • Managing credits and resources according to planned needs

Managing credits is a hot topic when it comes to resources in the Cloud. Basically, every active resource will cost something. On Exa@CC, you should know that you can provision a VM cluster with zero cores and increase the number of cores when you start to use it. You may also think about stopping some of your VM clusters when they are not absolutely needed, for example during the night. This is also greener than letting everything run 24 hours a day.

Conclusion

Don't be afraid of this solution. Exa@CC will not steal your job, it will change it a little bit. There is still a lot to do for DBAs: the few things removed from your to-do list will be replaced by other tasks, more interesting ones in my opinion. This is the real challenge. A DBA should continue learning to keep his job. But this is the same for a lot of other jobs now.


CIFS mounts no more compatible with ODA

Tue, 2022-06-14 11:44
Introduction

Some of us are using CIFS mountpoints on the ODA, mostly for sharing files with Windows application servers. On ODA, as on any other Linux setup, it used to work like a charm. But it stops working starting from patch 19.11. It is no longer possible to use this kind of mount.

What is CIFS?

CIFS (Common Internet File System), also called SMB (Server Message Block), is a file sharing protocol created by Microsoft a long time ago. It was implemented for Linux using reverse engineering under the name Samba. It's still updated more or less frequently, and it has been known for some security issues over time. CIFS relies on user/password authentication, most often as clear text in /etc/fstab.

CIFS is only used when sharing files from Windows to Linux or vice versa. Sharing files between Linux servers is much more common. For this purpose, the NFS protocol (Network File System), an open standard created by Sun Microsystems (now part of Oracle), is broadly used. One of the main differences compared to CIFS is that it does not rely on authentication but on user and group IDs. The NFS server explicitly decides which clients are able to connect, and both machines are supposed to use the same IDs for users and groups.

Error mounting CIFS shares on ODA >= 19.11

When mounting a CIFS share on an ODA >= 19.11, the following error is raised:

mkdir /WinShare
echo "//10.36.0.250/winshare /WinShare cifs user=winuser,password=* 0 0" >> /etc/fstab
mount -a
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

The message is not that clear, and after investigating I found that this may be due to FIPS being enabled on my ODA (recently patched from 19.9 to 19.14).

What is FIPS?

FIPS (Federal Information Processing Standards) is a security standard from the USA. It prevents security flaws on a system, thus making it more secure. One of its features is to disallow insecure authentication mechanisms like NTLM, NTLMv2 and NTLMSSP, which are CIFS standards.

How to solve this problem?

There are multiple ways of dealing with this problem, from the best to the worst.

Configuring a Windows NFS share

This first solution is the best one in my opinion: use NFS instead of CIFS. You may say that NFS is not compatible with Windows, but that's not true anymore. Starting from Windows Server 2008, this OS is able to create an NFS share very easily. And this is definitely much cleaner because Microsoft implemented an open standard. I would recommend using Windows Server 2012 or later for NFS v4.1 support. Here is a blog post I would recommend if you need to configure an NFS share on Windows Server 2016.
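
For example, once the NFS share is created on the Windows server, the ODA side only needs a standard NFS entry in /etc/fstab instead of the CIFS one (the export path /winshare is an assumption, adapt it to the export you actually created):

mkdir /WinShare
echo "10.36.0.250:/winshare /WinShare nfs vers=4.1,rw,hard,timeo=600 0 0" >> /etc/fstab
mount -a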

Using Kerberos on top of CIFS

Using CIFS is not the problem for FIPS; the problem is using the basic authentication method with login/password being sent over the network. Kerberos is a much more elaborate mechanism based on keys, but it needs a more complex setup. Thankfully, Active Directory can act as a Kerberos server. I would love to test it but for now I don't have an adequate lab environment for that.
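
As an untested sketch only: assuming the ODA is joined to the Active Directory realm (valid /etc/krb5.conf, cifs-utils installed and cifs.upcall wired into request-key), the share could be mounted with Kerberos security instead of NTLM; the realm below is a placeholder:

kinit winuser@EXAMPLE.COM
mount -t cifs //10.36.0.250/winshare /WinShare -o sec=krb5,username=winuser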

Disabling FIPS

This is definitely not recommended. If you choose ODA, you must accept system changes, including the introduction of FIPS. If you don't accept these changes, consider using older versions of the ODA patches, but that's also not recommended.

I found the method to disable FIPS in this blog post.

First of all, check whether FIPS is enabled on your system (it should be the case if your ODA runs patch 19.11 or later):

cat /proc/sys/crypto/fips_enabled
1

FIPS is configured as a kernel option in /etc/default/grub. You first need to remove some packages, back up the initramfs, generate a new initramfs (dracut -f), modify the grub options, regenerate the grub config file and reboot the server (to be done on each node on an HA ODA):

yum remove dracut-fips*
Loaded plugins: langpacks, priorities, ulninfo, versionlock
Resolving Dependencies
--> Running transaction check
---> Package dracut-fips.x86_64 0:033-572.0.9.el7 will be erased
---> Package dracut-fips-aesni.x86_64 0:033-572.0.9.el7 will be erased
--> Finished Dependency Resolution
ol7_UEKR6/x86_64 | 3.0 kB 00:00:00
ol7_UEKR6/x86_64/updateinfo | 507 kB 00:00:00
ol7_UEKR6/x86_64/primary_db | 40 MB 00:00:00
ol7_latest/x86_64 | 3.6 kB 00:00:00
ol7_latest/x86_64/group_gz | 136 kB 00:00:00
ol7_latest/x86_64/updateinfo | 3.4 MB 00:00:00
ol7_latest/x86_64/primary_db | 40 MB 00:00:00
Dependencies Resolved
===========================================================================================================================
Package Arch Version Repository Size
Removing:
dracut-fips x86_64 033-572.0.9.el7 @OSPatchBaseRepo 8.1 k
dracut-fips-aesni x86_64 033-572.0.9.el7 @OSPatchBaseRepo 18 k
Transaction Summary
Remove 2 Packages
Installed size: 26 k
Is this ok [y/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
** Found 3 pre-existing rpmdb problem(s), 'yum check' output follows:
oda-hw-mgmt-19.15.0.0.0_LINUX.X64_220530-1.x86_64 has missing requires of perl(GridDefParams)
oda-hw-mgmt-19.15.0.0.0_LINUX.X64_220530-1.x86_64 has missing requires of perl(s_GridSteps)
perl-RPC-XML-0.78-3.el7.noarch has missing requires of perl(DateTime::Format::ISO8601) >= ('0', '0.07', None)
Erasing : dracut-fips-aesni-033-572.0.9.el7.x86_64 1/2
Erasing : dracut-fips-033-572.0.9.el7.x86_64 2/2
Verifying : dracut-fips-033-572.0.9.el7.x86_64 1/2
Verifying : dracut-fips-aesni-033-572.0.9.el7.x86_64 2/2
Removed:
dracut-fips.x86_64 0:033-572.0.9.el7 dracut-fips-aesni.x86_64 0:033-572.0.9.el7
Complete!
cp -p /boot/initramfs-$(uname -r).img /opt/dbi/initramfs-$(uname -r).with_fips
dracut -f
vi /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="pci=noaer loglevel=3 panic=60 transparent_hugepage=never ipv6.disable=1 intel_idle.max_cstate=1 nofloppy numa=on console=ttyS0,115200n8 console=tty0 crashkernel=256M@64M rd.lvm.lv=VolGroupSys/LogVolRoot rd.md.uuid=10e67471:4b600fe1:d970d513:c635edf6 rd.md.uuid=1e334e65:aea7e87e:516f7dfe:8d79ccfb rd.lvm.lv=VolGroupSys/LogVolSwap biosdevname=1 boot=UUID=9cb4c7c1-e87a-4ae1-9c22-fcc5e3460ce1 fips=0 nvme.nvme_io_queues=32 nvme_core.multipath=0"
GRUB_DISABLE_RECOVERY="true"
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Generating grub configuration file …
WARNING: Ignoring duplicate config value: global_filter
WARNING: Ignoring duplicate config value: global_filter
WARNING: Ignoring duplicate config value: global_filter
WARNING: Ignoring duplicate config value: global_filter
Found linux image: /boot/vmlinuz-4.14.35-2047.512.6.el7uek.x86_64
Found initrd image: /boot/initramfs-4.14.35-2047.512.6.el7uek.x86_64.img
Found linux image: /boot/vmlinuz-4.14.35-2047.510.5.4.el7uek.x86_64
Found initrd image: /boot/initramfs-4.14.35-2047.510.5.4.el7uek.x86_64.img
Found linux image: /boot/vmlinuz-4.14.35-2047.505.4.3.el7uek.x86_64
Found initrd image: /boot/initramfs-4.14.35-2047.505.4.3.el7uek.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-b7d66bf7abc14b359559ec75c7415cbc
Found initrd image: /boot/initramfs-0-rescue-b7d66bf7abc14b359559ec75c7415cbc.img
WARNING: Ignoring duplicate config value: global_filter
WARNING: Ignoring duplicate config value: global_filter
WARNING: Ignoring duplicate config value: global_filter
WARNING: Ignoring duplicate config value: global_filter
done
shutdown -r now
mount -a
df -h /WinShare
Filesystem Size Used Avail Use% Mounted on
//10.36.0.250/winshare 100G 85G 15G 86% /WinShare
echo "It works now" > /WinShare/test.txt
cat /WinShare/test.txt
It works now

If you need to revert to FIPS enabled mode, it's possible.

Revert to FIPS enabled mode
yum install dracut-fips*
Loaded plugins: langpacks, priorities, ulninfo, versionlock
Excluding 111 updates due to versionlock (use "yum versionlock status" to show them)
Resolving Dependencies
--> Running transaction check
---> Package dracut-fips.x86_64 0:033-572.0.9.el7 will be installed
---> Package dracut-fips-aesni.x86_64 0:033-572.0.9.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===========================================================================================================================
Package Arch Version Repository Size
Installing:
dracut-fips x86_64 033-572.0.9.el7 ol7_latest 64 k
dracut-fips-aesni x86_64 033-572.0.9.el7 ol7_latest 68 k
Transaction Summary
Install 2 Packages
Total download size: 132 k
Installed size: 26 k
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/ol7_latest/packages/dracut-fips-033-572.0.9.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Public key for dracut-fips-033-572.0.9.el7.x86_64.rpm is not installed
(1/2): dracut-fips-033-572.0.9.el7.x86_64.rpm | 64 kB 00:00:00
(2/2): dracut-fips-aesni-033-572.0.9.el7.x86_64.rpm | 68 kB 00:00:00
Total 300 kB/s | 132 kB 00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
Userid : "Oracle OSS group (Open Source Software group) build@oss.oracle.com"
Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
Package : 7:oraclelinux-release-7.9-1.0.9.el7.x86_64 (@anaconda/19.12)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : dracut-fips-033-572.0.9.el7.x86_64 1/2
Installing : dracut-fips-aesni-033-572.0.9.el7.x86_64 2/2
Verifying : dracut-fips-033-572.0.9.el7.x86_64 1/2
Verifying : dracut-fips-aesni-033-572.0.9.el7.x86_64 2/2
Installed:
dracut-fips.x86_64 0:033-572.0.9.el7 dracut-fips-aesni.x86_64 0:033-572.0.9.el7
Complete!
dracut -f
vi /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="pci=noaer loglevel=3 panic=60 transparent_hugepage=never ipv6.disable=1 intel_idle.max_cstate=1 nofloppy numa=on console=ttyS0,115200n8 console=tty0 crashkernel=256M@64M rd.lvm.lv=VolGroupSys/LogVolRoot rd.md.uuid=10e67471:4b600fe1:d970d513:c635edf6 rd.md.uuid=1e334e65:aea7e87e:516f7dfe:8d79ccfb rd.lvm.lv=VolGroupSys/LogVolSwap biosdevname=1 boot=UUID=9cb4c7c1-e87a-4ae1-9c22-fcc5e3460ce1 fips=1 nvme.nvme_io_queues=32 nvme_core.multipath=0"
GRUB_DISABLE_RECOVERY="true"
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
shutdown -r now
cat /proc/sys/crypto/fips_enabled
1
mount -a
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Conclusion

Switching your CIFS shares to NFS is the best approach, even if it needs some work on the Windows side. But continuously increasing the security level should also be a task for the DBA.


Facial recognition, between advanced biometrics and needs for privacy… – Part 1 of 4

Tue, 2022-06-14 04:24

Among today’s biometric applications, one in particular is the subject of much controversy and discussion: facial recognition.
One of the most controversial applications of facial recognition is face recognition based on a photograph, as done by the company Clearview AI to name one.

After defining facial recognition among the biometric processes in this article, we will give an overview of the techniques used for its application in a second part.
In a third part, we will give the rules to follow to allow its use, its field of application and the regulatory limits that this implies (in European countries in particular).
In a fourth and final part, and in a defensive concern (to try to protect oneself against the techniques of acquisition of these images for example, to lure them in particular), we will indicate if parades exist today, and will give proposals of implementation if necessary.

Introduction

Facial recognition is the most talked about biometric technique of the last decade. Touted as a high-potential technology since the 1990s, it has long been hampered by technological limitations such as computational capabilities, camera image resolution, limited storage capacity, and the lack of effective neural network models. With these technological barriers lifted, the web and computer giants have set about advancing facial recognition, notably by making the results of their work on artificial intelligence available in open source.

Facial recognition systems, or FRSs, are now at least as effective as a human at recognizing an individual, and require only a few seconds and no action by the individual to perform authentication. The different technologies are combined to reduce the risk of error and fraud, and image processing techniques prior to facial recognition itself allow the use of an increasingly wide range of images (low-resolution, profile, infrared, masked face, photograph of a crowd…) in a way that is almost as effective.

But with this progression, human rights and the laws that protect them are more and more likely to be violated. The global scene is becoming more polarized and opinion is divided over this technology that allows Orwellian levels of surveillance of the civilian population. Some digital giants like Amazon and Microsoft are backing down in the face of protests, and say they want to wait for more specific laws before continuing to market their FRS solutions; others, like the Chinese government, are embracing the technology and monitoring the population, while still others are looking for methods to block, lure facial recognition systems and protect their anonymity… Or break the law.

In the midst of all these questions, the various states are struggling to establish an effective legal framework to protect individuals from the abuses of the technology; and companies like Clearview AI (USA) have taken advantage of the general hesitation to take the plunge, announcing that they have collected more than 10 billion images of faces from the entire Internet without any authorization, and are making their artificial intelligence services available to government bodies and the private sector using this database.

We will review what facial recognition is and briefly look at its history, before dwelling on the different techniques used today, the legal framework existing around the world, and ending with a discussion of the different attempts to fool the technology and their results.

Facial recognition among biometric processes
Biometrics – definition

Originally, the word “biometrics” referred to any analysis and measurement of physical characteristics strictly specific to a person (voice, face, iris, fingerprints…).

Nowadays, however, it is generally used to designate all computer techniques that allow for the automatic recognition, authentication and identification of an individual based on his or her physical, biological and even behavioral characteristics. Biometric data is therefore personal data, as it allows a person to be identified.

To achieve this goal, it is necessary that the characteristics used as criteria are:

  • Universal: every human being must possess them.
  • Unique: the criterion must not be identical between two given individuals, to limit authentication errors.
  • Invariant: the value of this criterion for a given individual must not change throughout his or her life, so that the identification remains reliable over time.
  • Measurable: the current technology must allow a reliable measurement, so that the comparison is feasible.

The goal of biometrics is to make authentication, identification and recognition simpler, faster, and above all more secure.

The different categories of biometrics

There are three main categories of biometrics:

  • Biological (identification via DNA).
  • Morphological or morpho-physiological (hand, palm, fingerprints, venous network, face, iris, venous network of the retina, voice, gait, ear).
  • Behavioral.

The biological category will use blood, urine or saliva for identification. It is obviously time-consuming and is mostly used in the context of judicial investigations.

The behavioral category will use voice recognition, signature dynamics (speed of pen movement, pressure exerted, etc.), gait, as well as keyboard strokes.

The morphological category is the only one that can easily be used on a large scale and in both private and public domains, due to the ease of acquiring the necessary data for comparison.

The position of facial recognition within biometrics

Facial recognition is a morphological type of biometric technique.

It is one of the three most widely used biometric recognition technologies today, along with fingerprint and iris recognition. At the current state of the art, it is the most efficient, reliable and easy-to-deploy of the three.

The advantages of facial recognition over other forms of authentication and identification are:

  • For the best systems currently available, there is strong resistance to fraud under all conditions of lighting, face angle or changes to the face (motorcycle helmet, headphones, haircut, glasses, etc.).
  • Its use is very fast, the user hardly needs to stop.
  • It is a contactless identification (interesting for several reasons, such as hygiene, which has been particularly appreciated since the pandemic).
  • Identification in the middle of a crowd is quite possible, just like in other dynamic and unstable environments. [1]
  • Many of the required databases are very easy to populate when simple photographs are enough. The Clearview AI example has shown that it is even possible to build an effective international database simply by scraping the images that people post on the web.

Facial Recognition Technologies (FRTs) can be used for:

  • Identification (1:N verification): find out who the person in the photograph is by searching for similar data in one or more databases.
  • Authentication (1:1 verification): Verify that the person is who they say they are by comparing them with pre-stored data for that person.
  • Detection: simply verify that there is a face present.
  • Verification: check that two biometric templates belong to the same person. The model does not need to know the identity of the person.
  • Categorization: classify people based on their morphological characteristics, or classify photographs based on facial expressions for example.

The evolution of facial recognition

Although the first facial recognition attempts and algorithms date back to the early 1990s, the technique demands more from the underlying technology than other biometric methods such as fingerprint recognition. The lack of sufficiently accurate images, sufficiently large databases and, above all, sufficient computational capacity has long kept FRTs lagging behind the rest of biometrics.

2014: The GaussianFace algorithm (University of Hong Kong) achieves facial identification scores of 98.52%.

2014: Facebook launches DeepFace (97.25% accuracy). First facial recognition technology implementing deep learning.

2015: Google launches FaceNet, which achieves up to 99.63% accuracy. It leads to the integration of the technology into Google Photos to sort users’ photographs and becomes available in an open-source version called OpenFace.

2018: Amazon promotes Rekognition to law enforcement. The solution can recognize up to 100 people in a crowd photograph and has a database with tens of millions of faces.

2018: LFIS, the Thales solution, achieves 98% accuracy with less than 5 seconds spent per face on a test conducted by the U.S. Homeland Security Science and Technology Directorate on 300 volunteers.

2019: FRTs now receive the third largest share of global AI investment ($4.7 billion). [2]

2020: NIST tests show that the best FRT algorithms no longer have a racial or gender bias.

2020: Amazon places a one-year moratorium (since extended) on police use of Rekognition, for ethical reasons, pending more comprehensive US legislation. Microsoft does the same, and Axon announces that it is withdrawing from marketing FRTs to US police forces. [3]

2025: Predictions are that FRTs will be used for smartphone payments by more than 1.4 billion users by 2025. [4] The necessary technologies are already in place on most mobile OSes today. [5]

It is estimated that facial recognition performance has increased 20-fold between 2013 and 2018, and accuracy continues to improve each year by 30-50%.

Who uses facial recognition?

Facial recognition is now used on a daily basis in the private and public sectors.

  • Consumer applications: Smartphones, tablets, computers are equipped with facial recognition systems to authenticate their owners.
  • Social media: Snapchat, Facebook etc. also use this technology to authenticate users
  • Commercial applications: Identification of people approaching ATMs in banks, and anti-fraud banking checks, including on banks’ mobile applications.
  • Surveillance and access control in physical spaces: Facial recognition systems are installed to control access automatically and replace manual identity checks, for instance at Singapore airport or at border controls.
  • Identification in public spaces: Demonstrations, gatherings, or simply individuals moving normally in the street are identified via surveillance cameras. The system is already extremely developed and used in China, notably with SkyNet which monitors more than 1.4 billion suspects in their daily activities.
  • Healthcare: FRTs are used for automatic patient sorting, for example.
  • Elections: FRTs are also used for authentication during remote voting.
  • Sentiment analysis: Sentiment analysis research is now completely dependent on FRTs. This research in turn feeds the progress of FRTs, which integrate corrections for emotion-induced facial deformations to improve their accuracy.

________________________________________________________________________

[1] Official site of the Thales group, Facial Recognition: https://www.thalesgroup.com/fr/europe/france/dis/gouvernement/biometrie/reconnaissance-faciale
[2] Stanford University, 2019, The AI Index 2019 Annual Report: https://hai.stanford.edu/sites/default/files/ai_index_2019_report.pdf
[3] SMITH Rick, 2019, The future of face matching at Axon and AI ethics board report: https://www.axon.com/company/news/ai-ethics-board-report
[4] i-SCOOP, 2020, Facial recognition 2020 and beyond – trends and market; and Fortune Business Insights, 2021, Facial Recognition Market, Global Industry Analysis, Insights and Forecast, 2016-2027
[5] Juniper Research, 2021, Mobile Payment Authentication: Biometrics, Regulation & Forecasts 2021-2025, and Facial Recognition for Payments Authentication to Be Used by Over 1.4 billion People Globally by 2025.

The article Facial recognition, between advanced biometrics and needs for privacy… – Part 1 of 4 first appeared on the dbi Blog.

A 7 year journey, my longest employment ever

Mon, 2022-06-13 06:06

It has been an amazing ride: on the 1st of May 2022 I had been with dbi services for exactly seven years. This is by far the longest employment in my career. When I started way back in 2015, most of my work was still around Oracle products, like the Oracle Database and Oracle Grid Infrastructure, Oracle GoldenGate, and a bit of Oracle RAC, Data Guard and Exadata. I still remember the first discussions about my contract and the options to develop myself. The most important topic for me at that time was to develop the open source database business at dbi services; that was the unofficial deal.

By “unofficial” I mean that it was not written in the contract, and this brings me to one of the most important experiences I have had, and still have, at dbi services: trust. Of course, trust is something you have to earn, but once you have it, you are free to go in any direction as long as it is related to the business. For me that meant reducing my Oracle tasks over the years and focusing more and more on open source products. For this opportunity I want to say “thank you” to my management, my colleagues and, last but not least, my team.

When I look back at how that open source business started, it was a huge opportunity, but also a huge investment. Before you become visible in an open source community, you need to spend quite some time and money before it gives anything back to pay the employees. I could also say it was a kind of risk. Taking that risk and investing in it is not something every company is willing to do, and that brings us back to trust. Without trust you’ll lose the trust of the employees, and then you’re done, or stuck with whatever you’re doing currently.

Today, we’re quite happy with our open source business. The most challenging task is to find qualified people on the market, or people who are willing to learn our business. We’re still doing infrastructure, not fancy or cool application development. But without a solid infrastructure you can’t build applications that scale and perform, no matter whether it runs in a public cloud, a private cloud, or in one or more data centers you own or rent.

We’re growing continuously and we need more people in all areas. How can we attract them? This is quite easy to answer: you want to work in a company where you are known and respected as a person, not as a number. You want to be able to develop yourself and you want to feel like part of a family. This is what I have always been looking for, and this is exactly the reason I am still with dbi services, and probably will be for many more years.

I know this is not a technical blog post, but it needed to be said.

The article A 7 year journey, my longest employment ever first appeared on the dbi Blog.

Migrating CentOS 8 to Oracle linux OL8

Mon, 2022-06-13 02:14

As you might be aware, CentOS 8 has been end of life since the 31st of December 2021. If you are still running CentOS 8, you might want to migrate to an alternative distribution in order to keep Red Hat compatibility.

Possible alternative distributions include:

  • Rocky Linux
  • AlmaLinux
  • Red Hat Enterprise Linux
  • Oracle Linux

We already have a few blogs describing how to migrate to Rocky Linux, AlmaLinux and Red Hat Enterprise Linux.

In this blog we will see how to migrate from CentOS 8 to Oracle Linux 8.

centos2ol.sh Oracle Script

Oracle provides a script to switch from CentOS Linux 6, 7 or 8 to the equivalent version of Oracle Linux. The script, as well as the requirements, limitations, known issues and explanations, can be found in the Oracle GitHub repository.

Please read the limitations section carefully. Access to both the CentOS and Oracle Linux yum repositories is mandatory.
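Before launching the script, it can be worth verifying that both repository families are reachable from the server. Here is a minimal sketch of such a check, assuming direct internet access (adapt it if you go through a proxy):

# make sure the currently enabled CentOS repositories still answer
dnf repolist

# check that the Oracle Linux yum server is reachable
curl -sI https://yum.oracle.com | head -1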

Before starting the migration with the script, it is very important to have a complete, working backup of the system.
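If no full backup tool is in place, at the very least keep the package list and the configuration files somewhere off the server. A minimal sketch, assuming /mnt/backup is an already mounted NFS share (hypothetical path):

# keep the list of installed packages for later comparison
rpm -qa | sort > /mnt/backup/centosmigr-rpms-before-migration.txt

# archive /etc, including the yum repository definitions
tar -czf /mnt/backup/centosmigr-etc-backup.tar.gz /etc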

Migration demo

I have set up a Linux environment running CentOS 8.

[root@centosmigr ~]# cat /etc/centos-release
CentOS Linux release 8.4.2105

We need to download the centos2ol.sh script and move it to the CentOS Linux server to be migrated. This can be done with the curl command.

[root@centosmigr ~]# curl -O https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 22928  100 22928    0     0   120k      0 --:--:-- --:--:-- --:--:--  121k

[root@centosmigr ~]# ls -l centos*.sh
-rw-r--r--. 1 root root 22928 Jun  8 08:13 centos2ol.sh

Let’s first check the current kernel and the kernel files in /boot:

[root@centosmigr ~]# uname -a
Linux centosmigr 4.18.0-305.3.1.el8.x86_64 #1 SMP Tue Jun 1 16:14:33 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

[root@centosmigr ~]# ls -l /boot
total 153964
-rw-r--r--. 1 root root   192095 Jun  1  2021 config-4.18.0-305.3.1.el8.x86_64
drwxr-xr-x. 3 root root       17 Jun  2  2021 efi
drwxr-xr-x. 2 root root       39 Jun  2  2021 grub
drwx------. 4 root root       83 Jun  8 08:14 grub2
-rw-------. 1 root root 64383726 Jun  2  2021 initramfs-0-rescue-40ec35b37a254049be3d85f9173b39a1.img
-rw-------. 1 root root 51128711 Jun  2  2021 initramfs-4.18.0-305.3.1.el8.x86_64.img
-rw-------. 1 root root 17727685 Jun  3 14:44 initramfs-4.18.0-305.3.1.el8.x86_64kdump.img
drwxr-xr-x. 3 root root       21 Jun  2  2021 loader
-rw-------. 1 root root  4164308 Jun  1  2021 System.map-4.18.0-305.3.1.el8.x86_64
-rwxr-xr-x. 1 root root 10026120 Jun  2  2021 vmlinuz-0-rescue-40ec35b37a254049be3d85f9173b39a1
-rwxr-xr-x. 1 root root 10026120 Jun  1  2021 vmlinuz-4.18.0-305.3.1.el8.x86_64

And the currently installed CentOS Linux packages:

[root@centosmigr ~]# rpm -qa | grep -i centos
centos-linux-repos-8-2.el8.noarch
centos-logos-85.5-1.el8.x86_64
centos-linux-release-8.4-1.2105.el8.noarch
centos-gpg-keys-8-2.el8.noarch

We can run the Oracle conversion script as the centos user, since it has the sudo privilege. The script options are the following:

[centos@centosmigr ~]$ sudo bash /root/centos2ol.sh -h
Usage: centos2ol.sh [OPTIONS]

OPTIONS
-h
        Display this help and exit
-k
        Do not install the UEK kernel and disable UEK repos
-r
        Reinstall all CentOS RPMs with Oracle Linux RPMs
        Note: This is not necessary for support
-V
        Verify RPM information before and after the switch

Note that in case we would prefer to migrate to the Red Hat compatible kernel and not to UEK, we can use the -k option. This option will skip the installation of the UEK kernel and also disable the UEK repository.
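In that case the invocation would simply become the following (not what we do in this demo):

sudo bash /root/centos2ol.sh -k -V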

We will run the script with the -V option.

[centos@centosmigr ~]$ sudo bash /root/centos2ol.sh -V

After a few initial checks, the script will back up and remove the CentOS repository files, then download and install the Oracle Linux repository files.

Backing up and removing old repository files...
Removing CentOS-specific yum configuration from /etc/yum.conf

We can easily confirm this by checking the yum repository files.

[centos@centosmigr tmp]$ ls -l /etc/yum.repos.d/
total 60
-rw-r--r--. 1 root root  891 Jun  8 08:34 CentOS-Linux-AppStream.repo.disabled
-rw-r--r--. 1 root root  876 Jun  8 08:34 CentOS-Linux-BaseOS.repo.disabled
-rw-r--r--. 1 root root 1302 Jun  8 08:34 CentOS-Linux-ContinuousRelease.repo.disabled
-rw-r--r--. 1 root root  490 Jun  8 08:34 CentOS-Linux-Debuginfo.repo.disabled
-rw-r--r--. 1 root root  904 Jun  8 08:34 CentOS-Linux-Devel.repo.disabled
-rw-r--r--. 1 root root  876 Jun  8 08:34 CentOS-Linux-Extras.repo.disabled
-rw-r--r--. 1 root root  891 Jun  8 08:34 CentOS-Linux-FastTrack.repo.disabled
-rw-r--r--. 1 root root  912 Jun  8 08:34 CentOS-Linux-HighAvailability.repo.disabled
-rw-r--r--. 1 root root  865 Jun  8 08:34 CentOS-Linux-Media.repo.disabled
-rw-r--r--. 1 root root  878 Jun  8 08:34 CentOS-Linux-Plus.repo.disabled
-rw-r--r--. 1 root root  896 Jun  8 08:34 CentOS-Linux-PowerTools.repo.disabled
-rw-r--r--. 1 root root 1070 Jun  8 08:34 CentOS-Linux-Sources.repo.disabled
-rw-r--r--. 1 root root 2961 Jun  8 08:35 oracle-linux-ol8.repo
-rw-r--r--. 1 root root  470 May 11 22:21 uek-ol8.repo
-rw-r--r--. 1 root root  243 May 11 22:21 virt-ol8.repo

The CentOS repositories have been disabled and the new Oracle Linux repositories have been installed.

The script will then download the latest Oracle Linux release packages.

Downloading Oracle Linux release package...
Oracle Linux 8 BaseOS Latest (x86_64)                                                                                                                                                                                         146 MB/s |  46 MB     00:00
Oracle Linux 8 Application Stream (x86_64)                                                                                                                                                                                    131 MB/s |  36 MB     00:00
Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)                                                                                                                                                    152 MB/s |  48 MB     00:00
Last metadata expiration check: 0:00:08 ago on Wed 08 Jun 2022 08:34:52 AM UTC.
(1/3): oraclelinux-release-el8-1.0-23.el8.x86_64.rpm                                                                                                                                                                          567 kB/s |  21 kB     00:00
(2/3): redhat-release-8.6-0.1.0.1.el8.x86_64.rpm                                                                                                                                                                              497 kB/s |  19 kB     00:00
(3/3): oraclelinux-release-8.6-1.0.5.el8.x86_64.rpm                                                                                                                                                                           1.6 MB/s |  77 kB     00:00

It will then start switching the old CentOS release packages to the Oracle Linux ones.

Switching old release package with Oracle Linux...

The base packages for CentOS will be removed and the ones for Oracle Linux will be installed.

Enabling ol8_appstream which replaces appstream
Enabling ol8_baseos_latest which replaces baseos
Installing base packages for Oracle Linux...
Oracle Linux 8 BaseOS Latest (x86_64)                                                                                                                                                                                         202 kB/s | 3.6 kB     00:00
Oracle Linux 8 Application Stream (x86_64)                                                                                                                                                                                    204 kB/s | 3.9 kB     00:00
Package basesystem-11-5.el8.noarch is already installed.
Package initscripts-10.00.15-1.el8.x86_64 is already installed.
Package grub2-pc-1:2.02-99.el8.x86_64 is already installed.
Package grubby-8.40-41.el8.x86_64 is already installed.
==============================================================================================================================================================================================================================================================
 Package                                                       Architecture                                    Version                                                                       Repository                                                  Size
==============================================================================================================================================================================================================================================================
Installing:
 kernel-uek                                                    x86_64                                          5.4.17-2136.307.3.6.el8uek                                                    ol8_UEKR6                                                  109 M
 oracle-logos                                                  x86_64                                          84.5-1.0.1.el8                                                                ol8_baseos_latest                                          1.4 M
 plymouth                                                      x86_64                                          0.9.4-11.20200615git1e36e30.0.1.el8                                           ol8_appstream                                              127 k
Upgrading:
 grub2-common                                                  noarch                                          1:2.02-123.0.3.el8                                                            ol8_baseos_latest                                          895 k
 grub2-pc                                                      x86_64                                          1:2.02-123.0.3.el8                                                            ol8_baseos_latest                                           45 k
 grub2-pc-modules                                              noarch                                          1:2.02-123.0.3.el8                                                            ol8_baseos_latest                                          924 k
 grub2-tools                                                   x86_64                                          1:2.02-123.0.3.el8                                                            ol8_baseos_latest                                          2.0 M
 grub2-tools-extra                                             x86_64                                          1:2.02-123.0.3.el8                                                            ol8_baseos_latest                                          1.1 M
 grub2-tools-minimal                                           x86_64                                          1:2.02-123.0.3.el8                                                            ol8_baseos_latest                                          213 k
 grubby                                                        x86_64                                          8.40-42.0.1.el8                                                               ol8_baseos_latest                                           50 k
 tuned                                                         noarch                                          2.18.0-2.0.1.el8                                                              ol8_baseos_latest                                          318 k
Installing dependencies:
 grub2-tools-efi                                               x86_64                                          1:2.02-123.0.3.el8                                                            ol8_baseos_latest                                          478 k
 linux-firmware                                                noarch                                          999:20220304-999.13.gitf011ccb4.el8                                           ol8_baseos_latest                                          216 M
 linux-firmware-core                                           noarch                                          999:20220304-999.13.gitf011ccb4.el8                                           ol8_baseos_latest                                          509 k
 plymouth-core-libs                                            x86_64                                          0.9.4-11.20200615git1e36e30.0.1.el8                                           ol8_appstream                                              122 k
 plymouth-scripts                                              x86_64                                          0.9.4-11.20200615git1e36e30.0.1.el8                                           ol8_appstream                                               44 k
Removing:
 centos-gpg-keys                                               noarch                                          1:8-2.el8                                                                     @anaconda                                                  3.3 k
 centos-linux-release                                          noarch                                          8.4-1.2105.el8                                                                @anaconda                                                   25 k
 centos-logos                                                  x86_64                                          85.5-1.el8                                                                    @anaconda                                                  698 k
 python3-syspurpose                                            x86_64                                          1.28.13-2.el8                                                                 @anaconda                                                  138 k

Transaction Summary
==============================================================================================================================================================================================================================================================
Install  8 Packages
Upgrade  8 Packages
Remove   4 Packages

Total download size: 333 M
Downloading Packages:
(1/16): linux-firmware-core-20220304-999.13.gitf011ccb4.el8.noarch.rpm                                                                                                                                                         13 MB/s | 509 kB     00:00
(2/16): grub2-tools-efi-2.02-123.0.3.el8.x86_64.rpm                                                                                                                                                                            11 MB/s | 478 kB     00:00
(3/16): plymouth-0.9.4-11.20200615git1e36e30.0.1.el8.x86_64.rpm                                                                                                                                                                21 MB/s | 127 kB     00:00
...
...
...
Running transaction
  Preparing        :                                                                                                                                                                                                                                      1/1
  Running scriptlet: grub2-common-1:2.02-123.0.3.el8.noarch                                                                                                                                                                                               1/1
  Upgrading        : grub2-common-1:2.02-123.0.3.el8.noarch                                                                                                                                                                                              1/28
  Upgrading        : grub2-tools-minimal-1:2.02-123.0.3.el8.x86_64                                                                                                                                                                                       2/28
...
...
...
  Installing       : plymouth-scripts-0.9.4-11.20200615git1e36e30.0.1.el8.x86_64                                                                                                                                                                         7/28
  Installing       : plymouth-0.9.4-11.20200615git1e36e30.0.1.el8.x86_64                                                                                                                                                                                 8/28
  Installing       : linux-firmware-core-999:20220304-999.13.gitf011ccb4.el8.noarch                                                                                                                                                                      9/28
...
...
...
  Verifying        : centos-logos-85.5-1.el8.x86_64                                                                                                                                                                                                     27/28
  Verifying        : python3-syspurpose-1.28.13-2.el8.x86_64                                                                                                                                                                                            28/28

Upgraded:
  grub2-common-1:2.02-123.0.3.el8.noarch  grub2-pc-1:2.02-123.0.3.el8.x86_64  grub2-pc-modules-1:2.02-123.0.3.el8.noarch  grub2-tools-1:2.02-123.0.3.el8.x86_64  grub2-tools-extra-1:2.02-123.0.3.el8.x86_64  grub2-tools-minimal-1:2.02-123.0.3.el8.x86_64
  grubby-8.40-42.0.1.el8.x86_64           tuned-2.18.0-2.0.1.el8.noarch
Installed:
  grub2-tools-efi-1:2.02-123.0.3.el8.x86_64         kernel-uek-5.4.17-2136.307.3.6.el8uek.x86_64                linux-firmware-999:20220304-999.13.gitf011ccb4.el8.noarch             linux-firmware-core-999:20220304-999.13.gitf011ccb4.el8.noarch
  oracle-logos-84.5-1.0.1.el8.x86_64                plymouth-0.9.4-11.20200615git1e36e30.0.1.el8.x86_64         plymouth-core-libs-0.9.4-11.20200615git1e36e30.0.1.el8.x86_64         plymouth-scripts-0.9.4-11.20200615git1e36e30.0.1.el8.x86_64
Removed:
  centos-gpg-keys-1:8-2.el8.noarch                           centos-linux-release-8.4-1.2105.el8.noarch                           centos-logos-85.5-1.el8.x86_64                           python3-syspurpose-1.28.13-2.el8.x86_64

Complete!

We can then see a nice message telling us that the switch was successful and that the system will now sync with the Oracle Linux repositories.

Switch successful. Syncing with Oracle Linux repositories.                                                                                                                                                                                 204 kB/s | 3.9 kB     00:00

Syncing means upgrading all packages to their Oracle Linux ol8 release and installing the Oracle Linux kernel.

Dependencies resolved.
==============================================================================================================================================================================================================================================================
 Package                                                                  Architecture                                  Version                                                                Repository                                                Size
==============================================================================================================================================================================================================================================================
Installing:
 kernel                                                                   x86_64                                        4.18.0-372.9.1.el8                                                     ol8_baseos_latest                                        8.0 M
 kernel-core                                                              x86_64                                        4.18.0-372.9.1.el8                                                     ol8_baseos_latest                                         39 M
 kernel-modules                                                           x86_64                                        4.18.0-372.9.1.el8                                                     ol8_baseos_latest                                         32 M
Upgrading:
 NetworkManager                                                           x86_64                                        1:1.36.0-4.0.1.el8                                                     ol8_baseos_latest                                        2.3 M
 NetworkManager-libnm                                                     x86_64                                        1:1.36.0-4.0.1.el8                                                     ol8_baseos_latest                                        1.8 M
 NetworkManager-team                                                      x86_64                                        1:1.36.0-4.0.1.el8                                                     ol8_baseos_latest                                        153 k
 NetworkManager-tui                                                       x86_64                                        1:1.36.0-4.0.1.el8                                                     ol8_baseos_latest                                        345 k                                                                                                                                                                                 204 kB/s | 3.9 kB     00:00
...
...
...
Transaction Summary
==============================================================================================================================================================================================================================================================
Install    6 Packages
Upgrade  238 Packages

Total download size: 318 M
Downloading Packages:
(1/414): basesystem-11-5.el8.noarch.rpm                                                                                                                                                                                       342 kB/s |  10 kB     00:00
(2/414): bzip2-1.0.6-26.el8.x86_64.rpm                                                                                                                                                                                         13 MB/s |  60 kB     00:00
(3/414): bzip2-libs-1.0.6-26.el8.x86_64.rpm                                                                                                                                                                                    18 MB/s |  48 kB     00:00
(4/414): acl-2.2.53-1.el8.x86_64.rpm                                                                                                                                                                                          1.9 MB/s |  81 kB     00:00
(5/414): brotli-1.0.6-3.el8.x86_64.rpm                                                                                                                                                                                        7.3 MB/s | 323 kB     00:00
(6/414): cracklib-2.9.6-15.el8.x86_64.rpm                                                                                                                                                                                      17 MB/s |  93 kB     00:00
...
...
...

This intends to upgrade, reinstall, install or clean up packages, as well as run the packages’ scriptlets.

  Upgrading        : filesystem-3.8-6.el8.x86_64                                                                                                                                                                                                        3/822
  Upgrading        : tzdata-2022a-1.el8.noarch                                                                                                                                                                                                          4/822
  Reinstalling     : fontpackages-filesystem-1.44-22.el8.noarch                                                                                                                                                                                         5/822
...
  Running scriptlet: libproxy-0.4.15-5.2.el8.x86_64                                                                                                                                                                                                   113/822
...
 Installing       : libbpf-0.4.0-3.el8.x86_64                                                                                                                                                                                                        117/822
...
 Cleanup          : glib2-2.56.4-9.el8.x86_64                                                                                                                                                                                                        695/822

As with any upgrade, there is a verification step.

  Verifying        : acl-2.2.53-1.el8.x86_64                                                                                                                                                                                                            1/822
  Verifying        : acl-2.2.53-1.el8.x86_64                                                                                                                                                                                                            2/822
  Verifying        : basesystem-11-5.el8.noarch                                                                                                                                                                                                         3/822

At the end of the package installation we can review the packages that were upgraded, installed or reinstalled.

Upgraded:
  NetworkManager-1:1.36.0-4.0.1.el8.x86_64                   NetworkManager-libnm-1:1.36.0-4.0.1.el8.x86_64                     NetworkManager-team-1:1.36.0-4.0.1.el8.x86_64                      NetworkManager-tui-1:1.36.0-4.0.1.el8.x86_64
  PackageKit-1.1.12-6.0.1.el8.x86_64                         PackageKit-glib-1.1.12-6.0.1.el8.x86_64                            audit-3.0.7-2.el8.2.x86_64                                         audit-libs-3.0.7-2.el8.2.x86_64
...
  virt-what-1.18-13.el8.x86_64                               which-2.21-17.el8.x86_64                                           xfsprogs-5.4.0-1.0.1.el8.x86_64                                    yum-4.7.0-8.0.1.el8.noarch
  yum-utils-4.0.21-11.0.1.el8.noarch                         zlib-1.2.11-18.el8_5.x86_64
Installed:
  NetworkManager-initscripts-updown-1:1.36.0-4.0.1.el8.noarch   kernel-4.18.0-372.9.1.el8.x86_64   kernel-core-4.18.0-372.9.1.el8.x86_64   kernel-modules-4.18.0-372.9.1.el8.x86_64   libbpf-0.4.0-3.el8.x86_64   python3-netifaces-0.10.6-4.el8.x86_64
Reinstalled:
  abattis-cantarell-fonts-0.0.25-6.el8.noarch          acl-2.2.53-1.el8.x86_64                            basesystem-11-5.el8.noarch                     brotli-1.0.6-3.el8.x86_64                           bzip2-1.0.6-26.el8.x86_64
...              xkeyboard-config-2.28-1.el8.noarch             xz-5.2.4-3.el8.x86_64                               xz-libs-5.2.4-3.el8.x86_64

Complete!

At the end of the script execution, the output confirms that the sync was successful.

Sync successful.

The system will update the boot loader, generate the new grub configuration file and switch the default boot kernel to the Oracle Linux kernel (UEK).

Updating the GRUB2 bootloader.
Generating grub configuration file ...
done
Switching default boot kernel to the UEK.
Removing yum cache
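To double-check which kernel GRUB2 will boot by default before rebooting, grubby can be queried on EL8 systems:

# should point to the UEK vmlinuz after the switch
sudo grubby --default-kernel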

The option -V has created four log files for listing and verifying the packages before and after the migration.

Creating a list of RPMs installed after the switch
Verifying RPMs installed after the switch against RPM database
Review the output of following files:
/var/tmp/centosmigr-rpms-list-before.log
/var/tmp/centosmigr-rpms-verified-before.log
/var/tmp/centosmigr-rpms-list-after.log
/var/tmp/centosmigr-rpms-verified-after.log
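A quick way to spot what changed is to diff the before and after package lists:

# show the packages that were added, removed or rebased during the switch
diff /var/tmp/centosmigr-rpms-list-before.log /var/tmp/centosmigr-rpms-list-after.log | less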

And finally the script ends with an output confirming that the switch has been successfully completed. A reboot is now more than recommended.

Switch complete.
Oracle recommends rebooting this system.

Checking system after switching to Oracle Linux

Let’s check the new Linux version and confirm we are running Oracle Linux.

[centos@centosmigr ~]$ ls -l /etc/*release*
-rw-r--r--. 1 root root  32 May 13 01:14 /etc/oracle-release
-rw-r--r--. 1 root root 479 May 13 01:14 /etc/os-release
-rw-r--r--. 1 root root  45 May 13 01:14 /etc/redhat-release
lrwxrwxrwx. 1 root root  14 May 13 01:14 /etc/system-release -> oracle-release
-rw-r--r--. 1 root root  31 May 13 01:14 /etc/system-release-cpe

[centos@centosmigr ~]$ cat /etc/oracle-release
Oracle Linux Server release 8.6
[centos@centosmigr ~]$

Let’s check the running kernel.

[centos@centosmigr ~]$ uname -a
Linux centosmigr 4.18.0-305.3.1.el8.x86_64 #1 SMP Tue Jun 1 16:14:33 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

This is still the old kernel; we need to reboot the server first.

[centos@centosmigr ~]$ sudo systemctl reboot
Connection to 172.21.9.153 closed by remote host.
Connection to 172.21.9.153 closed.

After the reboot, the system starts on the new Oracle Linux UEK kernel.

[centos@centosmigr ~]$ uptime
 11:23:08 up 1 min,  1 user,  load average: 1.64, 0.53, 0.19

[centos@centosmigr ~]$ uname -a
Linux centosmigr 5.4.17-2136.307.3.6.el8uek.x86_64 #2 SMP Wed Jun 1 13:54:41 PDT 2022 x86_64 x86_64 x86_64 GNU/Linux

Let’s check the /boot directory.

[centos@centosmigr ~]$ ls -ltrh /boot
total 516M
-rw-------. 1 root root 4.0M Jun  1  2021 System.map-4.18.0-305.3.1.el8.x86_64
-rw-r--r--. 1 root root 188K Jun  1  2021 config-4.18.0-305.3.1.el8.x86_64
-rwxr-xr-x. 1 root root 9.6M Jun  1  2021 vmlinuz-4.18.0-305.3.1.el8.x86_64
drwxr-xr-x. 3 root root   17 Jun  2  2021 efi
drwxr-xr-x. 3 root root   21 Jun  2  2021 loader
-rwxr-xr-x. 1 root root 9.6M Jun  2  2021 vmlinuz-0-rescue-40ec35b37a254049be3d85f9173b39a1
-rw-------. 1 root root  62M Jun  2  2021 initramfs-0-rescue-40ec35b37a254049be3d85f9173b39a1.img
drwxr-xr-x. 2 root root   39 Jun  2  2021 grub
-rw-------. 1 root root 4.2M May 12 03:21 System.map-4.18.0-372.9.1.el8.x86_64
-rw-r--r--. 1 root root 192K May 12 03:21 config-4.18.0-372.9.1.el8.x86_64
-rwxr-xr-x. 1 root root  10M May 12 03:21 vmlinuz-4.18.0-372.9.1.el8.x86_64
-rw-------. 1 root root 4.3M Jun  1 21:00 System.map-5.4.17-2136.307.3.6.el8uek.x86_64
-rw-r--r--. 1 root root 213K Jun  1 21:00 config-5.4.17-2136.307.3.6.el8uek.x86_64
-rwxr-xr-x. 1 root root  10M Jun  1 21:00 vmlinuz-5.4.17-2136.307.3.6.el8uek.x86_64
lrwxrwxrwx. 1 root root   57 Jun  8 08:35 symvers-5.4.17-2136.307.3.6.el8uek.x86_64.gz -> /lib/modules/5.4.17-2136.307.3.6.el8uek.x86_64/symvers.gz
-rw-------. 1 root root  86M Jun  8 08:36 initramfs-5.4.17-2136.307.3.6.el8uek.x86_64.img
-rwxr-xr-x. 1 root root  10M Jun  8 08:36 vmlinuz-0-rescue-ec2f03aecd97c5f99095ff2f3f63a155
-rw-------. 1 root root  86M Jun  8 08:37 initramfs-0-rescue-ec2f03aecd97c5f99095ff2f3f63a155.img
-rw-------. 1 root root  27M Jun  8 08:42 initramfs-4.18.0-305.3.1.el8.x86_64kdump.img
lrwxrwxrwx. 1 root root   49 Jun  8 08:42 symvers-4.18.0-372.9.1.el8.x86_64.gz -> /lib/modules/4.18.0-372.9.1.el8.x86_64/symvers.gz
-rw-------. 1 root root  87M Jun  8 08:44 initramfs-4.18.0-372.9.1.el8.x86_64.img
-rw-------. 1 root root  82M Jun  8 08:45 initramfs-4.18.0-305.3.1.el8.x86_64.img
-rw-------. 1 root root  27M Jun  8 11:22 initramfs-5.4.17-2136.307.3.6.el8uek.x86_64kdump.img
drwx------. 4 root root   83 Jun  8 11:25 grub2

And we can confirm that there are no CentOS base packages installed on the system anymore.

[centos@centosmigr ~]$ rpm -qa | grep -i centos
[centos@centosmigr ~]$

Conclusion

Switching CentOS 8 to Oracle Linux 8 was quite fast and simple. Of course, it might be more complicated on a production system with several other third-party packages and software installed; some of those packages might need to be reinstalled. Packages that install third-party kernel modules will certainly not work anymore after the switch and would need to be reinstalled. Also, the script only enables the base repositories needed to switch to Oracle Linux; before upgrading some packages, additional repositories might need to be enabled first.
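As a sketch, enabling an additional repository afterwards is a one-liner with dnf; the repository id below (ol8_codeready_builder, the usual replacement for CentOS PowerTools) is an example to adapt to your needs:

# list all repositories known to the system, enabled or not
dnf repolist --all

# enable an additional repository (requires dnf-plugins-core)
sudo dnf config-manager --set-enabled ol8_codeready_builder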

The article Migrating CentOS 8 to Oracle linux OL8 first appeared on the dbi Blog.

Patch 19.15 is available for your ODA

Thu, 2022-06-09 09:36
Introduction

Patch 19.15 is now available on Oracle Database Appliance. It’s time to test it.

What’s new?

This version brings the April PSU for the database and grid homes with its bug fixes, as usual. It also brings the latest 21.6 database, but only for DB Systems (21c being an innovation release). As you may know, the new ODA X9-2 is here with 3 models: X9-2S, X9-2L and X9-2HA. This release is the first to support these brand new appliances.

The most important feature is the new “Data Preserving Reprovisioning”: you can now reimage your ODA without erasing the DATA disks. It could be the definitive solution for clean patching, bringing the benefits of a reimage without losing your data.

Which ODA is compatible with this 19.15?

The brand new X9-2S/L/HA ODAs (basically a refresh of the previous X8-2S/M/HA) are of course supported, as are the X8, X7 and X6 series. The X5-2HA is still on the compatibility list, but most of these machines are at least 6 years old now.

Is this patch a cumulative one?

This 19.15 can be applied on top of 19.11 or later. If you’re using older versions, you may think about using the new “Data Preserving Reprovisioning”, let’s call it DPR, as the promise is exciting. But this is not for everyone: the DPR feature is limited to these releases: 12.1.2.12, 12.2.1.4, 18.3, 18.5, 18.7 and 18.8. If you don’t use one of these releases, you will first need to jump to the next supported one. The DPR feature is also not available if you already use one of the 19.x versions.

Is there also a patch for my databases?

Only database versions 12.1 and 19c are now supported. Classic patching without DPR will preserve your existing binaries for unsupported versions.

Download the patch and clone files

Download the patch and the corresponding clones to be able to apply the complete patch.

  • 34069644 => the patch itself
  • 30403673 => the GI clone needed for deploying new version (mandatory)
  • 30403662 => the DB clone for deploying new version of 19c (if you use 19c databases)
  • 23494992 => the DB clone for deploying new version of 12.1 (if you use 12.1 databases)

Be sure to choose the very latest 19.15 version when downloading the clones, as the download link will first propose older versions.
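Once downloaded, it is worth comparing the checksums of the zip files with the digests displayed on the My Oracle Support download page, for example:

# compare the output with the SHA-256 digests shown on MOS
sha256sum p34069644_1915000_Linux-x86-64.zip p30403673_1915000_Linux-x86-64.zip p30403662_1915000_Linux-x86-64.zip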

In this demo, I will not be able to use DPR because my ODA is already running 19.14.

Prepare the patching

Before running pre-patch, please check these prerequisites:

  • filesystems must have at least 20% free space (this does not concern ACFS volumes); a quick check is shown after this list
  • additional RPMs installed manually should be removed
  • revert the profile scripts to the default ones (for the grid and oracle users)
  • make sure you can afford a longer-than-planned downtime: 4 hours is the bare minimum for patching and troubleshooting, and 1 day is never too much
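Here is a quick free space check on the main filesystems (paths to adapt to your layout):

# each local filesystem should show at least 20% free space
df -h / /u01 /opt /var /tmp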

You can use odabr to back up your filesystems to snapshots or to NFS, or simply back up all your important files to an NFS share in case the patching fails.
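odabr is provided by Oracle through a dedicated My Oracle Support note; a typical invocation to snapshot the local filesystems before patching looks like the following sketch (to be checked against the documentation of the odabr version you download):

# take LVM snapshots of the system filesystems before patching
/opt/odabr/odabr backup -snap

# list the snapshots that were created
/opt/odabr/odabr infosnap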

Version precheck

Start by checking the current version of all components:

odacli describe-component | grep -v ^$
System Version
---------------
19.14.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.14.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK
                                          19.14.0.0.0           up-to-date
GI
                                          19.14.0.0.220118      up-to-date
DB {
[ OraDB19000_home1 ]
                                          19.12.0.0.210720      19.14.0.0.220118
[ OraDB19000_home4,OraDB19000_home6,
OraDB19000_home7 ]                        19.14.0.0.220118      up-to-date
}
DCSCONTROLLER
                                          19.14.0.0.0           up-to-date
DCSCLI
                                          19.14.0.0.0           up-to-date
DCSAGENT
                                          19.14.0.0.0           up-to-date
DCSADMIN
                                          19.14.0.0.0           up-to-date
OS
                                          7.9                   up-to-date
ILOM
                                          5.0.2.24.r141466      up-to-date
BIOS
                                          52050300              up-to-date
SHARED CONTROLLER FIRMWARE
                                          VDV1RL04              up-to-date
LOCAL DISK FIRMWARE
                                          1132                  up-to-date
SHARED DISK FIRMWARE
                                          1132                  up-to-date
HMP
                                          2.4.8.0.600           up-to-date

Once the patch is registered in the ODA repository, the “Available Version” column will be updated with the versions provided within the patch.

Patching from 19.14 will normally be easy.

Preparing the patch and updating the DCS tools

Copy the patch files to a temporary directory on your ODA, then unzip them:

cd /opt/dbi/
for f in p*1915000*.zip; do unzip -n $f; done
Archive:  p30403662_1915000_Linux-x86-64.zip
 extracting: odacli-dcs-19.15.0.0.0-220425-DB-19.15.0.0.zip
  inflating: README.txt
Archive:  p30403673_1915000_Linux-x86-64.zip
 extracting: odacli-dcs-19.15.0.0.0-220425-GI-19.15.0.0.zip
Archive:  p34069644_1915000_Linux-x86-64.zip
 extracting: oda-sm-19.15.0.0.0-220530-server.zip

rm -rf p*1915000*.zip

Register the patch in the repository:

odacli update-repository -f /opt/dbi/oda-sm-19.15.0.0.0-220530-server.zip

odacli describe-component | grep -v ^$
System Version
---------------
19.14.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.14.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK
                                          19.14.0.0.0           19.15.0.0.0
GI
                                          19.14.0.0.220118      19.15.0.0.220419
DB {
[ OraDB19000_home1 ]
                                          19.12.0.0.210720      19.15.0.0.220419
[ OraDB19000_home4,OraDB19000_home6,
OraDB19000_home7 ]                        19.14.0.0.220118      19.15.0.0.220419
}
DCSCONTROLLER
                                          19.14.0.0.0           19.15.0.0.0
DCSCLI
                                          19.14.0.0.0           19.15.0.0.0
DCSAGENT
                                          19.14.0.0.0           19.15.0.0.0
DCSADMIN
                                          19.14.0.0.0           19.15.0.0.0
OS
                                          7.9                   up-to-date
ILOM
                                          5.0.2.24.r141466      up-to-date
BIOS
                                          52050300              up-to-date
SHARED CONTROLLER FIRMWARE
                                          VDV1RL04              up-to-date
LOCAL DISK FIRMWARE
                                          1132                  up-to-date
SHARED DISK FIRMWARE
                                          1132                  up-to-date
HMP
                                          2.4.8.0.600           2.4.8.0.601

Update the DCS tooling of your ODA:

/opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.15.0.0.0
sleep 60;  /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.15.0.0.0
/opt/oracle/dcs/bin/odacli update-dcsagent -v 19.15.0.0.0

Note that updating the DCS components is not done through a job:

sleep 180; odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 4
ID                                       Description                                                                 Created                             Status
---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
8b25f005-0b51-4fc3-bfbc-fc7fe28a2219     Repository Update                                                           June 9, 2022 8:14:53 AM CEST        Success
be22387a-e0e1-4c3e-9cd0-211700ba0679     DcsAdmin patching                                                           June 9, 2022 8:16:19 AM CEST        Success
7a995e7c-b2ba-4b76-b559-6a683062800a     DcsAgent patching                                                           June 9, 2022 8:19:26 AM CEST        Success

Now you can register GI and DB clones:

odacli update-repository -f /opt/dbi/odacli-dcs-19.15.0.0.0-220425-GI-19.15.0.0.zip
sleep 50; odacli update-repository -f /opt/dbi/odacli-dcs-19.15.0.0.0-220425-DB-19.15.0.0.zip

sleep 50; odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 3
ID                                       Description                                                                 Created                             Status
---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
51e165e6-eb8a-4421-885a-09aeacb91613     Repository Update                                                           June 9, 2022 8:25:32 AM CEST        Success
d0db4cf9-0f2a-439e-a219-0b30dbec76f3     Repository Update                                                           June 9, 2022 8:26:37 AM CEST        Success

odacli describe-component | grep -v ^$
System Version
---------------
19.15.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.15.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK
                                          19.14.0.0.0           19.15.0.0.0
GI
                                          19.14.0.0.220118      19.15.0.0.220419
DB {
[OraDB19000_home1]
                                          19.12.0.0.210720      19.15.0.0.220419
[OraDB19000_home4 [wra]]
                                          19.14.0.0.220118      19.15.0.0.220419
[OraDB19000_home6 [roman]]
                                          19.14.0.0.220118      19.15.0.0.220419
[OraDB19000_home7 [DHE,MAW,TRSNOC,
TSYCDB1,TSYCDB2]]                         19.14.0.0.220118      19.15.0.0.220419
}
DCSCONTROLLER
                                          19.15.0.0.0           up-to-date
DCSCLI
                                          19.15.0.0.0           up-to-date
DCSAGENT
                                          19.15.0.0.0           up-to-date
DCSADMIN
                                          19.15.0.0.0           up-to-date
OS
                                          7.9                   up-to-date
ILOM
                                          5.0.2.24.r141466      up-to-date
BIOS
                                          52050300              up-to-date
LOCAL CONTROLLER FIRMWARE {
[c3]
[c4]
                                          80007BC7              8000A87E
}
SHARED CONTROLLER FIRMWARE
                                          VDV1RL04              up-to-date
LOCAL DISK FIRMWARE
                                          1132                  up-to-date
SHARED DISK FIRMWARE
                                          1132                  up-to-date
HMP
                                          2.4.8.0.600           2.4.8.0.601

This update will include Oracle software and also some microcode updates on my X8-2M.

Pre-patching report

Let’s do the pre-patching test:

odacli create-prepatchreport -s -v 19.15.0.0.0

sleep 500; odacli describe-prepatchreport -i 508808ef-dddd-4121-b7e8-1ba717c895ba

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  508808ef-dddd-4121-b7e8-1ba717c895ba
            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER, SERVER]
                 Status:  SUCCESS
                Created:  June 9, 2022 8:42:08 AM CEST
                 Result:  All pre-checks succeeded

Node Name
---------------
dbi-oda-x8

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success   Validated minimum supported versions.
Validate patching tag           Success   Validated patching tag: 19.15.0.0.0.
Is patch location available     Success   Patch location is available.
Verify OS patch                 Success   Verified OS patch
Validate command execution      Success   Validated command execution

__ILOM__
Validate ILOM server reachable  Success   Successfully connected with ILOM
                                          server using public IP and USB
                                          interconnect
Validate supported versions     Success   Validated minimum supported versions.
Validate patching tag           Success   Validated patching tag: 19.15.0.0.0.
Is patch location available     Success   Patch location is available.
Checking Ilom patch Version     Success   Patch already applied
Patch location validation       Success   Successfully validated location
Validate command execution      Success   Validated command execution

__GI__
Validate GI metadata            Success   Successfully validated GI metadata
Validate supported GI versions  Success   Validated minimum supported versions.
Validate available space        Success   Validated free space under /u01
Is clusterware running          Success   Clusterware is running
Validate patching tag           Success   Validated patching tag: 19.15.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate ASM in online          Success   ASM is online
Validate kernel log level       Success   Successfully validated the OS log
                                          level
Validate minimum agent version  Success   GI patching enabled in current
                                          DCSAGENT version
Validate Central Inventory      Success   oraInventory validation passed
Validate patching locks         Success   Validated patching locks
Validate clones location exist  Success   Validated clones location
Validate DB start dependencies  Success   DBs START dependency check passed
Validate DB stop dependencies   Success   DBs STOP dependency check passed
Evaluate GI patching            Success   Successfully validated GI patching
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution

__SERVER__
Validate local patching         Success   Successfully validated server local
                                          patching
Validate command execution      Success   Validated command execution

On my configuration it didn’t work on the first try because of an old 19.12 GI. I removed it manually:

rm -fr /u01/app/19.12.0.0/grid/

Everything is OK to start patching.

Patching infrastructure and GI

Let’s start the update-server:

odacli update-server -v 19.15.0.0.0
odacli describe-job -i 6ad4a7f1-cce9-42c0-929e-e509383311b4

Job details
----------------------------------------------------------------
                     ID:  6ad4a7f1-cce9-42c0-929e-e509383311b4
            Description:  Server Patching
                 Status:  Success
                Created:  June 9, 2022 8:51:32 AM CEST
                Message:  Successfully patched GI with RHP

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validating GI user metadata              June 9, 2022 8:51:47 AM CEST        June 9, 2022 8:51:47 AM CEST        Success
Validate ILOM server reachable           June 9, 2022 8:51:47 AM CEST        June 9, 2022 8:51:47 AM CEST        Success
Validate DCS Admin mTLS setup            June 9, 2022 8:51:47 AM CEST        June 9, 2022 8:51:47 AM CEST        Success
Configure export clones resource         June 9, 2022 8:51:48 AM CEST        June 9, 2022 8:51:48 AM CEST        Success
Creating repositories using yum          June 9, 2022 8:51:48 AM CEST        June 9, 2022 8:51:51 AM CEST        Success
Updating YumPluginVersionLock rpm        June 9, 2022 8:51:51 AM CEST        June 9, 2022 8:51:51 AM CEST        Success
Applying OS Patches                      June 9, 2022 8:51:51 AM CEST        June 9, 2022 8:59:00 AM CEST        Success
Creating repositories using yum          June 9, 2022 8:59:00 AM CEST        June 9, 2022 8:59:01 AM CEST        Success
Applying HMP Patches                     June 9, 2022 8:59:01 AM CEST        June 9, 2022 8:59:20 AM CEST        Success
Patch location validation                June 9, 2022 8:59:20 AM CEST        June 9, 2022 8:59:20 AM CEST        Success
oda-hw-mgmt upgrade                      June 9, 2022 8:59:20 AM CEST        June 9, 2022 8:59:52 AM CEST        Success
OSS Patching                             June 9, 2022 8:59:52 AM CEST        June 9, 2022 8:59:53 AM CEST        Success
Applying Firmware Disk Patches           June 9, 2022 8:59:53 AM CEST        June 9, 2022 8:59:57 AM CEST        Success
Applying Firmware Controller Patches     June 9, 2022 8:59:57 AM CEST        June 9, 2022 9:05:01 AM CEST        Success
Checking Ilom patch Version              June 9, 2022 9:05:01 AM CEST        June 9, 2022 9:05:01 AM CEST        Success
Patch location validation                June 9, 2022 9:05:01 AM CEST        June 9, 2022 9:05:01 AM CEST        Success
Save password in Wallet                  June 9, 2022 9:05:01 AM CEST        June 9, 2022 9:05:02 AM CEST        Success
Disabling IPMI v2                        June 9, 2022 9:05:02 AM CEST        June 9, 2022 9:05:02 AM CEST        Success
Apply Ilom patch                         June 9, 2022 9:05:02 AM CEST        June 9, 2022 9:05:02 AM CEST        Success
Copying Flash Bios to Temp location      June 9, 2022 9:05:02 AM CEST        June 9, 2022 9:05:02 AM CEST        Success
Starting the clusterware                 June 9, 2022 9:05:02 AM CEST        June 9, 2022 9:06:49 AM CEST        Success
registering image                        June 9, 2022 9:06:49 AM CEST        June 9, 2022 9:06:49 AM CEST        Success
registering working copy                 June 9, 2022 9:06:49 AM CEST        June 9, 2022 9:06:49 AM CEST        Success
registering image                        June 9, 2022 9:06:49 AM CEST        June 9, 2022 9:06:49 AM CEST        Success
Creating GI home directories             June 9, 2022 9:06:49 AM CEST        June 9, 2022 9:06:49 AM CEST        Success
Extract GI clone                         June 9, 2022 9:06:49 AM CEST        June 9, 2022 9:06:49 AM CEST        Success
Provisioning Software Only GI with RHP   June 9, 2022 9:06:49 AM CEST        June 9, 2022 9:06:49 AM CEST        Success
Patch GI with RHP                        June 9, 2022 9:06:49 AM CEST        June 9, 2022 9:15:56 AM CEST        Success
Updating GIHome version                  June 9, 2022 9:15:56 AM CEST        June 9, 2022 9:16:00 AM CEST        Success
Validate GI availability                 June 9, 2022 9:16:13 AM CEST        June 9, 2022 9:16:13 AM CEST        Success
Patch KVM CRS type                       June 9, 2022 9:16:13 AM CEST        June 9, 2022 9:16:15 AM CEST        Success
Patch VM vDisks CRS dependencies         June 9, 2022 9:16:15 AM CEST        June 9, 2022 9:16:15 AM CEST        Success
Update System version                    June 9, 2022 9:16:15 AM CEST        June 9, 2022 9:16:15 AM CEST        Success
Cleanup JRE Home                         June 9, 2022 9:16:15 AM CEST        June 9, 2022 9:16:15 AM CEST        Success
Add SYSNAME in Env                       June 9, 2022 9:16:15 AM CEST        June 9, 2022 9:16:15 AM CEST        Success
Starting the clusterware                 June 9, 2022 9:16:15 AM CEST        June 9, 2022 9:16:15 AM CEST        Success
Setting ACL for disk groups              June 9, 2022 9:16:15 AM CEST        June 9, 2022 9:16:19 AM CEST        Success
Update lvm.conf file                     June 9, 2022 9:18:09 AM CEST        June 9, 2022 9:18:09 AM CEST        Success
Update previous workarounds              June 9, 2022 9:18:09 AM CEST        June 9, 2022 9:18:09 AM CEST        Success
preRebootNode Actions                    June 9, 2022 9:18:09 AM CEST        June 9, 2022 9:21:14 AM CEST        Success
Reboot Ilom                              June 9, 2022 9:21:14 AM CEST        June 9, 2022 9:21:14 AM CEST        Success

The server reboots 5 minutes after the patch ends. On my X8-2M, this server patching took about 30 minutes.

Let’s check the component versions now:

odacli describe-component | grep -v ^$ 
System Version
---------------
19.15.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.15.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK
                                          19.15.0.0.0           up-to-date
GI
                                          19.15.0.0.220419      up-to-date
DB {
[OraDB19000_home1]
                                          19.12.0.0.210720      19.15.0.0.220419
[OraDB19000_home4 [wra]]
                                          19.14.0.0.220118      19.15.0.0.220419
[OraDB19000_home6 [roman]]
                                          19.14.0.0.220118      19.15.0.0.220419
[OraDB19000_home7 [DHE,MAW,TRSNOC,
RTSCDB1,RTSCDB2]]                         19.14.0.0.220118      19.15.0.0.220419
}
DCSCONTROLLER
                                          19.15.0.0.0           up-to-date
DCSCLI
                                          19.15.0.0.0           up-to-date
DCSAGENT
                                          19.15.0.0.0           up-to-date
DCSADMIN
                                          19.15.0.0.0           up-to-date
OS
                                          7.9                   up-to-date
ILOM
                                          5.0.2.24.r141466      up-to-date
BIOS
                                          52050300              up-to-date
LOCAL CONTROLLER FIRMWARE {
[c3]                                    80000681              up-to-date
[c4]                                    8000A87E              up-to-date
}
SHARED CONTROLLER FIRMWARE              VDV1RL04              up-to-date
LOCAL DISK FIRMWARE                     1132                  up-to-date
SHARED DISK FIRMWARE                    1132                  up-to-date
HMP                                     2.4.8.0.601           up-to-date

This looks fine.

Patching the storage

Patching the storage is only needed for older ODAs or older patch levels. If you need to apply a patch on the storage, it’s easy: there is a pre-patch check, and then the patch itself:

odacli create-prepatchreport -st -v 19.15.0.0.0
odacli update-storage -v 19.15.0.0.0

For HA ODAs using RAC, patching can be done in a rolling fashion:

odacli update-storage -v 19.15.0.0.0 --rolling

I have never encountered trouble during storage patching, so it should be fine.

Patching the DB homes

The time needed to patch the DB homes depends on the number of DB homes and the number of databases. In this example, I will apply the patch on the latest DB home only:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
6f78a962-22b9-4dc4-b14f-6e5c8c81f248     OraDB19000_home1     19.12.0.0.210720                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 CONFIGURED
940087c7-feb2-4e51-88f7-77f3dcacd0a7     OraDB19000_home4     19.14.0.0.220118                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4 CONFIGURED
adcf2c0d-7082-4ee0-9431-be331107f368     OraDB19000_home6     19.14.0.0.220118                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_6 CONFIGURED
0f2eed26-e7ca-4021-9329-902a858ce3a1     OraDB19000_home7     19.14.0.0.220118                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_7 CONFIGURED

A pre-patching report is also needed here:

odacli create-prepatchreport -d -i 0f2eed26-e7ca-4021-9329-902a858ce3a1 -v 19.15.0.0.0
odacli describe-prepatchreport -i 38b56fb7-d582-450b-97ac-c515e43fd268

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  38b56fb7-d582-450b-97ac-c515e43fd268
            Description:  Patch pre-checks for [DB, ORACHKDB]: DbHome is OraDB19000_home7
                 Status:  FAILED
                Created:  June 9, 2022 9:57:47 AM CEST
                 Result:  One or more pre-checks failed for [ORACHK]

Node Name
---------------
dbi-oda-x8

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__DB__
Validate DB Home ID             Success   Validated DB Home ID:
                                          0f2eed26-e7ca-4021-9329-902a858ce3a1
Validate patching tag           Success   Validated patching tag: 19.15.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate minimum agent version  Success   Validated minimum agent version
Is GI upgraded                  Success   Validated GI is upgraded
Validate available space for    Success   Validated free space required under
db                                        /u01/app/odaorahome
Validate dbHomesOnACFS          Success   User has configured diskgroup for
configured                                Database homes on ACFS
Validate Oracle base            Success   Successfully validated Oracle Base
Is DB clone available           Success   Successfully validated clone file
                                          exists
Evaluate DBHome patching with   Success   Successfully validated updating
RHP                                       dbhome with RHP.  and local patching
                                          is possible
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Failed    Orachk validation failed: .
Validate command execution      Success   Validated command execution
Verify the Fast Recovery Area   Failed    AHF-2929: FRA space management
(FRA) has reclaimable space               problem file types are present
                                          without an RMAN backup completion
                                          within the last 7 days
Verify the Fast Recovery Area   Failed    AHF-2929: FRA space management
(FRA) has reclaimable space               problem file types are present
                                          without an RMAN backup completion
                                          within the last 7 days

I don’t care about Orachk recommendations on my databases as this is a test system. I will apply the patch on this DB home with the force option:

odacli update-dbhome -i 0f2eed26-e7ca-4021-9329-902a858ce3a1 -v 19.15.0.0.0 -f
odacli describe-job -i eaf625d9-0712-4c7f-a27f-d586a10f98e1

Job details
----------------------------------------------------------------
                     ID:  eaf625d9-0712-4c7f-a27f-d586a10f98e1
            Description:  DB Home Patching: Home Id is 0f2eed26-e7ca-4021-9329-902a858ce3a1
                 Status:  Success
                Created:  June 9, 2022 10:28:42 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Adding USER SSH_EQUIVALENCE              June 9, 2022 10:28:54 AM CEST       June 9, 2022 10:28:55 AM CEST       Success
Adding USER SSH_EQUIVALENCE              June 9, 2022 10:28:55 AM CEST       June 9, 2022 10:28:55 AM CEST       Success
Adding USER SSH_EQUIVALENCE              June 9, 2022 10:28:55 AM CEST       June 9, 2022 10:28:56 AM CEST       Success
Creating wallet for DB Client            June 9, 2022 10:29:34 AM CEST       June 9, 2022 10:29:34 AM CEST       Success
Patch databases by RHP                   June 9, 2022 10:29:34 AM CEST       June 9, 2022 10:36:53 AM CEST       Success
updating database metadata               June 9, 2022 10:36:53 AM CEST       June 9, 2022 10:36:53 AM CEST       Success
Set log_archive_dest for Database        June 9, 2022 10:36:53 AM CEST       June 9, 2022 10:36:57 AM CEST       Success
Patch databases by RHP                   June 9, 2022 10:36:57 AM CEST       June 9, 2022 10:44:57 AM CEST       Success
updating database metadata               June 9, 2022 10:44:58 AM CEST       June 9, 2022 10:44:58 AM CEST       Success
Set log_archive_dest for Database        June 9, 2022 10:44:58 AM CEST       June 9, 2022 10:45:01 AM CEST       Success
Patch databases by RHP                   June 9, 2022 10:45:01 AM CEST       June 9, 2022 10:49:22 AM CEST       Success
updating database metadata               June 9, 2022 10:49:22 AM CEST       June 9, 2022 10:49:22 AM CEST       Success
Set log_archive_dest for Database        June 9, 2022 10:49:22 AM CEST       June 9, 2022 10:49:25 AM CEST       Success
Update System version                    June 9, 2022 10:49:25 AM CEST       June 9, 2022 10:49:25 AM CEST       Success
TDE parameter update                     June 9, 2022 10:50:17 AM CEST       June 9, 2022 10:50:17 AM CEST       Success

A new DB home has been created and my databases are now linked to this new one:

odacli list-dbhomes


ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
6f78a962-22b9-4dc4-b14f-6e5c8c81f248     OraDB19000_home1     19.12.0.0.210720                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 CONFIGURED
940087c7-feb2-4e51-88f7-77f3dcacd0a7     OraDB19000_home4     19.14.0.0.220118                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4 CONFIGURED
adcf2c0d-7082-4ee0-9431-be331107f368     OraDB19000_home6     19.14.0.0.220118                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_6 CONFIGURED
0f2eed26-e7ca-4021-9329-902a858ce3a1     OraDB19000_home7     19.14.0.0.220118                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_7 CONFIGURED
534017ac-c521-4929-a8f8-32a64d67fb8e     OraDB19000_home8     19.15.0.0.220419                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_8 CONFIGURED

odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
83c90c99-feb2-4377-8d54-77c288e0ec93     roman      SI       19.14.0.0.220118     false      OLTP     odb1     ACFS       CONFIGURED   adcf2c0d-7082-4ee0-9431-be331107f368
44fa12df-c429-4aa5-ab60-f7f0d7856e18     wra        SI       19.14.0.0.220118     false      OLTP     odb1     ASM        CONFIGURED   940087c7-feb2-4e51-88f7-77f3dcacd0a7
e6be45b9-fe42-4035-9f15-3ee0ef8998e0     DHE        SI       19.15.0.0.220419     false      OLTP     odb1     ASM        CONFIGURED   534017ac-c521-4929-a8f8-32a64d67fb8e
dd6442a8-8593-4a17-8a7b-7430c828ad96     MAW        SI       19.15.0.0.220419     false      OLTP     odb1     ACFS       CONFIGURED   534017ac-c521-4929-a8f8-32a64d67fb8e
48a9de9d-18cc-4d7a-a84f-73d1f3f37973     TSYCDB1    SI       19.15.0.0.220419     true       OLTP     odb1s    ASM        CONFIGURED   534017ac-c521-4929-a8f8-32a64d67fb8e
fb8ec0cb-d9df-426c-9034-908100beba0b     TSYCDB2    SI       19.15.0.0.220419     true       OLTP     odb1s    ASM        CONFIGURED   534017ac-c521-4929-a8f8-32a64d67fb8e

The old DB home can now be removed safely:

odacli delete-dbhome -i 0f2eed26-e7ca-4021-9329-902a858ce3a1

If your databases were created with 19.11 or earlier versions, a parameter needs to be changed:

su - oracle
. oraenv <<< DBITST
sqlplus / as sysdba
alter system set "_enable_numa_support"=true scope=spfile sid='*';
exit
srvctl stop database -d DBITST_SITE1
srvctl start database -d DBITST_SITE1

This only concerns multi-processor ODAs (not the S models): it forces an instance to use local memory modules, i.e. those associated with the processor where the instance is running. This should improve overall performance.
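
To verify the change after the restart, a quick check can be done (hidden parameters only show up in v$parameter once they have been explicitly set):

su - oracle
. oraenv <<< DBITST
sqlplus / as sysdba
-- the hidden parameter is only listed once it has been set in the spfile
select name, value from v$parameter where name = '_enable_numa_support';
exit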

Patching the other DB homes is done the same way.

Remember that patching standby databases may raise an error, as datapatch cannot be applied on a mounted or read-only database.
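
If you hit this on a standby, a minimal approach (a sketch only, reusing the TSYCDB1 environment from the example below) is to check the role and open mode first, and to run datapatch manually on the primary side only:

su - oracle
. oraenv <<< TSYCDB1
sqlplus / as sysdba
-- datapatch only runs against a database opened read write (the primary)
select database_role, open_mode from v$database;
exit
# on the primary, if the SQL part of the patch was not applied automatically
$ORACLE_HOME/OPatch/datapatch -verbose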

I would recommend checking the patch level on each primary after patching each DB home:

su - oracle
. oraenv <<< TSYCDB1
sqlplus / as sysdba
set serverout on
exec dbms_qopatch.get_sqlpatch_status;
...

Patch Id : 33808367
    Action : APPLY
    Action Time : 09-JUN-2022 10:44:46
    Description : OJVM RELEASE UPDATE: 19.15.0.0.220419 (33808367)
    Logfile :
/u01/app/odaorabase/oracle/cfgtoollogs/sqlpatch/33808367/24680225/33808367_apply
_TSYCDB1_CDBROOT_2022Jun09_10_39_57.log
    Status : SUCCESS

Patch Id : 33806152
    Action : APPLY
    Action Time : 09-JUN-2022 10:44:46
    Description : Database Release Update : 19.15.0.0.220419 (33806152)
    Logfile :
/u01/app/odaorabase/oracle/cfgtoollogs/sqlpatch/33806152/24713297/33806152_apply
_TSYCDB1_CDBROOT_2022Jun09_10_39_57.log
    Status : SUCCESS

PL/SQL procedure successfully completed.
exit
Final checks

Let’s get the final versions:

odacli describe-component | grep -v ^$
System Version
---------------
19.15.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.15.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK
                                          19.15.0.0.0           up-to-date
GI
                                          19.15.0.0.220419      up-to-date
DB {
[OraDB19000_home1]
                                          19.12.0.0.210720      19.15.0.0.220419
[OraDB19000_home4 [wra]]
                                          19.14.0.0.220118      19.15.0.0.220419
[OraDB19000_home6 [roman]]
                                          19.14.0.0.220118      19.15.0.0.220419
[OraDB19000_home8 [DHE,MAW,RTSCDB1,
RTSCDB2]]                                 19.15.0.0.220419      up-to-date
}
DCSCONTROLLER
                                          19.15.0.0.0           up-to-date
DCSCLI
                                          19.15.0.0.0           up-to-date
DCSAGENT
                                          19.15.0.0.0           up-to-date
DCSADMIN
                                          19.15.0.0.0           up-to-date
OS
                                          7.9                   up-to-date
ILOM
                                          5.0.2.24.r141466      up-to-date
BIOS
                                          52050300              up-to-date
LOCAL CONTROLLER FIRMWARE {
[c3]                                   80000681               up-to-date
[c4]                                   8000A87E               up-to-date
}
SHARED CONTROLLER FIRMWARE              VDV1RL04              up-to-date
LOCAL DISK FIRMWARE                     1132                  up-to-date
SHARED DISK FIRMWARE                    1132                  up-to-date
HMP                                     2.4.8.0.601           up-to-date

Everything is fine. I kept the other DB homes on their old versions.

Cleanse the old patches

The old patches will never be used again, so don’t forget to remove previous patch files from the repository if your ODA has already been patched:

odacli cleanup-patchrepo -cl -comp db,gi -v 19.14.0.0.0
odacli describe-job -i 7eb35bf8-f4dc-4afa-bd85-4193a574996a

Job details
----------------------------------------------------------------
                     ID:  7eb35bf8-f4dc-4afa-bd85-4193a574996a
            Description:  Cleanup patchrepos
                 Status:  Success
                Created:  June 9, 2022 11:03:14 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Cleanup Repository                       June 9, 2022 11:03:14 AM CEST       June 9, 2022 11:03:14 AM CEST       Success
Cleanup JRE Home                         June 9, 2022 11:03:14 AM CEST       June 9, 2022 11:03:14 AM CEST       Success
Cleanup old ASR rpm                      June 9, 2022 11:03:14 AM CEST       June 9, 2022 11:03:14 AM CEST       Success

Put back your own settings
  • add your additional rpms manually if needed
  • put back your profile scripts for grid and oracle users
Patching an existing DB System

If you use DB Systems on your ODA, meaning that some of your databases are running in dedicated VMs, you will need to apply the patch inside each DB System.

odacli list-dbsystems
Name                  Shape       Cores  Memory      GI version          DB version          Status           Created                  Updated
--------------------  ----------  -----  ----------  ------------------  ------------------  ---------------  -----------------------  -----------------------
WSDBSYSTEMHD          odb2        10     16.00 GB    19.14.0.0.220118    19.14.0.0.220118    CONFIGURED       2022-05-05 11:57:47      2022-05-05 12:27:10
                                                                                                              CEST                     CEST
WSDBSYSTEM38          odb2        10     16.00 GB    19.14.0.0.220118    19.14.0.0.220118    CONFIGURED       2022-05-05 11:18:28      2022-05-05 11:47:36
                                                                                                              CEST                     CEST

In this example, I will need to connect to these 2 DB Systems and do the same upgrade I did on Bare Metal.
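
As a rough sketch, the sequence inside each DB System is the same as the one described above for Bare Metal (the DB home id below is a placeholder, and the 19.15 clones must be available to the DB System):

# connect to the DB System, then run the same steps as on Bare Metal
odacli create-prepatchreport -s -v 19.15.0.0.0
odacli update-server -v 19.15.0.0.0
odacli create-prepatchreport -d -i <dbhome_id> -v 19.15.0.0.0
odacli update-dbhome -i <dbhome_id> -v 19.15.0.0.0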

Remember that using multiple DB Systems is nice, but it means much more work when you need to patch.

Conclusion

This release is easy to apply when coming from 19.14. So keep your ODA up-to-date, or consider using the new Data Preserving Reprovisioning feature if you come from one of the older supported releases.

The article Patch 19.15 is available for your ODA first appeared on the dbi Blog.

First look at SQL Server 2022 Contained Availability Groups

Wed, 2022-06-08 17:21
Introduction

SQL Server 2022 introduces the new concept of Contained Availability Groups. This is something that DBAs have been waiting for since Availability Groups were introduced 10 years ago.

Contained Availability Groups enhance the Availability groups by providing the ability to replicate system objects (like SQL Agent jobs, Logins and Linked Servers) between your database replicas.

In this blog post, using SQL Server 2022 CTP2.0 we will have a first look at the upcoming Contained Availability Groups.

Contained Availability Groups

Since the introduction of Availability Groups with SQL Server 2012, synchronization between replicas has only covered user databases.
There are challenges when applications also rely on objects such as users, logins, permissions, Agent jobs, etc., which are stored in the system databases (master or msdb).
These objects must be replicated manually by a DBA, or scripted, for example with dbatools.

Contained Availability Groups solve this problem by automatically creating a master and an msdb database for each Availability Group, which then automatically replicates the objects created in its context.

Creating a Contained Availability Group

The SSMS Wizard contains a new Checkbox that is not checked by default.

There’s a new T-SQL keyword for the CREATE AVAILABILITY GROUP command:

CREATE AVAILABILITY GROUP [ContainedAG02]
WITH (
	AUTOMATED_BACKUP_PREFERENCE = PRIMARY,
	REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 0,
	CONTAINED
)
FOR DATABASE [appdb02] 
REPLICA ON N'SQL19VM1\SQL2022A' [...]
Contained Availability Group in SSMS

The master and msdb databases are visible both under the Availability Databases folder in SSMS and also in the main “User” Databases list.

Adding another Contained AG brings of course more of these databases.

DMV change

Notice there’s a new column in the sys.availability_groups DMV called is_contained.
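
A quick query is enough to spot which Availability Groups are contained (it only reads the new column mentioned above):

SELECT name, is_contained
FROM sys.availability_groups;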

Connecting through the Listener

What’s also interesting is that when connecting through the listener, you only see the databases belonging to the Contained AG associated with that listener.

This is not the behavior with a non-contained, “normal” Availability Group.

Creating an Agent Job

I have created 2 Agent Jobs with different scopes. The scope is defined at connection time, as there is no change for now in the “New Job” wizard to define whether a Job belongs to a Contained AG or to the instance.

-- Connect to primary replica on default database
USE msdb
GO
DECLARE @jobId BINARY(16)
EXEC  msdb.dbo.sp_add_job @job_name=N'test_ContainedAG', 
		@enabled=1, 
		@notify_level_eventlog=0, 
		@notify_level_email=2, 
		@notify_level_page=2, 
		@delete_level=0, 
		@category_name=N'[Uncategorized (Local)]', 
		@owner_login_name=N'sa', @job_id = @jobId
GO
-- Connect to ContainedAG01 on default database
USE msdb
GO
DECLARE @jobId BINARY(16)
EXEC  msdb.dbo.sp_add_job @job_name=N'test_ContainedAG', 
		@enabled=1, 
		@notify_level_eventlog=0, 
		@notify_level_email=2, 
		@notify_level_page=2, 
		@delete_level=0, 
		@category_name=N'[Uncategorized (Local)]', 
		@owner_login_name=N'sa', @job_id = @jobId
GO

The result, as shown in SSMS, is that the Contained AG shows only its related Jobs. What I find disturbing is that, as a sysadmin connected to my instance, I don’t see the Contained AG Jobs but only the instance-scoped one.

I can see this causing a lot of confusion when administering instances with multiple Contained AGs and dozens of Jobs.

The same can be observed with T-SQL.

select name
from msdb..sysjobs
go

select name
from ContainedAG01_msdb..sysjobs
Creating a Login

The same thing goes for Logins and Users; no change to the UI. They have to be created in the correct scope in T-SQL.

Logins when connected to the listener
Logins when connected to the instance
Performing a Failover

The Failover wizard is showing the master and msdb databases as affected by the Failover operation.

All objects related to the Contained AG (only Logins and Jobs here) are also available on the secondary replica after a failover, without the need for any manual object synchronization by a DBA.

SELECT @@SERVERNAME AS ServerName
	, ag.name AS AgName
	, dc.[database_name]
	, rs.is_primary_replica
	, rs.synchronization_state_desc 
FROM sys.dm_hadr_database_replica_states AS rs
	INNER JOIN sys.availability_databases_cluster AS dc
		ON rs.group_id = dc.group_id 
		AND rs.group_database_id = dc.group_database_id
	INNER JOIN sys.availability_groups AS ag
		ON ag.group_id = rs.group_id
WHERE is_primary_replica = 1

select name, sysadmin
from master.sys.syslogins
where name not like '##%' and name not like 'NT%'

select name
from msdb..sysjobs
Contained AG objects after a Failover
Deleting a Contained Availability Group

Deleting the Contained Availability Group will not drop the master and msdb databases.

Reuse old Contained Availability Group system databases

I can recreate a new Availability Group now by reusing the old msdb and master databases.
This is when the “Reuse System Database” can be checked.

As you can see just above, I tried to use a new Contained Availability Group name with the suffix “_bis”.
It didn’t go as planned: the master and msdb databases were not detected as such but were considered simple user databases.

The second attempt with the original name did not work any better.

Actually, the master and msdb need to be unselected on this Wizard panel. Only User databases have to be selected.
The msdb and master databases will be reused based on their name matching with the Contained AG Name.

This is something that could also be improved in the Wizard to make it clear these databases are of a special kind and maybe should be added as User databases in a Contained AG when selecting the “Reuse System database” option.

Final words

This blog post was just a basic introduction to Contained Availability Groups, playing with SQL Server 2022 CTP 2.0 and the SSMS 19 preview. The Contained AG feature seems to be working as expected but might need some important improvements to both SSMS and the DMVs to make the scope (Contained or instance) of objects (Logins, Agent Jobs, etc.) clearer and easier to manage.


The article First look at SQL Server 2022 Contained Availability Groups first appeared on the dbi Blog.

ODA X9-2: it’s finally here!

Wed, 2022-06-08 08:17

Introduction

It’s been nearly 3 years since the Oracle Database Appliance X8-2 reached the market, and in 2022 it’s still a great performer. But now it’s time for a refresh: the X9-2 specs have just been published by Oracle, and machines should soon be available. So what’s new?

What is Oracle Database Appliance?

ODA, or Oracle Database Appliance, is an engineered system from Oracle. Basically, it’s an x86-64 server with a dedicated software distribution including Linux, Oracle database software, a Command Line Interface (CLI) and a Browser User Interface (BUI). The goal is to simplify the database lifecycle and maximize performance.

Changes on the hardware side

If you remember, ODA X8-2 was available in 3 flavors:

  • X8-2S with 1 CPU, 192GB of RAM and 2x 6.4TB NVMe SSDs
  • X8-2M with 2 CPUs, 384GB of RAM and 2x to 12x 6.4TB NVMe SSDs
  • X8-2HA with 2 nodes (similar to X8-2M without the disks) and one or two disk enclosures with various configurations (SSDs or spinning disks)

Obviously, the previous Intel Xeon Gold 5218 is replaced by a more modern CPU, the Xeon Silver 4314, with basically the same number of cores (16) and a slightly higher base speed (2.4GHz). According to Intel’s spec sheets, there are not that many differences between these two, apart from a bigger cache and 10nm vs 14nm technology, but you may also notice a lower maximum frequency, down from 3.9GHz to 3.4GHz. In the real world, don’t expect anything really significant on the CPU side.

The internal system SSDs are surprisingly smaller than in the previous generation, down from 480GB to 240GB. It shouldn’t be an issue because the Oracle software now mainly resides on ACFS volumes and no longer on local disks.

The X9-2S replaces the X8-2S with a nice upgrade of base memory from 192GB to 256GB, upgradable with a single expansion to reach 512GB. The X9-2S is still limited to 2 data disks without any expansion, and is thus limited to small needs.

The X8-2M is not replaced by another M iteration, but by a new X9-2L instead. This name is much more suitable for this very capable model, which fits the needs of 80% of our customers. Basically, the X9-2L is very similar to the X8-2M apart from upgraded base memory (512GB) and maximum memory (1TB).

For these 2 models, X9-2S and X9-2L, the disks are slightly bigger compared to the old generation, from 6.4TB to 6.8TB. Only the X9-2L can go beyond 2 disks, up to 12 disks for a maximum raw capacity of 81TB. But as with the CPUs, it shouldn’t change anything when sizing your new ODA infrastructure.

The X9-2HA is not that different from the X8-2HA: there is still a High Performance (HP) version and a High Capacity (HC) version, the first one being composed of SSDs only, the second one being a mix of SSDs and HDDs. Only the HC gets a storage bump thanks to bigger HDDs: from 14TB to 18TB each.

Regarding the network interfaces, nothing is new here. You can have up to 3 of them (2 are optional), and for each you will choose between a quad-port 10GBase-T (copper) and a two-port 10/25GbE (SFP28). Remember that SFP28 won’t connect to a 1Gbps fiber network.

What’s new regarding the 19.15 software bundle?

As you may know, the latest software bundle, 19.15, associated with this new piece of hardware is available for the X9-2 as well as for older ODAs, so everyone will benefit from this software update (the oldest supported ODA being the X5-2).

The most important new feature brought by this version is “Data Preserving Reprovisioning”. Everyone who has worked on ODA for years knows how tough it can sometimes be to patch an ODA, and how long it can take if you need intermediate patches. You could always do a reimaging, but then all databases need to be restored. This new feature combines both advantages: patching is non-destructive, and reimaging is clean. Why not reimage without erasing the data disks? This is now possible, and it may replace traditional patching for good.

The other improvements are:

  • more flexibility on memory size for DB Systems, if you use these cloud-like virtualized databases
  • OVM to KVM migration, for those still using virtualized ODAs that have not yet been migrated
  • Data Guard configuration registration, if you manually configured Data Guard and would like to register this configuration in the ODA registry as if it were created with odacli

What are the differences between the 3 models?

The X9-2S is an entry price point for a small number of small databases. The X9-2L is much more capable and can take disk expansions. Even a big infrastructure with hundreds of databases can easily fit on several X9-2Ls. The third model is for RAC users, because High Availability is sometimes mandatory. As the disk capacity is much higher, a big infrastructure can be consolidated onto a very small number of HA ODAs.

Model          DB Edition  Nodes  U     RAM        RAM max     RAW TB  RAW TB max  Base price
ODA X9-2S      SE2/EE      1      2     256GB      512GB       13.6    13.6        19'980$
ODA X9-2L      SE2/EE      1      2     512GB      1024GB      13.6    81.6        32'400$
ODA X9-2HA HP  SE2/EE      2      8/12  2x 512GB   2x 1024GB   46      368         83'160$
ODA X9-2HA HC  SE2/EE      2      8/12  2x 512GB   2x 1024GB   390     740         83'160$

Which one should you choose?

If your databases can comfortably fit in the S model, don’t hesitate, as you will probably never need more. The ODA X9-2S is the perfect choice for those using Standard Edition 2. Take a second one with Dbvisit Standby and it’s a real bargain for a disaster-protected Oracle database environment.

The most interesting model is the new L, as the M was before. The L is quite affordable and extremely dense in terms of available TB (81TB in 2U). And it’s upgradable in case you don’t buy it fully loaded from the start.

If you still want/need RAC and the associated complexity, the HA is for you and will leverage your Enterprise Edition databases with their options.

Don’t forget that you should ideally order at least 2 ODAs for Disaster Recovery purposes, using Data Guard (EE) or Dbvisit Standby (SE2). A Disaster Recovery setup is nowadays mostly used for non-disaster scenarios: patching with minimal downtime, server maintenance, load balancing, …

My personal thought: I would prefer 2x ODA X9-2L over 1x ODA X9-2HA. NVMe speed, no RAC and a single box is definitely better. And extreme consolidation may not be the best solution.

What about the licenses and the support?

The ODA is not sold with database licenses: you need to bring yours or buy them at the same time. With Standard Edition 2, you’ll need 1x license per ODA S and 2x per ODA L. 4x licenses are required for the HA model, but using SE2 on an X9-2HA does not make much sense.

If you’re using Enterprise Edition, you’ll need at least 1 license on the S and L models (2 activated cores) and at least 2 licenses on the HA (2 activated cores per node). Enabling your EE license on an ODA will actually decrease the number of active cores on the server to make sure you are compliant, but it doesn’t prevent you from using unlicensed options. You can also use CPU pools to keep the remaining CPUs available for other purposes, for example running application VMs.

Regarding support, as with other hardware vendors, you’ll have to pay for your ODA to be supported in case of hardware or software failure. The 1st year of support will usually be part of your initial order but is not included in the server price.

Support for the database licenses is the same as on other platforms. Don’t forget that only 19c databases are now covered by Premier Support.

Conclusion

The ODA X9-2 is a little bit disappointing when looking at the specs, but the X8-2 was already perfectly balanced. So this refresh is nice, and the small improvements are welcome. Prices are not much higher compared to X8-2 prices two and a half years ago, which is something everyone will appreciate.

When comparing software features, the improvement is much bigger. Oracle has done a huge amount of work on this part: the ODA is now a real appliance with distinctive features compared to other on-premises solutions. And this can be seen among our customers: most of those using these engineered systems renew their old ODAs with newer ones.

The article ODA X9-2: it’s finally here! first appeared on the dbi Blog.

SQL Server 2022 Parameter Sensitive Plan optimization

Wed, 2022-06-01 09:35
Introduction

The Intelligent Query Processing (IQP) feature family is extended with SQL Server 2022.
One of the most anticipated features is the Parameter Sensitive Plan optimization.

I started to test this new feature. In this post, you will find some information to understand how it works and make your first tests too.

The issue with Parameter Sensitive Plan

Parameter Sensitive Plan, also known as “Parameter Sniffing” is a scenario caused by non-uniform data distribution where a single cached execution plan for a parameterized query performs poorly for some parameter values.

A few options are available to deal with a Parameter Sensitive Plan query:

  • Use the RECOMPILE query hint to force a new plan compilation for all executions
  • Use the OPTIMIZE FOR hint to generate an execution plan for a specific parameter value
  • Force the last known good plan with Query Store

All the methods mentioned above require manual intervention, either at the level of the SQL code to add a query hint, or by a DBA forcing a particular execution plan.
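
As a reminder, here is a minimal sketch of the first two options, reusing the query from the demo further down (the OPTIMIZE FOR value is just an illustration):

-- Option 1: force a fresh compilation at every execution
EXEC sp_executesql 
	N'select TransactionId, Quantity, ActualCost, TransactionDate
	from dbo.bigTransactionHistory
	where TransactionDate = @date
	option (RECOMPILE)'
	, N'@date datetime'
	, '2022-06-01 00:00:00';

-- Option 2: always optimize the cached plan for one representative value
EXEC sp_executesql 
	N'select TransactionId, Quantity, ActualCost, TransactionDate
	from dbo.bigTransactionHistory
	where TransactionDate = @date
	option (OPTIMIZE FOR (@date = ''2004-06-01''))'
	, N'@date datetime'
	, '2022-06-01 00:00:00';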

New feature: PSP Optimization

This new PSP optimization feature will be introduced with SQL Server 2022 and enabled by default with Compatibility Level 160.

Even though Query Store will be enabled by default with SQL Server 2022, PSP optimization does not require Query Store to be enabled, unlike some other IQP features.

This feature introduces 2 new concepts. To quote the documentation:

For eligible plans, the initial compilation produces a dispatcher plan that contains the PSP optimization logic called a dispatcher expression. A dispatcher plan maps to query variants based on the cardinality range boundary values predicates.

The idea is as follows: an eligible query will get a dispatcher plan containing the dispatcher expression. Each significant set of parameters gets its own query variant, an execution plan optimized for these parameters.

PSP Optimization Demo
Prerequisites
SQL Server 2022 and Compatibility Level 160

For this demo, you obviously need SQL Server 2022. I’m using the first public preview, CTP 2.0.
As mentioned just above, the prerequisite for this feature is Compatibility Level 160.
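
If your database still runs with an older compatibility level, switching it is a one-liner (AdventureWorks being the database used in the demo below):

ALTER DATABASE [AdventureWorks] SET COMPATIBILITY_LEVEL = 160;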

Enable the feature

The PSP Optimization feature is enabled by default. You can enable/disable it with the following command:

ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = ON
Reasons for PSP optimization being skipped

I had difficulty producing a scenario that triggers PSP optimization.
Using the documented XE events, I found some reasons why PSP skipped my queries: SkewnessThresholdNotMet, UnsupportedComparisonType or ConjunctThresholdNotMet.

I do not know what these thresholds are. I just used a larger table and a simpler query for the demo.
There are currently 32 reasons listed in the XE “psp_skipped_reason_enum” that you can get with this query:

SELECT name, map_value
FROM sys.dm_xe_map_values 
WHERE name ='psp_skipped_reason_enum' 
ORDER BY map_key
Demo

I used the bigTransactionHistory table that I slightly modified to get the following data distribution producing a parameter sniffing scenario.
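
The distribution chart itself is not part of this text version, but a simple aggregation like the one below gives an idea of the skew (this is just how I would check it, not the script I used to modify the table):

select TransactionDate, count(*) as rows_per_date
from dbo.bigTransactionHistory
group by TransactionDate
order by count(*) desc;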

I ran the following query twice with different parameters and PSP optimization enabled.

EXEC sp_executesql 
	N'
	select TransactionId, Quantity, ActualCost, TransactionDate
	from dbo.bigTransactionHistory
	where TransactionDate = @date'
	, N'@date datetime'
	, '2004-06-01 00:00:00';
GO
EXEC sp_executesql 
	N'
	select TransactionId, Quantity, ActualCost, TransactionDate
	from dbo.bigTransactionHistory
	where TransactionDate = @date'
	, N'@date datetime'
	, '2022-06-01 00:00:00';
GO

I get 2 different execution plans, without forcing a Recompile or forcing a plan myself.

The parameter_sensitive_plan_optimization Extended Event was fired during both executions of the query. We can notice the variant_id information.

The execution plan shows a new hint option “PLAN PER VALUE” added to the query text:

select TransactionId, Quantity, ActualCost, TransactionDate
	from dbo.bigTransactionHistory
	where TransactionDate = @date option (PLAN PER VALUE(QueryVariantID = 1, predicate_range([AdventureWorks].[dbo].[bigTransactionHistory].[TransactionDate] = @date, 100.0, 10000.0)))
select TransactionId, Quantity, ActualCost, TransactionDate
	from dbo.bigTransactionHistory
	where TransactionDate = @date option (PLAN PER VALUE(QueryVariantID = 3, predicate_range([AdventureWorks].[dbo].[bigTransactionHistory].[TransactionDate] = @date, 100.0, 10000.0)))

Based on the parameter value provided when running the query, SQL Server will choose the plan to be used at runtime.

There’s a new “Dispatcher” section in the XML execution plan containing the dispatcher “expression”.

Even though Query Store is not required for PSP optimization to work, it is useful to have it enabled because you will get information about your query variants in a new DMV: sys.query_store_query_variant.
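
A minimal query against this new view could look like this (assuming the column names documented for CTP 2.0):

SELECT v.query_variant_query_id, v.parent_query_id, v.dispatcher_plan_id
FROM sys.query_store_query_variant AS v;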

The Query Store report doesn’t show an aggregated view of all variants at once. Looking for query_id 4 doesn’t show anything. That’s something that could be useful in the next versions of SSMS.

The query_hash in the sys.dm_exec_query_stats DMV is common to all variants, so it’s possible to determine the aggregate resource usage of queries that differ only by input parameter values.
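
For example, a minimal aggregation sketch (the LIKE filter is just one way to narrow it down to the demo query):

SELECT qs.query_hash,
	SUM(qs.execution_count)     AS total_executions,
	SUM(qs.total_worker_time)   AS total_cpu_time,
	SUM(qs.total_logical_reads) AS total_logical_reads
FROM sys.dm_exec_query_stats AS qs
	CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
WHERE t.[text] LIKE '%bigTransactionHistory%'
GROUP BY qs.query_hash;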

The plan cache shows the plan for each variant and the dispatcher.

SELECT 
	p.usecounts, p.cacheobjtype
	, p.objtype, p.size_in_bytes
	, t.[text]
	, qp.query_plan
FROM sys.dm_exec_cached_plans p
	CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) t
	CROSS APPLY sys.dm_exec_query_plan(p.plan_handle) AS qp 
WHERE t.[text] like '%TransactionDate%'
  AND p.objtype = 'Prepared'
ORDER BY p.objtype DESC

Although the Dispatcher plan is the largest, it only contains the XML dispatcher section mentioned above.

Conclusion

The Parameter Sensitive Plan optimization works as described in SQL Server 2022 CTP 2.0. There’s a lot to learn about this feature.
We do not yet know precisely what the conditions are for a query to be eligible, and we don’t know yet what the side effects are, if any.
This is a very promising feature that could help stabilize database performance and make it more predictable in some cases.

The article SQL Server 2022 Parameter Sensitive Plan optimization first appeared on the dbi Blog.

Automated patching at Idorsia with AWS Systems Manager

Wed, 2022-06-01 03:01

Server patching is very important but can be very challenging. One of our customers, Idorsia Pharmaceuticals Ltd, successfully implemented AWS Systems Manager (SSM) to automate patching not only of instances running in the AWS Cloud but also of on-premises instances, no matter whether they are bare-metal or virtual machines.

What is AWS Systems Manager?

AWS Systems Manager provides many capabilities sorted into different categories, and not all of them were used in the scope of this project. The components involved in our project are:

  • Maintenance Windows
  • Patch Manager and patch baselines
  • AWS SSM agent
  • Fleet Manager
  • Automation
Why use AWS Systems Manager?

Idorsia was looking for a solution covering different OS types: it was necessary to be able to patch Windows Server as well as different Linux distributions on a regular basis. The second point was to have a central solution managing both cloud instances and on-premises servers.

SSM supports a wide range of operating systems, which covers the first requirement; see the list of supported OS.

On-premises instances can be managed very easily once the AWS SSM agent is installed. To work in a hybrid environment, you need to create an activation code and use it to register the server as a managed instance in the Fleet Manager.
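
As an illustration, a minimal sketch of that activation flow could look like this (role name, region and instance name are placeholders, not Idorsia’s actual setup):

# on a management host: create an activation (returns an ActivationId and an ActivationCode)
aws ssm create-activation \
    --default-instance-name "onprem-server" \
    --iam-role "SSMServiceRole" \
    --registration-limit 10 \
    --region eu-central-1

# on the on-premises Linux server: register the SSM agent with that code
sudo amazon-ssm-agent -register -code "<activation-code>" -id "<activation-id>" -region "eu-central-1"
sudo systemctl restart amazon-ssm-agent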

Implementation

Some Amazon Machine Images (AMIs) are provided with the agent already installed. So, the first step was to deploy the AWS SSM agent on all the remaining servers in scope.

For a hybrid environment, the server is associated with an IAM service role during registration, whereas an IAM instance profile can be attached directly to EC2 instances. In both cases, the policy “AmazonSSMManagedInstanceCore” should at minimum be part of the role, which can be further customized based on customer needs.

Once all servers are available as managed instances in the Fleet Manager, we can sort the instances into different patch groups. We don’t want to apply patches on all servers at exactly the same date and time: it’s better to apply even security patches on test environments before moving to Production.

Some patches may be released in the middle of a patch campaign. To avoid installing in Production patches not yet installed in Test, we created different patch baselines for Test and Prod. All the magic happens within several Maintenance Windows, which define the date/time and the different steps of the patching.
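
As a hedged example of what such a maintenance window could look like with the AWS CLI (names, schedule and tag values below are illustrative, not the customer’s real configuration):

# weekly 4-hour window for the Test patch group
aws ssm create-maintenance-window \
    --name "patch-test-servers" \
    --schedule "cron(0 22 ? * TUE *)" \
    --duration 4 \
    --cutoff 1 \
    --allow-unassociated-targets

# attach the instances tagged with the Test patch group to the window
aws ssm register-target-with-maintenance-window \
    --window-id "mw-0123456789abcdef0" \
    --resource-type "INSTANCE" \
    --targets "Key=tag:Patch Group,Values=Test"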

After patching, the SSM agent reports the status to the AWS Console. We can check the compliance status for each server in the Patch Manager.

Some challenges

We had several challenges during the implementation of the whole process. In some cases the instances were stopped at the time of patching, so we added a task at the beginning of the maintenance window to start them. However, this was sometimes not sufficient: the customer was getting timeouts preventing the patching from completing on time for some instances. We created a new document based on “AWS-RunPatchBaseline” to customize the timeouts.

We also had an issue with the default CentOS repositories, where some patches got installed in Production even though they were not yet installed in Test. This is covered in a separate blog post from Daniel: Attaching your own CentOS 7 yum repository to AWS SSM (https://www.dbi-services.com/blog/attaching-your-own-centos-7-yum-repository-to-aws-ssm)

What’s next

The customer is now considering using AWS Systems Manager for reporting patching compliance on other machines like AWS Workspaces.

The article Automated patching at Idorsia with AWS Systems Manager first appeared on the dbi Blog.

SQL Server 2022 public preview available

Tue, 2022-05-31 13:15

This is a small blog post to share that the SQL Server 2022 public preview is available for download.

The Microsoft announcement blog post and a summary of the new features are available here:

Announcing SQL Server 2022 public preview: Azure-enabled with continued performance and security innovation

I have already installed it. No crazy change regarding the installation Wizard.

Just one thing: the new “SQL Server extension for Azure” shared feature is checked by default.

If you are just using SQL Server on-premises, you will have to disable it because it requires credentials later in the Setup steps.

Welcome to 2022 !

I have already noticed no less than 10 new database scoped options. There are so many new things coming with this release!
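
A quick way to list them is to query sys.database_scoped_configurations (nothing 2022-specific in the query itself):

SELECT configuration_id, name, value, is_value_default
FROM sys.database_scoped_configurations
ORDER BY name;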

Notice also that the recommended version of SSMS to use with 2022 is the preview version number 19 available for download using the following link.
Download SQL Server Management Studio (SSMS) 19 (Preview)

I will now start to play with the new features. The ones that interest me most are those related to performance and Query Store, especially the “Parameter Sensitive Plan optimization” feature.

The article SQL Server 2022 public preview available first appeared on the dbi Blog.


DevOpsDays Geneva – 2022

Sun, 2022-05-22 18:12

Finally… this young event (3rd edition) for the Romandie region is back after 2 years!

It brought together more than 400 people involved in DevOps culture for 2 intensive days, allowing them to share knowledge and network in a great atmosphere.

The first day started with a workshop by Nicolas Thomas from Uleska about Application Security Testing Orchestration (ASTO). The idea is to implement security at each step and each iteration of a product lifecycle, conducted by a function called AppSec, of course driven by the security team and relying on all the standard Application Security Testing (AST) methods, for example DAST, IaC, SAST, and so on…

One important fact when exposing an application to the internet is that its attack surface can never be zero, and the more features you deploy, the more risk you take on.

 

After that, I followed the talk about Tekton by Giovanni Galloro from Google, who promoted the product for CI/CD. Tekton is well known to Jenkins X users because it is embedded with other tools, but this presentation focused on the functionality it offers, with tasks, pipelines and triggers, and ended with a demo.

 

Before lunch, we had another talk by Denis Jannot from Solo.io, who described how to manage communication from outside the pods (ingress controller) and from outside Kubernetes in general (gateway), in other words service mesh. Solo.io provides the Gloo Edge product, for which Istio is an equivalent. Their solution simplifies the components compared to using Istio alone, allowing easy management of mesh functionalities such as an external auth server or a rate-limiting server, plus additional capabilities through Gloo filters in the ingress gateway (WAF, data loss prevention, JWT, transformations, …).

Why reinvent the wheel?

 

After a great lunch, it was time to do some networking around the contest organized by the event. I was happy to meet many people from my past experiences in Geneva and Lausanne.

 

Back to the talks at the beginning of the afternoon, I followed the “show” by Charles Laziosi and Vincent Journel, who started with a real-life situation: you use an application on your mobile, you experience a network outage, and what a mess when you get the famous T-rex with the desperate message “There is no Internet connection”.

This is where their topic, “How offline-first architecture could save your apps!”, started. It changes the paradigm: instead of expecting no outage, an offline-first architecture treats connectivity as an opportunity to sync all your changes with the server. Brilliant!

 

Right after came David Pilato from Elastic with the topic “Hunting (and stopping!) threats with Elastic Security”. He showed the power of observability when using the Elasticsearch platform to collect and compute metrics that give interesting insights about threats and how to manage them. David gave a demo and shared an interesting point of view on how to use the Elastic suite.

 

Before ending this first day, I followed the workshop of Hervé Schweitzer, our CTO, on the topic “Infrastructure-as-Code in a multi-cloud environment”. This project, the YaK, is about provisioning infrastructure quickly and repeatably in any popular cloud (AWS, Azure, Oracle Cloud) or on-premises, without being intimidated by the complexity of all the different CLIs. This has been made possible by the great knowledge of dbi services experts across the different cloud providers and Ansible, reducing the management of your servers to the configuration part only.

This is the public part, because the product will be divided in two: the other part consists of the so-called DMK packages, allowing you to deploy an Oracle instance in any cloud!

A fun story told by Hervé during his demo: he asked “On which of the three cloud platforms do you think your Oracle instance will be most efficient once installed?”. The natural answer is AWS, but no, it is better optimized in Oracle Cloud, and this is due to the S3 storage, which is less well managed.

The audience was very impressed, judging by the numerous smart questions at the end of this workshop.

This nicely ended the first day of the event.

 

The evening was very pleasant, with a very nice cocktail dinner organized by the fantastic DevOpsDays Geneva team.

2nd day

For this second day, the agenda was a little different, with the main room dedicated to talks and four open-space sessions to freely discuss topics posted by participants on the first day. I personally did not follow these open-space sessions, only the talks in the main room.

 

Talks for this second day started at 9:00 AM with Ankur Marfatia and “Turning an Enterprise to a Learning Community”. It was all about introducing or improving all kinds of knowledge sharing, illustrated with many types of examples. What’s cool is that it was not only intended for big organizations but also for the smallest ones. The main idea is that sharing starts with you, as part of a community, a group or a company, through a post or a tech article, in fact everywhere and at any time. In return, the community will also share, help and discuss with you, so that you learn something from it. One of the examples given was very relevant to me: yes, when examples involve food, it’s the best!

Think of knowledge as a pizza, share it, so that anyone can get a piece of that knowledge!

In that way, writing this blog is exactly the purpose of this topic and part of the DNA of dbi services. #sharing

 

The second talk of the morning was by Aurélie Vache, a punchy French talk about the impostor syndrome.

Wow, I loved the way she made everyone feel better about finding their place in such a high-technology field and mindset as DevOps. I would recommend anyone to hear her talk and then start giving talks and writing blogs and articles about their personal knowledge, experiences, and so on…

 

After a coffee break, the morning continued with two other talks. The first one was by Courtney Heba from Microsoft, joining directly from the US, who talked about “Building Mastery into your Daily Practice”. It was very interesting to hear how we can integrate mastery into every action of our everyday role. A nice example given was driving a car:

  • first, when you learn how to drive, everything is Cognitive: you have to think about all your moves and understand them
  • then it becomes Emotional: you start feeling your moves and gaining some confidence in doing them
  • finally, it’s all Physical: in other words, all your moves are natural and your driving is smooth

 

The last talk of the morning was by Dr. Joe Perez, about “Driving Decisions with Data: Delight or Disaster”. He was very demonstrative, showing how people react in specific situations depending on their cognitive biases.

 

Once again, the lunch break was a good opportunity for some nice networking.

 

The afternoon started with docs-as-code, presented by Sandro Cirulli from The Scale Factory. Everyone has heard their manager asking for documentation, and everyone has their “own way” of writing and formalizing it, even when there is an enterprise solution. Sandro pointed out common problems with documentation, such as:

  • no documentation
  • too much documentation
  • outdated documentation
  • and so on…

All this leads to people working slowly and to micromanagement to understand why… In his talk, Sandro gave some hints to help write docs and decide where and how to store them. In conclusion, the best advice is to treat documentation as you treat code and to instil this culture in your organization.

 

After that, Stéphanie Fischer talked about her own experience with agility in many situations, her dreams versus reality, and how she managed to improve things everywhere she was asked to coach.

 

The last talk of the event was by Scott Graffius on the journey to more productive DevOps teams through multiple phases: forming, where people are given the “big picture” of what is expected; storming, with strategies to help the team grow and move forward by encouraging and honoring commitments; norming, which focuses more on individuals and on monitoring the team; a fourth phase, performing, about celebrating success; and a last phase, adjourning, about recognizing individual and team efforts before releasing everyone involved.

 

This last talk concluded two intensive days of knowledge, best practices, a spirit of sharing and a very good feeling of confidence in our DevOps roles.

Thanks for that, and from what I heard from Matthieu Robin, I’m confident there will be a 4th edition next year!

The article DevOpsDays Geneva – 2022 first appeared on Blog dbi services.

DevOpsDays Geneva – 2022

Sun, 2022-05-22 15:40

We started DevOpsDays Geneva on Thursday, May 12th, with my dbi services colleagues Pascal ZANETTE, Pierre-Yves BREHIER, Jean-Philippe CLAPOT and Chay TE, with registration and a few cups of coffee to prepare for this first day.

After the welcome speech by the event hosts, Matteo MAZZERI and Matthieu ROBIN, I followed the first main stream session, given by Julia GIACINTI and Xavier NICOLOVICI from PICTET, on “How to support the emergence of a DevOps culture within a large company”. In a well-crafted sketch, Julia and Xavier played out the DevOps discovery of a traditional production manager and his incomprehension of this new way of working. They detailed this journey within their company, gave us a few tips and tricks, shared the challenges they faced and concluded with the current situation as well as the next steps.

It was then up to David BARBARIN to present “Why we migrated the DB monitoring stack to Prometheus and Grafana”. David first detailed the current Migros Online architecture and explained the constraints and challenges that led to the decision to use these tools. He gave a lot of details and explanations through his technical demonstration and concluded with the achievements and results of this migration.

The next presentation was given by Giovanni GALLORO, who delivered a deep technical demonstration of Tekton pipelines named “Tekton: from source to production inside Kubernetes”.

The last session of this busy morning was given by Denis JANNOT, who demonstrated how to implement “Advanced authentication patterns from the edge”. Denis showed the new challenges of properly securing a K8s cluster and detailed several available solutions that can be evaluated based on infrastructure needs and constraints (Envoy proxy / Gloo Edge, API server…).

After a well-deserved lunch break, we had a very interesting talk from Max ANDERSSON, who explained to the audience how to “close the feedback loop for infrastructure development”. Max recalled the well-known “Three Ways” DevOps pillars and provided a refreshing, dynamic and interactive session with the attendees.

We then reached the last session of this first day by following the session of Hervé SCHWEITZER, dbi services founder and CTO, around YaK, a powerful internally developed tool for multi-cloud deployment. YaK is derived from the IaC acronym (Infrastructure as Code) and was designed around Ansible and Docker to allow host deployment with the same setup independently of the destination (on-prem, AWS, Azure, …), using a single command. Hervé concluded with an exclusive DevOpsDays Geneva announcement: the YaK core component will be shared with the community in the coming months, probably around September this year. Stay tuned on this blog for the next announcements on the topic.

We ended this well-filled first day with a cocktail dinner, where passionate discussions and exchanges continued until late at night in a very good atmosphere.

After this short night (and a few cups of coffee), we started the second and last day of this event with Ankur MARFATIA, who explained how to “Turn an enterprise into a learning community”. Ankur provided a real-world example of knowledge-sharing sessions put in place in his company and underlined the benefits of such internal events. He also linked these kinds of internal practices with what can be found at external events, such as the DevOpsDays. Interestingly, the whole assembly agreed on the importance of having food at these kinds of meet-ups, which proves we are all the same.
He then explained some key points to create a safe and inclusive learning environment for everyone, and recalled the two golden rules for successful coaching sessions: middle/top management involvement and people’s willingness to follow the training leaders.
Ankur concluded his talk with three important points to keep in mind:
1) learning is a never-ending journey
2) we all have different learning curves
3) changing a culture takes time and effort.

The next session was “Tips to fight the imposter syndrome” by Aurélie VACHE, a brilliant talk about this perception bias which leads people to think they do not belong in their role or position. They believe their position is down to luck rather than hard work or knowledge. It leads to a feeling of self-deprecation and the fear that other people will “realize” this imposture sooner or later. Aurélie gave a very frank and dynamic talk, with a lot of examples and tips to work around this syndrome, and received a well-deserved round of applause from everyone in the room.

Right after, we listened carefully to Courtney HIBA on how to “build mastery into your daily practice”. Courtney explained what mastery is, why it is crucial for personal fulfillment, and the different mastery categories (Emotional, Business, Wealth and Relationship). She concluded by proposing an action plan to the audience: define one or two goals to master this year, identify the compelling reasons (why you want to master them) and take action to start your mastery journey and achieve your goals.

Dr. Joe PEREZ followed with a real show around “Driving Decisions with Data: Delight or Disaster”. He gave a very energetic talk on the difference between the value of data and its usage or relevance. He gave us key points to help us gather enough material to use data as accurately as possible, in order to improve our data-driven decisions, with all the necessary rules and safeguards to make them truly useful and relevant.

We then jumped into the “Docs-as-code: fix a poor documentation culture in your organization” presentation, given by Sandro CIRULLI. Sandro first listed the consequences of poor documentation, which can lead to big issues such as slow onboarding of new employees, decreased productivity, longer production outage recovery times, technical debt, slower implementation of new features, poor communication, …
He then explained documentation as code and why we should treat our documentation as we do our code: versioned and trackable.
He showed the audience a few tools which can be used for this purpose, such as Hugo, Read the Docs and Antora. He concluded that although it requires effort from everyone at first, there are huge benefits in applying documentation-as-code, and that we need to choose the tool that fits the company’s needs.

Stéphanie FISHER then shared her personal experience and the “lessons learned during an Agile transformation”. She highlighted five key points she learned during her Agile coaching career:
1st lesson: avoid the word “Agile”, so that people do not get stuck on a single word instead of embracing the concept and the idea.
2nd: resist the urge to fill the gaps: the risk is to end up taking on a role in the company instead of solving the root cause.
3rd: adapt yourself to the client context: you need to listen to your customer’s needs rather than pushing your own.
4th: accept the frustration that comes with change: embrace the conflict if needed, and accept the tension as a necessary step of the change.
5th: use your own advice and “Agile” yourself! The world is changing and we need to be flexible, accept uncertainty, not over-analyze, prefer testing over thinking, and prefer learning objectives over performance objectives. It requires patience and resilience, but it is worth it.

Scott GRAFFIUS, remotely from the USA, detailed Tuckman’s model to explain the five phases of team development, Forming/Storming/Norming/Performing/Adjourning, and provided advice and guidance to help team members during those phases.
As a conclusion, Scott mentioned that all these steps are inevitable and seen in most teams, regardless of their activity or technical knowledge. Following the guidance he shared will help teams and individuals prevent frustration and keep a good working spirit throughout the organization.

This presentation was followed by a participative open session with Matteo and Matthieu about what we thought of the organization of the event, whether we had improvement ideas or proposals, and what we liked the most and the least during these two days, …
This concluded this DevOpsDays Geneva; we hope to see you there next year, and we will be present for sure!

 

The article DevOpsDays Geneva – 2022 first appeared on Blog dbi services.

dbi services at the DevOpsDays Geneva 2022

Sun, 2022-05-22 15:37

dbi services was at DevOpsDays Geneva 2022 this year, which took place at the Haute Ecole de Gestion campus in Carouge. Pierre-Yves Brehier, Pascal Zanette, Emmanuel Wagner, Chay Te and myself were present.

An opportunity to learn, discover new topics and technologies, hold in-person meetings and initiate business, this major two-day DevOps event in the Romandie part of Switzerland gathered more than 400 people, 18 companies on site, and 16 additional sponsors.

 

With a mix of small dedicated rooms and more global sessions, this event covered a wide range of topics, technical or organizational.

Among all the sessions we attended, here is a small subset.
The first session I had the pleasure to attend was an interesting one by Etienne Studer from Gradle, who presented Developer Productivity Engineering to us.
This new engineering approach aims to increase developer productivity. Using automation and acceleration technologies, it is based on five pillars:

  • Faster feedback cycles
  • Faster Troubleshooting
  • Reliable Builds and tests
  • Continuous Learning and Improvement
  • CI Cost and Resource efficiency

 

Another good session was the one given by Giovanni Galloro, Specialist Customer Engineer from Google Cloud.
He presented Tekton to us. An open-source framework, Tekton is used to build CI/CD systems and is embedded in tools such as Jenkins X (the one we use daily) and Skaffold, among others. In his session, he described all the basic components, such as Tasks, Pipelines, Steps, …
It was followed by a demo of the tool. Even though we already knew it, it was very interesting to see Tekton shown and explained by somebody from Google.

 

At the end of the first day, we were all there to support our CTO, Hervé Schweitzer. He was presenting a new product from dbi services, YaK.

YaK is an IaC product. IaC stands for Infrastructure as Code. It helps you describe and deploy your infrastructure on specified targets. But in fact it is much more: by using Ansible as a back-end, together with all of our knowledge and expertise, it helps you deploy virtual machines (Linux or Windows based) and databases (such as MariaDB, Postgres, Oracle, …) pain-free on cloud providers (AWS, Azure or Oracle Cloud, for instance) or on-premises.

The beauty of it is that changing where your DBs or VMs are deployed, i.e. moving from one provider to another, is just a matter of setting the correct target! The room was packed for that session, and the product and its concept were very well received by the audience, concluding with a set of constructive discussions on the product itself and its possible applications at customers. dbi services is acting in favor of the community: a part of the product will be released publicly in September 2022!

On the second day, we all attended, in the campus amphitheater, a session by Aurélie Vache from OVH Cloud.
With warm support from the whole audience, Aurélie did not talk about any technical topic here. No fancy DevOps tools, “just” a talk about who we are: humans. Instead of being technical, she took us on a discovery of a psychological pattern called the impostor syndrome.

People affected feel that they don’t deserve their success or their position in a company. They feel as if their situation were only due to chance. She gave some advice, like accepting that we have knowledge, sharing and contributing, getting feedback, and staying positive. Thanks Aurélie for this very nice and refreshing presentation.

We can’t reduce this event to a set of sessions we attended or topics we had the pleasure of learning about. It was also an opportunity, after last year’s cancellation, to meet people and share a coffee or a chat with customers, potential new employees and other DevOps fans like us!

 

The article dbi services at the DevOpsDays Geneva 2022 first appeared on Blog dbi services.

Helvetia used AWS SCT & DMS to migrate to AWS RDS for PostgreSQL

Thu, 2022-05-19 04:32

One of our long-term customers, Helvetia, successfully migrated on-prem Oracle databases to AWS, not only because of the licenses but, more importantly, to deploy faster, innovate faster, and use a state-of-the-art open-source database system.


When you plan such a project, you need to know which tools you want to use and what the target architecture should look like. There are several options to choose from, but in the end Helvetia decided to use the AWS-native services AWS DMS and AWS RDS for PostgreSQL.

AWS DMS gives you the option to initially populate the target instance from the source, and right afterwards logically replicates ongoing changes from the source to the target. However, before you can do that, you need the schema to be ready in the target. To prepare this, there is AWS SCT. This is not an AWS service, but a free tool you can use to convert a schema from one database system to another. If you want to go from Oracle to PostgreSQL, this tool also performs an automatic conversion from Oracle’s PL/SQL to PostgreSQL’s PL/pgSQL. Although this tool does a great job, you have to be very careful with the result and invest a good amount of time in testing. Autonomous transactions, for example, do not exist in PostgreSQL, and the AWS schema conversion utility implements a workaround using database links. This can be fine if you rarely use it (because it needs to establish a new connection), but if you rely heavily on this feature, you’d better re-implement it in a way that is native to PostgreSQL.
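To illustrate the pattern (not the exact code SCT generates), here is a minimal sketch of such a dblink-based pseudo-autonomous transaction. The function and table names are hypothetical, and the connection string assumes local authentication works without a password.

psql -d mydb <<'SQL'
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION log_autonomous(p_msg text) RETURNS void AS $$
BEGIN
  -- dblink_exec opens its own connection, so the INSERT commits
  -- independently of the calling transaction (the "autonomous" effect),
  -- at the cost of a new connection on every call
  PERFORM dblink_exec('dbname=' || current_database(),
                      format('INSERT INTO audit_log(msg) VALUES (%L)', p_msg));
END;
$$ LANGUAGE plpgsql;
SQL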

Another area you need to pay attention to is data types. PostgreSQL comes with many of them. A NUMBER in Oracle can mean many things in PostgreSQL: it could be an integer or a numeric. Depending on what you go for, this has space and performance impacts in PostgreSQL. PostgreSQL also comes with a boolean data type; in Oracle, this is usually implemented as a character or a numeric value. Do you want to keep it that way or do you want to convert to a boolean? Converting means that you also need to adjust the business logic in the database.
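As a small, hypothetical example of the kind of decisions involved (table and column names are made up):

psql -d mydb <<'SQL'
-- Oracle NUMBER(10,0): integer is smaller and faster than numeric
ALTER TABLE orders ALTER COLUMN order_id TYPE integer;

-- Oracle NUMBER with decimals: numeric keeps the exact precision
ALTER TABLE orders ALTER COLUMN amount TYPE numeric(12,2);

-- Oracle-style flag stored as NUMBER(1): convert it to a real boolean,
-- keeping in mind that the business logic must be adjusted as well
ALTER TABLE orders ALTER COLUMN is_paid TYPE boolean USING is_paid::int = 1;
SQL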

Another issue took quite some time to solve. The very simplified test case attached to the initial email showed massive performance drops in PostgreSQL compared to Oracle. The reason is that Oracle’s PL/SQL is a compiled language while PostgreSQL’s PL/pgSQL is interpreted. If you have a case that more or less matches what is described in the thread linked above, you need to re-write it. The same applies when you have commits or rollbacks in PL/SQL functions: PostgreSQL does not allow you to commit or roll back inside a function, so you need to use procedures for that.
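A minimal sketch of this conversion (hypothetical procedure and table names, PostgreSQL 11 or later):

psql -d mydb <<'SQL'
-- COMMIT is not allowed inside a function, but it is inside a procedure
CREATE OR REPLACE PROCEDURE purge_old_rows() LANGUAGE plpgsql AS $$
BEGIN
  DELETE FROM audit_log WHERE created_at < now() - interval '90 days';
  COMMIT;  -- ends the current transaction; a new one starts implicitly
END;
$$;

CALL purge_old_rows();
SQL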

These are just a few hints of what might come along the way when migrating to AWS RDS for PostgreSQL. Once you have solved all this, the migration can be really smooth and will most probably be a success. Here are some posts that describe how to set this up using an Oracle sample schema as the source:

If you follow that, you should have enough knowledge to get started with your journey to AWS RDS.

The article Helvetia used AWS SCT & DMS to migrate to AWS RDS for PostgreSQL first appeared on Blog dbi services.
