Wednesday, July 8, 2020

Installing SQL Server 2019 on Ubuntu 18.04.4 LTS


Follow these steps to install Microsoft SQL Server on an Ubuntu Linux server. It is really straightforward.

1. Connect to the Ubuntu server with the user you have configured.

login as: capacitacion
capacitacion@192.168.0.15's password:
Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 5.3.0-28-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage


 * Canonical Livepatch is available for installation.
   - Reduce system reboots and improve kernel security. Activate at:
     https://ubuntu.com/livepatch

222 packages can be updated.
163 updates are security updates.

Your Hardware Enablement Stack (HWE) is supported until April 2023.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

2. Run the following command to import the Microsoft GPG key:

root@capacitacion-VirtualBox:~# wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

OK

3. Register the Microsoft repository for the latest available SQL Server release and refresh the package lists:

root@capacitacion-VirtualBox:~# sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/18.04/mssql-server-2019.list)"

Hit:1 http://cr.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://cr.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://cr.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease
Get:5 https://packages.microsoft.com/ubuntu/18.04/mssql-server-2019 bionic InRelease [10,5 kB]
Get:6 https://packages.microsoft.com/ubuntu/18.04/mssql-server-2019 bionic/main amd64 Packages [6.808 B]
Get:7 https://packages.microsoft.com/ubuntu/18.04/mssql-server-2019 bionic/main armhf Packages [1.521 B]
Get:8 https://packages.microsoft.com/ubuntu/18.04/mssql-server-2019 bionic/main arm64 Packages [1.521 B]
Fetched 20,3 kB in 1s (19,5 kB/s)
Reading package lists... Done
root@capacitacion-VirtualBox:~# apt-get update
Hit:1 http://cr.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://cr.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://cr.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 https://packages.microsoft.com/ubuntu/18.04/mssql-server-2019 bionic InRelease
Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease
Reading package lists... Done

4. Install the SQL Server package:

root@capacitacion-VirtualBox:~# apt-get install -y mssql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  gawk libc++1 libc++abi1 libpython-stdlib libpython2.7 libpython2.7-minimal libpython2.7-stdlib libsasl2-modules-gssapi-mit libsigsegv2
  libsss-nss-idmap0 python python-minimal python2.7 python2.7-minimal
Suggested packages:
  gawk-doc clang python-doc python-tk python2.7-doc binfmt-support
The following NEW packages will be installed:
  gawk libc++1 libc++abi1 libpython-stdlib libsasl2-modules-gssapi-mit libsigsegv2 libsss-nss-idmap0 mssql-server python python-minimal python2.7
  python2.7-minimal
The following packages will be upgraded:
  libpython2.7 libpython2.7-minimal libpython2.7-stdlib
3 upgraded, 12 newly installed, 0 to remove and 214 not upgraded.
Need to get 232 MB of archives.
After this operation, 1.077 MB of additional disk space will be used.
Get:1 http://cr.archive.ubuntu.com/ubuntu bionic/main amd64 libsigsegv2 amd64 2.12-1 [14,7 kB]
Get:2 http://cr.archive.ubuntu.com/ubuntu bionic/main amd64 gawk amd64 1:4.1.4+dfsg-1build1 [401 kB]
Get:3 https://packages.microsoft.com/ubuntu/18.04/mssql-server-2019 bionic/main amd64 mssql-server amd64 15.0.4033.1-2 [227 MB]
Get:4 http://cr.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libpython2.7 amd64 2.7.17-1~18.04ubuntu1 [1.053 kB]
Get:5 http://cr.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libpython2.7-stdlib amd64 2.7.17-1~18.04ubuntu1 [1.915 kB]
Get:6 http://cr.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libpython2.7-minimal amd64 2.7.17-1~18.04ubuntu1 [335 kB]
Get:7 http://cr.archive.ubuntu.com/ubuntu bionic-updates/main amd64 python2.7-minimal amd64 2.7.17-1~18.04ubuntu1 [1.294 kB]
Get:8 http://cr.archive.ubuntu.com/ubuntu bionic/main amd64 python-minimal amd64 2.7.15~rc1-1 [28,1 kB]
Get:9 http://cr.archive.ubuntu.com/ubuntu bionic-updates/main amd64 python2.7 amd64 2.7.17-1~18.04ubuntu1 [248 kB]
Get:10 http://cr.archive.ubuntu.com/ubuntu bionic/main amd64 libpython-stdlib amd64 2.7.15~rc1-1 [7.620 B]
Get:11 http://cr.archive.ubuntu.com/ubuntu bionic/main amd64 python amd64 2.7.15~rc1-1 [140 kB]
Get:12 http://cr.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libsasl2-modules-gssapi-mit amd64 2.1.27~101-g0780600+dfsg-3ubuntu2.1 [35,5 kB]
Get:13 http://cr.archive.ubuntu.com/ubuntu bionic/universe amd64 libc++abi1 amd64 6.0-2 [56,7 kB]
Get:14 http://cr.archive.ubuntu.com/ubuntu bionic/universe amd64 libc++1 amd64 6.0-2 [183 kB]
Get:15 http://cr.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libsss-nss-idmap0 amd64 1.16.1-1ubuntu1.6 [20,1 kB]
Fetched 232 MB in 23s (10,1 MB/s)
Preconfiguring packages ...
Selecting previously unselected package libsigsegv2:amd64.
(Reading database ... 129918 files and directories currently installed.)
Preparing to unpack .../libsigsegv2_2.12-1_amd64.deb ...
Unpacking libsigsegv2:amd64 (2.12-1) ...
Setting up libsigsegv2:amd64 (2.12-1) ...
Selecting previously unselected package gawk.
(Reading database ... 129925 files and directories currently installed.)
Preparing to unpack .../0-gawk_1%3a4.1.4+dfsg-1build1_amd64.deb ...
Unpacking gawk (1:4.1.4+dfsg-1build1) ...
Preparing to unpack .../1-libpython2.7_2.7.17-1~18.04ubuntu1_amd64.deb ...
Unpacking libpython2.7:amd64 (2.7.17-1~18.04ubuntu1) over (2.7.17-1~18.04) ...
Preparing to unpack .../2-libpython2.7-stdlib_2.7.17-1~18.04ubuntu1_amd64.deb ...
Unpacking libpython2.7-stdlib:amd64 (2.7.17-1~18.04ubuntu1) over (2.7.17-1~18.04) ...
Preparing to unpack .../3-libpython2.7-minimal_2.7.17-1~18.04ubuntu1_amd64.deb ...
Unpacking libpython2.7-minimal:amd64 (2.7.17-1~18.04ubuntu1) over (2.7.17-1~18.04) ...
Selecting previously unselected package python2.7-minimal.
Preparing to unpack .../4-python2.7-minimal_2.7.17-1~18.04ubuntu1_amd64.deb ...
Unpacking python2.7-minimal (2.7.17-1~18.04ubuntu1) ...
Selecting previously unselected package python-minimal.
Preparing to unpack .../5-python-minimal_2.7.15~rc1-1_amd64.deb ...
Unpacking python-minimal (2.7.15~rc1-1) ...
Selecting previously unselected package python2.7.
Preparing to unpack .../6-python2.7_2.7.17-1~18.04ubuntu1_amd64.deb ...
Unpacking python2.7 (2.7.17-1~18.04ubuntu1) ...
Selecting previously unselected package libpython-stdlib:amd64.
Preparing to unpack .../7-libpython-stdlib_2.7.15~rc1-1_amd64.deb ...
Unpacking libpython-stdlib:amd64 (2.7.15~rc1-1) ...
Setting up libpython2.7-minimal:amd64 (2.7.17-1~18.04ubuntu1) ...
Setting up python2.7-minimal (2.7.17-1~18.04ubuntu1) ...
Linking and byte-compiling packages for runtime python2.7...
Setting up python-minimal (2.7.15~rc1-1) ...
Selecting previously unselected package python.
(Reading database ... 130129 files and directories currently installed.)
Preparing to unpack .../0-python_2.7.15~rc1-1_amd64.deb ...
Unpacking python (2.7.15~rc1-1) ...
Selecting previously unselected package libsasl2-modules-gssapi-mit:amd64.
Preparing to unpack .../1-libsasl2-modules-gssapi-mit_2.1.27~101-g0780600+dfsg-3ubuntu2.1_amd64.deb ...
Unpacking libsasl2-modules-gssapi-mit:amd64 (2.1.27~101-g0780600+dfsg-3ubuntu2.1) ...
Selecting previously unselected package libc++abi1:amd64.
Preparing to unpack .../2-libc++abi1_6.0-2_amd64.deb ...
Unpacking libc++abi1:amd64 (6.0-2) ...
Selecting previously unselected package libc++1:amd64.
Preparing to unpack .../3-libc++1_6.0-2_amd64.deb ...
Unpacking libc++1:amd64 (6.0-2) ...
Selecting previously unselected package libsss-nss-idmap0.
Preparing to unpack .../4-libsss-nss-idmap0_1.16.1-1ubuntu1.6_amd64.deb ...
Unpacking libsss-nss-idmap0 (1.16.1-1ubuntu1.6) ...
Selecting previously unselected package mssql-server.
Preparing to unpack .../5-mssql-server_15.0.4033.1-2_amd64.deb ...
Unpacking mssql-server (15.0.4033.1-2) ...
Setting up libc++abi1:amd64 (6.0-2) ...
Setting up libsss-nss-idmap0 (1.16.1-1ubuntu1.6) ...
Setting up gawk (1:4.1.4+dfsg-1build1) ...
Setting up libsasl2-modules-gssapi-mit:amd64 (2.1.27~101-g0780600+dfsg-3ubuntu2.1) ...
Setting up libpython2.7-stdlib:amd64 (2.7.17-1~18.04ubuntu1) ...
Setting up libc++1:amd64 (6.0-2) ...
Setting up python2.7 (2.7.17-1~18.04ubuntu1) ...
Setting up libpython-stdlib:amd64 (2.7.15~rc1-1) ...
Setting up libpython2.7:amd64 (2.7.17-1~18.04ubuntu1) ...
Setting up python (2.7.15~rc1-1) ...
Setting up mssql-server (15.0.4033.1-2) ...

+--------------------------------------------------------------+
Please run 'sudo /opt/mssql/bin/mssql-conf setup'
to complete the setup of Microsoft SQL Server
+--------------------------------------------------------------+

Processing triggers for gnome-menus (3.13.3-11ubuntu1.1) ...
Processing triggers for mime-support (3.60ubuntu1) ...
Processing triggers for desktop-file-utils (0.23-1ubuntu3.18.04.2) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

5. Configure the SQL Server edition, enable the service, and set the password for the SA user.

root@capacitacion-VirtualBox:~# /opt/mssql/bin/mssql-conf setup
usermod: no changes
Choose an edition of SQL Server:
  1) Evaluation (free, no production use rights, 180-day limit)
  2) Developer (free, no production use rights)
  3) Express (free)
  4) Web (PAID)
  5) Standard (PAID)
  6) Enterprise (PAID) - CPU Core utilization restricted to 20 physical/40 hyperthreaded
  7) Enterprise Core (PAID) - CPU Core utilization up to Operating System Maximum
  8) I bought a license through a retail sales channel and have a product key to enter.

Details about editions can be found at
https://go.microsoft.com/fwlink/?LinkId=2109348&clcid=0x409

Use of PAID editions of this software requires separate licensing through a
Microsoft Volume Licensing program.
By choosing a PAID edition, you are verifying that you have the appropriate
number of licenses in place to install and run this software.

Enter your edition(1-8): 3

The license terms for this product can be found in
/usr/share/doc/mssql-server or downloaded from:
https://go.microsoft.com/fwlink/?LinkId=2104294&clcid=0x409

The privacy statement can be viewed at:
https://go.microsoft.com/fwlink/?LinkId=853010&clcid=0x409

Do you accept the license terms? [Yes/No]:yes

Enter the SQL Server system administrator password:
Confirm the SQL Server system administrator password:

Configuring SQL Server...

The licensing PID was successfully processed. The new edition is [Express Edition].
ForceFlush is enabled for this instance.
ForceFlush feature is enabled for log durability.
Created symlink /etc/systemd/system/multi-user.target.wants/mssql-server.service → /lib/systemd/system/mssql-server.service.
Setup has completed successfully. SQL Server is now starting.
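As a side note, for unattended installs mssql-conf can also take its answers from environment variables instead of interactive prompts. A minimal sketch (the edition name and password here are placeholders; check the mssql-conf documentation for your build):

```shell
# Unattended setup sketch: edition, EULA acceptance, and SA password
# come from environment variables, and -n suppresses all prompts.
sudo ACCEPT_EULA='Y' \
     MSSQL_PID='express' \
     MSSQL_SA_PASSWORD='<YourStrong!Passw0rd>' \
     /opt/mssql/bin/mssql-conf -n setup accept-eula
```

This is handy when scripting the whole installation, for example inside a provisioning tool.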

6. Verify that the service is running:

root@capacitacion-VirtualBox:~# systemctl status mssql-server --no-pager

● mssql-server.service - Microsoft SQL Server Database Engine
   Loaded: loaded (/lib/systemd/system/mssql-server.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-06-12 09:26:06 CST; 47s ago
     Docs: https://docs.microsoft.com/en-us/sql/linux
 Main PID: 6082 (sqlservr)
    Tasks: 119
   CGroup: /system.slice/mssql-server.service
           ├─6082 /opt/mssql/bin/sqlservr
           └─6104 /opt/mssql/bin/sqlservr

jun 12 09:26:10 capacitacion-VirtualBox sqlservr[6082]: [318B blob data]
jun 12 09:26:10 capacitacion-VirtualBox sqlservr[6082]: [78B blob data]
jun 12 09:26:10 capacitacion-VirtualBox sqlservr[6082]: [84B blob data]
jun 12 09:26:10 capacitacion-VirtualBox sqlservr[6082]: [145B blob data]
jun 12 09:26:11 capacitacion-VirtualBox sqlservr[6082]: [96B blob data]
jun 12 09:26:11 capacitacion-VirtualBox sqlservr[6082]: [66B blob data]
jun 12 09:26:11 capacitacion-VirtualBox sqlservr[6082]: [96B blob data]
jun 12 09:26:11 capacitacion-VirtualBox sqlservr[6082]: [100B blob data]
jun 12 09:26:11 capacitacion-VirtualBox sqlservr[6082]: [71B blob data]
jun 12 09:26:11 capacitacion-VirtualBox sqlservr[6082]: [124B blob data]

You can now install Microsoft SQL Server Management Studio 18 on your Windows machine and log in to the server's address using SQL Server authentication.
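Alternatively, you can smoke-test the instance locally with sqlcmd from the mssql-tools package. A sketch, assuming the same Microsoft repository layout (the password is a placeholder to adapt):

```shell
# Install the command-line tools from Microsoft's "prod" repository
sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/18.04/prod.list)"
sudo apt-get update
sudo ACCEPT_EULA='Y' apt-get install -y mssql-tools unixodbc-dev

# Connect as SA and run a quick sanity query
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourPassword>' -Q 'SELECT @@VERSION;'

# If you connect from another machine (e.g. SSMS on Windows) and ufw
# is active, the default SQL Server port must be open:
sudo ufw allow 1433/tcp
```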


Business logic: in the application, or on the data side?

So, what do you all think?

Where does the logic to display a bunch of numbers in a pie chart belong?
In the application.

Where does the logic that guarantees that if "X" = 0, then "Y" must be >= 0 belong?
That belongs squarely on the data side.

Tom Kyte, on AskTom, wrote a few years back:

"Think about this: the year is 1996, web applications are brand new, they had never existed before. A completely new way of approaching things. People wanted web access to their data.

Too bad the legacy applications were written as CICS transactions on a mainframe, with an ISPF green-screen front end.

OBSERVATION: it never ceases to amuse me how much CICS transactions resemble middle-tier applications, and how similar the ISPF green-screen panel is to a web browser. They are almost exactly the same.

So, when they tried to move to the web, it was really very hard.

Why? Because the data logic, the data integrity rules, the security, everything, was tied to the CICS transactions (written mostly in Cobol, for example), and building an application that accessed the data directly was FORBIDDEN, for the simple reason that it could not be done safely."

Oracle Autonomous Database Now Available in Customer Datacenters

Press Release

Addresses data sovereignty, security, and performance concerns that prevent some enterprise workloads from moving to the public cloud

Crédit Agricole, Entel, and Samsung SDS welcome Autonomous Database on Exadata Cloud@Customer

Redwood Shores, Calif.—Jul 8, 2020

Building on the success of Oracle’s Exadata Cloud@Customer service over the last three years, Oracle announced the availability of Oracle Autonomous Database on Exadata Cloud@Customer. This new offering combines the latest Oracle Database with the fastest Oracle Database platform—Exadata—delivered as a cloud service in customer datacenters. It eliminates database management and capital expenditures while enabling pay-per-use and elastic consumption of database cloud resources. Now, Autonomous Database is available to run in customer data centers both as a standalone offering and as part of Oracle Dedicated Region Cloud@Customer, the industry’s first on-premises cloud region, which was also announced today. Get started here.

Oracle Autonomous Database on Exadata Cloud@Customer is the simplest and fastest transition to a cloud model with typical deployments taking less than a week. Existing applications in a datacenter can simply connect and run without requiring any application changes—while data never leaves the customer’s datacenter. This is ideal for enterprises that find it challenging to move their mission-critical database workloads to the public cloud due to data sovereignty and regulatory requirements, security and performance concerns, or because their on-premises applications and databases are tightly coupled.

“Exadata Cloud@Customer has been successfully deployed at hundreds of customers, including large financial services companies, telecoms, healthcare providers, insurers, and pharmaceutical companies worldwide to modernize their infrastructure and lower costs by up to 50 percent,” said Juan Loaiza, executive vice president, mission-critical database technologies, Oracle. “We are now bringing Oracle Autonomous Database to customer datacenters—freeing DBAs and developers from mundane maintenance tasks and enabling them to innovate and create more business value.”

Oracle Autonomous Database on Exadata Cloud@Customer enables organizations to move to an environment where everything is automated and managed by Oracle. Autonomous operations include: database provisioning, tuning, clustering, disaster protection, elastic scaling, securing and patching, which eliminates manual processes and human error while reducing costs and increasing performance, security and availability. The serverless architecture automatically scales to match changing workloads, providing true pay-per-use.

“Oracle Autonomous Database on Exadata Cloud@Customer combines the game changing capabilities of the revolutionary Exadata X8M platform with Oracle’s most advanced machine-learning-powered database and its second-generation cloud control plane for a true enterprise-grade database cloud experience on-premises,” said Carl Olofson, Research Vice President, Data Management Software, IDC. “Every business has a set of ISV and home grown applications that they depend on to run all aspects of their business from finance to manufacturing, HR, orders, procurement, and operations. For companies serious about running these types of critical Oracle-based applications in an on-premises enterprise database cloud, Oracle Autonomous Database on Exadata Cloud@Customer is currently the most advanced offering in the market today.”

Customers can leverage Oracle Autonomous Database on Exadata Cloud@Customer to consolidate thousands of databases and run the converged, open Oracle Database for multiple data types and workloads including Machine Learning, JSON, Graph, spatial, IOT and In-Memory, instead of deploying fragmented special-purpose databases. With Oracle Autonomous Database on Oracle Exadata Cloud@Customer, organizations can work with up to 7x larger databases, achieve greater database consolidation, and improve performance with up to 12x more SQL IOPS, 10x more SQL throughput, and 98 percent lower SQL latency than RDS on AWS Outposts. Oracle Autonomous Database on Exadata Cloud@Customer reduces customers’ infrastructure and database management by up to 90 percent because they only have to focus on the schemas and data inside their databases, not on running the underlying database infrastructure.

In addition to the new Cloud@Customer offerings, Oracle continues to enhance the capabilities of the Autonomous Database. Oracle today announced the certification of Oracle’s Siebel, PeopleSoft, and JD Edwards running on Oracle Autonomous Database. By using Autonomous Database, Oracle’s Siebel, PeopleSoft, and JD Edwards customers will lower their costs while improving security, performance, and availability. The company also announced Oracle Autonomous Data Guard which delivers an autonomously managed high availability and disaster recovery solution protecting against database and site failures. Oracle Autonomous Data Guard provides near zero data loss (RPO) and recovery time (RTO) objectives in the face of catastrophic failures.

Global Organizations Welcome New Cloud@Customer Offerings

Samsung SDS is the largest enterprise cloud solutions provider in Korea, delivering data-driven digital innovations to customers in 41 countries worldwide. “Back in 2010, we adopted the first Exadata platform to improve a display manufacturing system,” said Dr. WP Hong, CEO, Samsung SDS. “Now 10 years later, we have implemented nearly 300 Exadata systems for our customers in manufacturing, financial services, construction and engineering, and public and private sector services. Aligning with our digital innovation strategy and our journey to enterprise cloud, we have now adopted the first Exadata Cloud@Customer in one of our datacenters and look forward to deploying Autonomous Database.”

NTT DoCoMo is the number one mobile carrier in Japan with the largest customer base. “Oracle Exadata is implemented as our core engine to process the call, communication, and billing information of 80M users in real-time,” said Taku Hasegawa, Senior Vice President, General Manager of Information Systems Department, NTT DoCoMo. “Thanks to Exadata, we could cut operation and maintenance costs in half, while realizing 10x performance. As the core infrastructure for DoCoMo’s digital transformation and further business growth, I look forward to the continuous evolution of Oracle Exadata and the novel technology innovation driven by Autonomous Database on Exadata Cloud@Customer.”

Crédit Agricole CIB is the Corporate and Investment Banking arm of the Crédit Agricole Group, one of the world’s largest banks. “Moving to Exadata Cloud@Customer has significantly improved our accounting information systems performance, which has enabled us to carry out our accounting closing process with much greater agility and to reduce our operational costs,” said Pierre-Yves Bollard, Global Head of Finance IT, Crédit Agricole Corporate & Investment Bank. “The high value provided by the Exadata Cloud@Customer infrastructure has been recognized by all IT and business teams.”

Entel is the largest telecom provider in Chile and the third largest in Peru. “We have used Exadata systems for the past five years to support many applications across dozens of lines of business, including crucial billing and network management systems,” said Helder Branco, Head of IT Operations, Entel. “By using Exadata, we improved mission-critical Oracle Database performance by up to 3x, and reduced our security exposure. We are taking our digital transformations to the next level by moving over 30 databases to Oracle Autonomous Database on Exadata Cloud@Customer and improving their security with its self-securing capabilities.”

RKK Computer Service is an IT consultancy based in Japan, focusing on local governments and financial institutions. “RKK Computer Service selected Oracle Exadata Cloud@Customer to host our shared platform that runs core business systems for 100 municipalities,” said Chihiro Sato, Deputy General Manager, Public Sector Planning and Development Division, RKK Computer Service. “Compared to our previous on-premises solution, we have 24 percent cost savings and more than 70 percent IO performance improvement, which enables us to run concurrent batch processes for multiple municipalities. High availability is achieved with RAC and Data Guard. We believe that Oracle’s second-generation Exadata Cloud@Customer is a promising cloud platform for municipalities. RKKCS will continuously enhance our cloud infrastructure for municipalities by exploring Autonomous Database on Exadata Cloud@Customer to improve operational efficiency.”

The State of Queretaro is located in central Mexico. “Based on a directive from the state governor and state secretary to address the COVID-19 crisis, we were asked to develop an application that would allow the citizens and patients of the State of Querétaro, Mexico, to carry out a self-diagnosis to help avoid the spread of infections,” said Pedro Gonzalez, Director CIAS, Queretaro State Government, Mexico. “With Oracle Database on Exadata Cloud@Customer, we were able to react quickly and develop a mobile application in less than three weeks — plus we were able to adhere to state regulations to maintain the sensitive data of citizens and patients in our facilities. We look forward to investing in Oracle Autonomous Database this year, which will free up our staff and resources to focus on developing new business applications without spending any time on patching, tuning, and maintaining the database.”

Siav is an enterprise content management software and IT services company based in Italy. “We chose Oracle Exadata Cloud@Customer to help us manage the constant growth of our business in cloud services and solutions,” said Nicola Voltan, CEO, Siav S.p.A. “Exadata Cloud@Customer provides the performance, scalability and security we need to offer the highest quality service to our customers. It’s managed by Oracle in our datacenter, enabling us to comply with the Italian legislation related to the geographical location of the service provided.”
New Exadata Cloud@Customer Enhancements
In addition to the Autonomous Database, Oracle is announcing the following Exadata Cloud@Customer enhancements:

  • Oracle Exadata Database Machine X8M Technology, which combines Intel® Optane™ DC Persistent Memory and 100 gigabit remote direct memory access (RDMA) over Converged Ethernet (RoCE) to remove storage bottlenecks and dramatically increase performance for the most demanding workloads such as Online Transaction Processing (OLTP), IoT, fraud detection, and high frequency trading. Direct database access to shared persistent memory increases peak performance to 12 million SQL read IOPS, 2.5X greater than the prior generation offering powered by Exadata Database Machine X8. Additionally, Exadata X8M dramatically reduces the latency of critical database IOs by enabling remote IO latencies below 19 microseconds—more than 10X faster than the prior generation offering. These ultra-low latencies are achieved even for workloads requiring millions of IOs per second.
  • Multiple VM Clusters per Exadata Rack, which enables organizations to share an Exadata system for production, DR and dev/test and provide isolation across departments and use cases.
  • PCI-DSS Certification: Exadata Cloud@Customer now supports and meets Payment Card Industry Data Security Standard requirements and can be implemented as part of a highly secure financial processing environment.
Contact Info

Nicole Maloney
Oracle PR
+1.650.506.0806

Sunday, June 28, 2020

The DEFAULT profile in the Oracle Database 20.3 Preview

The DEFAULT profile in Oracle Database 20.3 is not the classic "PROFILE" we knew from previous releases, where the resource limits were unlimited.


Almost 50% of the resources now ship with new default values that restrict the users who are granted this profile.

The DEFAULT profile is the profile granted by default to every user created in the database.

In this case, we get password management with a 60-day expiration period, automatic account lockout after 3 failed password attempts, passwords that cannot be reused until 365 days have elapsed, and complexity-based password verification using the 12c verify function (a headache).

However, it is possible to modify the DEFAULT profile's password-complexity resource and set it to NULL, to make working in test environments easier.
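For example, in a test database something like the following should do it (a sketch, run locally as SYSDBA; adjust to taste):

```shell
# Sketch: relax the DEFAULT profile for a test environment only.
# PASSWORD_VERIFY_FUNCTION NULL disables complexity checking.
sqlplus -S / as sysdba <<'EOF'
ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION NULL;
-- Optionally relax the other password resources as well:
ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;
ALTER PROFILE DEFAULT LIMIT FAILED_LOGIN_ATTEMPTS UNLIMITED;
EXIT
EOF
```

Never do this on a production database; the tightened defaults exist for a reason.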



Sunday, June 21, 2020

Oracle Forms Goes APEX 20.1 and Autonomous Database



Oracle’s Autonomous Database helps companies eliminate manual tuning and human error, and reduce cost and complexity, while ensuring higher reliability, security, and more operational efficiency through reporting, batch, Internet of Things (IoT), and machine learning.

Things come full circle in this webcast, as we show you an amazing way to shift your Forms application into Oracle APEX, the official alternative to Oracle Forms, so you can actually make use of all of the possibilities the Autonomous Database has to offer.

Join PITSS’ Pierre Yotti, Oracle ACE and trainer, as well as Giuseppe Facchetti, Autonomous Cloud Business Development director at Oracle, and learn among other things:

https://pitss.com/webcast-oracle-forms-goes-apex-20-1-and-autonomous-database/
  • How your business can benefit from Oracle Autonomous Database
  • Why Oracle APEX is a perfect substitute for your Oracle Forms application
  • How to most efficiently transition from Forms to a “Powered by Autonomous” APEX 20.1 app


Happy Father's Day to all those who decided, on their own, not to make the same mistakes that were made with them.

What I am about to write is not a novel, nor an invented story.

"My father knows I exist, but he knows nothing about me, about my wife, or about his grandchildren; nothing about my tastes, my friends, or my dreams. Nothing at all."

He made the decision to desert when it seemed to him that he was getting too old to keep being a father. I believe that was part of what he thought, because he decided to go on with his life beside a person who was 2 years younger than me.

But even before that, his work, his surroundings, and his thoughts were, for long stretches, far away from me.

After 34 years he showed up, demanding his right as a "Father" to a pension. In that moment I was furious and wished I had never known him.

When I saw him again in court, I felt nothing. Absolutely nothing.

There were no memories of him, there were no feelings toward him, and there was no longer any resentment over the decisions he had made.

It was very simple: you cannot hold a grudge against someone you do not know.

Since I was 18, my father has been absent.

Some 25 years ago, I simply shut him out of my mind to keep the void he had left in me from growing.

Over the last 15 years, he became a ghost.

And in this last year, no trace of him remained. The division had yielded an exact quotient. The equation was complete.

The responsibility of being a father is not difficult; that is not the right word. It is terrifying. For years I have carried with me the fear of making the mistakes my father made, or of making new and bigger ones.

Today, with my children grown and well along in their university studies, I would want them to remember that I always tried to be my best.

I was not the perfect father, that is nearly impossible, but I hope they remember that I always tried to be the best.

Happy Father's Day to all those who decided, on their own, not to make the same mistakes that were made with them.

Monday, June 1, 2020

Creating a lab with Oracle Database 19c on an Oracle Always Free Services instance.

Here is the edited video, starting right at the talk.

"Your free trial has ended. You can keep using the Always Free resources. The remaining resources will be deleted, which may cause data loss, unless you upgrade."
What lies beyond the 30-day trial period of Oracle Always Free Services?

Second chapter of the series "Oracle Database Ethical Hacking: The Misunderstood World of the SQL Language": Revenge of the Sith...

Good morning, everyone.

Those who could not join us live yesterday can watch the talk again below.

My thanks to my friend Cesar Chavez Martinez, better known in the cybersecurity world as @peruhacking, representative for Peru of the Comite Latinoamericano de Informática Forense/Hack&Founders, who joined me for a conversation at the end of the talk.

I appreciate all your comments.

Sunday, May 31, 2020

Using and running Flashback in Oracle Database 19c (19.3.0.0)

Alright, let's see how this well-known feature of Oracle databases works, this time on the latest production release.

Let's start by connecting to the database:

[oracle@lab1 ~]$ sqlplus /nolog

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 5 19:46:35 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

SQL> connect / as sysdba
Connected.
SQL> show sga

Total System Global Area 1912599952 bytes
Fixed Size                  8897936 bytes
Variable Size             436207616 bytes
Database Buffers         1459617792 bytes
Redo Buffers                7876608 bytes

You can see that the database is in OPEN state and running under the primary instance role.

SQL> select instance_name, status, logins, INSTANCE_ROLE from v$instance;

INSTANCE_NAME    STATUS       LOGINS     INSTANCE_ROLE
---------------- ------------ ---------- ------------------
lab1             OPEN         ALLOWED    PRIMARY_INSTANCE

As in previous versions, you cannot put the database into archivelog mode without the instance in MOUNT state.

SQL> alter database archivelog;
alter database archivelog
*
ERROR at line 1:
ORA-01126: database must be mounted in this instance and not open in any
instance

We shut down the database instance cleanly and only mount it, in order to enable archivelog mode.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1912599952 bytes
Fixed Size                  8897936 bytes
Variable Size             436207616 bytes
Database Buffers         1459617792 bytes
Redo Buffers                7876608 bytes
Database mounted.
SQL> select instance_name, status, logins, INSTANCE_ROLE from v$instance;

INSTANCE_NAME    STATUS       LOGINS     INSTANCE_ROLE
---------------- ------------ ---------- ------------------
lab1             MOUNTED      ALLOWED    PRIMARY_INSTANCE

SQL> alter database archivelog;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /opt/app/oracle/product/19.3.0/dbs/arch
Oldest online log sequence     3
Next log sequence to archive   5
Current log sequence           5
SQL> SELECT log_mode FROM v$database;

LOG_MODE
------------
ARCHIVELOG

Having verified that the database is in archivelog mode, we force redo logging for all operations in the database.

SQL> ALTER DATABASE FORCE LOGGING;

Database altered.

We try to turn the feature on and check whether anything else is required. In this case, we still need to configure the Fast Recovery Area.

SQL> alter database flashback on;
alter database flashback on
*
ERROR at line 1:
ORA-38706: Cannot turn on FLASHBACK DATABASE logging.
ORA-38709: Recovery Area is not enabled.


SQL> show parameter recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer 0
recovery_parallelism                 integer     0
remote_recovery_file_dest            string
SQL> ^C

We set the FAST RECOVERY AREA size parameter and make it persistent in the parameter file.

SQL> alter system set db_recovery_file_dest_size=5G scope=both;

System altered.

SQL> host
[oracle@lab1 ~]$ mkdir /opt/app/oracle/fast_recovery_area
[oracle@lab1 ~]$ exit
exit

SQL> alter system set db_recovery_file_dest='/opt/app/oracle/fast_recovery_area' scope=both;

System altered.

SQL> alter database flashback on;

Database altered.

SQL> host ls -la /opt/app/oracle/fast_recovery_area
total 4
drwxr-xr-x.  3 oracle oinstall   18 Nov  5 20:45 .
drwxr-xr-x. 10 oracle oinstall 4096 Nov  5 20:41 ..
drwxr-x---.  3 oracle oinstall   23 Nov  5 20:45 LAB1

SQL> host ls -la /opt/app/oracle/fast_recovery_area/LAB1
total 0
drwxr-x---. 3 oracle oinstall 23 Nov  5 20:45 .
drwxr-xr-x. 3 oracle oinstall 18 Nov  5 20:45 ..
drwxr-x---. 2 oracle oinstall 60 Nov  5 20:45 flashback

SQL> host ls -la /opt/app/oracle/fast_recovery_area/LAB1/flashback
total 409616
drwxr-x---. 2 oracle oinstall        60 Nov  5 20:45 .
drwxr-x---. 3 oracle oinstall        23 Nov  5 20:45 ..
-rw-r-----. 1 oracle oinstall 209723392 Nov  5 20:45 o1_mf_gw4dxt5o_.flb
-rw-r-----. 1 oracle oinstall 209723392 Nov  5 20:45 o1_mf_gw4dy1kf_.flb


SQL> show sga

Total System Global Area 1912599952 bytes
Fixed Size                  8897936 bytes
Variable Size             436207616 bytes
Database Buffers         1459617792 bytes
Redo Buffers                7876608 bytes
SQL> select instance_name, status, logins, INSTANCE_ROLE from v$instance;

INSTANCE_NAME    STATUS       LOGINS     INSTANCE_ROLE
---------------- ------------ ---------- ------------------
lab1             MOUNTED      ALLOWED    PRIMARY_INSTANCE

Let's open the database and run some tests with the sample HR user.

SQL> alter database open;

Database altered.

SQL> alter user hr account unlock;

User altered.

SQL> alter user hr identified by hr;

User altered.

SQL> connect hr/hr
Connected.
SQL> select count(*) from employees;

  COUNT(*)
----------
       107

SQL> show user
USER is "HR"

Let's create a table to test the FLASHBACK feature we just enabled.

SQL> create table employees_drop as select * from employees;

Table created.

SQL> select count(*) from employees_drop;

  COUNT(*)
----------
       107

We verify that the table exists in the user's catalog view.

SQL> select * from cat;

TABLE_NAME           TABLE_TYPE
-------------------- -----------
EMPLOYEES_DROP       TABLE
REGIONS              TABLE
COUNTRIES            TABLE
LOCATIONS            TABLE
LOCATIONS_SEQ        SEQUENCE
DEPARTMENTS          TABLE
DEPARTMENTS_SEQ      SEQUENCE
JOBS                 TABLE
EMPLOYEES            TABLE
EMPLOYEES_SEQ        SEQUENCE
JOB_HISTORY          TABLE
EMP_DETAILS_VIEW     VIEW

12 rows selected.

We drop the table.

SQL> drop table EMPLOYEES_DROP;

Table dropped.

SQL> select * from cat;

TABLE_NAME           TABLE_TYPE
-------------------- -----------
BIN$lqXBbrnuY8DgUwEA TABLE
AApjnQ==$0

REGIONS              TABLE
COUNTRIES            TABLE
LOCATIONS            TABLE
LOCATIONS_SEQ        SEQUENCE
DEPARTMENTS          TABLE
DEPARTMENTS_SEQ      SEQUENCE
JOBS                 TABLE
EMPLOYEES            TABLE
EMPLOYEES_SEQ        SEQUENCE
JOB_HISTORY          TABLE
EMP_DETAILS_VIEW     VIEW

12 rows selected.

And now we verify that we can recover the table with the FLASHBACK command.

SQL> flashback table EMPLOYEES_DROP to before drop;

Flashback complete.

SQL> select * from cat;

TABLE_NAME           TABLE_TYPE
-------------------- -----------
EMPLOYEES_DROP       TABLE
REGIONS              TABLE
COUNTRIES            TABLE
LOCATIONS            TABLE
LOCATIONS_SEQ        SEQUENCE
DEPARTMENTS          TABLE
DEPARTMENTS_SEQ      SEQUENCE
JOBS                 TABLE
EMPLOYEES            TABLE
EMPLOYEES_SEQ        SEQUENCE
JOB_HISTORY          TABLE
EMP_DETAILS_VIEW     VIEW

12 rows selected.

Remember that if we drop the table with the PURGE clause, it will not be possible to recover the dropped table.

SQL> drop table EMPLOYEES_DROP purge;

Table dropped.

SQL> select * from cat;

TABLE_NAME           TABLE_TYPE
-------------------- -----------
REGIONS              TABLE
COUNTRIES            TABLE
LOCATIONS            TABLE
LOCATIONS_SEQ        SEQUENCE
DEPARTMENTS          TABLE
DEPARTMENTS_SEQ      SEQUENCE
JOBS                 TABLE
EMPLOYEES            TABLE
EMPLOYEES_SEQ        SEQUENCE
JOB_HISTORY          TABLE
EMP_DETAILS_VIEW     VIEW

11 rows selected.

SQL> flashback table EMPLOYEES_DROP to before drop;
flashback table EMPLOYEES_DROP to before drop
*
ERROR at line 1:
ORA-38305: object not in RECYCLE BIN

Quickly and easily, we have configured and tested the FLASHBACK feature at the table level in an Oracle Database 19c database.
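A useful variant of the same command, in case an object with the original name already exists, is to restore the dropped table under a new name. A minimal sketch (the new table name here is illustrative):

```sql
-- List what is currently in this schema's recycle bin
SQL> show recyclebin

-- Restore the dropped table under a different name to avoid a name clash
SQL> flashback table EMPLOYEES_DROP to before drop rename to EMPLOYEES_RESTORED;
```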

Oracle Always Free will be upgraded within the next 8 days.


I just received an email from the Oracle Autonomous Database team telling me that on June 8 my Always Free Autonomous Database instance will be upgraded to Oracle Database 19c.

Great news, since one of the few "buts" I had raised in my talk a few weeks ago was that the provisioned database was version 18c.

The upgrade brings not only the new version but all the features that come with 19c, among which automatic index management stands out.

In the labs I have run so far, it is one of the most solid features of the latest database releases.

You do need patience, though, because execution-plan improvements are not adopted quickly.
The automatic indexing feature takes its time to analyze the benefit a statement's execution plan will gain before an index goes from invisible to visible.

In my tests, it took more than 6 hours to see that behavior, and honestly, the result was excellent.

During those 6 hours, the automatic index advisor ran more than 59 analyses, looking for the best-performing alternative.
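For context, automatic indexing is controlled through the DBMS_AUTO_INDEX package. A minimal sketch of enabling and checking it (the schema name is illustrative):

```sql
-- Turn the feature on: candidate indexes are created invisible,
-- validated, and made visible only if they improve performance
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Alternative: create candidates but keep them invisible
-- EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'REPORT ONLY');

-- Restrict the feature to a given schema
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', 'USER_TEST', allow => TRUE);

-- Check the current configuration
SELECT parameter_name, parameter_value
  FROM dba_auto_index_config
 WHERE parameter_name LIKE 'AUTO_INDEX%';
```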

SQL> col execution_name format a40

  1  select execution_name, execution_start,execution_end, status from dba_auto_index_executions
  2* order by execution_end
SQL> /

EXECUTION_NAME                           EXECUTION EXECUTION STATUS
---------------------------------------- --------- --------- -----------
SYS_AI_2019-10-20/11:38:36               20-OCT-19 20-OCT-19 COMPLETED
SYS_AI_2019-10-20/11:53:50               20-OCT-19 20-OCT-19 COMPLETED
SYS_AI_2019-10-20/12:09:06               20-OCT-19 20-OCT-19 COMPLETED
SYS_AI_2019-10-20/12:24:21               20-OCT-19 20-OCT-19 COMPLETED

59 rows selected.

As time went by, the statistics could be observed in the dba_auto_index_statistics view. I realized it was really working when, for the first time, I saw that there were two candidate indexes and that an invisible index had already been created.

SQL> select * from dba_auto_index_statistics where execution_name='SYS_AI_2019-10-20/12:39:42';

EXECUTION_NAME                           STAT_NAME                          VALUE
---------------------------------------- ----------------------------- ----------
SYS_AI_2019-10-20/12:39:42               Index candidates                       2
SYS_AI_2019-10-20/12:39:42               Indexes created (visible)              0
SYS_AI_2019-10-20/12:39:42               Indexes created (invisible)            1
SYS_AI_2019-10-20/12:39:42               Indexes dropped                        0
SYS_AI_2019-10-20/12:39:42               Space used in bytes            134217728
SYS_AI_2019-10-20/12:39:42               Space reclaimed in bytes               0
SYS_AI_2019-10-20/12:39:42               SQL statements verified                0
SYS_AI_2019-10-20/12:39:42               SQL statements improved                0
SYS_AI_2019-10-20/12:39:42               SQL statements managed by SPM          0
SYS_AI_2019-10-20/12:39:42               SQL plan baselines created             0
SYS_AI_2019-10-20/12:39:42               Improvement percentage                 0

11 rows selected

A few hours after enabling the monitoring, I was able to confirm that 3 indexes had already been created for the queries I had left running in a loop.

OWNER           INDEX_NAME               INDEX_TYPE     TABLE_OWNER     TABLE_NAME
--------------- ------------------------ -------------- --------------- -----------
USER_TEST       SYS_AI_38a4rpz9aydwy     NORMAL         USER_TEST       VENENO
USER_TEST       SYS_AI_8j1m1y4m3rg1v     NORMAL         USER_TEST       VENENO
USER_TEST       SYS_AI_grrbd3k2d8ufq     NORMAL         USER_TEST       VENENO

SQL> COL INDEX_OWNER FORMAT A12
SQL> COL COLUMN_NAME FORMAT A20


As the documentation states, the indexes were prefixed with SYS_AI, which the author of this feature's name would gloss as "SYStem, Artificial Intelligence".

SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,COLUMN_POSITION 
from ALL_IND_COLUMNS where index_name='SYS_AI_grrbd3k2d8ufq';

INDEX_OWNER  INDEX_NAME                     COLUMN_NAME          COLUMN_POSITION
------------ ------------------------------ -------------------- ---------------
USER_TEST    SYS_AI_grrbd3k2d8ufq           EMPLOYEE_ID                        1

SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,COLUMN_POSITION 
from ALL_IND_COLUMNS where index_name='SYS_AI_8j1m1y4m3rg1v';

INDEX_OWNER  INDEX_NAME                     COLUMN_NAME          COLUMN_POSITION
------------ ------------------------------ -------------------- ---------------
USER_TEST    SYS_AI_8j1m1y4m3rg1v           MANAGER_ID                         1

SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,COLUMN_POSITION 
from ALL_IND_COLUMNS where index_name='SYS_AI_38a4rpz9aydwy';

INDEX_OWNER  INDEX_NAME                     COLUMN_NAME          COLUMN_POSITION
------------ ------------------------------ -------------------- ---------------
USER_TEST    SYS_AI_38a4rpz9aydwy           JOB_ID                             1


In the actions view, you could observe the commands executed by the ML program that manages this autonomous feature.


SQL> select index_owner, index_name, table_owner, command from dba_auto_index_ind_actions order by start_time;

INDEX_OWNER  INDEX_NAME                     TABLE_OWNER       COMMAND
------------ ------------------------------ ----------------- --------------------
USER_TEST    SYS_AI_grrbd3k2d8ufq           USER_TEST         CREATE INDEX
USER_TEST    SYS_AI_grrbd3k2d8ufq           USER_TEST         REBUILD INDEX
USER_TEST    SYS_AI_grrbd3k2d8ufq           USER_TEST         ALTER INDEX VISIBLE
USER_TEST    SYS_AI_8j1m1y4m3rg1v           USER_TEST         CREATE INDEX
USER_TEST    SYS_AI_38a4rpz9aydwy           USER_TEST         CREATE INDEX
USER_TEST    SYS_AI_38a4rpz9aydwy           USER_TEST         REBUILD INDEX
USER_TEST    SYS_AI_8j1m1y4m3rg1v           USER_TEST         REBUILD INDEX
USER_TEST    SYS_AI_8j1m1y4m3rg1v           USER_TEST         ALTER INDEX VISIBLE


8 rows selected.

The verifications the database performed on the automatically created indexes could be seen in the DBA_AUTO_INDEX_VERIFICATIONS view.

SQL> select sql_id, original_buffer_gets, auto_index_buffer_gets,status from dba_auto_index_verifications;

SQL_ID        ORIGINAL_BUFFER_GETS AUTO_INDEX_BUFFER_GETS STATUS
------------- -------------------- ---------------------- ---------
64mhzgsdq5cnt              58292.5                     97 IMPROVED
4mf2rxa1jr5ah                58255                      1 IMPROVED
a68nukn6w407b           58133.1154                      3 IMPROVED
azsswcb98ffvc                58255                     97 IMPROVED
cxdp6hffuf5nh                58255                     89 IMPROVED
a68nukn6w407b           3.08201439                      3 UNCHANGED
d7dj44yburdbz                  107                     96 UNCHANGED
d06rmnjvm5smf                12009                  12009 UNCHANGED
2tryzm0v5xsc7              .000235                        FAILED
a90k7j52rrmr6                    3                        FAILED
c595m0us6g543            58234.995                      1 IMPROVED
drzb4vzkmxg6a                    3                        FAILED
c595m0us6g543           3.00069603                      1 UNCHANGED

Likewise, I could check the various tasks executed by the database's different advisors.

SQL> select TASK_NAME,ADVISOR_NAME,EXECUTION_END,STATUS,ACTIVITY_COUNTER from dba_advisor_tasks;


And finally, you can also generate an activity report in HTML format, to document and better understand everything that happened.

SET LONG 1000000 PAGESIZE 0
SELECT DBMS_AUTO_INDEX.report_activity(type => 'HTML') FROM dual;
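REPORT_ACTIVITY also accepts a time window and detail-level parameters; combined with SPOOL, the HTML can be saved to a file. A sketch (the output file name is illustrative):

```sql
SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0
SPOOL auto_index_report.html
SELECT DBMS_AUTO_INDEX.report_activity(
         activity_start => SYSTIMESTAMP - 1,   -- last 24 hours
         activity_end   => SYSTIMESTAMP,
         type           => 'HTML',
         section        => 'ALL',
         level          => 'ALL')
  FROM dual;
SPOOL OFF
```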

Now, starting June 8, it will be time to repeat the lab I ran in my VM, this time on the Always Free platform, and share my findings with you.

