Wednesday, April 2, 2014

Oracle doubles the speed of MySQL query handling


Joab Jackson, IDG News Service – CIOPeru.pe
For the next release of its open source MySQL, Oracle is making several changes designed to greatly improve the speed of the relational database management system.
Such notable performance could help organizations save money on server purchases, since they would need fewer servers to run large workloads. It could also let them run complex queries that might have taken too long on earlier versions of the database system, said Tomas Ulin, Oracle's vice president of MySQL engineering.
On Monday, the company released the most recently developed version of the software, MySQL Development Milestone 5.7.4, along with several companion programs for administering the database. The last major version of MySQL, 5.6, was released in February 2013.
The new version has demonstrated the ability to answer 512,000 read-only queries per second (QPS), more than double the 250,000 QPS that MySQL 5.6 was capable of.
Performance has also been improved for users of the Memcached caching plug-in, which bypasses MySQL's SQL layer to read rows directly from the default InnoDB storage engine. This approach can now deliver read-only throughput of more than one million QPS.
No single fix from Oracle improved performance on its own; rather, it is the cumulative effect of many individual changes, Ulin said.
The performance improvements are especially timely given the changing nature of the servers MySQL runs on, according to Ulin.
Historically, MySQL was designed to run on commodity servers with single-core processors. Customers today are buying servers with 16, 32, and even 64 cores. Much of the performance work on MySQL has centered on improving how multiple threads operate on the same data structure.
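The many-core contention problem Ulin describes can be shown with a toy sketch (hypothetical Python, not MySQL internals): splitting one shared structure into independently locked shards means threads on many cores fight over the same lock far less often.

```python
import threading

class ShardedCounter:
    """Toy illustration: one logical counter split into per-shard pieces,
    each with its own lock, so concurrent threads rarely contend."""
    def __init__(self, shards=16):
        self._locks = [threading.Lock() for _ in range(shards)]
        self._counts = [0] * shards

    def increment(self):
        # Each thread picks a shard from its thread id, so different
        # threads usually touch different locks.
        i = threading.get_ident() % len(self._locks)
        with self._locks[i]:
            self._counts[i] += 1

    def value(self):
        return sum(self._counts)

counter = ShardedCounter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value())  # 8000
```

With a single lock, all eight threads would serialize on every increment; sharding restores most of the parallelism the extra cores were bought for.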
"We have to evolve with where the market is going," Ulin said. "People won't be happy if they move from a 16-core machine to a 32-core machine and see no benefit."
Performance improvements were also made in other parts of the database management system. For example, the software reduces the amount of time needed to establish a database connection, thanks to work contributed by Facebook.
Beyond performance, Oracle is also improving MySQL in several other ways.
The company has expanded the software's performance schema, which defines the metrics used to measure database performance. The database collects performance data about itself through various internal probes. The schema can be used to extract and summarize that information, either within the database or with external tools, which can be useful for diagnosing performance problems.
The schema now offers much more information about what is happening inside the server's memory. It can be used to pinpoint problems around metadata locking and other elusive issues. A user could, for example, use SQL to extract all the memory metrics related to a specific database table.
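As a sketch of that kind of query: the memory_summary_global_by_event_name table is part of MySQL 5.7's performance_schema, but the wrapper function, filter pattern, and stub cursor below are hypothetical illustrations.

```python
# Sketch only: the table name comes from MySQL 5.7's performance_schema;
# the InnoDB filter and the helper around it are illustrative assumptions.
MEMORY_QUERY = (
    "SELECT event_name, current_number_of_bytes_used AS bytes_used "
    "FROM performance_schema.memory_summary_global_by_event_name "
    "WHERE event_name LIKE %s "
    "ORDER BY bytes_used DESC LIMIT 10"
)

def top_memory_consumers(cursor, pattern="memory/innodb/%"):
    """Run the query through any DB-API cursor (e.g. mysql-connector)."""
    cursor.execute(MEMORY_QUERY, (pattern,))
    return cursor.fetchall()

# Stub cursor so the sketch runs without a live MySQL server.
class _StubCursor:
    def execute(self, sql, params):
        self.sql, self.params = sql, params
    def fetchall(self):
        return [("memory/innodb/buf_buf_pool", 134217728)]

cur = _StubCursor()
rows = top_memory_consumers(cur)
print(rows[0][0])  # memory/innodb/buf_buf_pool
```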
MySQL Workbench 6.1, which Oracle also released on Monday, includes a set of graphical diagnostic tools built on these new memory probes.
The company is also preparing several other new features that are not yet mature enough for this release but could become part of version 5.7. One such feature could be its first support for multi-master replication.

CIO Data Center Summary of the week




New Cisco Switches Take Aim at Big Data Centers, Data Applications 
Cisco this week is unveiling two new configurations of its recently launched Nexus 9000 switches, as well as a new 40G Nexus switch. In addition, Cisco is celebrating the fifth anniversary of its UCS server. Read More 

The Ultimate Guide to Proper SSD Management 
From file management to housecleaning, these tips and tricks will help you get the most out of your supercharged storage. Read More 
So You Think You Know the IBM Mainframe? Try Our Quiz 
On the 50th anniversary of the Big Iron, see how well you know this iconic computing workhorse. Read More 

Cisco Revamps Enterprise Product Pricing 
In an effort to simplify enterprise customer procurements, Cisco is implementing a licensing model for data center, WAN and access product purchases. Read More 
IT Provides Business Edge in Battle of Archrivals 
In retail, manufacturing and logistics there are no bigger rivals than Home Depot vs. Lowe's, GM vs. Ford, FedEx vs. UPS, respectively. We look at how these fierce competitors are exploiting IT to gain an advantage. Read More 

Calmest Man: Oracle Flashback Database Feature

The Calmest Man in the World understands what it takes to recover quickly from human errors. Learn more about Oracle Flashback recovery features and Oracle Database 11g maximum availability solutions. 

Tuesday, April 1, 2014

Storage Replication for Oracle Database and Licensing

Sharing this interesting article from our friend Alex Gorbachev on how licensing works when replication is done with a third-party physical method.

Bottom line: every environment must be licensed.

May 9, 2012 / By Alex Gorbachev
While doing my high availability deep dive at Collaborate 12 a few weeks ago, I stated that storage replication qualifies for the cold failover licensing rules (see slide #128).
During the conference, I spoke to one person at Oracle who definitely knows the rules. Simon Haslam also reached out to me by email, pointing out that things might not be that rosy. After my session, Arjen Visser from Dbvisit also noted that they've seen Oracle sales pushing for a different strategy.

Simon referred to Oracle’s Software Investment Guide:
Remote Mirroring – This method involves the mirroring of the storage unit or
shared disks arrays. Remotely mirrored storage units may be geographically
dispersed and not in the same location as the primary unit, but they share the
same disk array. To setup a remote mirroring environment, the Oracle data
files, executables, binaries and DLLs are replicated to the mirrored storage
unit. Solutions like Veritas Volume Replicator, EMC SRDF, Legato Replistor, and
EMS StoreEdge are used to mirror the data stored on the disk arrays. In this
environment, both the primary and the remote mirrored databases must be
fully licensed. Additionally, the same metric must be used to license both
databases. If the Oracle Database is accessing the data from the primary disk
array and it is not accessing the mirrored disk array, but it is installed on the
mirrored network storage unit, then both databases must be fully licensed and
the same metric must be used. If a failure occurs in the primary storage unit
and the Oracle Database can no longer access the data from the primary disk
array, however it is still installed on the primary unit, and data can only be
accessed from the remote mirrored disk array, then both databases must still
be fully licensed and the same metric must be used. In this environment,
Oracle must be fully licensed at the primary site, and if it is ever installed
and/or run at the secondary site, it must also be fully licensed there.
The key there is that “the Oracle data files, executables, binaries and DLLs are replicated”. If you only replicate data files, then you can likely avoid licensing the mirrored site. However, it will, of course, impact your speed when getting back in business on the mirrored site. On the flip side, the way to avoid remote-site licensing is to make sure it qualifies as a backup. From the same document:
Backup – In this method, a copy of the physical database structures of the
database is made. When the original data is lost, the backup files can be used
to reconstruct the lost information that constitutes the Oracle Database. This
backup copy includes important parts of the physical structures such as
control files, redo logs and data files. These physical files can be stored on a
server, storage array, disk drive(s), or Compact Disc(s). Solutions like Recovery
Manager/RMAN (included with Oracle Database EE or SE) and Oracle Secure
Backup or operating system utilities are used to create copies of physical files.
Oracle permits customers to store a backup copy of the database physical
files on storage devices, such as tapes, without purchasing additional licenses.
In the event of a failure, when the Oracle data is restored from tape or media,
and the Oracle Database is installed on the recovery server, licensing is
required. See illustration #3.
Note how it specifically distinguishes the situation where a restore onto new hardware requires installing the Oracle binaries. So, if you install or restore Oracle binaries on new hardware, you have to license it separately unless you decommission your old hardware. This means that it could really only work if you lose your primary site completely and move to the DR site for good. It does fit some business continuity scenarios, but it's definitely different from cold failover scenarios, and the 10-day rule is not applicable.
I also wanted to find the reference for RAC One Node and one stating that Cold Failover licensing rules can be applied. Oracle Database Licensing Guide doesn’t have this information (or at least I couldn’t locate it), but here is the statement from Oracle’s presentation. (Note that it doesn’t really have the legal status as does your licensing agreement):
All nodes on which RAC One Node is installed must be licensed for RAC One Node
* Exception: One spare node for cold failover/Online Database Relocation need not be licensed under the 10-day use rule
* Example: In a two-node cluster, customers can license RAC One Node on ONE node; 10-day rule applies for spare node
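As a toy illustration of the 10-day rule quoted above (hypothetical Python, and emphatically not licensing advice), the check amounts to counting the separate calendar days in a year on which the unlicensed spare node ran Oracle:

```python
from datetime import date

TEN_DAY_LIMIT = 10  # separate calendar days per year on the unlicensed spare

def spare_node_needs_license(failover_days, year):
    """Illustrative only: count the distinct calendar days in `year` on
    which the spare node ran Oracle (per the quoted rules, maintenance
    downtime counts too) and compare against the 10-day limit."""
    days_used = {d for d in failover_days if d.year == year}
    return len(days_used) > TEN_DAY_LIMIT

# Example: 11 separate failover days in 2014 -> spare must be licensed.
days = [date(2014, 1, d) for d in range(1, 12)]
print(spare_node_needs_license(days, 2014))  # True
```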
I used to be able to find standard OLSA (Oracle License and Service Agreement) at oracle.com in the past, but these days I only get here and can’t locate the OLSA. In any case, make sure to read your OLSA — it will have the definition of cold failover similar to what you see in the SIG I referred above.
Failover – In this type of recovery, nodes are arranged in a cluster and share
one disk array. A Failover cluster is a group of systems, bound together into a
common resource pool. In this type of recovery method, the Production node
acts as the primary node. When the primary node fails, one of the surviving
nodes in the cluster acts as the primary node. Solutions like Oracle Failsafe
(included with Oracle Database EE or SE, SE1), or third party vendor solutions
(e.g. Veritas, HP Service Guard, HACMP, Linux HA – Heartbeat) are used to
manage Failover environments. In this type of environment, Oracle permits
licensed Oracle customers to run some Technology Programs on an
unlicensed spare computer for up to a total of ten separate days in any given
calendar year. Once the primary node is repaired, you must switch back to the
primary node. Once the failover period has exceeded ten days, the failover
node must be licensed. In addition, only one failover node per clustered
environment is at no charge for up to ten separate days even if multiple nodes
are configured as failover. Downtime for maintenance purposes counts
towards the ten separate days limitation. Any other use requires the
environment to be fully licensed. In a failover environment, the same license
metric must be used for the production and failover nodes when licensing a
given clustered configuration. Additionally, when licensing options on a
failover environment, the options must match the number of licenses of the
associated database.
I will update the slides accordingly. In any case, please do your own homework and don’t trust my conclusions here. Don’t take this as licensing advice by any means. It’s been on my TO-DO list for a couple weeks now, and while I wanted to put a bit more effort before I blog about it, the reality is that the more I delay, the less likely I post it at all. That would be devastating. Your comments are more than welcome, whether they’re pointing out any errors, adding some info, or sharing your experience.

Oracle Database 12c Administrator Certified Master Approved Course List Released

By Brandye Barrington on Apr 01, 2014

Start Preparing Today For Your Oracle Database 12c Administrator Certified Master (OCM) Exam

Oracle Database 12c Administrator Certified Professionals - consider taking your certification and your career to the next level with the Oracle Database 12c Administrator Certified Master (OCM). Two advanced hands-on courses are required to obtain the Database OCM certification. Prepare yourself to be one of the first to hold this new certification when it is released by starting your training path today.

Oracle Database 11g Administrator Certified Masters, you already know the benefits of OCM certification. Plan on upgrading your certification to 12c when the exam is released, to keep your certification as well as your skills and knowledge current.

The approved course list for the Oracle Database 12c Administrator Certified Master Exam is now available. Courses that fulfill the training requirement are listed below.

Our solidarity with the people of Chile, especially the Iquique region

A big hug and our condolences to the brothers and sisters affected by today's earthquake.



A warning to Oracle?: SQL Server 2014, First Take: Powerful and flexible, with added in-memory support

Fuente: ZDNET.com

SQL Server 2014, First Take: Powerful and flexible, with added in-memory support

Summary: A new version of Microsoft's database has been released, with enhanced in-memory support and Azure-hosted backup.

Microsoft has been running SQL Server 2014 through its CTP programme for a while, and it's now time for the company's latest database release to reach general availability. We recently spent some time at Microsoft's Redmond campus trying out its key new features.

SQL Server 2014 is, at heart, very much the familiar SQL Server. It uses the same familiar management tools, the same T-SQL language, and the same APIs that connect it to your applications. That means you should be able to upgrade existing databases in place, to take advantage of its performance and scaling improvements. But that's only part of the story, as Microsoft has been looking at the ways we use data in modern applications, and added new features that should dramatically improve performance — and that also bring on-premises databases and the cloud closer together.
sql-server-2014-admintools
SQL Server 2014's Management Studio looks much like its predecessors, so you can get started managing databases as soon as you've installed the database and tools. Image: Microsoft

In-memory support: Hekaton

The biggest change is the launch of Hekaton, SQL Server's new in-memory OLTP and data warehousing tools. It's an important new feature because it adds in-memory support out of the box, focusing purely on performance. You don't need to put your whole database in-memory, either — just the tables that will get a performance boost. Smaller databases built using SQL Server 2014 Standard won't get access to these new features: they're only part of the Enterprise edition.
sql-server-2014-inmemorytable
You can get quite a performance boost from in-memory tables in SQL Server 2014 — this sample app gets a 14x speed-up. Image: Microsoft
In-memory OLTP makes a lot of sense. It's not a problem that can be solved by throwing parallel cores at a database — and as CPU performance is static, you can take advantage of lower memory costs by shifting to in-memory operations. The Hekaton engine has been designed from scratch to work with modern memory, and is a lot more than just a cache, building on techniques developed for Azure. There's a new query engine for in-memory operations, which doesn't use locks; instead it uses independent threads with low-level interlocks to ensure data integrity. Where a traditional lock can take thousands of CPU cycles, a Hekaton interlock takes just 10 or 20.
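The lock-versus-interlock distinction can be sketched in a few lines (a hypothetical illustration of version-checked optimistic updates in general, not Hekaton's actual algorithm): readers and writers work without holding a long lock, and a brief version check at commit time catches conflicts.

```python
import threading

class OptimisticRow:
    """Toy sketch (assumption: simplified, not Hekaton's real design) of
    replacing a heavyweight lock with a brief version check-and-retry."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        # A single cheap guard stands in for a hardware interlock (CAS).
        self._guard = threading.Lock()

    def update(self, fn, retries=100):
        for _ in range(retries):
            v, val = self.version, self.value  # read without locking
            new_val = fn(val)                  # compute optimistically
            with self._guard:                  # brief interlock, not a long lock
                if self.version == v:          # nobody changed it meanwhile
                    self.value, self.version = new_val, v + 1
                    return True
            # Version moved: someone else committed first, so retry.
        return False

row = OptimisticRow(100)
row.update(lambda x: x + 1)
print(row.value, row.version)  # 101 1
```

The expensive part of a traditional lock is waiting while someone else holds it; here the guarded section is a handful of instructions, which is the spirit of the "10 or 20 cycles" figure quoted above.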

SQL Server 2014 provides an analysis, migration and reporting (AMR) tool. In the SQL Server Management Studio, right-click on a table and choose Memory Optimisation Advisor to check the table and validate whether it can be converted to an in-memory table. The process builds the appropriate filegroup and copies the data to an in-memory table — and you should end up with a 20x to 40x speed-up.
sql-server-2014-storedprocedureinmemory
You can use built-in diagnostic tools to determine what elements of a database will benefit from a switch to in-memory, including stored procedures. Image: Microsoft
sql-server-2014-columnstore
Convert large tables to column stores to speed up access — and to make them easier to manage. Image: Microsoft
Massive databases, with millions of rows, can take advantage of the new Clustered Column Store Index, which improves compression, reduces I/O and fits more data in memory. Staged data is moved into a column store, from where it can be accessed more quickly.
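Why column stores compress so well can be shown with a toy run-length encoder (hypothetical Python; SQL Server's actual columnstore format is more sophisticated): storing one column's values together turns repeated values into short runs.

```python
def run_length_encode(column):
    """Toy sketch: collapse consecutive repeats of a column's values into
    (value, count) pairs. Low-cardinality columns shrink dramatically."""
    out = []
    for v in column:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

# A status column with millions of rows has only a few distinct values,
# so runs are long and the encoded form is tiny.
status = ["shipped"] * 4 + ["pending"] * 2 + ["shipped"]
print(run_length_encode(status))
# [('shipped', 4), ('pending', 2), ('shipped', 1)]
```

Less data on disk also means less I/O per scan and more of the table fitting in memory, which is where the query speed-up comes from.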

The Azure connection


Microsoft's Azure cloud platform mixes its own SQL Azure database service with SQL Server running on virtual machines as part of its IaaS (Infrastructure-as-a-Service) offering. Although SQL Server 2014 is still at heart an application, rather than a service, it's been designed to take advantage of the cloud, using Azure's storage and IaaS capabilities to give businesses of all sizes access to cloud-hosted disaster recovery.

Large databases can mean expensive, and often slow, backups. Using Azure as a subscription-based backup, there's no need for CAPEX, and you can use your existing backup techniques — just with Azure as a target. It's arguably more secure than a traditional backup: Azure holds three copies of your data, so it's always available. Getting started can take time, so Azure offers the option of letting you make your initial backup on a local disk, which is then mailed to Microsoft and stored in Azure, ready for the rest of your backups over the wire. Backups can be encrypted, and there's even support for older versions of SQL Server.

Managed backup tools automate the process. All you need to do is define the Azure account you're using and a retention period. SQL Server will then back up logs every 5MB of log growth, and take a full backup every 1GB of growth or at least every day. If you accidentally delete a log backup, the system will detect that you no longer have a consistent backup chain, and will take a full backup.
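The broken-chain detection described here can be sketched as follows (a simplified, hypothetical model; real SQL Server log backups are chained by log sequence numbers, or LSNs):

```python
def chain_is_consistent(log_backups):
    """Toy sketch: each log backup must start exactly where the previous
    one ended; any gap means a restore could not replay the full history,
    so a fresh full backup is needed."""
    for prev, cur in zip(log_backups, log_backups[1:]):
        if cur["first_lsn"] != prev["last_lsn"]:
            return False
    return True

backups = [
    {"first_lsn": 100, "last_lsn": 200},
    {"first_lsn": 200, "last_lsn": 350},
    {"first_lsn": 400, "last_lsn": 500},  # gap: the 350-400 backup is missing
]
print(chain_is_consistent(backups))  # False
```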
Azure and SQL Server can also be used as a disaster recovery (DR) solution, with an Azure IaaS SQL Server designated as an always-on replica. As soon as an on-premises server fails, you're switched to a cloud-hosted SQL Server instance preloaded with the last backup. It's not the cheapest approach, but it does mean you don't need to invest in running your own DR site. You can use any Azure region, and all you need to pay for is the IaaS VM and the storage you need. The backup tools validate the environment, and handle failures.

One cheaper option is to use SQL Server's Azure cloud backup as the basis of a cold-start DR service. Hosting a suspended SQL Server instance on Azure IaaS (which only costs you when your server runs), you can use your cloud backup data to update the databases associated with your cloud DR server, bringing you back online after a failure. It's not as fast as a failover onto an always-running DR server, but it's an economical approach that will work well for smaller businesses.

With hybrid cloud scenarios in mind, there's also tooling that will migrate a SQL Server database from an on-premises server to a virtual machine running on Azure. It's not just for SQL Server 2014, either, as the wizard will migrate SQL Server 2008, 2008 R2 and 2012, with support for VMs running SQL Server 2012 and 2014. It's an approach that makes it easier to handle database migrations, or to use Azure as a development platform for new applications — or, of course, to move from on-premises to cloud.

Deploying SQL Server 2014 in Azure is simplified by Microsoft providing VM images with SQL Server already installed. All you need to do is pick the image you want, deploy it, and you're ready to go. Once it's instantiated you can open SQL Server 2014's Management Studio, and use the Deploy Database to Windows Azure VM option to launch the wizard. Connect to the remote server, sign in to Azure, publish to a database in your VM, and (once the data has uploaded) away you go.

Conclusion

Microsoft's latest SQL Server is a product of a new way of working in the company's server business. Rather than a big-bang release, it's more of an incremental improvement with support for new ways of working and for new ways of securing your data. You don't need to take advantage of all its new features straight away; you can add them to your applications and management processes as and when you want to use them. The result is a powerful and flexible database that can get the most from modern hardware, and at the same time give you a route to delivering on the hybrid cloud promise, working with local and cloud data using familiar tools.
Simon Bisson

About Simon Bisson

Simon Bisson is a freelance technology journalist. He specialises in architecture and enterprise IT. He ran one of the UK's first national ISPs and moved to writing around the time of the collapse of the first dotcom boom. He still writes code.

Watch on demand the webcast "Introducing Oracle GoldenGate 12c: Extreme Performance Simplified"

Oracle SPARC Product Roadmap

Oracle's SPARC processor roadmap suggests that by the end of 2016 we could see Oracle Solaris 12.

Next year, the M and T series are expected to enter testing periods, incorporating features into the core of the operating system that, among other things, optimize Java usage, encryption, and more.



The competition keeps moving: CommVault integrates its Simpana software with the SAP HANA® platform to help protect real-time analytics environments

CommVault (http://www.commvault.com), a company specializing in unified enterprise data management solutions, announces that its Simpana® 10 software has achieved certified integration with the SAP HANA® platform. CommVault's software lets companies manage, protect, and recover large volumes of high-performance analytical and transactional data more easily and comprehensively.
As IT environments and the "Internet of Things" continue to transform the way companies engage with their customers, large Fortune 500 organizations need to adapt to rapidly changing market requirements in order to remain competitive. Managing fast-growing, constantly changing data also means that their IT departments must focus on efficiency and on ensuring data integrity, as well as on the protection and availability of strategic information. In this environment, avoiding downtime is key.
Companies are using SAP HANA as a cornerstone for delivering simplified analytics, planning, and predictive assessments. Simpana 10 software provides consistent, comprehensive backup and data recovery for environments running SAP HANA, with the speed, ease of use, and capabilities that predictive analytics demand in a fast-moving, real-time world.
The new SAP-certified integration with the SAP HANA platform is particularly timely given growing demand from CommVault customers for stronger protection of their SAP enterprise database applications. In addition, with CommVault's recently available support for Fujitsu's FlexFrame, an end-to-end operating platform for infrastructures running SAP solutions, customers can be more confident that these Fujitsu-backed solutions are highly scalable and fully covered for data protection and recovery.
Thanks to its single-platform architecture, Simpana software can help companies address their business requirements more efficiently and simply. Customers can proactively reduce the costs, risks, and complications associated with protecting and accessing information in environments running SAP software.
CommVault Simpana 10 leverages SAP HANA's distinctive capabilities to deliver key benefits, including:
  • Simplified administration through integration with SAP HANA Studio and policy-based automation.
  • Streamlined end-to-end backup for single-node, multi-node, and large-scale SAP HANA environments.
  • Robust, comprehensive backup and recovery, with automatic log backups, command-line support, and more efficient job management and reporting.
(IberoNews)

Oracle Advisor Webcasts Recordings for March 2014 Sessions

By Paul Anderson - Oracle on Apr 01, 2014

Did you miss one of the recent Business Analytics Advisor Webcasts?
The recordings and presentations (PDF) are available for the following recently hosted (March 2014) Advisor Webcast sessions:

  • Embedding OBIEE Content in WebCenter Portal
    ... recommended for technical users who are interested in integrating OBIEE with WebCenter Portal. The objective is to provide an overview of the system requirements and the steps required to perform a basic integration.
         Visit:
    Doc ID 1613815.1 to obtain recording and presentation.
    MOS Community for additional discussion about this session.
  • Installation and Configuration of Webservices for HPCM
    ... recommended for technical and functional users who need or would like to know how to install/configure and use Web Services to fully automate tasks in Hyperion Profitability and Cost Management (HPCM).
         Visit:
    Doc ID 1623123.1 to obtain recording and presentation.
    MOS Community for additional discussion about this session.
  • Interpreting the EPM Registry
    ... recommended for technical and functional users who have to support, install or maintain Hyperion Enterprise Performance Management (EPM) installations. This is a guide that will help users navigate the registry to find Hyperion EPM configuration settings.
         Visit:
    Doc ID 1623837.1 to obtain recording and presentation.
    MOS Community for additional discussion about this session.

To view Advisor Webcasts scheduled sessions & archived recordings for Business Analytics (EPM & BI) visit:
Oracle Business Analytics Advisor Webcasts Doc ID 1456233.1

PartnerCast: Oracle Database 12c for Partners

Nick Kritikos, Vice President of Partner Enablement, hosts Gordon Smith, Director of Database Product Management at Oracle, to discuss Oracle Database 12c for partners.

Every Saturday at 8:00 PM