Thursday, March 29, 2018

Oracle Hot Topics: OCR MANUAL BACKUP FAILS WITH U_CHECK_BKUP: BACKUPFILE COULD NOT BE VERIFIED


Bugs

Bug ID: 26261671
Product Area: Oracle Database - Enterprise Edition
Last Updated: Thu, 29 Mar 2018 00:28 GMT-08:00

Knowledge Articles

Product Area: Oracle Database - Enterprise Edition
Last Updated: Thu, 29 Mar 2018 03:46 GMT-08:00

Oracle Redefines the Cloud Database Category with World’s First Autonomous Database


Press Release 
Oracle Redefines the Cloud Database Category with World’s First Autonomous Database 
Delivers industry-leading performance, security capabilities, and availability at half the cost of Amazon Web Services 

Redwood Shores, Calif.—Mar 27, 2018 

At an Oracle event today, Oracle Executive Chairman and CTO Larry Ellison announced the availability of the first service based on the revolutionary new Oracle Autonomous Database. The world’s first self-managing, self-securing, self-repairing database cloud service, Oracle Autonomous Data Warehouse Cloud, uses machine learning to deliver industry-leading performance, security capabilities, and availability with no human intervention, at half the cost of Amazon Web Services. 

“This technology changes everything,” said Ellison. “The Oracle Autonomous Database is based on technology as revolutionary as the Internet. It patches, tunes, and updates itself. Amazon’s databases cost more and do less.” 

Oracle Autonomous Data Warehouse Cloud delivers all of the analytical capabilities, security features, and high availability of the Oracle Database without any of the complexities of configuration, tuning, and administration—even as warehousing workloads and data volumes change. The autonomous database is an entirely new class of offering which requires zero operational administration on the customer’s part, enabling cloud data warehousing that is: 

· Easy. The industry’s first one-step warehouse provisioning spins up a secure data warehouse with automatic backup, encryption, and a high availability architecture in mere seconds. Migration to cloud is simple due to full compatibility with existing on-premises databases. 

· Fast. Industry-leading query performance with no tuning required. Oracle Autonomous Data Warehouse Cloud is so fast that Oracle guarantees the same workload at half the cost of Amazon Web Services.[i] 

· Elastic. Independent, online scaling of compute and storage. The ability to dynamically grow or shrink resources enables true pay-per-use, dramatically lowering costs. 

The world’s most popular data warehouse database is now the world’s simplest and safest. Leveraging decades of experience and technology leadership to transform how companies benefit from database services, Oracle Autonomous Data Warehouse Cloud is the first of many Oracle Autonomous Database Cloud services. Other services in development include Oracle Autonomous Database for Transaction Processing, Oracle Autonomous NoSQL Database for fast, massive-scale reads and writes (commonly demanded by the Internet of Things), and Oracle Autonomous Graph Database for network analysis. Each of these offerings is tuned to its specific workload, and shares the defining characteristics of Oracle Autonomous Database services: 

· Self-managing. Eliminates human labor and human error to provision, secure, monitor, backup, recover, troubleshoot, and tune the database. Automatically upgrades and patches itself while running. 

· Self-securing. Protects from external attacks and malicious internal users. Automatically applies security updates while running to protect against cyberattacks, and automatically encrypts all data. 

· Self-repairing. Provides automated protection from all planned and unplanned downtime with up to 99.995 percent availability, resulting in less than 2.5 minutes of downtime per month, including planned maintenance. 

The Oracle Autonomous Data Warehouse is built on Oracle Database 18c, the first release in Oracle’s new annual database software release model. A hotbed of innovation with over 100 new features, Oracle Database 18c is now available on Oracle Cloud Services, Oracle engineered systems, and livesql.oracle.com. 

Today’s announcement follows on the heels of Oracle’s recently announced expansion of its Oracle Cloud Platform Autonomous Services. During this calendar year, Oracle plans to deliver Oracle Autonomous Analytics, Oracle Autonomous Mobility, Oracle Autonomous Application Development and Oracle Autonomous Integration services. 

[i] Oracle will cut your Amazon bill in half when you run the same data warehouse workload on Oracle Autonomous Data Warehouse Cloud Service as compared to running on Amazon AWS. The minimum workload is one hour for this offer. Offer valid through May 31, 2019. Terms and conditions apply. 


Follow Oracle Database via Blog, Facebook and Twitter

Contact Info 

Nicole Maloney
Oracle
650.506.0806

Jessica Moore
Oracle
650.506.3297

Research on Emerging Tech Aims to Make Work Easier


By Jake Kuramoto, Oracle Senior Director, Emerging Technologies

Recently, at Oracle CloudWorld in New York, Oracle President Thomas Kurian talked about how emerging technologies can change the way you do your job for the better.

Investigating this impact is the primary focus of the AppsLab, the Oracle Applications User Experience (OAUX) Emerging Tech team, and it is what my team and I explore every day.

A recent post on Forbes.com, "3 Examples Of How Emerging Tech Will Change Your Work," examined three specific ways emerging technology could be incorporated into enterprises. In this post, we're going to tell you about the research we've done on a few of those technologies, specifically artificial intelligence (AI), autonomous experiences, ambient interfaces, and the Internet of Things (IoT), and discuss our findings on how these technologies might affect your work and improve your user experience.

Artificial Intelligence

The term AI covers a lot of territory, but generally speaking, it refers to intelligence demonstrated by machines. Like many people, we’re also fascinated by the efficiencies AI could produce in the workplace. Rather than building a giant artificial brain, we’ve decided to start smaller—with chatbots.

We began thinking about ways to automate tasks using bots in 2010, but we didn’t begin developing exploratory projects in earnest until late 2015. As with most of our projects, we started by asking questions and researching ways people thought chatbots could help them do their jobs more efficiently.

By Oracle OpenWorld 2016 we had a working prototype, just in time for the big announcement of the Oracle Intelligent Bot Cloud Service. Our chatbot focused on tasks that Oracle Cloud Applications users wanted to streamline, such as simple human resources (HR) tasks like payroll and vacation queries that could be performed in applications already familiar to the user, using tools they already know how to use such as text and instant messaging.


We continue to gather research by showing chatbots in the Cloud UX Labs at Oracle HQ, and we now have several bots for different domains within Cloud Applications, including HCM, Sales, and ERP Cloud.

The feedback from people who see these chatbot demos typically is very positive; they can easily see the value of texting a chatbot with a simple inquiry like, “Did I get paid my bonus?” or a more complex one like, “Can I take a vacation this Easter?” Getting instant answers to questions like these creates a valuable efficiency that resonates with all Oracle users.

Autonomous Experiences

Through our research on chatbots, we discovered many cases where the AI we were building could save time by making basic assumptions for a user and automatically composing something in the system.

At Oracle OpenWorld 2017 last September in San Francisco, we demonstrated an expense bot. Usually, expense reports require a fair amount of manual work, but we found simple ways to automate the process.

For example, the demo shows the bot automatically uploading and parsing pictures of receipts. It then determines the amount of the expense and the time of day, makes an assumption on the type of expense, and creates an expense report. Several steps in creating such a report are now automated.

Another feature of the expense bot monitors the user’s email inbox for pdf receipts, such as hotel folios or rental car receipts, then uploads and parses them to itemize and categorize the expenses automatically.

This was all done via a chatbot that simply notified the user when expense items were created and ready for use. When the user was ready, an expense report was created and submitted.

We also built in an option where, if any policy violations occurred on the expense report, the user could log in to Oracle ERP Cloud to review and rectify them.

No one likes doing expenses, so people who see this demo are happy to offload the pain to a chatbot. Plus, we found new insights that can be valuable, such as automatic itemization of hotel folios, which allows for local tax comparisons that could lead to savings.

Ambient Interfaces

For several years, ambient interfaces have interested us. At CloudWorld, Kurian referred to “ambient human interfaces” as virtual assistants that users interact with by voice. 

Our investigations into ambient interfaces have included virtual assistants, as described in the previous sections, but we also view them as the next iteration of smartphone notifications, passively showing only the most important information.


At Oracle OpenWorld 2016, we showed an ambient visualization; the original goal was to create a piece of art that changed based on information gathered from the room. Anyone who understood why the visualization changed could get real-time data on where people were in the room, but to everyone else, it was just a piece of dynamic art.

As with many of our projects, this was a research initiative. By giving an example of what we meant by an ambient interface, people could apply their own knowledge to give us valuable examples of how this type of interface would help them with their work.

Internet of Things

The connected world around us has been an area of personal interest for our team for nearly a decade, and a few of our team members have been building internet-connected projects at home since before the term Internet of Things (IoT) was coined.

This long history has led to several IoT projects, including a real-world Smart Office in our lab at Oracle’s headquarters in Redwood City, California, and its portable equivalent that has been shown around the world. We also developed an IoT-based Community Quest with Oracle Developers at Oracle OpenWorld 2015, and our IoT workshops have been part of Oracle Code events since the program began in 2017. And at last year’s Oracle OpenWorld, we collaborated with the Oracle IoT Cloud Service team, Oracle Developers, Relayr, an Oracle partner, and Alpha Acid Brewing Company to create IoT Cloud Brewed Beer.

Our focus recently has been on making the sensor-filled environments around us more valuable. Now that physical spaces and objects are collecting data, what efficiencies can we uncover in that data to benefit our work? As always, we’ll be conducting research first, asking questions, and listening to people.

Follow our work: see our progress on our blog and in the Emerging Technologies section of our team website. Find us on Twitter (@theappslab), Facebook, and Instagram.

5 Benefits of Shifting to Smart Manufacturing


By: John Barcus, Vice President, Oracle Manufacturing 

Manufacturers are adopting smart technologies to improve efficiencies in their factories. But many companies are stuck in the early stages of adoption. They often discover that initiating a smart-manufacturing project is labor-intensive, time-consuming and costly. A common hurdle manufacturers encounter is the integration of multiple technologies. 

Cloud applications help manufacturers reduce costs related to these challenges and shorten project timelines. In the following sections, we’ll take a closer look at how smart technologies are increasing opportunities for manufacturers and best practices for efficient implementations. 

The Smart Factory Opportunity 

In the next five years, smart factories may contribute as much as $500 billion in added value to the global economy, according to Capgemini’s Digital Transformation Institute. The reason: Smart factories can produce more with lower costs, according to a recent report. In fact, manufacturers expect smart technologies to drive a sevenfold increase in annual efficiency gains by 2022. Some industries can expect to nearly double their operating profit and margin with smart technologies. 

Smart factories increase output, quality, and consistency. Additional benefits include: 
Streamlined and automated data: Smart technologies automate data collection and provide advanced production analytics, so managers can make more informed decisions. In a smart operating environment, manufacturers can tie their operations technology with business systems to measure their key performance indicators against business goals. 

Predictive maintenance: With better visibility, manufacturers can predict and resolve maintenance issues before they lead to downtime or product-quality issues. For example, sensors affixed to machines or devices may send condition-monitoring or repair data in real time, so manufacturers can identify problems more efficiently. 

Significant cost reductions: Manufacturers can identify waste and increase forecast accuracy when their operations and enterprise systems are connected. They have better insight into supply chain issues, such as inventory levels and delivery status, as well as demand cycles. With this information, they can reduce costs related to excessive inventory or unexpected production volume. 

Reduce workforce challenges: Automation helps manufacturers launch and complete projects with fewer workers. Having real-time access to data across multiple platforms frees workers to focus on their core responsibilities. This allows manufacturers to innovate faster without investing in additional resources. 

Enhanced productivity: Smart, connected systems help factories improve throughput. In a connected enterprise, manufacturers have seamless visibility into bottlenecks, machine performance, and other operational inefficiencies. With this data, manufacturers can make adjustments to increase yields, improve quality, and reduce waste. 
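The predictive-maintenance benefit described above can be sketched as a simple threshold check over recent sensor readings. This is a minimal, hypothetical illustration (the function and field names are ours, not part of any Oracle product); a real condition-monitoring system would stream readings and use learned baselines rather than a fixed limit:

```python
from statistics import mean

def maintenance_alerts(readings, window=5, limit=80.0):
    """Flag machines whose recent average sensor reading exceeds a limit.

    readings: dict mapping machine_id -> list of sensor values
              (e.g. bearing temperature in degrees Celsius).
    window:   how many of the most recent readings to average.
    limit:    threshold above which a maintenance alert is raised.
    """
    alerts = []
    for machine_id, values in readings.items():
        recent = values[-window:]  # only the latest readings matter
        if recent and mean(recent) > limit:
            alerts.append(machine_id)
    return sorted(alerts)
```

With per-machine thresholds and rolling windows, the same pattern lets managers see trouble building before it causes downtime, which is the visibility the article describes.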

Efficient IoT Integration and Implementation 

The smart manufacturing transformation begins with the Internet of Things (IoT). IoT-enabled devices send performance-related information via sensors over a network. During the early-adoption phase, many manufacturers struggle to build IoT applications and integrate them with disparate business applications. 

According to the previously cited survey from Capgemini’s Digital Transformation Institute, digitally mature manufacturers apply the following best practices to overcome these challenges: 
  • 85% say they hire consulting firms to help build a business case and roadmap 
  • 67% say they partner with technology providers for feasibility studies 
  • 63% use end-to-end technology solutions 
One of the ways Oracle is helping manufacturers address the integration challenge is through a partnership it formed with Mitsubishi Electric. The companies are providing manufacturers with an open platform for developing IoT applications that integrate seamlessly with Oracle cloud apps. 

IoT apps developed on the Mitsubishi Electric FA-IT Open Platform collect data from factory equipment for visualization and analysis. Oracle IoT Cloud, which is designed to integrate with other Oracle cloud-based business systems, receives this information from these IoT apps in real time. 

“Mitsubishi Electric’s new FA-IT Open Platform is based on edge computing to accelerate IoT utilization for smart manufacturing,” says Toshiya Takahashi, corporate executive group senior vice president, factory automation system, Mitsubishi Electric Corporation. “By adding Oracle Cloud services to this platform, we believe that it will be possible to visualize factories and build an application development environment. To provide the platform to customers early, we will also work with partner companies, including IT companies, to develop applications utilizing the platform." 

Seek IoT Expertise to Speed the Adoption Process 

Smart technologies offer a significant competitive advantage for manufacturers. But they’re not easy to implement. Manufacturers can overcome early-adoption challenges with the help of experienced consultants. When seeking a consulting firm, it’s important to consider advisors who have significant experience helping factories transition to smart technologies. This will help shorten the implementation phase and put manufacturers on the right path from the beginning. 

A do-it-yourself implementation is typically costly and time-consuming. Oracle and our partners have the experience and expertise to help manufacturers build a comprehensive cloud-based solution using an efficient, cost-effective process that delivers immediate results. Check out our latest ebook for more information. 

Oracle Opens ‘Phenomenal’ Campus in Texas State Capital


AUSTIN, TEXAS—State capital. One of the world’s premier destinations for live music. And now, with the opening of Oracle’s sprawling 40-acre campus on the south shore of Lady Bird Lake, a global center of cloud computing innovation. 

The centerpiece of the new Oracle complex, a 560,000-square-foot office building, is home initially to 2,500 employees, but that number is expected to grow. At the March 22 launch ceremony, Larry Ellison, Oracle’s executive chairman and chief technology officer, said as many as 10,000 employees could eventually work there. “We have big plans,” Ellison said. 

The state-of-the-art complex—with a full-service restaurant, food truck, Starbucks coffee shop, game rooms, fitness center, and 295-unit apartment compound—was designed to help Oracle recruit top talent, including recent college graduates hired as part of the company’s immersive “Class Of” sales training program. “Oracle is expanding in Austin to attract, hire, and train the best talent to support the unprecedented growth of our cloud business,” CEO Mark Hurd said. 

The building will include the first of Oracle’s Next-Generation Contact Centers, designed to support fast, efficient customer interactions using the latest technologies, including curved, wide-screen monitors, real-time intelligence with contextual data, and click-to-call capability. “It’s a powerful way to expand our reach and make it easier for our reps to do their jobs,” says Downs Deering, senior vice president of the Oracle Digital sales team. “They have information at their fingertips.” 

 

A new Oracle Cloud Solution Hub, staffed by Oracle engineers, will demonstrate innovative projects built with and for customers using emerging technologies such as artificial intelligence, virtual reality, and Oracle Blockchain Cloud Service. The hub, and three more like it at other Oracle locations, will help customers conceive their own ideas for digital transformation. 

Austin is also the first US site of the Oracle Startup Cloud Accelerator program, launched in 2016 and already operating in Brazil, England, France, India, Israel, and Singapore. The six-month program gives startups access to Oracle resources, including mentoring, workspace, and free Oracle Cloud credits, as well as to Oracle customers. 

Location, Location 

With a metro area population of 2 million and growing, Austin thrives on its decades-long heritage of food, music, the outdoors, and a creative, entrepreneurial spirit. Oracle’s campus will help the company not just fit into the local scene, but also compete with other tech companies looking to hire from universities in the state, including Baylor, Texas A&M, and the University of Texas at Austin. 

In his remarks at the opening ceremony, Ellison said it is important for Oracle to be situated in the heart of Austin, and in particular near the water so that employees can kayak and hike. (Lady Bird Lake is a reservoir on the Colorado River.) “Austin is one of the places we want to be because we think that’s where our people want to be,” he said. “We want to develop the kind of facilities where you feel good about coming to work every day.” 

 

Ellison refused to consider sites on the outskirts of the city, and he shared an anecdote about his search for the right location. When the car he and Hurd were in to go view real estate headed miles out of town, Ellison told the driver to turn around: “I said, ‘This isn’t Austin. I’m not getting out of the car.’” 

The site on Lady Bird Lake was just what Ellison and Hurd were looking for. “We think this is a phenomenal facility to house fantastic people, who hopefully will come to Oracle whether they're experienced or right out of college, and be able to develop their careers, learn new technologies, and grow as the company grows,” Ellison said. 

Deering describes Oracle’s new digs—including a rock-climbing wall, outdoor collaboration spaces, and Austin-themed murals—as an “experience” that goes well beyond the new furnishing and workspaces. “This is the first place I’ve ever worked at,” he says, “that my kids will think is cool.” 

John Foley is director of strategic communications for Oracle. 


By: Tansy Brook, Director of Product Marketing 

We’ve all received an email that seemed a little suspicious or made an unusual request for financial or personal information. Most consumers know to delete these emails right away because they’re likely a scam. But what if you received an email from your CEO or CFO, and it sounded just like them? What if they asked you do something you were expecting to do anyway—such as pay a bill? What if they mentioned their children’s names and other personal details? 

Welcome to the new world of Business Email Compromise (BEC). In this growing form of cybercrime, fraudsters impersonate a business email—usually someone in an executive position—and then contact an employee to ask for a wire transfer or employee information. These phishing scams increased an astounding 2,370% between 2015 and 2016, and caused $5.3 billion in losses, according to the FBI. 

“The group at largest risk are small- to medium-size businesses (SMBs),” says Cary Scardina, a supervisory special agent with the Federal Bureau of Investigation’s Cyber Division in Washington, D.C. “I’ve seen small businesses get hit with losses from $45,000 to several million; it can be devastating, depending on the size of the company.” Fortunately, there are steps businesses can take to reduce their risk of becoming a BEC victim—and the work starts with simply being aware. 

Beyond the Usual Threats 

When Scardina describes BEC, he narrows the crime down to one word: Impersonation. At the core of the scam, cybercriminals are simply impersonating an employee’s boss or company finance executive. “But it’s now of a higher quality than in years past,” Scardina says. 

These are not emails from far-away royalty who need your employees’ help. Instead, BEC fraudsters are hacking into employee email accounts and then conducting sophisticated surveillance, sometimes for weeks or more. The attacker will track email traffic to learn how a person talks, how wire transfers and other requests are made—even what nicknames employees might use for each other. 

When it comes time to conduct the actual crime, a fraudulent email may come from either an authentic or spoofed account. With a spoofed account the domain is slightly off. For example, a business name may contain an extra letter or an email might add a period between the first and last name. The attackers then ask the recipient to make a wire transfer payment—and include instructions for how to do so. 

SMBs Are Prime Targets 

Increasingly, the cybercriminals are phishing for company W-2 information, which they use to file fraudulent tax returns. The IRS noted that more than 200 companies—which translates to hundreds of thousands of employees—were compromised by such scams last year. 

Scardina says that SMBs are prime candidates for business email compromise wire transfer and W-2 email fraud. “That’s where you can have the intersection of high-dollar amounts and lower IT security,” he says. The real estate industry has witnessed much of the BEC activity, largely because of the transactions realtors and others involved are conducting. But the criminals aren’t picky. 

Scardina has also seen medical offices, law firms and even pig farms targeted by these spoofed email schemes. In many cases, the companies don’t catch the fraudulent transfer for a few days. These issues are time-sensitive: By then, it can be hard to reverse the transfer or trace the money before it is broken up and divided into multiple overseas accounts. 

Get Ahead of Scammers 

So how do you keep your SMB safe from BEC scams? As with many things, the best defense is a good offense. Scardina and the FBI offered the following guidance for reducing your risk of becoming a BEC victim: 

1. Verify money transfer requests. 

Institute a company policy that requires employees to verify requests for wire transfers—ideally with a phone call authentication. This is especially vital if the transfer request is deemed urgent by the email sender, Scardina says. In addition, advise employees to not discuss the details of wire transfers or bank accounts over email and to confirm any changes in the process with the bank or vendor. 

2. Implement detection systems. 

Task your IT team with creating a system that flags emails from domains that are similar to your own and could be used to create a look-alike domain. Other helpful tips include adding a rule in your email account that automatically flags emails in which the reply address is different from the “from” address. Also, be aware of the external applications your employees are connecting to with their computers by implementing a Cloud Access Security Broker (CASB) application. 
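The two detection rules described here—flagging look-alike domains and mismatched reply addresses—can be sketched in a few lines. This is a hypothetical illustration using only Python’s standard library (the domain and function names are ours); a production filter would run inside your mail gateway and use a curated list of confusable domains:

```python
import difflib
from email.message import EmailMessage

COMPANY_DOMAIN = "example-corp.com"  # placeholder for your real domain

def is_lookalike(domain, known=COMPANY_DOMAIN, threshold=0.85):
    """Flag domains that are almost, but not exactly, the company domain,
    e.g. an extra letter or a swapped character."""
    if domain == known:
        return False
    return difflib.SequenceMatcher(None, domain, known).ratio() >= threshold

def flag_message(msg):
    """Return a list of BEC warning flags for an email message."""
    flags = []
    sender = msg.get("From", "")
    reply_to = msg.get("Reply-To", "")
    sender_domain = sender.rsplit("@", 1)[-1].strip("<> ")
    if is_lookalike(sender_domain):
        flags.append("lookalike-domain")
    # Rule from the article: reply address differs from the "from" address.
    if reply_to and reply_to.rsplit("@", 1)[-1].strip("<> ") != sender_domain:
        flags.append("reply-to-mismatch")
    return flags
```

For example, a message from `ceo@example-corpp.com` (note the extra letter) with a reply-to at an unrelated domain would raise both flags, while legitimate internal mail raises none.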

3. Educate your employees. 

Run social-engineering exercises and ensure that your employees are aware of BEC warning signs. Red flags that an email may be fraudulent include: it provides wire information or requests changes to existing information, asks for an expedited payment, or asks for W-2 information. “Flagging these should just be automatic,” Scardina says. “Employers should have a policy for how to do so.” 

If you do suspect you’ve been a victim of BEC, Scardina says the first thing to do is to call the financial institution that sent the wire. In some cases, the bank can initiate a recall of the funds. Then call the FBI and file a report at IC3.gov. That way the FBI can track the details of your case. Lastly, have your employees change their passwords to their email and any other company networks. 

4. Adopt a passphrase. 

Using longer passwords and changing them on a regular basis seems like a given. But the traditional standards for passwords encourage people to use a single, difficult-to-remember password across all of their accounts. Great news! New research shows that rather than using a complicated mixture of special characters, numerals, and capitalizations, a passphrase is more secure and easier to remember. Longer passwords containing multiple upper- and lowercase words are more secure. Consider choosing something relevant to you (like a book title) that wouldn’t be public knowledge. This lightens the “memory burden” on users, making them more inclined to follow this security best practice. 

Change your passphrases on a regular basis. The new version can be similar to the previous phrase, for example from “thesunalsorisesinJAN” to “thesunalsorisesinFEB.” 
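The month-suffix rotation in the example above can even be scripted. A minimal sketch (the function name is ours, and this only handles passphrases that end in a three-letter month abbreviation, as in the article’s example):

```python
import calendar

def rotate_month_suffix(passphrase):
    """Given a passphrase ending in a 3-letter month abbreviation
    (e.g. "thesunalsorisesinJAN"), return the next month's variant."""
    months = [m.upper() for m in calendar.month_abbr[1:]]  # JAN..DEC
    suffix = passphrase[-3:].upper()
    if suffix not in months:
        raise ValueError("passphrase does not end in a month abbreviation")
    nxt = months[(months.index(suffix) + 1) % 12]  # wrap DEC -> JAN
    return passphrase[:-3] + nxt
```

So `rotate_month_suffix("thesunalsorisesinJAN")` yields the February variant from the article, and December wraps around to January.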

Business email compromise remains on the rise—and the cyber criminals are only getting smarter. Take these precautions to educate your employees against threats and prevent your business from losing time, money and more to an email scam. 

4 Ways to Protect Your SMB from BEC 

Business email compromise scams are on the rise, costing $5.3 billion in losses since 2013. To reduce your risk: 

1. Verify email wire transfers and PII requests, even from people you know. 
2. Create fraudulent email detection systems if you have an IT security team. 
3. Educate your employees. 
4. Use long passwords, change them routinely, and do not reuse them for multiple accounts.

Machine Learning Challenges: What to Know Before Getting Started


The rewards of machine learning can be compelling, and it may make you want to get started, now. At the same time, however, you'll want to consider machine learning challenges before you start your own project. 

This article isn’t meant to scare you away; rather, it’s meant to ensure you’re prepared and that you’re carefully thinking about what you’ll need to consider before you get started. 

We spoke with Brian MacDonald, data scientist on Oracle’s Information Management Platform team, about the pitfalls he’s seen and what companies can do to avoid them. 

These machine learning challenges include: 

· Addressing the skills gap 
· Knowing how to manage your data 
· Operationalizing the data 

1. Address the Machine Learning Skills Gap 

The biggest difficulty, of course, is the skills gap that comes with using machine learning in a big data environment. There’s a certain community of people who assume that big data makes everything easy and that getting started will be simple. It won’t be. 

The biggest challenge you’re going to find is discovering the right people. There is a big demand for people who are skilled in machine learning and a small pool to choose from. But as we described in our article about machine learning success, having executive support is key to this. If you have executive support, you’re also going to have the funding to find and recruit those valuable people. 

Here’s something to think about. If you’re in a situation where you’re very sensitive to cost because skilled data scientists are expensive, then you probably don’t have a big enough business problem to make machine learning worth doing. 

Let’s say a skilled data scientist costs your company $300,000 to $400,000 (including all benefits and incentives). If that person can’t help you solve a problem that’s worth at least a million a year, then you probably don’t need that person. Right? 

On the other hand, if you truly believe this person (or team of people) can help you solve a problem in the tens of millions, then what are you waiting for? 

It is difficult to find people. But if it’s truly important to your company, you can find them. 

Here’s another issue to think about: the tools and software. While there are of course tools that will help, you’ll rarely be able to find the exact, perfect machine learning tools you need that are ready to go for you, right out of the box. You’ll have to think about the tooling you’re going to use. 

Python, R, SQL, TensorFlow? And if you use those, how will they work with your data lake? And how will you handle the setup and configuration that can create challenges? Think through the details before you get started and ensure you have enough funding. 

2. Know How to Manage Your Big Data 

Machine learning is a messy process. And just having a big data platform doesn’t automatically mean it will be easier. In fact, it might make it messier, because you’ll have more data. That data enables you to do more, but it also means more data prep that has to be done. 

You’ll have to think holistically about how you’re going to approach the problem. Here are some questions to think about: 

· Where is your data coming from? 
· How are you going to approach the problem? 
· How are you hoping to handle your data preparation? 
· And once that’s done, how will you build your models and operationalize everything? 
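To make the data-preparation question less abstract, here is a minimal standard-library sketch of the kind of cleanup it implies: deduplicating records, normalizing values, and filling gaps before modeling. The record layout and field names (`customer_id`, `name`, `region`) are hypothetical:

```python
def prepare(records):
    """Toy data prep: drop duplicate customer IDs, normalize names,
    and fill a missing 'region' with a default before modeling."""
    deduped = {}
    for rec in records:
        deduped.setdefault(rec["customer_id"], rec)  # keep first occurrence
    cleaned = []
    for rec in deduped.values():
        rec = dict(rec)
        rec["name"] = rec["name"].strip().title()    # normalize casing/whitespace
        rec["region"] = rec.get("region") or "UNKNOWN"
        cleaned.append(rec)
    return cleaned

raw = [
    {"customer_id": 1, "name": "  alice smith ", "region": "west"},
    {"customer_id": 1, "name": "Alice Smith", "region": "west"},  # duplicate
    {"customer_id": 2, "name": "bob jones", "region": None},      # missing region
]
print(prepare(raw))
```

In a real big data environment each of these steps becomes its own pipeline stage, which is exactly why the prep work grows with the data.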

If you don’t already have a good BI or analytics practice, and you’re not already using data in every way you can think of, then jumping over to machine learning is going to be a real challenge. Already having data-driven decision making is absolutely critical. If you don’t have that, we recommend putting it in place before you get started with machine learning.

If you do decide to start, here are some other considerations. Think about them carefully before you get started: 

Rapid Change 

In the machine learning world, innovation comes quickly, which means rapid change. What’s good today may not be so good tomorrow, and you can’t always rely on the software because it’s a more volatile space. You may run into more version conflicts and compatibility issues.

The Sheer Volume of Data 

With machine learning, you’ll have to deal with data: lots and lots of different kinds of data. Deciding whether to use all of it or to sample, and working out the processes around it, can be a challenge, especially as you get deeper into your data and start dealing with data movement.

Ensure you’re up to facing that challenge and that you have your plan in place. 

3. Operationalize Your Big Data 

What’s the biggest issue most data scientists face? It’s operationalizing the data. 

Let’s say you’ve built a model and it can predict factors that lead to churn. How do you get that model out to the people who can affect those numbers? How can you get it to the CRM or mobile app? 

If you have a model that predicts equipment failure, how can you get it to the operator in time to prevent that failure? There are many challenges with taking a model and making it actionable. And it’s probably the biggest technical challenge that exists for data scientists these days. 
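The delivery mechanism matters as much as the model. One common pattern is to wrap the fitted model behind a small scoring endpoint that the CRM or mobile app can call. The sketch below stands in for the model with a hypothetical scoring rule (the feature names and threshold are illustrative) and shows only the request/response plumbing:

```python
import json

def churn_score(features):
    """Hypothetical stand-in for a fitted churn model: more support calls
    and shorter tenure push the risk score up."""
    score = 0.1 * features["support_calls"] + 0.5 / max(features["tenure_months"], 1)
    return min(score, 1.0)

def handle_request(body, threshold=0.5):
    """What a scoring endpoint would do: parse the request, score it,
    and return an actionable flag for the retention team."""
    features = json.loads(body)
    score = churn_score(features)
    return json.dumps({"churn_score": round(score, 2),
                       "flag_for_retention": score >= threshold})

print(handle_request('{"support_calls": 6, "tenure_months": 2}'))
```

The point is that the model only creates value once something downstream, like a CRM workflow, actually consumes its output.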

You can build the most beautiful models in the world. But will your C-suite truly care if it’s not actually making an impact on the company’s bottom line? You might think your part of the bargain is just to make the data available. But it’s not. You have to make sure your data is actually going to be used. Gaining executive support is hugely helpful for this. 

So machine learning isn’t really easy. But it can accomplish big things. To inspire you and remind you of what’s possible, we’re sharing a real-life customer example and their machine learning project. 

Real-Life Machine Learning and Big Data Example 

This company is one of the largest providers of wireless voice and data communications services in the United States. 

Business Challenges: 

· Credit Risk: The equipment leasing and loan program run through their financing arm has to write off large amounts of bad debt every year. They wanted to reduce bad loans and defaults, which would add millions to their bottom line every year. In addition, the ability to impact pending collections would dramatically help with cash flow. 

· Customer Experience and Personalization: Customer churn costs this company millions a year. Early identification and targeting of both potential churn and new high-value customers through personalization and segmentation can dramatically increase the number of net new subscribers and reduce churn. 

· Operational Effectiveness: This company sought enhanced targeted marketing and campaign effectiveness through network optimization and data monetization. 

Technology Challenges: 

· This telecom company wanted to detect fraudulent activity much earlier and integrate data from multiple structured and unstructured sources to improve customer scoring. This would enable the company to provide customized offers and reduce risk. 

· They also wanted the ability to store and analyze large volumes of customer data to help the business develop a better ability to segment customers and predict their behavior for personalized offers. 

· They sought to optimize pricing through new advanced what-if analysis. 

In order to accomplish this, the company purchased a wide variety of Oracle big data products, including Oracle GoldenGate for Big Data, which is part of Oracle Data Integration Platform Cloud. 

Addressing the skills gap, managing the data, and operationalizing it are challenges that need to be dealt with, but they can be handled successfully. And the results can be incredible. For more information, read our tips on success with machine learning. 

And if you'd like to try building a data lake and using machine learning on the data, Oracle offers a free trial. Register today to see what you can do.

Beginning January 2019, No More Oracle Database 11g & 12c R1 in Cloud Service


Oracle Cloud News

We're updating the selection of available database releases within the Oracle Database Cloud Service. Beginning January 2019, Oracle Database 11g Release 2 and Oracle Database 12c Release 1 will no longer be available for service instance provisioning within the Oracle Database Cloud Service.
How will this change affect me?

These release versions will no longer be available as a selection for provisioning in the service web portal, the API, or the command-line interface (CLI). Existing service instances using these relational database management system (RDBMS) releases will continue to operate and be fully supported through their Premier Support period or their Extended Support period (when applicable), as found in the Oracle Lifetime Support document. At the end of the Premier Support period or the Extended Support period, these RDBMS releases will no longer be supported in the Oracle Database Cloud Service.

Please note that this announcement is specific to the provisioning of new database instances via Oracle Database Cloud Service and that this change has no impact on the use of these RDBMS releases within on-premises environments or within Oracle Cloud IaaS environments.

Do I need to take action?

If you are using, or are planning to use, one of the release versions listed above, Oracle recommends that you plan an upgrade to a supported RDBMS release (e.g., Oracle Database 12c Release 2 or Oracle Database 18c) before services using Oracle Database 11g Release 2 or Oracle Database 12c Release 1 enter the unsupported state.

When will this change take place?

This change is expected to take place within the month of January 2019, at which point Oracle Database 11g Release 2 and Oracle Database 12c Release 1 will no longer be available for service instance provisioning.

Saturday, March 24, 2018

Installing Oracle Database 18c for EXADATA on a VM with VirtualBox and OEL 7.3

WARNING: This post should be used as a learning lab and not for production. The on-premises version of Oracle Database 18c will be available for production in the second half of this year.

First, to build this learning lab we need a virtual machine configured and prepared for installing the Oracle Database software, following the steps you can find on this same blog, on the right-hand side, under the e-book titled "Instalando Oracle Linux UEK y Oracle 12c en una VM en 1 hora o menos."

Although that document covers preparing a VM with Oracle Linux 6.x for Oracle Database 12c, Oracle Forms & Reports, Oracle WebLogic, or any product from the various on-premises software families, it can also be used as a guide for installing Oracle Linux 7.3.

That is the version on which I prepared this lab, and it has worked properly.

Second, as discussed in the FB LIVE video I did a few days ago, we are going to download the 18c database engine for Exadata x86.

You can get the software from the Oracle Software Delivery Cloud site. To do so, you need to be registered beforehand on the oracle.com portal.


The download is approximately 3.76 GB.

We will unzip the file into the path we want to use as ORACLE_HOME. In my case, that path is /opt/app/oracle/product/18.0.0/dbhome_1


When the unzip of the database engine software finishes, it will have created an entire directory hierarchy.

Manually edit the /etc/oratab file and add the chosen ORACLE_HOME directory, together with the name you want to give the database service you are going to create, whether at the container level or as a single instance.

In my case, for this first installation, I am going to create a container database named CDB1.

Keep in mind that the engine we downloaded will let you install both the Oracle Database SE and EE editions.

The SE edition is single-tenant technology, while EE is multitenant.

In SE you can have at most 1 plugged-in PDB per container database; in EE you can have up to 4,096 PDBs per container.

As noted, let's add the line to the oratab file.

[oracle@lab1 ~]$ more /etc/oratab
#
# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.

# A colon, ':', is used as the field terminator.  A new line terminates
# the entry.  Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME::
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
cdb1:/opt/app/oracle/product/18.0.0/dbhome_1:N

Now we can use the environment variable configuration script for the "oracle" user.

[oracle@lab1 ~]$ . oraenv
ORACLE_SID = [oracle] ? cdb1
The Oracle base has been set to /opt/app/oracle

Once the previous step is done, go to the directory where you unzipped the software and locate the classic "runInstaller" file. It must be run as the "oracle" user.

Here is the first catch: you must choose the software-only installation. DBCA does not start properly in a VM, because this build is made for EXADATA and not for our infrastructure.


Once this part is complete, we need to configure the parameter file for the initial database container.

To do that, we move to the software's ORACLE_HOME directory and go into the dbs subdirectory.

[oracle@lab1 ~]$ cd $ORACLE_HOME
[oracle@lab1 dbhome_1]$ cd dbs

There we will create the file initcdb1.ora with the following content:

db_name='cdb1'
memory_target=1G
processes = 150
db_block_size=8192
open_cursors=300
undo_tablespace='UNDOTBS1'
_exadata_feature_on=true
enable_pluggable_database=true
# You may want to ensure that control files are created on separate physical
# devices
control_files = (ora_control1, ora_control2)

Look carefully: the parameter _exadata_feature_on=true is what makes the difference in this instance configuration file. This hidden parameter tricks the database software into running on a system that is not really an EXADATA.


Now let's start up the services for the instance we are going to configure. This must be done in nomount mode.

[oracle@lab1 dbs]$ sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 Production on Mon Mar 5 20:56:44 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2017, Oracle. All rights reserved.
SQL> connect / as sysdba

Connected to an idle instance.

SQL> startup nomount force
ORACLE instance started.
Total System Global Area 1073741008 bytes
Fixed Size 8903888 bytes
Variable Size 616562688 bytes
Database Buffers 440401920 bytes
Redo Buffers 7872512 bytes

SQL> exit
Disconnected from Oracle Database 18c Standard Edition 2 Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

[oracle@lab1 dbs]$ pwd

/opt/app/oracle/product/18.0.0/dbhome_1/dbs

Now we move to the directory where we will create our new Oracle 18c container database.

The script to create the container manually can be written as follows:

CREATE DATABASE "cdb1"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE '/opt/app/oracle/oradata/cdb1/system01.dbf' SIZE 700M REUSE
  AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/opt/app/oracle/oradata/cdb1/sysaux01.dbf' SIZE 550M REUSE
  AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '/opt/app/oracle/oradata/cdb1/temp01.dbf' SIZE 20M REUSE
  AUTOEXTEND ON NEXT  640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE  '/opt/app/oracle/oradata/cdb1/undotbs01.dbf' SIZE 200M REUSE
  AUTOEXTEND ON NEXT  5120K MAXSIZE UNLIMITED
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('/opt/app/oracle/oradata/cdb1/redo01.log') SIZE 50M,
GROUP 2 ('/opt/app/oracle/oradata/cdb1/redo02.log') SIZE 50M,
GROUP 3 ('/opt/app/oracle/oradata/cdb1/redo03.log') SIZE 50M
USER SYS IDENTIFIED BY "oracle" USER SYSTEM IDENTIFIED BY "oracle"
enable pluggable database
seed file_name_convert=('/opt/app/oracle/oradata/cdb1/system01.dbf','/opt/app/oracle/oradata/cdb1/pdbseed/system01.dbf',                        '/opt/app/oracle/oradata/cdb1/sysaux01.dbf','/opt/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf',                        '/opt/app/oracle/oradata/cdb1/temp01.dbf','/opt/app/oracle/oradata/cdb1/pdbseed/temp01.dbf',                     '/opt/app/oracle/oradata/cdb1/undotbs01.dbf','/opt/app/oracle/oradata/cdb1/pdbseed/undotbs01.dbf');
[oracle@lab1 dbs]$ cd /opt/app/oracle/oradata

We will save that file under the name 1.sql.

[oracle@lab1 oradata]$ ls -la
total 8
drwxr-x---. 4 oracle oinstall 40 Mar 5 20:54 .
drwxr-xr-x. 9 oracle oinstall 4096 Mar 5 13:57 ..
-rw-r--r--. 1 oracle oinstall 1503 Mar 5 20:54 1.sql
drwxr-xr-x. 3 oracle oinstall 20 Mar 5 15:21 cdb1
drwxr-x---. 4 oracle oinstall 32 Mar 5 15:22 CDB1

Now we connect to the instance started in nomount mode and run our 1.sql file to create the container database.

[oracle@lab1 oradata]$ sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 Production on Mon Mar 5 20:57:35 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2017, Oracle. All rights reserved.
SQL> connect / as sysdba

Connected.

SQL> @1

Database created.

We shut the database down and start it back up in normal mode.

SQL> shutdown immediate

Database closed.
Database dismounted.

ORACLE instance shut down.

SQL> startup

ORACLE instance started.

Total System Global Area 1073741008 bytes
Fixed Size 8903888 bytes
Variable Size 616562688 bytes
Database Buffers 440401920 bytes
Redo Buffers 7872512 bytes

Database mounted.

Database opened.

We create the server parameter file (spfile) from the pfile we wrote earlier.

SQL> create spfile from pfile;

File created.

And we restart the database again.

SQL> startup force

ORACLE instance started.

Total System Global Area 1073741008 bytes
Fixed Size 8903888 bytes
Variable Size 616562688 bytes
Database Buffers 440401920 bytes
Redo Buffers 7872512 bytes

Database mounted.

Database opened.


Now that we have restarted the database, we are going to run the container creation script.

The classic catalog and catproc scripts should not be run separately.

To create the container, we will use the catcdb script. This process will take a few minutes to complete. Be patient; our environment will be ready to work with soon. During the run you will be prompted for some input, so stay close by to enter it.

[oracle@lab1 oradata]$ sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 Production on Mon Mar 5 21:02:49 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2017, Oracle. All rights reserved.

SQL> connect / as sysdba

Connected.

SQL> @?/rdbms/admin/catcdb

SQL>

SQL> Rem The script relies on the caller to have connected to the DB

SQL>

SQL> Rem This script invokes catcdb.pl that does all the work, so we just need to

SQL> Rem construct strings for $ORACLE_HOME/rdbms/admin and

SQL> Rem $ORACLE_HOME/rdbms/admin/catcdb.pl

SQL>

SQL> Rem $ORACLE_HOME

SQL> column oracle_home new_value oracle_home noprint

SQL> select sys_context('userenv', 'oracle_home') as oracle_home from dual;

SQL>

SQL> Rem OS-dependent slash

SQL> column slash new_value slash noprint

SQL> select sys_context('userenv', 'platform_slash') as slash from dual;

SQL>

SQL> Rem $ORACLE_HOME/rdbms/admin

SQL> column rdbms_admin new_value rdbms_admin noprint

SQL> select '&&oracle_home'||'&&slash'||'rdbms'||'&&slash'||'admin' as rdbms_admin from dual;

old 1: select '&&oracle_home'||'&&slash'||'rdbms'||'&&slash'||'admin' as rdbms_admin from dual

new 1: select '/opt/app/oracle/product/18.0.0/dbhome_1'||'/'||'rdbms'||'/'||'admin' as rdbms_admin from dual

SQL>

SQL> Rem $ORACLE_HOME/rdbms/admin/catcdb.pl

SQL> column rdbms_admin_catcdb new_value rdbms_admin_catcdb noprint

SQL> select '&&rdbms_admin'||'&&slash'||'catcdb.pl' as rdbms_admin_catcdb from dual;

old 1: select '&&rdbms_admin'||'&&slash'||'catcdb.pl' as rdbms_admin_catcdb from dual

new 1: select '/opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin'||'/'||'catcdb.pl' as rdbms_admin_catcdb from dual


SQL>

SQL> host perl -I &&rdbms_admin &&rdbms_admin_catcdb --logDirectory &&1 --logFilename &&2

Enter value for 1: log_cdb1.log

Enter value for 2: log2_cdb1.log

Requested Logging Directory log_cdb1.log does not exist

The run fails because the first prompt expects the directory where the logs will be written, not a log file name. We run the script again:

SQL> @?/rdbms/admin/catcdb

SQL> Rem

SQL> Rem $Header: rdbms/admin/catcdb.sql /main/8 2017/05/28 22:46:01 stanaya Exp $

SQL> Rem

SQL> Rem catcdb.sql

SQL> Rem

SQL> Rem Copyright (c) 2013, 2017, Oracle and/or its affiliates.

SQL> Rem All rights reserved.

SQL> Rem

SQL> Rem NAME

SQL> Rem catcdb.sql -

SQL> Rem

SQL> Rem DESCRIPTION

SQL> Rem invoke catcdb.pl

SQL> Rem

SQL> Rem NOTES

SQL> Rem

SQL> Rem

SQL> Rem PARAMETERS:

SQL> Rem - log directory

SQL> Rem - base for log file name

SQL> Rem

SQL> Rem BEGIN SQL_FILE_METADATA

SQL> Rem SQL_SOURCE_FILE: rdbms/admin/catcdb.sql

SQL> Rem SQL_SHIPPED_FILE: rdbms/admin/catcdb.sql

SQL> Rem SQL_PHASE: UTILITY

SQL> Rem SQL_STARTUP_MODE: NORMAL

SQL> Rem SQL_IGNORABLE_ERRORS: NONE

SQL> Rem END SQL_FILE_METADATA

SQL> Rem

SQL> Rem MODIFIED (MM/DD/YY)

SQL> Rem akruglik 06/21/16 - Bug 22752041: pass --logDirectory and

SQL> Rem --logFilename to catcdb.pl

SQL> Rem akruglik 11/10/15 - use catcdb.pl to collect passowrds and pass them

SQL> Rem on to catcdb_int.sql using env vars

SQL> Rem aketkar 04/30/14 - remove SQL file metadata

SQL> Rem cxie 07/10/13 - 17033183: add shipped_file metadata

SQL> Rem cxie 03/19/13 - create CDB with all options installed

SQL> Rem cxie 03/19/13 - Created

SQL> Rem

SQL>

SQL> set echo on

SQL>

SQL> Rem The script relies on the caller to have connected to the DB

SQL>

SQL> Rem This script invokes catcdb.pl that does all the work, so we just need to

SQL> Rem construct strings for $ORACLE_HOME/rdbms/admin and

SQL> Rem $ORACLE_HOME/rdbms/admin/catcdb.pl

SQL>

SQL> Rem $ORACLE_HOME

SQL> column oracle_home new_value oracle_home noprint

SQL> select sys_context('userenv', 'oracle_home') as oracle_home from dual;
SQL>

SQL> Rem OS-dependent slash

SQL> column slash new_value slash noprint

SQL> select sys_context('userenv', 'platform_slash') as slash from dual;

SQL>

SQL> Rem $ORACLE_HOME/rdbms/admin

SQL> column rdbms_admin new_value rdbms_admin noprint

SQL> select '&&oracle_home'||'&&slash'||'rdbms'||'&&slash'||'admin' as rdbms_admin from dual;

old 1: select '&&oracle_home'||'&&slash'||'rdbms'||'&&slash'||'admin' as rdbms_admin from dual

new 1: select '/opt/app/oracle/product/18.0.0/dbhome_1'||'/'||'rdbms'||'/'||'admin' as rdbms_admin from dual


SQL>

SQL> Rem $ORACLE_HOME/rdbms/admin/catcdb.pl

SQL> column rdbms_admin_catcdb new_value rdbms_admin_catcdb noprint

SQL> select '&&rdbms_admin'||'&&slash'||'catcdb.pl' as rdbms_admin_catcdb from dual;

old 1: select '&&rdbms_admin'||'&&slash'||'catcdb.pl' as rdbms_admin_catcdb from dual

new 1: select '/opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin'||'/'||'catcdb.pl' as rdbms_admin_catcdb from dual


SQL>

SQL> host perl -I &&rdbms_admin &&rdbms_admin_catcdb --logDirectory &&1 --logFilename &&2

Requested Logging Directory log_cdb1.log does not exist

It fails again because the &&1 and &&2 substitution variables kept their earlier values. We exit SQL*Plus and run the script once more, this time supplying an existing directory:

SQL> exit

Disconnected from Oracle Database 18c Standard Edition 2 Release 18.0.0.0.0 - Production

Version 18.1.0.0.0

[oracle@lab1 oradata]$ sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 Production on Mon Mar 5 21:03:27 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2017, Oracle. All rights reserved.

SQL> connect / as sysdba

Connected.

SQL> @?/rdbms/admin/catcdb

SQL> Rem The script relies on the caller to have connected to the DB

SQL>

SQL> Rem This script invokes catcdb.pl that does all the work, so we just need to

SQL> Rem construct strings for $ORACLE_HOME/rdbms/admin and

SQL> Rem $ORACLE_HOME/rdbms/admin/catcdb.pl

SQL>

SQL> Rem $ORACLE_HOME

SQL> column oracle_home new_value oracle_home noprint

SQL> select sys_context('userenv', 'oracle_home') as oracle_home from dual;

SQL>

SQL> Rem OS-dependent slash

SQL> column slash new_value slash noprint

SQL> select sys_context('userenv', 'platform_slash') as slash from dual;

SQL>

SQL> Rem $ORACLE_HOME/rdbms/admin

SQL> column rdbms_admin new_value rdbms_admin noprint

SQL> select '&&oracle_home'||'&&slash'||'rdbms'||'&&slash'||'admin' as rdbms_admin from dual;

old 1: select '&&oracle_home'||'&&slash'||'rdbms'||'&&slash'||'admin' as rdbms_admin from dual

new 1: select '/opt/app/oracle/product/18.0.0/dbhome_1'||'/'||'rdbms'||'/'||'admin' as rdbms_admin from dual

SQL>

SQL> Rem $ORACLE_HOME/rdbms/admin/catcdb.pl

SQL> column rdbms_admin_catcdb new_value rdbms_admin_catcdb noprint

SQL> select '&&rdbms_admin'||'&&slash'||'catcdb.pl' as rdbms_admin_catcdb from dual;

old 1: select '&&rdbms_admin'||'&&slash'||'catcdb.pl' as rdbms_admin_catcdb from dual

new 1: select '/opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin'||'/'||'catcdb.pl' as rdbms_admin_catcdb from dual

SQL>

SQL> host perl -I &&rdbms_admin &&rdbms_admin_catcdb --logDirectory &&1 --logFilename &&2

Enter value for 1: /opt/app/oracle/oradata

Enter value for 2: log_cdb1.log

Enter new password for SYS: oracle

Enter new password for SYSTEM: oracle

Enter temporary tablespace name: temp

No options to container mapping specified, no options will be installed in any containers

catcon::exec_DB_script: /opt/app/oracle/oradata/catcdb__catcon_16982_exec_DB_script.done did not need to be deleted before running a script
catcon::exec_DB_script: opened Reader and Writer
catcon::exec_DB_script: executed set newpage 1
catcon::exec_DB_script: executed set pagesize 14
catcon::exec_DB_script: executed @@?/rdbms/admin/sqlsessstart.sql
catcon::exec_DB_script: connected
catcon::exec_DB_script: executed set echo on
catcon::exec_DB_script: executed set serveroutput on
catcon::exec_DB_script: executed spool /opt/app/oracle/oradata/log_cdb1.log

catcon::exec_DB_script: executed host perl -I /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catcon.pl -u SYS -w CATCDB_SYSTEM_PASSWD -U SYS -W CATCDB_SYS_PASSWD -d /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin -n 1 -l /opt/app/oracle/oradata -b catalog catalog.sql

catcon::exec_DB_script: executed host perl -I /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catcon.pl -u SYS -w CATCDB_SYSTEM_PASSWD -U SYS -W CATCDB_SYS_PASSWD -d /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin -n 1 -l /opt/app/oracle/oradata -b catproc catproc.sql

catcon::exec_DB_script: executed host perl -I /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catcon.pl -u SYS -w CATCDB_SYSTEM_PASSWD -U SYS -W CATCDB_SYS_PASSWD -d /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin -n 1 -l /opt/app/oracle/oradata -b catoctk catoctk.sql


catcon::exec_DB_script: executed host perl -I /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catcon.pl -u SYS -w CATCDB_SYSTEM_PASSWD -U SYS -W CATCDB_SYS_PASSWD -d /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin -n 1 -l /opt/app/oracle/oradata -b owminst owminst.plb

catcon::exec_DB_script: executed host perl -I /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catcon.pl -u SYSTEM -w CATCDB_SYSTEM_PASSWD -U SYS -W CATCDB_SYS_PASSWD -d /opt/app/oracle/product/18.0.0/dbhome_1/sqlplus/admin -n 1 -l /opt/app/oracle/oradata -b pupbld pupbld.sql

catcon::exec_DB_script: executed host perl -I /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catcon.pl -u SYSTEM -w CATCDB_SYSTEM_PASSWD -U SYS -W CATCDB_SYS_PASSWD -d /opt/app/oracle/product/18.0.0/dbhome_1/sqlplus/admin/help -n 1 -l /opt/app/oracle/oradata -b pupbld hlpbld.sql --p"helpus.sql"

catcon::exec_DB_script: executed host perl -I /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catcon.pl -u SYS -w CATCDB_SYSTEM_PASSWD -U SYS -W CATCDB_SYS_PASSWD -d /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin -n 1 -l /opt/app/oracle/oradata -b catclust catclust.sql

catcon::exec_DB_script: executed host perl -I /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catcon.pl -u SYS -w CATCDB_SYSTEM_PASSWD -U SYS -W CATCDB_SYS_PASSWD -d /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin -n 1 -l /opt/app/oracle/oradata -b catfinal catfinal.sql

catcon::exec_DB_script: executed host perl -I /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catcon.pl -u SYS -w CATCDB_SYSTEM_PASSWD -U SYS -W CATCDB_SYS_PASSWD -d /opt/app/oracle/product/18.0.0/dbhome_1/rdbms/admin -n 1 -l /opt/app/oracle/oradata -b utlrp utlrp.sql

catcon::exec_DB_script: sent

host sqlplus -v > /opt/app/oracle/oradata/catcdb__catcon_16982_exec_DB_script.done to Writer

catcon::exec_DB_script: sent -exit- to Writer
catcon::exec_DB_script: closed Writer

catcon::exec_DB_script: marker was undefined; read and ignore output, if any

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/opt/app/oracle/oradata/catalog_catcon_16993.lst]

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catalog*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catalog_*.lst] files for spool files, if any

catcon.pl: completed successfully

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/opt/app/oracle/oradata/catproc_catcon_17618.lst]

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catproc*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catproc_*.lst] files for spool files, if any

catcon.pl: completed successfully

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/opt/app/oracle/oradata/catoctk_catcon_22968.lst]

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catoctk*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catoctk_*.lst] files for spool files, if any

catcon.pl: completed successfully

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/opt/app/oracle/oradata/owminst_catcon_23068.lst]

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/owminst*.log] files for output generated by scripts
catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/owminst_*.lst] files for spool files, if any

catcon.pl: completed successfully

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/opt/app/oracle/oradata/pupbld_catcon_23678.lst]

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/pupbld*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/pupbld_*.lst] files for spool files, if any

catcon.pl: completed successfully

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/opt/app/oracle/oradata/pupbld_catcon_23817.lst]

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/pupbld*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/pupbld_*.lst] files for spool files, if any

catcon.pl: completed successfully

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/opt/app/oracle/oradata/catclust_catcon_23915.lst]

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catclust*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catclust_*.lst] files for spool files, if any

catcon.pl: completed successfully

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/opt/app/oracle/oradata/catfinal_catcon_24372.lst]

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catfinal*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/catfinal_*.lst] files for spool files, if any

catcon.pl: completed successfully

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/opt/app/oracle/oradata/utlrp_catcon_24468.lst]

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/utlrp*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/opt/app/oracle/oradata/utlrp_*.lst] files for spool files, if any
catcon.pl: completed successfully
catcon::exec_DB_script: finished reading and ignoring output
catcon::exec_DB_script: waiting for child process to exit
catcon::exec_DB_script: child process exited
catcon::sureunlink: unlink(/opt/app/oracle/oradata/catcdb__catcon_16982_exec_DB_script.done) succeeded after 1 attempt(s)
catcon::sureunlink: verify that the file really no longer exists
catcon::sureunlink: confirmed that /opt/app/oracle/oradata/catcdb__catcon_16982_exec_DB_script.done no longer exists after 1 attempts
catcon::exec_DB_script: deleted /opt/app/oracle/oradata/catcdb__catcon_16982_exec_DB_script.done after running a script
catcon::exec_DB_script: closed Reader
catcon::exec_DB_script: waitpid returned

As the next step, we run the recompilation script.

SQL> @?/rdbms/admin/utlrp

SQL> Rem
SQL> Rem $Header: rdbms/admin/utlrp.sql /main/23 2017/03/20 12:21:12 raeburns Exp $
SQL> Rem
SQL> Rem utlrp.sql
SQL> Rem
SQL> Rem Copyright (c) 1998, 2017, Oracle and/or its affiliates.
SQL> Rem All rights reserved.
SQL> Rem
SQL> Rem NAME
SQL> Rem utlrp.sql - Recompile invalid objects
SQL> Rem
SQL> Rem DESCRIPTION
SQL> Rem This script recompiles invalid objects in the database.
SQL> Rem
SQL> Rem When run as one of the last steps during upgrade or downgrade,
SQL> Rem this script will validate all remaining invalid objects. It will
SQL> Rem also run a component validation procedure for each component in
SQL> Rem the database. See the README notes for your current release and
SQL> Rem the Oracle Database Upgrade book for more information about
SQL> Rem using utlrp.sql
SQL> Rem
SQL> Rem Although invalid objects are automatically re-validated when used,
SQL> Rem it is useful to run this script after an upgrade or downgrade and
SQL> Rem after applying a patch. This minimizes latencies caused by
SQL> Rem on-demand recompilation. Oracle strongly recommends running this
SQL> Rem script after upgrades, downgrades and patches.
SQL> Rem
SQL> Rem NOTES
SQL> Rem * This script must be run using SQL*PLUS.
SQL> Rem * You must be connected AS SYSDBA to run this script.
SQL> Rem * There should be no other DDL on the database while running the
SQL> Rem script. Not following this recommendation may lead to deadlocks.
SQL> Rem
SQL> Rem BEGIN SQL_FILE_METADATA
SQL> Rem SQL_SOURCE_FILE: rdbms/admin/utlrp.sql
SQL> Rem SQL_SHIPPED_FILE: rdbms/admin/utlrp.sql
SQL> Rem SQL_PHASE: UTILITY
SQL> Rem SQL_STARTUP_MODE: NORMAL
SQL> Rem SQL_IGNORABLE_ERRORS: NONE
SQL> Rem SQL_CALLING_FILE: NONE
SQL> Rem END SQL_FILE_METADATA
SQL> Rem

SQL> Rem MODIFIED (MM/DD/YY)
SQL> Rem raeburns 03/09/17 - Bug 25616909: Use UTILITY for SQL_PHASE
SQL> Rem gviswana 06/26/03 - Switch default to parallel if appropriate
SQL> Rem gviswana 06/12/03 - Switch default back to serial
SQL> Rem gviswana 05/20/03 - 2814808: Automatic parallelism tuning
SQL> Rem rburns 04/28/03 - timestamps and serveroutput for diagnostics
SQL> Rem gviswana 04/13/03 - utlrcmp.sql load -> catproc
SQL> Rem gviswana 06/25/02 - Add documentation
SQL> Rem gviswana 11/12/01 - Use utl_recomp.recomp_serial
SQL> Rem rdecker 11/09/01 - ADD ALTER library support FOR bug 1952368
SQL> Rem rburns 11/12/01 - validate all components after compiles
SQL> Rem rburns 11/06/01 - fix invalid CATPROC call
SQL> Rem rburns 09/29/01 - use 9.2.0
SQL> Rem rburns 09/20/01 - add check for CATPROC valid
SQL> Rem rburns 07/06/01 - get version from instance view
SQL> Rem rburns 05/09/01 - fix for use with 8.1.x
SQL> Rem arithikr 04/17/01 - 1703753: recompile object type# 29,32,33
SQL> Rem skabraha 09/25/00 - validate is now a keyword
SQL> Rem kosinski 06/14/00 - Persistent parameters
SQL> Rem skabraha 06/05/00 - validate tables also
SQL> Rem jdavison 04/11/00 - Modify usage notes for 8.2 changes.
SQL> Rem rshaikh 09/22/99 - quote name for recompile
SQL> Rem ncramesh 08/04/98 - change for sqlplus
SQL> Rem usundara 06/03/98 - merge from 8.0.5
SQL> Rem usundara 04/29/98 - creation (split from utlirp.sql).
SQL> Rem Mark Ramacher (mramache) was the original
SQL> Rem author of this script.
SQL> Rem
SQL>
SQL> Rem ================================================
SQL> Rem BEGIN utlrp.sql
SQL> Rem ================================================
SQL>
SQL>

SQL> @@utlprp.sql 0
SQL> Rem Copyright (c) 2003, 2017, Oracle and/or its affiliates.
SQL> Rem All rights reserved.
SQL> Rem
SQL> Rem NAME
SQL> Rem utlprp.sql - Recompile invalid objects in the database
SQL> Rem
SQL> Rem DESCRIPTION
SQL> Rem This script recompiles invalid objects in the database.
SQL> Rem
SQL> Rem This script is typically used to recompile invalid objects
SQL> Rem remaining at the end of a database upgrade or downgrade.
SQL> Rem
SQL> Rem Although invalid objects are automatically recompiled on demand,
SQL> Rem running this script ahead of time will reduce or eliminate
SQL> Rem latencies due to automatic recompilation.
SQL> Rem
SQL> Rem This script is a wrapper based on the UTL_RECOMP package.
SQL> Rem UTL_RECOMP provides a more general recompilation interface,
SQL> Rem including options to recompile objects in a single schema. Please
SQL> Rem see the documentation for package UTL_RECOMP for more details.
SQL> Rem
SQL> Rem INPUTS
SQL> Rem The degree of parallelism for recompilation can be controlled by
SQL> Rem providing a parameter to this script. If this parameter is 0 or
SQL> Rem NULL, UTL_RECOMP will automatically determine the appropriate
SQL> Rem level of parallelism based on Oracle parameters cpu_count and
SQL> Rem parallel_threads_per_cpu. If the parameter is 1, sequential
SQL> Rem recompilation is used. Please see the documentation for package
SQL> Rem UTL_RECOMP for more details.
SQL> Rem
SQL> Rem NOTES
SQL> Rem * You must be connected AS SYSDBA to run this script.
SQL> Rem * There should be no other DDL on the database while running the
SQL> Rem script. Not following this recommendation may lead to deadlocks.
.....
.....
SQL> Rem automatic redirection is turned off. This is needed so that utlrp/utlprp
SQL> Rem can be used to recompile objects in Proxy PDB.
SQL> Rem
SQL> alter session set "_enable_view_pdb"=false;

Session altered.

SQL>
SQL> SELECT dbms_registry_sys.time_stamp('utlrp_bgn') as timestamp from dual;

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2018-03-06 05:39:42

SQL>
SQL> DOC
DOC> The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC> objects in the database. Recompilation time is proportional to the
DOC> number of invalid objects in the database, so this command may take
DOC> a long time to execute on a database with a large number of invalid
DOC> objects.
DOC>
DOC> Use the following queries to track recompilation progress:
DOC> 1. Query returning the number of invalid objects remaining. This
DOC> number should decrease with time.
DOC> SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC> 2. Query returning the number of objects compiled so far. This number
DOC> should increase with time.
DOC> SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC> This script automatically chooses serial or parallel recompilation
DOC> based on the number of CPUs available (parameter cpu_count) multiplied
DOC> by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC> On RAC, this number is added across all RAC nodes.
DOC>
DOC> UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC> recompilation. Jobs are created without instance affinity so that they
DOC> can migrate across RAC nodes. Use the following queries to verify
DOC> whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC> 1. Query showing jobs created by UTL_RECOMP
DOC> SELECT job_name FROM dba_scheduler_jobs
DOC> WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC> 2. Query showing UTL_RECOMP jobs that are running
DOC> SELECT job_name FROM dba_scheduler_running_jobs
DOC> WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

SQL>

SQL> DECLARE
2 threads pls_integer := &&1;
3 BEGIN
4 utl_recomp.recomp_parallel(threads);
5 END;
6 /

PL/SQL procedure successfully completed.
SQL>

SQL> SELECT dbms_registry_sys.time_stamp('utlrp_end') as timestamp from dual;

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2018-03-06 05:39:43

SQL>
SQL> Rem #(8264899): The code to Re-enable functional indexes, which used to exist
SQL> Rem here, is no longer needed.
SQL>
SQL> DOC
DOC> The following query reports the number of invalid objects.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

SQL> select COUNT(*) "OBJECTS WITH ERRORS" from obj$ where status in (3,4,5,6);

OBJECTS WITH ERRORS
-------------------
0

SQL> DOC
DOC> The following query reports the number of exceptions caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC> Note: Typical compilation errors (due to coding errors) are not
DOC> logged into this table: they go into DBA_ERRORS instead.
DOC>#

SQL> select COUNT(*) "ERRORS DURING RECOMPILATION" from utl_recomp_errors;
ERRORS DURING RECOMPILATION
---------------------------
0

SQL>

SQL> Rem ================================================
SQL> Rem Reenable indexes that may have been disabled, based on the
SQL> Rem table SYS.ENABLED$INDEXES
SQL> Rem ================================================
SQL>
SQL> @@?/rdbms/admin/reenable_indexes.sql

SQL> Rem
SQL> Rem $Header: rdbms/admin/reenable_indexes.sql /main/3 2015/02/04 13:57:27 sylin Exp $
SQL> Rem
SQL> Rem reenable_indexes.sql
SQL> Rem
SQL> Rem Copyright (c) 2014, 2015, Oracle and/or its affiliates.
SQL> Rem All rights reserved.
SQL> Rem
SQL> Rem NAME
SQL> Rem reenable_indexes.sql -
SQL> Rem
SQL> Rem DESCRIPTION
SQL> Rem
SQL> Rem
SQL> Rem NOTES
SQL> Rem
SQL> Rem
SQL> Rem BEGIN SQL_FILE_METADATA
SQL> Rem SQL_SOURCE_FILE: rdbms/admin/reenable_indexes.sql
SQL> Rem SQL_SHIPPED_FILE: rdbms/admin/reenable_indexes.sql
SQL> Rem SQL_PHASE: REENABLE_INDEXES
SQL> Rem SQL_STARTUP_MODE: NORMAL
SQL> Rem SQL_IGNORABLE_ERRORS: NONE
SQL> Rem SQL_CALLING_FILE: rdbms/admin/noncdb_to_pdb.sql
....
....
SQL> SET serveroutput on

SQL> EXECUTE dbms_registry_sys.validate_components;

PL/SQL procedure successfully completed.

SQL> SET serveroutput off

SQL> Rem ================================================
SQL> Rem END utlrp.sql
SQL> Rem ================================================

Done — we have finished recompiling the packages, and now we will confirm the status of the database components.

SQL> select comp_id, comp_name, version_full, status from dba_registry;

SQL> col COMP_NAME format a50
SQL> set linesize 399

SQL> /
COMP_ID COMP_NAME VERSION_FULL STATUS
--------------- ------------------------------------------- ---------------- -----
CATALOG Oracle Database Catalog Views 18.1.0.0.0 VALID
CATPROC Oracle Database Packages and Types 18.1.0.0.0 VALID
RAC Oracle Real Application Clusters 18.1.0.0.0 OPTION OFF
XDB Oracle XML Database 18.1.0.0.0 VALID
OWM Oracle Workspace Manager 18.1.0.0.0 VALID
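Besides dba_registry, a quick sanity check is to confirm that no invalid objects remain after running utlrp. This is just a sketch; run it as SYSDBA and adjust the filters to your environment:

```sql
-- List any remaining invalid objects grouped by schema and type.
-- After a clean utlrp run, this query should return no rows.
SELECT owner, object_type, COUNT(*) AS invalid_count
  FROM dba_objects
 WHERE status = 'INVALID'
 GROUP BY owner, object_type
 ORDER BY owner, object_type;
```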

SQL>

Next, let's create our first pluggable database in the CDB1 container.

SQL> create pluggable database pdb1 admin user pdbadmin identified by oracle
  2  FILE_NAME_CONVERT=('/opt/app/oracle/oradata/cdb1/pdbseed/','/opt/app/oracle/oradata/cdb1/pdb1/');

Pluggable database created.

Let's verify its creation and status.

SQL> select name, open_mode, block_size, pdb_count, max_size from v$pdbs;

NAME OPEN_MODE BLOCK_SIZE PDB_COUNT MAX_SIZE
--------------------------------- ---------- ---------- ---------- ----------
PDB$SEED READ ONLY 8192 0 0
PDB1 MOUNTED 8192 0 0

Let's proceed to open the newly created PDB.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.
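By default, a PDB returns to MOUNTED state after the CDB restarts. If you want PDB1 to reopen automatically, you can save its current open state (available since 12.1.0.2):

```sql
-- Preserve the current open state of PDB1 across CDB restarts
ALTER PLUGGABLE DATABASE pdb1 SAVE STATE;
```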

SQL> exit

Now we verify that we can connect to the pluggable database in the CDB1 container and run a simple query against the data dictionary.

Edit the tnsnames.ora file to allow access to the container and to the PDB using a connect string.
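As a sketch, entries like the following could be added to tnsnames.ora. The hostname lab1 is taken from the shell prompt in this guide; the port 1521 and the service names cdb1 and pdb1 are assumptions — adjust them to match your listener configuration:

```
CDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lab1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cdb1)
    )
  )

PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lab1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )
```

Each PDB automatically registers a service with its own name with the listener, which is what the PDB1 entry relies on.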

[oracle@lab1 admin]$ sqlplus system/oracle@pdb1

SQL*Plus: Release 18.0.0.0.0 Production on Tue Mar 6 05:56:51 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2017, Oracle. All rights reserved.
Last Successful login time: Tue Mar 06 2018 05:51:44 -06:00
Connected to:
Oracle Database 18c Standard Edition 2 Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

SQL> select count(*) from dba_objects;
COUNT(*)
----------
22662

SQL> exit

Disconnected from Oracle Database 18c Standard Edition 2 Release 18.0.0.0.0 - Production

Version 18.1.0.0.0

We disconnect and do the same for the container.

[oracle@lab1 admin]$ sqlplus system/oracle@cdb1

SQL*Plus: Release 18.0.0.0.0 Production on Tue Mar 6 05:57:17 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2017, Oracle. All rights reserved.
Last Successful login time: Tue Mar 06 2018 05:56:51 -06:00

Connected to:
Oracle Database 18c Standard Edition 2 Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

SQL> select count(*) from dba_objects;

COUNT(*)
----------
22678

Done! We can now work with our virtual machine running Oracle Linux 7.3 and Oracle Database 18c.

If you want more pluggable databases, repeat the step described in this guide.
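For example, a second PDB could be created from the seed as shown earlier, or cloned from PDB1. This is only a sketch — pdb2 and its datafile path are hypothetical names for illustration:

```sql
-- Clone PDB1 into a new PDB named pdb2. For a hot clone, local undo
-- must be enabled; otherwise open PDB1 read-only first.
CREATE PLUGGABLE DATABASE pdb2 FROM pdb1
  FILE_NAME_CONVERT = ('/opt/app/oracle/oradata/cdb1/pdb1/',
                       '/opt/app/oracle/oradata/cdb1/pdb2/');

ALTER PLUGGABLE DATABASE pdb2 OPEN;
```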

For more information about the features of the new release, see the official documentation at oracle.com at the following link:


Best regards to everyone, and enjoy!

