“Oracle *daily* TidBits” (#oratidbit) published on Facebook, Twitter, and Google+ during weekdays in July 2017. You will also see these tidbits, one at a time, with each page refresh on the right side of this blog. Hope you find them helpful for learning something new or as a reminder of a feature’s existence and use.

 

#oratidbit Oracle Database Backup Cloud Service is used to store Oracle Database backups only. To store other types of data, use Oracle Storage Cloud Service.
#oratidbit #orcldb To implement an Information Lifecycle Management (ILM) strategy for data movement in the database, use Heat Map and Automatic Data Optimization (ADO) features.
#oratidbit #orcldb The Heat Map and Automatic Data Optimization (ADO) features are enabled by setting the HEAT_MAP initialization parameter to ON. For ADO, Heat Map must be enabled at the system level.
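As a minimal sketch of the above (the SALES table and the 30-day window are hypothetical examples, not from the tidbit):

```sql
-- Enable Heat Map at the system level (required for ADO)
ALTER SYSTEM SET HEAT_MAP = ON;

-- Hypothetical ADO policy: compress the SALES table's segments
-- after 30 days with no modifications
ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;
```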
#oratidbit #orcldb Using a bigger ASM allocation unit size reduces the memory requirements for Oracle ASM and thus improves ASM startup time.
#oratidbit To monitor #DB12cR2 (12.2), you must be on EM 12c PS4 with Bundled Patch 2 or later applied. The EM agent that is used to monitor the database must be an EM 12c PS4 agent with Agent Bundled Patch 3 or later applied.
#oratidbit #orcldb 32-bit Oracle Database clients are not supported for connecting to #DB12cR2.
#oratidbit #orcldb JOB_QUEUE_PROCESSES defaults to 1000. The job coordinator process starts only as many job queue processes as are required, based on the number of jobs to run and available resources. You can set JOB_QUEUE_PROCESSES to a lower number to limit the number of job queue processes.
#oratidbit #orcldb If JOB_QUEUE_PROCESSES is 0, both DBMS_JOB and Oracle Scheduler (DBMS_SCHEDULER) jobs are disabled. Both use the same job coordinator to start job slaves.
#oratidbit #orcldb Inside a PDB, 0 and 1000 are the only valid values for JOB_QUEUE_PROCESSES: 0 means no jobs can run for that PDB; 1000 means jobs are allowed to run in that PDB.
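A quick sketch of toggling this per PDB (run while connected to the PDB in question):

```sql
-- While connected to the PDB: 0 disables all DBMS_JOB and
-- DBMS_SCHEDULER jobs for this PDB only
ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 0;

-- 1000, the only other valid value inside a PDB, allows jobs again
ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 1000;
```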
#oratidbit When Linux HugePages is configured, the Grid Infrastructure Management Repository (GIMR) system global area is installed into HugePages memory. The GIMR SGA occupies up to 1 GB of HugePages memory.
#oratidbit #db12cR2 The CREATE USER and ALTER USER SQL statements let you set a new profile parameter, INACTIVE_ACCOUNT_TIME, which automatically locks inactive accounts.
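A hedged sketch of using the new profile parameter (the profile and user names are hypothetical):

```sql
-- Hypothetical profile: lock any account using this profile after
-- 60 days with no logins
CREATE PROFILE app_profile LIMIT
  INACTIVE_ACCOUNT_TIME 60;

-- Assign the profile to a (hypothetical) user
ALTER USER app_user PROFILE app_profile;
```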
#oratidbit #db12cR2 For multitenant environments, the PDB_OS_CREDENTIAL initialization parameter enables a different operating system user to execute external procedures using the EXTPROC process.
#oratidbit #orcldb #db12cr2 #rac INSTANCE_MODE init parameter indicates whether the instance is read-write, read-only, or read-mostly. Read-only and read-write instances can co-exist within a single database.
#oratidbit #orcldb #db12cR2 MAX_DATAPUMP_JOBS_PER_PDB init parameter determines the maximum number of concurrent Oracle Data Pump jobs per PDB. The default is 100. The main resource Oracle Data Pump uses is shared pool in the SGA for the database.
#oratidbit Oracle #Exadata comes in three flavors: Cloud Services, Cloud at Customer, and On Premises.
#oratidbit Active Data Guard extends Data Guard capabilities by providing advanced features for data protection and availability as well as offloading read-only workload and backups from a production database. Active Data Guard is included in the Extreme Performance Database Cloud Service and Exadata Service.
#oratidbit #orcldb Database releases 11.2.0.3 and higher, and 12.1.0.1 and higher, can be upgraded directly to 12.2. Direct upgrade is not supported for 11gR2 versions 11.2.0.1 and 11.2.0.2.
#oratidbit #orcldb If the PDB COMPATIBLE value is lower than CDB$ROOT COMPATIBLE, the PDB COMPATIBLE parameter value is increased automatically to the same COMPATIBLE setting as CDB$ROOT.
#oratidbit #orcldb Oracle Enterprise Manager Lifecycle Management Pack supports performing database upgrade (12c database and higher).

“Oracle *daily* TidBits” (#oratidbit) published on Facebook, Twitter, and Google+ during weekdays in June 2017. You will also see these tidbits, one at a time, with each page refresh on the right side of this blog. Hope you find them helpful for learning something new or as a reminder of a feature’s existence and use.

#oratidbit #orclebs Starting with Release 12.1.3, you can create an application user (FND user) with the “Concurrent Manager Operator” responsibility. Use this user’s username and password to start and stop the application services (with the -secureapps option) instead of the APPS password.
#oratidbit #orcldb After applying the July 2016 PSU/BP on Oracle Database 12.1.0.2, the shared memory segment permission changes to 600 (readable only by the owner, no longer by the group). This is the default behavior for 12.2 databases. Use the ALLOW_GROUP_ACCESS_TO_SGA initialization parameter to control the default permissions for shared memory segments.
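If the stricter default breaks tools that read the SGA as a group member, a sketch of restoring the old behavior:

```sql
-- Restore the pre-patch behavior (shared memory segments readable
-- by the group); takes effect after an instance restart
ALTER SYSTEM SET ALLOW_GROUP_ACCESS_TO_SGA = TRUE SCOPE = SPFILE;
```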
#oratidbit #orcldb #db12cR2 The SQL*Loader, expdp, impdp, exp, and imp tools are part of the Instant Client starting with 12.2.
#oratidbit #orcldb #db12cR2 In Database 12.2, a pluggable database can have a different character set from the CDB if the CDB character set is AL32UTF8. This applies only to PDB clones and unplug/plug operations.
#oratidbit #orcldb The init.ora parameter ENABLE_DDL_LOGGING is licensed as part of the Database Lifecycle Management Pack when set to TRUE. When set to TRUE, the database reports schema changes in real time into the database alert log under the message group schema_ddl.
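A one-line sketch of turning this on (keeping in mind the licensing implication the tidbit describes):

```sql
-- Enabling DDL logging requires a Database Lifecycle Management
-- Pack license; schema changes then appear in the alert log under
-- the message group schema_ddl
ALTER SYSTEM SET ENABLE_DDL_LOGGING = TRUE;
```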
#oratidbit #orcldb #db12cR2 Memory usage can be controlled at the PDB level using DB_CACHE_SIZE, SHARED_POOL_SIZE, PGA_AGGREGATE_LIMIT, PGA_AGGREGATE_TARGET, SGA_MIN_SIZE, SGA_TARGET initialization parameters.
#oratidbit #orcldb Init parameter NONCDB_COMPATIBLE=TRUE enables you to get behavior similar to a non-CDB when issuing SQL commands inside a PDB in a CDB. You may set this parameter if you are using a single PDB in the CDB configuration.
#oratidbit #orcldb #db12cR2 init parameters MAX_IOPS and MAX_MBPS enable you to limit disk I/O generated by a PDB. MAX_IOPS limits the number of I/O operations, and MAX_MBPS limits the megabytes for I/O operations.
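A sketch of setting the two I/O limits (the limit values are illustrative, not recommendations):

```sql
-- While connected to the PDB to be throttled
ALTER SYSTEM SET MAX_IOPS = 10000;  -- cap I/O operations per second
ALTER SYSTEM SET MAX_MBPS = 200;    -- cap I/O megabytes per second
```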
#oratidbit #orcldb DBUA Log File Location: $ORACLE_BASE/cfgtoollogs/dbua/upgradesession_// if ORACLE_BASE is set. If not, $ORACLE_HOME/cfgtoollogs/dbua/upgradesession_//.
#oratidbit #orcldb Data Pump direct-path network import has a new parameter in #DB12cR2, ENABLE_NETWORK_COMPRESSION, which tells Data Pump to compress data before sending it over the network.
#oratidbit Trace File Analyzer (TFA) facilitates faster one-command Service Request Data Collection (SRDC). The SRDC types include ora600, ora7445, ora4030, dbperf, internalerror, and so on. For more information about SRDCs, run "tfactl diagcollect -srdc -help".
#oratidbit Procwatcher (part of TFA Bundle) is a tool to examine and monitor Oracle database and clusterware processes. Procwatcher will collect stack traces of these processes using Oracle tools like oradebug short_stack and/or OS debuggers like pstack, gdb, dbx, or ladebug and collect SQL data if specified.
#oratidbit #orcldb #db12cr2 Local undo is the default when you create a new database using DBCA in 12.2. To change from shared to local undo, you must start the database in UPGRADE mode and run “alter database local undo on;”.
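The switch sketched end to end (run from CDB$ROOT as SYSDBA; back up the database first):

```sql
-- Restart in UPGRADE mode, switch to local undo, then restart normally
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER DATABASE LOCAL UNDO ON;
SHUTDOWN IMMEDIATE
STARTUP
```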
#oratidbit #orcldb Oracle ASM disks in a disk group should have similar storage performance characteristics. In storage configurations with mixed speed drives, I/O performance is constrained by the slowest speed drive.
#oratidbit #orcldb Back up, copy, or move an Oracle ASM SPFILE with the ASMCMD spbackup, spcopy, or spmove commands.
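For example (the disk group names and SPFILE paths below are illustrative; run from the Grid Infrastructure environment):

```shell
# Back up the ASM SPFILE to a file on the local file system
asmcmd spbackup +DATA/asm/spfileASM.ora /tmp/asm_spfile.bak

# Copy the ASM SPFILE to another disk group
asmcmd spcopy +DATA/asm/spfileASM.ora +FRA/asm/spfileASM.ora
```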
#oratidbit #orcldb The AUTO_SAMPLE_SIZE value for ESTIMATE_PERCENT in Oracle Database statistics gathering uses a hash-based algorithm that provides deterministic statistics, addressing the two key aspects of accuracy and speed. It has accuracy close to a 100% sample (reads all data) but with the cost of, at most, a 10% sample (memory based).
#oratidbit #orcldb #datapump import PARTITION_OPTIONS parameter options. MERGE: Creates a non-partitioned table during import. DEPARTITION: Creates each partition or subpartition as a new individual table.
#oratidbit #orcldb #datapump The TRANSFORM parameter enables you to alter object creation DDL for objects being imported. Turn off archive logging using DISABLE_ARCHIVE_LOGGING:Y. After the data has been loaded, the logging attributes for the objects are restored to their original settings.
#oratidbit #orcldb #datapump The TRANSFORM parameter enables you to alter object creation DDL for objects being imported. SEGMENT_ATTRIBUTES:N makes segment attributes (physical attributes, storage attributes, tablespaces, and logging) not included with DDL (values default to user / tablespace defaults).
#oratidbit #orcldb #datapump The TRANSFORM parameter enables you to alter object creation DDL for objects being imported. SEGMENT_CREATION:N omits the SEGMENT CREATION clause from the CREATE TABLE statement and uses the default segment creation attributes for the table(s) being loaded.
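An illustrative impdp invocation combining the TRANSFORM options from the tidbits above (the connect string, directory object, and dump file name are hypothetical):

```shell
impdp system@orcl directory=DATA_PUMP_DIR dumpfile=exp_full.dmp \
  transform=disable_archive_logging:y \
  transform=segment_attributes:n \
  transform=segment_creation:n
```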
#oratidbit #orcldb #datapump Setting the parameter _OPTIMIZER_GATHER_STATS_ON_LOAD=FALSE might improve the table loading speed. Use this parameter if you include EXCLUDE=STATISTICS, and plan to collect stats after the import.

As we all know, on January 23, 2017, Oracle updated the “Licensing Oracle Software in the Cloud Computing Environment” document. The gist of the announcement was: “When counting Oracle Processor license requirements in Authorized Cloud Environments, the Oracle Processor Core Factor Table is not applicable.” So, if you are running Oracle Database on the x86 Intel platform on-premises, when moving to the AWS or Azure cloud, your available licensed CPUs are cut in half. Read my blog on this topic.

Can I run Oracle RAC on Public Cloud?

The answer is “yes!” On AWS and Azure (or any public cloud, for that matter), you could use the third-party product FlashGrid to make local storage shared across all nodes, with ASM on top. FlashGrid Storage Fabric turns local drives into shared drives accessible from all nodes in the cluster. The local drives shared via FlashGrid Storage Fabric can be block devices of any type, including Elastic Block Store (EBS) volumes, local SSDs, or LVM volumes. The sharing is done at the block level with concurrent access from all nodes. With ASM, you can choose normal redundancy (two copies) or high redundancy (three copies). I believe the current limitation of FlashGrid is that you can have up to a three-node RAC. Read more.

Oracle RAC in AWS

On Oracle Cloud, you can have a two-node RAC natively using the Oracle DBaaS service. If you need bigger-capacity servers, the Oracle Bare Metal Cloud service is the answer. With Bare Metal Cloud, you can have up to 72 OCPUs and 1 TB of memory per node. With bare metal, too, only a two-node RAC is possible.

What about licensing RAC on Public Cloud?

The “Licensing Oracle Software in the Cloud Computing Environment” document has a link to “eligible” technologies and products on the cloud (this document was also updated on January 23, 2017). I see Database SE, SE2, EE, and even Partitioning, Advanced Security, and Advanced Compression listed, but I do not see Oracle RAC. Does this mean that, from a licensing standpoint, Oracle RAC on AWS and Azure using FlashGrid is not allowed?

Note that Oracle’s official document is under review. Hopefully, some clarity will come in the next few days.

How about In-Memory, Multitenant, and Active Data Guard?

I also do not see these technologies listed in the document. So, are these products/technologies not allowed on Azure/AWS? I know it is technically possible to create a container database with the In-Memory option and Active Data Guard (together or separately) on AWS EC2 or Azure Compute (both IaaS). The question is, am I allowed to do so? Amazon’s own white paper, Oracle Advanced Architectures, includes Active Data Guard in multiple scenarios. That white paper was published in April 2016, though, well before Oracle updated the cloud licensing document!

Thoughts??
