Using OCFS2 with DRBD. This chapter outlines the steps necessary to set up a DRBD resource as a block device holding a shared Oracle Cluster File System, version 2 (OCFS2). OCFS2 is a concurrent-access shared storage file system developed by Oracle Corporation. Unlike its predecessor OCFS, which was specifically designed for and only suitable for Oracle database payloads, OCFS2 is a general-purpose filesystem that implements most POSIX semantics.
The most common use case for OCFS2 is arguably Oracle Real Application Cluster (RAC), but OCFS2 may also be used for load-balanced NFS clusters, for example. Although originally designed for use with conventional shared storage devices, OCFS2 is equally well suited to be deployed on dual-primary DRBD. Applications reading from the filesystem may benefit from reduced read latency due to the fact that DRBD reads from and writes to local storage, as opposed to the SAN devices OCFS2 otherwise normally runs on. In addition, DRBD adds redundancy to OCFS2 by maintaining an additional copy of every filesystem image, as opposed to a single filesystem image that is merely shared.
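Running OCFS2 on DRBD requires the resource to operate in dual-primary mode. Below is a minimal resource sketch for DRBD 8.x; the host names (alpha, beta), backing disk, and replication addresses are hypothetical placeholders, not values from this guide:

resource ocfs2-disk {
  net {
    # both nodes may hold the Primary role at the same time,
    # which OCFS2 requires for concurrent mounts
    allow-two-primaries;
  }
  startup {
    # promote both nodes automatically when DRBD starts
    become-primary-on both;
  }
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.1.1.31:7789;
    meta-disk internal;
  }
  on beta {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.1.1.32:7789;
    meta-disk internal;
  }
}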
Like other shared cluster file systems such as GFS, OCFS2 allows multiple nodes to access the same storage device, in read/write mode, simultaneously without risking data corruption. It does so by using a Distributed Lock Manager (DLM) which manages concurrent access from cluster nodes. The DLM itself uses a virtual file system (ocfs2_dlmfs), which is separate from the actual OCFS2 file systems present on the system.
OCFS2 may either use an intrinsic cluster communication layer to manage cluster membership and filesystem mount and unmount operations, or alternatively defer those tasks to the Pacemaker cluster infrastructure. OCFS2 is available in SUSE Linux Enterprise Server (where it is the primary supported shared cluster file system), CentOS, Debian GNU/Linux, and Ubuntu Server Edition.
Oracle also provides OCFS2 packages for Red Hat Enterprise Linux (RHEL). This chapter assumes that OCFS2 is running on a SUSE Linux Enterprise Server system.
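Whichever distribution is used, the O2CB cluster stack reads its node map from /etc/ocfs2/cluster.conf, which must be identical on every node. A minimal two-node sketch, with hypothetical node names and interconnect addresses:

cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 10.1.1.31
        number = 0
        name = alpha
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.1.1.32
        number = 1
        name = beta
        cluster = ocfs2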
Installing Oracle RAC 10g Release 2 on Linux x86, by John Smiley. Learn the basics of installing Oracle RAC 10g Release 2 on Red Hat Enterprise Linux or Novell SUSE Enterprise Linux, from the bare metal up (for evaluation purposes only). Contents. Overview. Background.
Part I: Install Linux. Part II: Configure Linux for Oracle. Part III: Prepare the Shared Disks.
Part IV: Install Oracle RAC Software. Conclusion. Overview.
This guide provides a walkthrough of installing an Oracle Database 10g Release 2 RAC database on commodity hardware for the purpose of evaluation. If you are new to Linux and/or Oracle, this guide is for you.
It starts with the basics and walks you through an installation of Oracle Database 10g Release 2 RAC from the bare metal up.
This guide will take the approach of offering the easiest paths, with the fewest steps, for accomplishing a task. This approach often means making configuration choices that would be inappropriate for anything other than an evaluation. For that reason, this guide is not appropriate for building production-quality environments, nor does it reflect best practices. The three Linux distributions certified for Oracle 10g Release 2 RAC are Red Hat Enterprise Linux 4 (RHEL4), Red Hat Enterprise Linux 3 (RHEL3), and Novell SUSE Linux Enterprise Server 9 (SLES9).
We will cover both of the Linux 2.6 kernel distributions, RHEL4 and SLES9. RHEL3 is not covered here.
This guide is divided into four parts: Part I covers the installation of the Linux operating system, Part II covers configuring Linux for Oracle, Part III discusses the essentials of partitioning the shared disks, and Part IV covers installation of the Oracle software. A Release 1 version of this guide is also available. Background. The major components of an Oracle RAC 10g Release 2 configuration are described below. Nodes in the cluster are typically separate servers (hosts). Hardware. At the hardware level, each node in a RAC cluster shares three things: access to shared disk storage, a connection to a private network,
and access to a public network. Shared Disk Storage. Oracle RAC relies on a shared-disk architecture. The database files, online redo logs, and control files for the database must be accessible to each node in the cluster. The shared disks also store the Oracle Cluster Registry and Voting Disk (discussed later).
There are a variety of ways to configure shared storage, including direct attached disks (typically SCSI over copper or fiber), Storage Area Networks (SAN), and Network Attached Storage (NAS). Private Network. Each cluster node is connected to all other nodes via a private high-speed network, also known as the cluster interconnect or high-speed interconnect (HSI). This network is used by Oracle's Cache Fusion technology to effectively combine the physical memory (RAM) in each host into a single cache. Oracle Cache Fusion allows data stored in the cache of one Oracle instance to be accessed by any other instance by transferring it across the private network. It also preserves data integrity and cache coherency by transmitting locking and other synchronization information across cluster nodes.
The private network is typically built with Gigabit Ethernet, but for high-volume environments, many vendors offer proprietary low-latency, high-bandwidth solutions specifically designed for Oracle RAC. Linux also offers a means of bonding multiple physical NICs into a single virtual NIC (not covered here) to provide increased bandwidth and availability. Public Network. To maintain high availability, each cluster node is assigned a virtual IP address (VIP). In the event of node failure, the failed node's IP address can be reassigned to a surviving node to allow applications to continue accessing the database through the same IP address.
Configuring the Cluster Hardware. There are many different ways to configure the hardware for an Oracle RAC cluster. Our configuration here uses two servers with two CPUs, 1GB RAM, two Gigabit Ethernet NICs, a dual-channel SCSI host bus adapter (HBA), and eight SCSI disks connected via copper to each host (four disks per channel). The disks were configured as Just a Bunch Of Disks (JBOD), that is, with no hardware RAID controller. Software. At the software level, each node in a RAC cluster needs an operating system, Oracle Clusterware,
Oracle RAC software, and an Oracle Automatic Storage Management (ASM) instance (optional). Operating System. Oracle RAC is supported on many different operating systems. This guide focuses on Linux. The operating system must be properly configured for Oracle, including installing the necessary software packages, setting kernel parameters, configuring the network, establishing an account with the proper security, configuring disk devices, and creating directory structures. All these tasks are described in this guide.
Oracle Cluster Ready Services becomes Oracle Clusterware. Oracle RAC 10g Release 1 introduced Oracle Cluster Ready Services (CRS), a platform-independent set of system services for cluster environments.
In Release 2, Oracle has renamed this product to Oracle Clusterware. Clusterware maintains two files: the Oracle Cluster Registry (OCR) and the Voting Disk.
The OCR and the Voting Disk must reside on shared disks as either raw partitions or files in a cluster filesystem. This guide describes creating the OCR and Voting Disks using a cluster filesystem (OCFS2) and walks through the CRS installation; a sketch of preparing such a filesystem follows.
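Preparing an OCFS2 volume that can hold the OCR and Voting Disk typically means formatting the shared device on one node and mounting it on all nodes. A sketch, assuming a hypothetical shared device /dev/sdb1 and mount point /u02/oradata:

# on one node only: format with 4 node slots and a volume label
# (the block and cluster sizes shown are common choices, not requirements)
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oradata /dev/sdb1

# on every node: mount with the options recommended for Oracle files
# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /u02/oradata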
Oracle RAC Software. Oracle RAC 10g Release 2 software is the heart of the RAC database and must be installed on each cluster node. Fortunately, the Oracle Universal Installer (OUI) does most of the work of installing the RAC software on each node. You only have to install RAC on one node; OUI does the rest. Oracle Automatic Storage Management (ASM). ASM is a new feature in Oracle Database 10g that provides the services of a filesystem, logical volume manager, and software RAID in a platform-independent manner. Oracle ASM can stripe and mirror your disks, allow disks to be added or removed while the database is under load, and automatically balance I/O to remove "hot spots." It also supports direct and asynchronous I/O and implements the Oracle Data Manager API (simplified I/O system call interface) introduced in Oracle9i. Oracle ASM is not a general-purpose filesystem and can be used only for Oracle data files, redo logs, control files, and the RMAN Flash Recovery Area. Files in ASM can be created and named automatically by the database (by use of the Oracle Managed Files feature) or manually by the DBA. Because the files stored in ASM are not accessible to the operating system, the only way to perform backup and recovery operations on databases that use ASM files is through Recovery Manager (RMAN). ASM is implemented as a separate Oracle instance that must be up if other databases are to be able to access it.
Memory requirements for ASM are light: only 64MB for most systems.
In Oracle RAC environments, an ASM instance must be running on each cluster node. Part I: Installing Linux. Install and Configure Linux as described in the first guide in this series.
You will need three IP addresses for each server: one for the private network, one for the public network, and one for the virtual IP address. Use the operating system's network configuration tools to assign the private and public network addresses. Do not assign the virtual IP address using the operating system's network configuration tools; this will be done by the Oracle Virtual IP Configuration Assistant (VIPCA) during the Oracle RAC software installation. A sketch of the resulting name resolution entries follows.
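For a two-node cluster, the /etc/hosts entries on each node might look like the following sketch; the hostnames and addresses here are hypothetical placeholders:

# public network
192.168.2.131   rac1.example.com       rac1
192.168.2.132   rac2.example.com       rac2
# private interconnect
10.10.10.31     rac1-priv.example.com  rac1-priv
10.10.10.32     rac2-priv.example.com  rac2-priv
# virtual IPs (configured later by VIPCA, not by the OS tools)
192.168.2.231   rac1-vip.example.com   rac1-vip
192.168.2.232   rac2-vip.example.com   rac2-vip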
Red Hat Enterprise Linux 4 (RHEL4). Required kernel: 2.6.9-5.EL or higher; verify the kernel version with uname -r. Other required packages (at the listed versions or higher) include compat-db, gcc-c++, glibc-common, libstdc++-devel, and pdksh.
Verify the installed packages as sketched below. SUSE Linux Enterprise Server 9 (SLES9). Required package sets: Basis Runtime System, YaST, Graphical Base System, Linux Tools, KDE Desktop Environment, and C/C++ Compiler and Tools (not selected by default). Do not install: Authentication Server (NIS, LDAP, Kerberos). Required kernel: 2.6.5 or higher. Verify the kernel version and the installed package versions as for RHEL4.
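A quick way to check several packages at once is a single rpm query; the package list here follows the RHEL4 requirements above:

# reports the installed version of each package, or "not installed"
# rpm -q compat-db gcc-c++ glibc-common libstdc++-devel pdksh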
Part II: Configure Linux for Oracle. Create the Oracle Groups and User Account. Next we'll create the Linux groups and user account that will be used to install and maintain the Oracle 10g Release 2 software. The user account will be called 'oracle' and the groups will be 'oinstall' and 'dba.' Execute the following commands as root on one cluster node only.
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/useradd -m -g oinstall -G dba oracle
# id oracle
The user ID and group IDs must be the same on all cluster nodes.
Using the information from the id oracle command, create the Oracle groups and user account on the remaining cluster nodes, passing the same IDs explicitly; a sketch follows.
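Assuming, purely for illustration, that id oracle reported user ID 501 and group IDs 501 (oinstall) and 502 (dba), the commands on each remaining node would look like this:

# hypothetical IDs; substitute the values reported by id oracle
# /usr/sbin/groupadd -g 501 oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/useradd -m -u 501 -g oinstall -G dba oracle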
Set the password on the oracle account.
# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
Create Mount Points.
Now create mount points to store the Oracle 10g Release 2 software. This guide will adhere to the Optimal Flexible Architecture (OFA) naming conventions in creating the directory structure. For more information on OFA standards, see Appendix D of the Oracle Database 10g Release 2 Installation Guide. Issue the following commands as root.
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
Configure Kernel Parameters.
Log in as root and configure the Linux kernel parameters on each node by appending the required settings to /etc/sysctl.conf, as sketched below.
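A sketch of that settings block, using the parameter values published in Oracle's 10g Release 2 quick installation guides; check them against your system's memory size and Oracle's current documentation before use:

# cat >> /etc/sysctl.conf <<EOF
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
EOF

# apply the new settings immediately, without a reboot
# /sbin/sysctl -p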
On SUSE Linux Enterprise Server 9, also set the kernel parameter disable_cap_mlock in /etc/sysctl.conf: disable_cap_mlock = 1. After completing the steps above, run the following command so the settings are reapplied at each boot: # /sbin/chkconfig boot.sysctl on. Setting Shell Limits for the oracle User.
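Shell limits for the oracle user are typically raised in /etc/security/limits.conf; a sketch using the limit values from Oracle's 10g installation documentation (the pam_limits module path can vary by distribution):

# cat >> /etc/security/limits.conf <<EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

# ensure PAM enforces these limits for login sessions
# cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF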