In September 2004 I assisted in migrating a four-terabyte Oracle data warehouse from a 20-way, 24GB E15K Solaris system to a 4-way Opteron Red Hat Enterprise Linux 3 x86_64
system running on a Sun V40z with 32GB of memory. I did the Linux system
configuration and setup.
This document describes the process of setting up the operating system for that installation.
RHEL 3 Update 3 x86_64
Internally hardware-mirrored 73GB drives for the OS.
32GB swap space on a separate unmirrored disk.
NetApp 980 with 8 gigabit fiber interfaces, vif'ed with two active paths for DB storage.
1 dual-port IBM NetXtreme copper gigabit network card.
4 IBM NetXtreme fiber gigabit network cards.
2 QLogic single-port SAN cards.
On-board eth0 and one port of the copper gigabit card, eth3, bonded for the primary network.
On-board eth1 and one port of the copper gigabit card, eth2, bonded for application traffic.
Two fiber cards, eth4 and eth5, bonded for one NAS storage path.
Two fiber cards, eth6 and eth7, bonded for the second NAS storage path.
The QLogic cards are not used; Oracle RMAN support was not ready yet.
Each bonded pair is connected to two switches in active/passive mode, because
the bonding driver uses the same MAC address for both connections. This allows
failover if there is a switch or network problem. The switches are connected
together for routing data in the data center.
The same is true for the fiber network connectivity.
1. Oracle Software Installation:
glibc version: glibc-2.3.2-95.3
2. Required OS Packages.
The Oracle installer requires 32-bit libraries even for the 64-bit version of
Oracle, so the following packages have to be installed in both 64-bit and
32-bit versions. The 64-bit libraries install into ../lib64/ directories
and the 32-bit libraries into ../lib/ directories.
The 32-bit libraries appear to be needed only by the installer. You will
have to manually copy the 32-bit versions onto the OS and do a forced
install: place them in a directory, then issue "rpm -ivh --force *.rpm".
Itanium installs apparently don't use the /lib64 directory structure
(why, I am not sure), so the issue appears to be different on that
platform. But since we are not doing Itanium, it's a non-issue.
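The force install described above can be sketched as follows (the staging directory and the package names queried are illustrative assumptions, not the exact list used):

```shell
# Illustrative only: staging path and package names are assumptions.
# Check which architectures of a library are already installed:
rpm -q --qf '%{NAME}-%{VERSION}.%{ARCH}\n' glibc libaio

# Copy the i386 packages into a staging area, then force-install them
# alongside their x86_64 counterparts:
cd /tmp/oracle-32bit-rpms
rpm -ivh --force *.rpm
```

The `--qf` query is handy afterwards to confirm that both the x86_64 and i386 versions of each library are present.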
The Oracle installer needs a compiler but cannot work with the newer
gcc 3.2 or 3.4, so you need to link the older gcc and g++ so that
Oracle will install.
My suggestion is not to follow Oracle's recommendation, but to link them in an
$ORACLE_HOME/bin directory and set that path before /usr/bin/gcc and /usr/bin/g++.
/usr/bin/gcc296 is linked to $ORACLE_HOME/bin/gcc:
ln -s /usr/bin/gcc296 $ORACLE_HOME/bin/gcc
/usr/bin/g++296 is linked to $ORACLE_HOME/bin/g++:
ln -s /usr/bin/g++296 $ORACLE_HOME/bin/g++
Make sure $ORACLE_HOME/bin comes in the PATH before /usr/bin; this
supports the Oracle installer without altering the installed base.
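With those links in place, the PATH arrangement for the oracle user might look like this (a sketch; it assumes $ORACLE_HOME is already exported and the compat gcc296/g++296 packages are installed):

```shell
# Sketch: assumes gcc296/g++296 are linked into $ORACLE_HOME/bin as shown
# above, and that $ORACLE_HOME is exported for the oracle user.
export PATH=$ORACLE_HOME/bin:$PATH
which gcc     # should resolve to $ORACLE_HOME/bin/gcc, not /usr/bin/gcc
```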
3. System Tuning.
/etc/sysctl.conf additions. The shared memory segment size is set here
to allow up to 21GB, which is far larger than needed; I believe we are
actually using closer to 7GB for the SGA. We need to make sure that we
leave enough room in memory for disk file caching, for performance reasons.
# For NFS since the DB is on NetApp.
# net.ipv4.ipfrag_low_thresh original value = 196608
net.ipv4.ipfrag_low_thresh = 262144
# net.ipv4.ipfrag_high_thresh original value = 262144
net.ipv4.ipfrag_high_thresh = 393216
# Oracle Settings.
net.core.rmem_max = 262144
net.core.wmem_max = 262144
# We set the segment to allow up to 21GB, but set the sga in oracle to 7GB
kernel.shmmax = 2147483648
kernel.shmall = 21474836480
kernel.sem = 250 256000 100 1024
# Oracle recommends this but god knows why:
net.ipv4.ip_local_port_range = 1024 65000
#Added per RedHat Bug 127240 and 118152 wrw 9/12/2004
vm.pagecache = 1 15 15
# In case asynchronous IO is turned on in Oracle.
fs.aio-max-size = 2147483648
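For reference, kernel.shmmax is expressed in bytes while kernel.shmall is expressed in pages, so the pair of values for a given ceiling can be derived like this (a sketch using a hypothetical 21GB ceiling, not the exact production values above):

```shell
# Derive shmmax (bytes) and shmall (pages) for a hypothetical 21GB ceiling.
# kernel.shmmax is in bytes; kernel.shmall is in PAGE_SIZE pages.
TOTAL_BYTES=$((21 * 1024 * 1024 * 1024))
PAGE_SIZE=$(getconf PAGE_SIZE)          # 4096 on x86_64
SHMALL=$((TOTAL_BYTES / PAGE_SIZE))
echo "kernel.shmmax = $TOTAL_BYTES"
echo "kernel.shmall = $SHMALL"
```

Changes to /etc/sysctl.conf can be applied without a reboot by running "sysctl -p".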
4. Locking the Shared Memory in Memory.
Because our version of Oracle (188.8.131.52) doesn't use HugeTLB
pages, we need to add the following to /etc/rc.local to lock the shared
memory segments into memory.
mount -t ramfs ramfs /dev/shm
chown oracle:dba /dev/shm
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Mounting ramfs over /dev/shm prevents the shared memory segment from being swapped out.
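A quick way to confirm the setup took effect after boot (a sketch; these are standard checks, not output captured from the machine):

```shell
# Confirm /dev/shm is ramfs (ramfs pages are not swappable):
mount | grep /dev/shm

# Confirm the hangcheck-timer module loaded:
/sbin/lsmod | grep hangcheck

# List the shared memory segments Oracle has created:
ipcs -m
```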
5. Bonding Setup.
We used the Broadcom bcm5700 driver rather than the tg3 driver
supplied by Red Hat. I had very flaky results with tg3 and raised
the issue with Red Hat, who did not have a system that could
replicate/test the issue.
So here is /etc/modules.conf:
alias eth0 bcm5700
alias eth1 bcm5700
alias eth2 bcm5700
alias eth3 bcm5700
alias eth4 bcm5700
alias eth5 bcm5700
alias eth6 bcm5700
alias eth7 bcm5700
#Devices are bonded.
#eth0 eth3 are bond0 DataCenter
#eth1 eth2 are bond1 VLAN 119 Gigabit Copper
#eth4 eth5 are bond2 VLAN 112 Gigabit Fiber NAS
#eth6 eth7 are bond3 VLAN 112 Gigabit Fiber NAS
#GigCopper can only do 1500 MTU on those switches.
options bcm5700 full_duplex=1,1,1,1,1,1,1,1
alias bond0 bonding
alias bond1 bonding
alias bond2 bonding
alias bond3 bonding
options bond0 miimon=100 mode=1
options bond1 -o bonding1 miimon=100 mode=1
options bond2 -o bonding2 miimon=100 mode=1 mtu=9000
options bond3 -o bonding3 miimon=100 mode=1 mtu=9000
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptscsih
alias scsi_hostadapter2 qla2300
alias usb-controller usb-ohci
Basic /etc/sysconfig/network-scripts for bonding follows the bonding documentation.
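For completeness, a minimal sketch of the ifcfg files for one bonded pair (the IP address is a placeholder; the real files follow the RHEL bonding documentation):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (address is a placeholder)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (one of bond0's slaves)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```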
Everything else from here is Oracle. This system screams and puts the Solaris
configuration to shame. We do 250MB/s data transfers to the NetApp
filer, and the load peaks at 50 with some spikes to 100. Through all of this,
you can still log in and get sub-millisecond response time. The
Solaris install was on Solaris 8, which has the TCP stack problems and
really isn't the place for a NAS installation.
Since moving the database, we have had no outages due to OS, hardware,
or Oracle failures. On the Solaris platform, we averaged 1.7
outages per month.
Resources on the web were frighteningly slim, but here are some.
W.R.Welty - n9oah.