11gR2 Clusterware and Grid Home – What You Need to Know

This article is a detailed walkthrough of the overall 11g Clusterware architecture, recorded here for future reference.

DETAILS

  • 11gR2 Clusterware Key Facts
  • 11gR2 Clusterware is required to be up and running prior to installing a 11gR2 Real Application Clusters database.
  • The GRID home consists of the Oracle Clusterware and ASM. ASM should not be in a separate home.
  • The 11gR2 Clusterware can be installed in “Standalone” mode for ASM and/or “Oracle Restart” single node support. This clusterware is a subset of the full clusterware described in this document.
  • The 11gR2 Clusterware can be run by itself or on top of vendor clusterware. See the certification matrix for certified combinations. Ref: Note: 184875.1 “How To Check The Certification Matrix for Real Application Clusters”
  • The GRID Home and the RAC/DB Home must be installed in different locations.
  • The 11gR2 Clusterware requires shared OCR and voting files. These can be stored on ASM or a cluster filesystem.
  • The OCR is backed up automatically every 4 hours to /cdata// and can be restored via ocrconfig.
  • The voting file is backed up into the OCR at every configuration change and can be restored via crsctl (see the example after this list).
  • The 11gR2 Clusterware requires at least one private network for inter-node communication and at least one public network for external communication. Several virtual IPs need to be registered with DNS. This includes the node VIPs (one per node), SCAN VIPs (three). This can be done manually via your network administrator or optionally you could configure the “GNS” (Grid Naming Service) in the Oracle clusterware to handle this for you (note that GNS requires its own VIP).
  • A SCAN (Single Client Access Name) is provided to clients to connect to. For more information on SCAN see Note: 887522.1
  • The root.sh script at the end of the clusterware installation starts the clusterware stack. For information on troubleshooting root.sh issues see Note: 1053970.1
  • Only one set of clusterware daemons can be running per node.
  • On Unix, the clusterware stack is started via the init.ohasd script referenced in /etc/inittab with “respawn”.
  • A node can be evicted (rebooted) if a node is deemed to be unhealthy. This is done so that the health of the entire cluster can be maintained. For more information on this see: Note: 1050693.1 “Troubleshooting 11.2 Clusterware Node Evictions (Reboots)”
  • Either have vendor time synchronization software (like NTP) fully configured and running or have it not configured at all and let CTSS handle time synchronization. See Note: 1054006.1 for more information.
  • If installing DB homes for a lower version, you will need to pin the nodes in the clusterware or you will see ORA-29702 errors. See Note 946332.1 and Note:948456.1 for more information.
  • The clusterware stack can be started by either booting the machine, running “crsctl start crs” to start the clusterware stack, or by running “crsctl start cluster” to start the clusterware on all nodes. Note that crsctl is in the /bin directory. Note that “crsctl start cluster” will only work if ohasd is running.
  • The clusterware stack can be stopped by either shutting down the machine, running “crsctl stop crs” to stop the clusterware stack, or by running “crsctl stop cluster” to stop the clusterware on all nodes. Note that crsctl is in the /bin directory.
  • Killing clusterware daemons is not supported.
  • The instance is now part of the .db resource in “crsctl stat res -t” output; there is no separate .inst resource for an 11gR2 instance.
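
For example, a few quick checks that tie these facts together (a minimal sketch; run the commands from the Grid home bin directory, and run "crsctl start/stop crs" as root):

# Overall health of the stack on the local node
crsctl check crs

# List automatic and manual OCR backups
ocrconfig -showbackup

# List the voting files currently in use
crsctl query css votedisk

# Start or stop the stack on the local node (as root)
crsctl start crs
crsctl stop crs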

Clusterware Startup Sequence

The following is the Clusterware startup sequence (image from the “Oracle Clusterware Administration and Deployment Guide”):

Don’t let this picture scare you too much. You aren’t responsible for managing all of these processes, that is the Clusterware’s job!

Short summary of the startup sequence: INIT spawns init.ohasd (with respawn) which in turn starts the OHASD process (Oracle High Availability Services Daemon). This daemon spawns 4 processes.
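
On Linux, for example, the corresponding /etc/inittab entry typically looks like the following (illustrative; the exact identifier and runlevels can vary by platform and release):

h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null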

Level 1: OHASD Spawns:

  • cssdagent – Agent responsible for spawning CSSD.
  • orarootagent – Agent responsible for managing all root owned ohasd resources.
  • oraagent – Agent responsible for managing all oracle owned ohasd resources.
  • cssdmonitor – Monitors CSSD and node health (along with the cssdagent).

Level 2: OHASD rootagent spawns:

  • CRSD – Primary daemon responsible for managing cluster resources.
  • CTSSD – Cluster Time Synchronization Services Daemon
  • Diskmon
  • ACFS (ASM Cluster File System) Drivers

Level 2: OHASD oraagent spawns:

  • MDNSD – Used for DNS lookup
  • GIPCD – Used for inter-process and inter-node communication
  • GPNPD – Grid Plug & Play Profile Daemon
  • EVMD – Event Monitor Daemon
  • ASM – Resource for monitoring ASM instances

Level 3: CRSD spawns:

  • orarootagent – Agent responsible for managing all root owned crsd resources.
  • oraagent – Agent responsible for managing all oracle owned crsd resources.

Level 4: CRSD rootagent spawns:

  • Network resource – To monitor the public network
  • SCAN VIP(s) – Single Client Access Name Virtual IPs
  • Node VIPs – One per node
  • ACFS Registry – For mounting ASM Cluster File System
  • GNS VIP (optional) – VIP for GNS

Level 4: CRSD oraagent spawns:

  • ASM Resource – ASM Instance(s) resource
  • Diskgroup – Used for managing/monitoring ASM diskgroups.
  • DB Resource – Used for monitoring and managing the DB and instances
  • SCAN Listener – Listener for single client access name, listening on SCAN VIP
  • Listener – Node listener listening on the Node VIP
  • Services – Used for monitoring and managing services
  • ONS – Oracle Notification Service
  • eONS – Enhanced Oracle Notification Service
  • GSD – For 9i backward compatibility
  • GNS (optional) – Grid Naming Service – Performs name resolution


Important Log Locations

Clusterware daemon logs are all under /log/. Structure under /log/:

alert.log – look here first for most clusterware issues

./admin:
./agent:
./agent/crsd:
./agent/crsd/oraagent_oracle:
./agent/crsd/ora_oc4j_type_oracle:
./agent/crsd/orarootagent_root:
./agent/ohasd:
./agent/ohasd/oraagent_oracle:
./agent/ohasd/oracssdagent_root:
./agent/ohasd/oracssdmonitor_root:
./agent/ohasd/orarootagent_root:
./client:
./crsd:
./cssd:
./ctssd:
./diskmon:
./evmd:
./gipcd:
./gnsd:
./gpnpd:
./mdnsd:
./ohasd:
./racg:
./racg/racgeut:
./racg/racgevtf:
./racg/racgmain:
./srvm:

The cfgtoollogs dir under <GRID_HOME> and $ORACLE_BASE contains other important logfiles, specifically for rootcrs.pl and configuration assistants like ASMCA, etc.

ASM logs live under $ORACLE_BASE/diag/asm/+asm//trace

The diagcollection.pl script under /bin can be used to automatically collect important files for support. Run this as the root user.
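
A typical collection run looks like this (a minimal sketch assuming the --collect option is available in your version; run diagcollection.pl -h first to confirm):

# As root, from the Grid Infrastructure home
cd <GRID_HOME>/bin
./diagcollection.pl --collect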

Clusterware Resource Status Check

The following command will display the status of all cluster resources:

$ ./crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
ONLINE ONLINE racbde1
ONLINE ONLINE racbde2
ora.LISTENER.lsnr
ONLINE ONLINE racbde1
ONLINE ONLINE racbde2
ora.SYSTEMDG.dg
ONLINE ONLINE racbde1
ONLINE ONLINE racbde2
ora.asm
ONLINE ONLINE racbde1 Started
ONLINE ONLINE racbde2 Started
ora.eons
ONLINE ONLINE racbde1
ONLINE ONLINE racbde2
ora.gsd
OFFLINE OFFLINE racbde1
OFFLINE OFFLINE racbde2
ora.net1.network
ONLINE ONLINE racbde1
ONLINE ONLINE racbde2
ora.ons
ONLINE ONLINE racbde1
ONLINE ONLINE racbde2
ora.registry.acfs
ONLINE ONLINE racbde1
ONLINE ONLINE racbde2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racbde1
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE racbde2
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE racbde2
ora.oc4j
1 OFFLINE OFFLINE
ora.rac.db
1 ONLINE ONLINE racbde1 Open
2 ONLINE ONLINE racbde2 Open
ora.racbde1.vip
1 ONLINE ONLINE racbde1
ora.racbde2.vip
1 ONLINE ONLINE racbde2
ora.scan1.vip
1 ONLINE ONLINE racbde1
ora.scan2.vip
1 ONLINE ONLINE racbde2
ora.scan3.vip
1 ONLINE ONLINE racbde2

Clusterware Resource Administration

Srvctl and crsctl are used to manage clusterware resources. The general rule is to use srvctl for whatever resource management you can. Crsctl should only be used for things that you cannot do with srvctl (like start the cluster). Both have a help feature to see the available syntax.
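
For example (illustrative commands against the demo database "rac" shown in the output above; run srvctl as the resource owner and the cluster-wide crsctl commands as root):

# Day-to-day resource management with srvctl
srvctl status database -d rac
srvctl stop instance -d rac -i rac1 -o immediate
srvctl start instance -d rac -i rac1

# Whole-stack operations that srvctl cannot do, with crsctl (as root)
crsctl stop cluster -all
crsctl start cluster -all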

Note that the following only shows the available srvctl syntax. For additional explanation on what these commands do, see the Oracle Documentation.

Srvctl syntax:

$ srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"]
Usage: srvctl config database [-d <db_unique_name> [-a] ]
Usage: srvctl start database -d <db_unique_name> [-o <start_options>]
Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
Usage: srvctl status database -d <db_unique_name> [-f] [-v]
Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z]
Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"

Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-o <stop_options>] [-f]
Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
Usage: srvctl remove instance -d <db_unique_name> [-i <inst_name>] [-f] [-y]

Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <server_pool> [-c {UNIFORM | SINGLETON}] } [-k <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>]
Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-a]
Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-c {UNIFORM | SINGLETON}] [-P {BASIC|PRECONNECT|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <integer>] [-w <integer>]
Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
Specify instances for an administrator-managed database, or nodes for a policy managed database
Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]

Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-p <portnum>] [-m <multicast-ip-address>] [-e <eons-listen-port>] [-l <ons-local-port>] [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s] [-e]
Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-m <multicast-ip-address>] [-p <multicast-portnum>] [-e <eons-listen-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl start nodeapps [-n <node_name>] [-v]
Usage: srvctl stop nodeapps [-n <node_name>] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-v]
Usage: srvctl disable nodeapps [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "<name_list>"]
Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]

Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
Usage: srvctl disable vip -i <vip_name> [-v]
Usage: srvctl enable vip -i <vip_name> [-v]
Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl stop vip { -n <node_name> | -i <vip_name> } [-f] [-r] [-v]
Usage: srvctl status vip { -n <node_name> | -i <vip_name> }
Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]

Usage: srvctl add asm [-l <lsnr_name>]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n <node_name>] [-a]
Usage: srvctl enable asm [-n <node_name>]
Usage: srvctl disable asm [-n <node_name>]
Usage: srvctl modify asm [-l <lsnr_name>]
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t <name>[, ...]]
Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv asm -t "<name>[, ...]"

Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a]
Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl remove diskgroup -g <dg_name> [-f]

Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <net_num>]
Usage: srvctl config listener [-l <lsnr_name>] [-a]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"

Usage: srvctl add scan -n <scan_name> [-k <network_number> [-S <subnet>/<netmask>[/if1[|if2|...]]]]
Usage: srvctl config scan [-i <ordinal_number>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl stop scan [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan [-i <ordinal_number>]
Usage: srvctl enable scan [-i <ordinal_number>]
Usage: srvctl disable scan [-i <ordinal_number>]
Usage: srvctl modify scan -n <scan_name>
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]]
Usage: srvctl config scan_listener [-i <ordinal_number>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan_listener [-i <ordinal_number>]
Usage: srvctl enable scan_listener [-i <ordinal_number>]
Usage: srvctl disable scan_listener [-i <ordinal_number>]
Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]}
Usage: srvctl remove scan_listener [-f] [-y]

Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl config srvpool [-g <pool_name>]
Usage: srvctl status srvpool [-g <pool_name>] [-a]
Usage: srvctl status server -n "<server_list>" [-a]
Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl remove srvpool -g <pool_name>

Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n <node_name>] [-v]
Usage: srvctl status oc4j [-n <node_name>]
Usage: srvctl enable oc4j [-n <node_name>] [-v]
Usage: srvctl disable oc4j [-n <node_name>] [-v]
Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v]
Usage: srvctl remove oc4j [-f] [-v]

Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>

Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
Usage: srvctl config filesystem -d <volume_device>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
Usage: srvctl status filesystem -d <volume_device>
Usage: srvctl enable filesystem -d <volume_device>
Usage: srvctl disable filesystem -d <volume_device>
Usage: srvctl modify filesystem -d <volume_device> -u <user>
Usage: srvctl remove filesystem -d <volume_device> [-f]

Usage: srvctl start gns [-v] [-l <log_level>] [-n <node_name>]
Usage: srvctl stop gns [-v] [-n <node_name>] [-f]
Usage: srvctl config gns [-v] [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V]
Usage: srvctl status gns -n <node_name>
Usage: srvctl enable gns [-v] [-n <node_name>]
Usage: srvctl disable gns [-v] [-n <node_name>]
Usage: srvctl relocate gns [-v] [-n <node_name>] [-f]
Usage: srvctl add gns [-v] -d <domain> -i <vip_name|ip> [-k <network_number> [-S <subnet>/<netmask>[/<interface>]]]
srvctl modify gns [-v] [-f] [-l <log_level>] [-d <domain>] [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>]
Usage: srvctl remove gns [-f] [-d <domain_name>]

Crsctl Syntax (for further explanation of these commands see the Oracle Documentation)

$ ./crsctl -h
Usage: crsctl add - add a resource, type or other entity
crsctl check - check a service, resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource, type or other entity
crsctl disable - disable autostart
crsctl enable - enable autostart
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource, type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource, server or other entity
crsctl replace - replaces the location of voting files
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource, server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource, server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value, restoring its default

For more information on each command, run “crsctl <command> -h”.

OCRCONFIG Options:

Note that the following only shows the available ocrconfig syntax. For additional explanation on what these commands do, see the Oracle Documentation.

$ ./ocrconfig -help
Name:
ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

Synopsis:
ocrconfig [option]
option:
[-local] -export <filename>
- Export OCR/OLR contents to a file
[-local] -import <filename> - Import OCR/OLR contents from a file
[-local] -upgrade [<user> [<group>]]
- Upgrade OCR from previous version
-downgrade [-version <version string>]
- Downgrade OCR to the specified version
[-local] -backuploc <dirname> - Configure OCR/OLR backup location
[-local] -showbackup [auto|manual] - Show OCR/OLR backup information
[-local] -manualbackup - Perform OCR/OLR backup
[-local] -restore <filename> - Restore OCR/OLR from physical backup
-replace <current filename> -replacement <new filename>
- Replace a OCR device/file <filename1> with <filename2>
-add <filename> - Add a new OCR device/file
-delete <filename> - Remove a OCR device/file
-overwrite - Overwrite OCR configuration on disk
-repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename>
- Repair OCR configuration on the local node
-help - Print out this help information

Note:
* A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.
* Only -local -showbackup [manual] is supported.
* Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry

OLSNODES Options

Note that the following only shows the available olsnodes syntax. For additional explanation on what these commands do, see the Oracle Documentation.

$ ./olsnodes -h
Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode; use at direction of Oracle Support only.
-c print clusterware name

Cluster Verification Options

Note that the following only shows the available cluvfy syntax. For additional explanation on what these commands do, see the Oracle Documentation.

Component Options:

$ ./cluvfy comp -list

USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

Valid components are:
nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements
clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
olr : checks OLR integrity
ha : checks HA integrity
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv : checks administrative privileges
peer : compares properties with peers
software : checks software distribution
asm : checks ASM integrity
acfs : checks ACFS integrity
gpnp : checks GPnP integrity
gns : checks GNS integrity
scan : checks SCAN configuration
ohasd : checks OHASD integrity
clocksync : checks Clock Synchronization
vdisk : check Voting Disk Udev settings

Stage Options:

$ ./cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid stage options and stage names are:
-post hwos : post-check for hardware and operating system
-pre cfs : pre-check for CFS setup
-post cfs : post-check for CFS setup
-pre crsinst : pre-check for CRS installation
-post crsinst : post-check for CRS installation
-pre hacfg : pre-check for HA configuration
-post hacfg : post-check for HA configuration
-pre dbinst : pre-check for database installation
-pre acfscfg : pre-check for ACFS Configuration.
-post acfscfg : post-check for ACFS Configuration.
-pre dbcfg : pre-check for database configuration
-pre nodeadd : pre-check for node addition.
-post nodeadd : post-check for node addition.
-post nodedel : post-check for node deletion.

Installing a GI PSU with opatchauto in 12cR2

Test environment:

Oracle 12.2.0.1 RAC, 2 nodes

Environment layout:

Type Home path Owner Version Shared
GI /u01/app/12.2.0/grid grid 12.2.0.1 no
DB1 /u01/app/oracle/product/12.2.0/dbhome_1 oracle 12.2.0.1 no
DB2 /u01/app/oracle/product/12.2.0/dbhome_1 oracle 12.2.0.1 no

Download the latest OPatch (patch 6880880).

Unzip it into both the GI home and the DB home, on both nodes.

As the grid user:

[grid@rac1 grid]$ unzip /home/oracle/p6880880_122010_Linux-x86-64.zip -d .

As the oracle user:

[oracle@rac1 dbhome_1]$ unzip /home/oracle/p6880880_122010_Linux-x86-64.zip -d .

The patch itself:

[root@rac2 oracle]# unzip p28183653_122010_Linux-x86-64.zip

chmod -R 775 28183653

In 12cR2 the OCM response file that earlier versions required is no longer needed.

Create the wallet

If you run opatchauto as a non-root user, you must create a wallet for that user.

Command parameters:

[grid@rac1 bin]$ pwd
/u01/app/12.2.0/grid/OPatch/auto/core/bin

[grid@rac1 bin]$ ./patchingWallet.sh -help
Usage: run patchingWallet.cmd or patchingWallet.sh with the following parameters:
[ options ] { -create | -delete | -list } alias ...
Supported alias format :: <userName>:<hostName>{:}{/}<protocol>
Examples :
abc:xyz:ssh
abc:xyz/ssh

-walletDir - <Path to wallet directory.>
[-password - If you should be prompted for the wallet password.]
[-useStdin - <Read passwords from stdin rather than attempting to use Console.>]
[-map - <Map name within wallet.>]
[-force - <Force overwrite of existing aliases.>]
[-list - List wallet]
[-listmap - List maps available in wallet]
[-create - Create alias]
[-delete - Delete alias]
[-log_priority - <Log priority. Supported values: OFF, SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST, ALL.>]
[-log - <Log file. The value can be a filename or one of the special values STDOUT, STDERR, or DISABLE.>]
[-help - <Displays usage.>]


As the grid user:

[grid@rac1 bin]$ ./patchingWallet.sh -walletDir /u01/app/grid/wallet -create grid:rac1:ssh grid:rac2:ssh -log /u01/app/grid/wallet/wallet.log
Session log file is /u01/app/grid/wallet/wallet.log
Enter Password for grid:rac1:ssh:
Confirm Password for grid:rac1:ssh:
Enter Password for grid:rac2:ssh:
Confirm Password for grid:rac2:ssh:

With the ssh protocol, just enter the corresponding password for each alias when prompted.

Check for patch conflicts

[grid@rac1 bin]$ /u01/app/12.2.0/grid/OPatch/opatchauto apply /home/oracle/28183653 -oh /u01/app/12.2.0/grid -analyze -wallet /u01/app/grid/wallet

OPatchauto session is initiated at Wed Sep 26 15:28:29 2018

System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2018-09-26_03-28-38PM.log.

Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2018-09-26_03-29-24PM.log
The id for this session is SQGY
[sudo] password for grid: grid is not in the sudoers file. This incident will be reported.

OPatchAuto failed.

OPatchauto session completed at Wed Sep 26 15:29:28 2018
Time taken to complete the session 0 minute, 59 seconds

opatchauto failed with error code 42

This failed because I ran opatchauto as the grid user and grid is not a sudoer, so add grid to the sudoers file:

[root@rac1 oracle]# whereis sudoers
sudoers: /etc/sudoers /etc/sudoers.d /usr/share/man/man5/sudoers.5.gz

## Allow root to run any commands anywhere
root ALL=(ALL) ALL
grid ALL=(ALL) ALL

12c new features: OPatch Automation Tool – opatchauto

Starting with 12c, in a Grid/RAC cluster environment patches are installed with the opatchauto command run as root; in 11g this was "opatch auto".

opatchauto can patch the whole GI cluster in one pass, covering both the grid home and the database homes, which makes patching a single node or a RAC very simple.

Supported platforms

  • Oracle Solaris on x86-64 (64-bit)
  • Linux x86-64
  • Oracle Solaris on SPARC (64-bit)
  • IBM AIX on POWER Systems (64-bit)
  • HP-UX Itanium
  • Linux

Preparation

  • Locate your OPatch/opatchauto directory and make sure it is the latest version (see the version check after this list).
  • Set the correct environment variables.
  • Create a wallet to store the password information.
  • Create a Node Manager to handle start/stop operations.
  • Take a backup before patching.
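
For example, to confirm the tool versions in each home before starting (paths match the test environment above):

# As grid - GI home
/u01/app/12.2.0/grid/OPatch/opatch version
/u01/app/12.2.0/grid/OPatch/opatchauto version

# As oracle - DB home
/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch version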

Execution

  • Obtain the required patch.
  • Read the README carefully.
  • Check the prerequisites: "opatchauto apply -analyze" does not actually change the system, it only simulates the operation.
  • Apply the patch.
  • Check that the patch was applied to each Oracle home.
  • Check that the software is running normally.
  • If anything is wrong, check the logs.
  • Roll back if necessary (see the sketch after this list).
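
If a rollback is needed, the call mirrors the apply syntax shown below; for example, removing the same PSU from the GI home only (an illustrative invocation; always follow the patch README for the exact rollback steps):

# As root
/u01/app/12.2.0/grid/OPatch/opatchauto rollback /home/oracle/28183653 -oh /u01/app/12.2.0/grid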

Detailed opatchauto parameters:

[grid@rac1 OPatch]$ ./opatchauto -h
Oracle OPatchAuto Version 13.9.0.2.0
Copyright (c) 2016, Oracle Corporation. All rights reserved.

Usage: opatchauto [ command ] [ -help ]

Purpose:
A patch orchestration tool that generates patching instructions specific
to your target configuration and then uses OPatch to perform the patching
operations without user intervention. Specifically, OPatchauto can:
Perform pre-patch checks.
Apply the patch
Perform post-patch checks.
Roll back patches when patch de installation is required.

command := version
apply
resume
rollback

<global_arguments> := -help

example:
'opatchauto -help'
'opatchauto version'
'opatchauto apply -help'
'opatchauto resume -help'
'opatchauto rollback -help'


Detailed output for the apply option:

[grid@rac1 OPatch]$ ./opatchauto apply -help
Oracle OPatchAuto Version 13.9.0.2.0
Copyright (c) 2016, Oracle Corporation. All rights reserved.

DESCRIPTION
This operation applies patch.

Purpose:
Apply a System Patch to Oracle Home. If patch location is not
specified, current directory will be taken as the patch location.

SYNTAX
opatchauto apply [ <patch-location> ]
[ -phBaseDir <patch.base.directory> ]
[ -oh <home> ] [ -log <log> ]
[ -logLevel <log_priority> ] [ -binary ]
[ -analyze ]
[ -invPtrLoc <inventory.pointer.location> ]
[ -host <host> ] [ -wallet <wallet> ]
[ -force_conflict ] [ -rolling ]
[ -database <database> ] [ -generatesteps ]
[ -norestart ] [ -ocmrf <ocmrf> ] [ -sdb ]
[ -remote ] [ -nonrolling ]

OPTIONS
-phBaseDir <patch.base.directory>
The location of base patch directory.

-oh <home>
The location of the oracle home.

-log <log>
The log location.

-logLevel <log_priority>
The log level (defaults to "INFO").
Supported values:
[SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST, ALL, OFF]

-binary
Forces execution of "-phases offline:binary-patching".

-analyze
If this option is selected, the environment will be analysed for suitability of the patch on each home, without affecting the home.
The patch will not be applied or rolled back, and targets will not be shut down.

-invPtrLoc <inventory.pointer.location>
The central inventory pointer file location.

-host <host>
The remote host or host:port.

-wallet <wallet>
The location of the wallet file. It is a mandatory option from DB 12.2 onwards if Grid Home patching is requested.

-force_conflict
If a conflict exist which prevents the patch from being applied, this flag can be used to force application of the patch.
All the conflicting patches will be removed before applying the current patch.

-rolling
Enables sdb rolling mode where database(s) are patched one after the other.

-database <database>
List of databases to be patched.

-generatesteps
Enables generation of steps.

-norestart
The no restart option during execution.

-ocmrf <ocmrf>
Location of ocmrf file.

-sdb
To signify patching sharded database. Run 'opatchauto <apply|rollback> -sdb -help' to get more help on patching a sharded database.

-remote
Enables remote node patching. This is supported only for Grid setup and it should be up and running.

-nonrolling
Enables non-rolling mode.

PARAMETERS
patch-location
The patch location.

EXAMPLES
To patch GI home and all RAC homes:
'<GI_HOME>/OPatch/opatchauto apply <Patch_Location>'

To patch multiple homes:
'<GI_HOME>/OPatch/opatchauto apply <Patch_Location> -oh <GI_HOME>,<RAC_HOME1>,<RAC_HOME2>'

To patch databases running only from RAC homes:
'<RAC_HOME>/OPatch/opatchauto apply <Patch_Location> -database db1,db2...dbn'

To patch software-only installation:
'<RAC_HOME>/OPatch/opatchauto apply <Patch_Location> -oh <RAC_HOME>' OR
'<GI_HOME>/OPatch/opatchauto apply <Patch_Location> -oh <GI_HOME>'

Before installing the patch, download the latest OPatch package (patch 6880880) to avoid known issues in older OPatch releases.

The specific opatchauto commands for installing the GI PSU:

  1. Apply the PSU to the GI home and all Oracle RAC database homes at the same time:
# opatchauto apply <UNZIPPED_PATCH_LOCATION>/28183653 -ocmrf <ocm response file>
  2. Apply the PSU to the GI home only:
# opatchauto apply <UNZIPPED_PATCH_LOCATION>/28183653 -oh <GI_HOME> -ocmrf <ocm response file>
  3. Apply the PSU to the RAC database homes only:
# opatchauto apply <UNZIPPED_PATCH_LOCATION>/28183653 -oh <oracle_home1_path>,<oracle_home2_path> -ocmrf <ocm response file>

Oracle Cluster Time Synchronization Service (CTSS)

I used to set up NTP for time synchronization when installing RAC; later Oracle introduced CTSS. The official description follows:

The Cluster Time Synchronization Service (CTSS) is installed as part of Oracle Clusterware. By default the CTSS runs in observer mode if it detects a time synchronization service or a time synchronization service configuration, valid or broken, on the system. If CTSS detects that there is no time synchronization service or time synchronization service configuration on any node in the cluster, then CTSS goes into active mode and takes over time management for the cluster

In short, CTSS is installed as part of the Grid Infrastructure and automatically checks whether another time synchronization service is configured on the servers. If none is found, CTSS takes over automatically.

If CTSS is already in active mode and a new node joins, CTSS compares the new node's time with the cluster's reference time. If the offset is within a certain range, CTSS simply synchronizes it; if it is beyond that range, CTSS performs slew time synchronization, speeding up or slowing down the new node's clock until it matches the reference time.

When the cluster starts, CTSS starts with it and runs in active mode. If it finds a time offset beyond the limit (24 hours), CTSS writes a message to the alert.log and the cluster fails to start; the time on the joining node must then be adjusted manually.

Check whether a third-party time synchronization service exists in the cluster

[grid@rac1 ~]$ cluvfy comp clocksync -n all

Verifying Clock Synchronization ...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP

Verifying Network Time Protocol (NTP) ...
Verifying '/etc/chrony.conf' ...PASSED
Verifying '/var/run/ntpd.pid' ...WARNING (PRVG-1019)
Verifying '/var/run/chronyd.pid' ...WARNING (PRVG-1019)
Verifying Daemon 'ntpd' ...FAILED (PRVG-1024, PRVF-7590)
Verifying Daemon 'chronyd' ...FAILED (PRVG-1024, PRVF-7590)
Verifying Network Time Protocol (NTP) ...FAILED (PRVG-1019, PRVG-1024, PRVF-7590)
Verifying Clock Synchronization ...FAILED (PRVG-1019, PRVG-1024, PRVF-7590)

Verification of Clock Synchronization across the cluster nodes was unsuccessful on all the specified nodes.


Failures were encountered during execution of CVU verification request "Clock Synchronization across the cluster nodes".

Verifying Clock Synchronization ...FAILED
Verifying Network Time Protocol (NTP) ...FAILED
Verifying '/var/run/ntpd.pid' ...WARNING
PRVG-1019 : The NTP configuration file "/var/run/ntpd.pid" does not exist
on nodes "rac2,rac1"

Verifying '/var/run/chronyd.pid' ...WARNING
PRVG-1019 : The NTP configuration file "/var/run/chronyd.pid" does not
exist on nodes "rac2,rac1"

Verifying Daemon 'ntpd' ...FAILED
PRVG-1024 : The NTP daemon or Service was not running on any of the cluster
nodes.

rac2: PRVF-7590 : "ntpd" is not running on node "rac2"
rac2: Liveness check failed for "ntpd"

rac1: PRVF-7590 : "ntpd" is not running on node "rac1"
rac1: Liveness check failed for "ntpd"

Verifying Daemon 'chronyd' ...FAILED
PRVG-1024 : The NTP daemon or Service was not running on any of the cluster
nodes.

rac2: PRVF-7590 : "chronyd" is not running on node "rac2"
rac2: Liveness check failed for "chronyd"

rac1: PRVF-7590 : "chronyd" is not running on node "rac1"
rac1: Liveness check failed for "chronyd"


CVU operation performed: Clock Synchronization across the cluster nodes
Date: Sep 20, 2018 4:01:05 PM
CVU home: /u01/app/12.2.0/grid/
User: grid

Check the CTSS status

[grid@rac1 ~]$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode

Remove the third-party time synchronization configuration files

rm -rf /etc/ntp.conf
rm -rf /etc/chrony.conf
rm -rf /var/run/chronyd.pid

After about half a minute the status changes to Active mode:

[grid@rac1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

At the same time a record appears in the CRS alert log:

/u01/app/grid/diag/crs/rac1/crs/trace/alert.log

2018-09-20 16:16:33.139 [OCTSSD(2801)]CRS-2410: The Cluster Time Synchronization Service on host rac1 is in active mode.

Check cluster time synchronization again:

[grid@rac1 ~]$ cluvfy comp clocksync -n all

Verifying Clock Synchronization ...PASSED

Verification of Clock Synchronization across the cluster nodes was successful.

CVU operation performed: Clock Synchronization across the cluster nodes
Date: Sep 20, 2018 5:02:48 PM
CVU home: /u01/app/12.2.0/grid/
User: grid

Configuring a DNS server on OEL7

When you build a RAC with more than one SCAN IP you need to think about setting up a DNS server; it can also act as a shared DNS server for several virtual machines.

Install the required packages

[root@xb ~]# yum install bind* -y
Loaded plugins: refresh-packagekit
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package bind.x86_64 32:9.8.2-0.68.rc1.el6_10.1 will be installed
---> Package bind-chroot.x86_64 32:9.8.2-0.68.rc1.el6_10.1 will be installed
---> Package bind-devel.x86_64 32:9.8.2-0.68.rc1.el6_10.1 will be installed
---> Package bind-dyndb-ldap.x86_64 0:2.3-8.el6 will be installed
---> Package bind-libs.x86_64 32:9.8.2-0.68.rc1.el6_10.1 will be installed
---> Package bind-sdb.x86_64 32:9.8.2-0.68.rc1.el6_10.1 will be installed
---> Package bind-utils.x86_64 32:9.8.2-0.68.rc1.el6_10.1 will be installed
--> Finished Dependency Resolution

The main files:

/etc/named # named configuration directory
/etc/named.conf # main configuration file
/etc/rc.d/init.d/named # script that starts BIND at boot
/usr/sbin/named # named daemon binary
/usr/sbin/rndc # tool for controlling the named process remotely
/usr/sbin/rndc-confgen # tool for generating rndc keys
/usr/share/doc/bind-9.8.2 # documentation and example files
/usr/share/man/man5/ # man pages
/usr/share/man/man8/ # man pages
/var/named # default directory for BIND zone files
/var/run/named # directory holding the named PID file

Edit named.conf

options {
listen-on port 53 { any; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
recursion yes;

dnssec-enable yes;
dnssec-validation yes;

/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";

managed-keys-directory "/var/named/dynamic";
};

... (lines omitted)
zone "oracle.com" IN {
type master;
file "oracle.com.zone";
allow-transfer {192.0.2.1;};
};
zone "2.0.192.in-addr.arpa" IN {
type master;
file "2.0.192.in-addr.arpa.zone";
};

Two zones were added: oracle.com.zone is the forward lookup zone and 2.0.192.in-addr.arpa.zone is the reverse lookup zone; both files live under /var/named/.

Configure oracle.com.zone

$TTL    86400
@ SOA oracle.com. root (
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
@ NS dns.oracle.com.
dns A 192.0.2.20
rac1 A 192.0.2.11
rac2 A 192.0.2.12
rac-scan A 192.0.2.15
rac-scan A 192.0.2.16
rac-scan A 192.0.2.17
rac1-vip A 192.0.2.13
rac2-vip A 192.0.2.14

Configure 2.0.192.in-addr.arpa.zone

$TTL    86400
@ IN SOA oracle.com. root.dns.oracle.com. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS dns.oracle.com.
11 IN PTR rac1.oracle.com.
12 IN PTR rac2.oracle.com.
13 IN PTR rac1-vip.oracle.com.
14 IN PTR rac2-vip.oracle.com.
15 IN PTR rac-scan.
16 IN PTR rac-scan.
17 IN PTR rac-scan.

Edit /etc/resolv.conf

# Generated by NetworkManager
search oracle.com
nameserver 192.0.2.20
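
Before testing, it helps to validate the configuration and start the name server (a minimal sketch; named-checkconf and named-checkzone ship with the bind packages installed above, and the service name may differ if you use bind-chroot):

named-checkconf /etc/named.conf
named-checkzone oracle.com /var/named/oracle.com.zone
named-checkzone 2.0.192.in-addr.arpa /var/named/2.0.192.in-addr.arpa.zone
systemctl enable named && systemctl start named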

Verify

[root@xb etc]# ping rac1.oracle.com
PING rac1.oracle.com (192.0.2.11) 56(84) bytes of data.
64 bytes from rac1.oracle.com (192.0.2.11): icmp_seq=1 ttl=64 time=1.19 ms
64 bytes from rac1.oracle.com (192.0.2.11): icmp_seq=2 ttl=64 time=0.390 ms
64 bytes from rac1.oracle.com (192.0.2.11): icmp_seq=3 ttl=64 time=0.468 ms
^C
--- rac1.oracle.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2594ms
rtt min/avg/max/mdev = 0.390/0.683/1.192/0.361 ms
[root@xb etc]# nslookup rac-scan
Server: 192.0.2.20
Address: 192.0.2.20#53

Name: rac-scan.oracle.com
Address: 192.0.2.15
Name: rac-scan.oracle.com
Address: 192.0.2.16
Name: rac-scan.oracle.com
Address: 192.0.2.17
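
A reverse lookup should also work against the PTR records defined above (illustrative commands; compare the answers with the zone files):

nslookup 192.0.2.11            # should resolve back to rac1.oracle.com per the PTR record
nslookup rac1-vip.oracle.com   # forward lookup of the node VIP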

Installing Oracle 12c RAC on VMware with Linux 7.5

This is a test installation; the environment is as follows:

OS: OEL 7.5 64-bit (12c no longer supports 32-bit systems)

DB version: 12.2

Nodes: 2 (rac1.oracle.com, rac2.oracle.com)

IP plan:

192.0.2.11  rac1 rac1.oracle.com
192.0.2.13 rac1-vip

192.0.2.12 rac2 rac2.oracle.com
192.0.2.14 rac2-vip

192.0.2.15 rac-scan

192.168.2.11 rac1-priv
192.168.2.12 rac2-priv

Installation media

linuxx64_12201_database.zip

linuxx64_12201_grid_home.zip

oracle linux 7.5.iso

Installing Oracle Linux 7.5

The OS installation steps are omitted here.

One note on configuring shared disks in VMware: add the following parameters to the .vmx file.

disk.enableUUID = "TRUE"
disk.locking="FALSE"

diskLib.dataCacheMaxSize = "0"

diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1:1.SharedBus="Virtual"
scsi1:2.SharedBus="Virtual"
scsi1:3.SharedBus="Virtual"
scsi1:4.SharedBus="Virtual"

Configure yum

mkdir /opt/media
# Mount the OEL 7.5 installation ISO (or copy its contents) to /opt/media first, e.g.:
#   mount -o loop /path/to/oracle-linux-7.5.iso /opt/media
cd /opt/media && createrepo .
yum clean all
yum list

# /etc/yum.repos.d/myrepo.repo
[myrepo]
name=myrepo
baseurl=file:///opt/media
gpgcheck=0
enabled=1

Required parameters, users and groups

yum install oracle-database-server-12cR2-preinstall
[root@rac1 yum.repos.d]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

As you can see, the 12c preinstall package creates several new groups for the oracle user by default. The grid-related groups and user still have to be created manually:

/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/groupadd -g 507 asmoper
/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
/usr/sbin/usermod -u 502 -g oinstall -G dba,asmdba oracle
passwd oracle
passwd grid

Disable SELinux

Edit /etc/selinux/config and set:
SELINUX=disabled

Enable NTP (chrony on Linux 7); I used CTSS instead, so I did not enable it

systemctl enable chronyd.service
systemctl start chronyd.service

Disable the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service

Installation directories

mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory

mkdir -p /u01/app/12.2.0/grid
chown -R grid:oinstall /u01/app/12.2.0/grid
chmod -R 775 /u01/app/12.2.0/grid

mkdir -p /u01/app/grid
chown -R grid:oinstall /u01/app/grid
chmod -R 775 /u01/app/grid

mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle

Environment variables

/home/oracle/.bash_profile

export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_HOSTNAME=rac1.oracle.com
export ORACLE_UNQNAME=rac
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.2.0/grid
export DB_HOME=$ORACLE_BASE/product/12.2.0/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=rac1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'

/home/oracle/grid_env

export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

/home/oracle/db_env

export ORACLE_SID=rac1
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
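
With the two helper files above, switching between environments is just a matter of sourcing them through the aliases defined in .bash_profile, for example:

# Work against ASM / Grid Infrastructure
grid_env
echo $ORACLE_SID $ORACLE_HOME     # +ASM1 /u01/app/12.2.0/grid

# Switch back to the database home
db_env
echo $ORACLE_SID $ORACLE_HOME     # rac1 /u01/app/oracle/product/12.2.0/db_1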

udev

Configuring udev for 12c on Linux 7 is slightly different:

for i in b c d e;
do
echo "KERNEL==\"sd*\", ENV{DEVTYPE}==\"disk\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id -g -u -d \$devnode\", RESULT==\"`/usr/lib/udev/scsi_id -g -u /dev/sd$i`\", RUN+=\"/bin/sh -c 'mknod /dev/asmdisk$i b \$major \$minor; chown grid:asmadmin /dev/asmdisk$i; chmod 0660 /dev/asmdisk$i'"\" >>/etc/udev/rules.d/99-oracle-asmdevices.rules
done

/sbin/udevadm trigger --type=devices --action=change

[root@rac1 rules.d]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Sep 17 14:51 /dev/asmdiskb
brw-rw---- 1 grid asmadmin 8, 32 Sep 17 14:51 /dev/asmdiskc
brw-rw---- 1 grid asmadmin 8, 48 Sep 17 14:51 /dev/asmdiskd
brw-rw---- 1 grid asmadmin 8, 64 Sep 17 14:51 /dev/asmdiske

Installing Grid Infrastructure

In 12.2 the grid software ships as a home image, so as the grid user first unzip the software into the target grid home:

cd /u01/app/12.2.0.1/grid
unzip -q /opt/linuxx64_12201_grid_home.zip

As root, install the cvuqdisk package on all nodes:

su -
# Local node.
cd /u01/app/12.2.0.1/grid/cv/rpm
rpm -Uvh cvuqdisk*

# Remote node.
scp ./cvuqdisk* root@rac2:/tmp
ssh root@rac2 rpm -Uvh /tmp/cvuqdisk*
exit

If you want to use the AFD driver (the newer replacement for ASMLib), refer to the steps below; I used udev here.

# Set environment.
export ORACLE_HOME=/u01/app/12.2.0.1/grid
export ORACLE_BASE=/tmp

# Mark disks (device names are illustrative; match them to your own udev devices).
$ORACLE_HOME/bin/asmcmd afd_label DISK1 /dev/asmdisk-b --init
$ORACLE_HOME/bin/asmcmd afd_label DISK2 /dev/asmdisk-c --init
$ORACLE_HOME/bin/asmcmd afd_label DISK3 /dev/asmdisk-d --init
$ORACLE_HOME/bin/asmcmd afd_label DISK4 /dev/asmdisk-e --init

# Test disks.
$ORACLE_HOME/bin/asmcmd afd_lslbl /dev/asmdisk-b
$ORACLE_HOME/bin/asmcmd afd_lslbl /dev/asmdisk-c
$ORACLE_HOME/bin/asmcmd afd_lslbl /dev/asmdisk-d
$ORACLE_HOME/bin/asmcmd afd_lslbl /dev/asmdisk-e

# unset environment.
unset ORACLE_BASE

exit

Start the installation

[grid@rac1 grid]$ ./gridSetup.sh 
Launching Oracle Grid Infrastructure Setup Wizard...

Choose a standard cluster.

Enter the cluster name.

Add the node 2 information; I chose to make both nodes hub nodes.

For a 12c cluster, besides the usual public and private networks, the arrival of Flex ASM means we also need a network to carry ASM metadata between the nodes. Because the volume of ASM metadata is very small, most systems can let ASM share the cluster private network, i.e. select ASM & Private.

Configure ASM; I had already set up the udev bindings.

Create the GI Management Repository (a.k.a. the management DB). This is another 12c new feature: for a 12c cluster, Oracle creates a small database to store the statistics produced by CHM, called the GI Management Repository. It is stored in the same location (disk group) as the OCR and voting files. I chose No here.

Choose external redundancy.

Set the same password for all accounts.

Do not configure EM.

Accept the default groups.

Configure the base path.

Set the inventory directory.

Click "Fix & Check Again".

These errors can be ignored.

Click Install.

Run the root scripts as root.

Check the CRS services:

[grid@rac2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.DATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.proxy_advm
OFFLINE OFFLINE rac1 STABLE
OFFLINE OFFLINE rac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.asm
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE rac1 STABLE
ora.qosmserver
1 ONLINE ONLINE rac1 STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac1 STABLE
--------------------------------------------------------------------------------

Installing the database

[oracle@rac1 database]$ ./runInstaller 
Starting Oracle Universal Installer...

Install database software only

Choose a RAC installation.

Configure SSH equivalence.

Choose Enterprise Edition.

Choose the installation paths.

Select the dba group everywhere.

I did not use a DNS server here, so the related errors were ignored.

Run root.sh.

Create a new instance.

Create Database

Typical configuration

These errors can be ignored.

Check the RAC status:

[grid@rac1 ~]$ srvctl config database -d xb
Database unique name: xb
Database name: xb
Oracle home: /u01/app/oracle/product/12.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/XB/PARAMETERFILE/spfile.279.987348781
Password file: +DATA/XB/PASSWORD/pwdxb.258.987348279
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: xb1,xb2
Configured nodes: rac1,rac2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

A new view shows the status of all instances:

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu Sep 20 15:43:41 2018

Copyright (c) 1982, 2016, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> desc v$active_instances;
Name Null? Type
----------------------------------------- -------- ----------------------------
INST_NUMBER NUMBER
INST_NAME VARCHAR2(240)
CON_ID NUMBER

SQL> select inst_name from v$active_instances;

INST_NAME
--------------------------------------------------------------------------------
rac1.oracle.com:xb1
rac2.oracle.com:xb2

At this point the 12c RAC installation on OEL 7.5 is essentially complete; the next step is to apply the latest patches to both the grid and database homes.

An Oracle user account locked by repeated wrong passwords

I ran into a database user that kept getting locked:

SQL> select name,lcount from user$ where name='WOLF';

NAME LCOUNT
------------------------------ ----------
WOLF 149

Since the last unlock there had been another 149 failed logins. The listener log had nothing useful, and the IPs seen around the lock time all had the correct password configured.
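
A quick first check from the dictionary shows when the account was locked and what the lock threshold is (illustrative queries; the profile name is assumed to be DEFAULT):

-- When was the account locked, and which profile governs it?
select username, account_status, lock_date, profile from dba_users where username = 'WOLF';

-- How many failed logins are allowed before the lock?
select limit from dba_profiles where profile = 'DEFAULT' and resource_name = 'FAILED_LOGIN_ATTEMPTS';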

My first thought was to enable auditing, but auditing had been turned off on this database and re-enabling it is a hassle because it requires downtime, so I used a trigger instead:

create or replace trigger logon_denied_to_alert
after servererror on database
declare
message varchar2(256);
IP varchar2(15);
v_os_user varchar2(80);
v_module varchar2(50);
v_action varchar2(50);
v_pid varchar2(10);
v_sid number;
v_program varchar2(48);
v_client_id VARCHAR2(64);
v_host varchar2(60);
begin
IF (ora_is_servererror(1017)) THEN

-- get IP for remote connections:
if sys_context('userenv','network_protocol') = 'TCP' then
IP := sys_context('userenv','ip_address');
end if;
select distinct sid into v_sid from sys.v_$mystat;
SELECT p.SPID, v.PROGRAM into v_pid, v_program
FROM V$PROCESS p, V$SESSION v
WHERE p.ADDR = v.PADDR AND v.sid = v_sid;

v_host:=sys_context('userenv','host');
v_os_user := sys_context('userenv','os_user');
dbms_application_info.READ_MODULE(v_module,v_action);

v_client_id := sys_context('userenv','client_identifier');

message:= to_char(sysdate,'Dy Mon dd HH24:MI:SS YYYY')||
' logon denied '|| 'IP ='||nvl(IP,'localhost')||' pid = '||v_pid||
' os user = '||v_os_user||' client id = '||v_client_id||
' with program= '||v_program||' module ='||v_module||' action='||v_action||' host='||v_host;

sys.dbms_system.ksdwrt(2,message);

-- remove comments from next line to let it hang for 5 minutes
-- to be able to do more diagnostics on the operating system:
-- sys.dbms_lock.sleep(300);
end if;
end;
/

The trigger writes the collected information to the alert.log; the result looks like this:

Fri Nov 24 10:42:03 2017 logon denied IP =localhost pid = 62360 os user = Administrator client id =  with program= JDBC Thin Client module =JDBC Thin Client action= host=WINDOWS-8KQN6TF
Fri Nov 24 10:42:43 2017
Fri Nov 24 10:42:43 2017 logon denied IP =localhost pid = 70370 os user = Administrator client id = with program= JDBC Thin Client module =JDBC Thin Client action= host=WINDOWS-8KQN6TF
Fri Nov 24 10:43:23 2017
Fri Nov 24 10:43:23 2017 logon denied IP =localhost pid = 71419 os user = Administrator client id = with program= JDBC Thin Client module =JDBC Thin Client action= host=WINDOWS-8KQN6TF

That led to the machine named WINDOWS-8KQN6TF, where a Tomcat application configured with the wrong password was found.
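
You can query the same attributes for your own session while testing, to see exactly what the trigger will record (illustrative query):

select sys_context('userenv','ip_address')       as ip_address,
       sys_context('userenv','os_user')          as os_user,
       sys_context('userenv','host')             as host,
       sys_context('userenv','network_protocol') as protocol
from dual;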

For reference, the USERENV parameters:

Possible values that can be used for the USERENV namespace:

(Extracted from the SQL Reference documentation)

TERMINAL The operating system identifier for the client of the

current session. In distributed SQL statements, this

option returns the identifier for your local session.

In a distributed environment, this is supported only

for remote SELECTs, not for remote INSERTs, UPDATEs,

or DELETEs. (The return length of this parameter may

vary by operating system.)

LANGUAGE The language and territory currently used by your session,

along with the database character set, in this form:

language_territory.characterset

LANG The ISO abbreviation for the language name, a shorter form

than the existing &#8216;LANGUAGE&#8217; parameter.

SESSIONID The auditing session identifier. You cannot use this

option in distributed SQL statements.

INSTANCE The instance identification number of the current instance.

ENTRYID The available auditing entry identifier. You cannot use

this option in distributed SQL statements. To use this

keyword in USERENV, the initialization parameter AUDIT_TRAIL

must be set to TRUE.

ISDBA TRUE if you currently have the DBA role enabled and FALSE

if you do not.

CLIENT_INFO Returns up to 64 bytes of user session information that can

be stored by an application using the DBMS_APPLICATION_INFO

package.

NLS_TERRITORY The territory of the current session.

NLS_CURRENCY The currency of the current session.

NLS_CALENDAR The current calendar of the current session.

NLS_DATE_FORMAT The date format for the session.

NLS_DATE_LANGUAGE The language used for expressing dates.

NLS_SORT BINARY or the linguistic sort basis.

CURRENT_USER The name of the user whose privilege the current session is

under.

CURRENT_USERID User ID of the user whose privilege the current session is

under.

SESSION_USER Database user name by which the current user is authenticated.

This value remains the same throughout the duration of the

session.

SESSION_USERID Identifier of the database user name by which the current

user is authenticated.

CURRENT_SCHEMA Name of the default schema being used in the current schema.

This value can be changed during the session with an ALTER

SESSION SET CURRENT_SCHEMA statement.

CURRENT_SCHEMAID Identifier of the default schema being used in the current

session.

PROXY_USER Name of the database user who opened the current session on

behalf of SESSION_USER.

PROXY_USERID Identifier of the database user who opened the current session

on behalf of SESSION_USER.

DB_DOMAIN Domain of the database as specified in the DB_DOMAIN

initialization parameter.

DB_NAME Name of the database as specified in the DB_NAME

initialization parameter.

HOST Name of the host machine from which the client has connected.

OS_USER Operating system username of the client process that initiated

the database session.

EXTERNAL_NAME External name of the database user. For SSL authenticated

sessions using V.503 certificates, this field returns the

distinguished name (DN) stored in the user certificate.

IP_ADDRESS IP address of the machine from which the client is connected.

NETWORK_PROTOCOL Network protocol being used for communication, as specified

in the &#8216;PROTOCOL=protocol&#8217; portion of the connect string.

BG_JOB_ID Job ID of the current session if it was established by an

Oracle background process. Null if the session was not

established by a background process.

FG_JOB_ID Job ID of the current session if it was established by a

client foreground process. Null if the session was not

established by a foreground process.

AUTHENTICATION_TYPE How the user was authenticated:

DATABASE: Username/password authentication.

OS: Operating system external user authentication.

NETWORK: Network protocol or ANO authentication.

PROXY: OCI proxy connection authentication.

AUTHENTICATION_DATA Data being used to authenticate the login user. For

X.503 certificate authenticated sessions, this field

returns the context of the certificate in HEX2 format.>

Note: You can change the return value of the AUTHENTICATION_DATA

attribute using the length parameter of the syntax. Values of

up to 4000 are accepted. This is the only attribute of USERENV

for which Oracle implements such a change.
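Several of these attributes are exactly what the trigger above reads; they can also be queried ad hoc with SYS_CONTEXT, for example:

-- Inspect a few USERENV attributes for the current session:
SELECT sys_context('userenv','session_user')        AS session_user,
       sys_context('userenv','os_user')             AS os_user,
       sys_context('userenv','host')                AS host,
       sys_context('userenv','ip_address')          AS ip_address,
       sys_context('userenv','network_protocol')    AS network_protocol,
       sys_context('userenv','authentication_type') AS authentication_type
  FROM dual;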

Scripts: Automated Oracle 11g Installation

Installation script:

https://raw.githubusercontent.com/xblvesting/oracle-install/master/auto-install.sh

Readme

  • Supports installing Oracle 11g and 18c on Linux 6 and 7
  • Supports one-click installation of Data Guard

Partial installation log:

[root@xb ~]# wget http://127.0.0.1/soft/auto-install.sh && chmod a+x auto-install.sh && ./auto-install.sh install_db
......
SELINUX=enforcing
Please input oracle's user password:
(Default password: oracle):
Please input oracle install PATH:
(Default path: /u01/app/oracle):
Please input oracle version to install [11g or 18c]:
(Default version: 11g):
Please input oracle_sid:
(Default oracle_sid: orcl):

=======================================
Now checking rpm,please wait...
=======================================

binutils-2.27-34.base.0.1.el7.x86_64
compat-libcap1-1.10-7.el7.x86_64
compat-libstdc++-33-3.2.3-72.el7.x86_64
gcc-c++-4.8.5-36.0.1.el7.x86_64
gcc-gfortran-4.8.5-36.0.1.el7.x86_64
gcc-4.8.5-36.0.1.el7.x86_64
gcc-objc-4.8.5-36.0.1.el7.x86_64
gcc-objc++-4.8.5-36.0.1.el7.x86_64
gcc-gnat-4.8.5-36.0.1.el7.x86_64
gcc-c++-4.8.5-36.0.1.el7.x86_64
glibc-2.17-260.0.9.el7.x86_64
glibc-headers-2.17-260.0.9.el7.x86_64
glibc-common-2.17-260.0.9.el7.x86_64
glibc-devel-2.17-260.0.9.el7.x86_64
glibc-devel-2.17-260.0.9.el7.x86_64
ksh-20120801-139.0.1.el7.x86_64
libaio-0.3.109-13.el7.x86_64
libaio-devel-0.3.109-13.el7.x86_64
libgcc-4.8.5-36.0.1.el7.x86_64
libstdc++-4.8.5-36.0.1.el7.x86_64
libstdc++-devel-4.8.5-36.0.1.el7.x86_64
libstdc++-devel-4.8.5-36.0.1.el7.x86_64
make-3.82-23.el7.x86_64
sysstat-10.1.5-17.el7.x86_64
libXi-1.7.9-1.el7.x86_64
libXtst-1.2.3-1.el7.x86_64

=======================================
++++++++++++++++CHECK PASS!+++++++++++++++++++++
=======================================

#########################################################
######## Summary Info ############
Oracle Home Dir : /u01/app/oracle/product/11.2.0/dbhome_1
Oracle User Password : oracle
Oracle Base Dir : /u01/app/oracle
Oracle SID : orcl
Oracle Version : 11g
#########################################################

Press any key to start...or Press Ctrl+c to cancel

......

=======================================
Now downloading software,please wait...
=======================================

......

=======================================
Now unziping zip files,please wait...
=======================================

Oracle install pre-setting finish!

=======================================
Now installing db software,please wait...
=======================================

Starting Oracle Universal Installer...

......

The installation of Oracle Database 11g was successful.

As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/oracle/product/11.2.0/dbhome_1/root.sh


Successfully Setup Software.
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

......

=======================================
Database software installed successfully ... OK!
=======================================


=======================================
Now installing soft patchs,please wait...
=======================================

......

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 17478514 18031668 18522509 19121551 19769489 20299013 20760982 21352635 21948347 22502456 23054359 24006111 24732075 25869727 26609445 26392168 26925576 27338049 27734982 28204707

Do you want to proceed? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/11.2.0/dbhome_1')


Is the local system ready for patching? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...
Applying sub-patch '17478514' to OH '/u01/app/oracle/product/11.2.0/dbhome_1'

Patching component oracle.rdbms, 11.2.0.4.0...

Patching component oracle.rdbms.rsf, 11.2.0.4.0...


......

Composite patch 28204707 successfully applied.
OPatch Session completed with warnings.

OPatch completed with warnings.

=======================================
Now installing db instance,please wait...
=======================================

Copying database files
1% complete
3% complete
11% complete
18% complete
26% complete
37% complete
Creating and starting Oracle instance
40% complete
45% complete
50% complete
55% complete
56% complete
60% complete
62% complete
Completing Database Creation
66% complete
70% complete
73% complete
74% complete
75% complete
76% complete
77% complete
88% complete
99% complete
100% complete
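Once DBCA reports 100% complete, a quick sanity check from SQL*Plus confirms the instance is open and shows what the patch scripts recorded (a sketch; the output depends on which bundle scripts the install script actually ran):

-- Instance status and software version:
SELECT instance_name, status, version FROM v$instance;

-- Patch/bundle actions recorded in the data dictionary (if catbundle was run):
SELECT action_time, action, comments
  FROM dba_registry_history
 ORDER BY action_time;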