Tuesday, December 23, 2008

Fun with RAC on Windows (part 2)

Another issue during the RAC installation on Windows was that the VIPCA failed. Both machines were pingable, so there was no obvious cause.

After some searching the following emerged: the VIPs for both nodes were all running on one node.

While trying to move the VIP to the other node, it turned out that the network connection was named "Team A + B" on machine one and "Team A+ B" on machine two. Apparently, on Windows the network connection name itself is used by the VIPCA, so it must match exactly on all nodes.

After renaming the connection so that both nodes matched, the VIPCA completed and the VIP resource could be moved to the second node.
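A quick check like the sketch below can catch this kind of mismatch early. The connection names are the ones from this post; how you collect them per node (for example from the Network Connections panel, or with `netsh interface show interface`) is up to you:

```shell
#!/bin/sh
# Sketch: compare the network connection names reported by each node.
# An exact string comparison - even a single space difference counts.
names_match() {
    [ "$1" = "$2" ]
}

NODE1_NAME="Team A + B"   # as reported on machine one
NODE2_NAME="Team A+ B"    # as reported on machine two

if names_match "$NODE1_NAME" "$NODE2_NAME"; then
    echo "network names match"
else
    echo "network names differ"   # prints this: note the missing space
fi
```

The whitespace difference is nearly invisible by eye, which is exactly why an exact-match check is worth the thirty seconds.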


I added a nice gadget to the sidebar. If you are a follower of this blog, just get on it.

Monday, December 22, 2008

Fun with RAC on Windows (part 1)

After some "nice" experiences installing RAC on Windows, I would like to share them with you - so you might find a solution more easily.

The environment is simple, but not without some nice "features".

We have a three node RAC cluster with two machines in room 1 and the third machine in room 2 (both on the same site).

We did not run cluvfy, as it very often reports errors when everything is in fact OK.

On the SAN we will start with only one voting disk and add two more later. The OCR will be mirrored as well. Furthermore, we intend to use OCFS on some of the LUNs of the SAN.
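For the later steps, the commands would look roughly like the sketch below (Oracle 10g syntax; the `\\.\votedsk2`-style device paths are placeholders for the SAN LUNs, and on 10g the extra voting disks have to be added with the CRS stack down, hence `-force`):

```
:: Sketch (Oracle 10g on Windows, run as Administrator; paths are placeholders)

:: With the CRS stack down on all nodes, add two more voting disks:
crsctl add css votedisk \\.\votedsk2 -force
crsctl add css votedisk \\.\votedsk3 -force

:: Add a mirror for the OCR:
ocrconfig -replace ocrmirror \\.\ocrmirror
```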

Well, the installation itself went OK - the other machine was found, etc. etc.
However, the first configuration assistant failed.
Research showed an error - the usual stuff about node connectivity.

Stopped the OUI and retried from the command line.

Found the following errors in evmd.log:

Oracle Database 10g CRS Release Production Copyright 1996, 2005 Oracle. All rights reserved.

2008-12-17 23:33:27.430: [ EVMD][4900]32EVMD Starting

2008-12-17 23:33:27.445: [ EVMD][4900]32

Oracle Database 10g CRS Release Production Copyright 1996, 2004, Oracle. All rights reserved

2008-12-17 23:33:28.414: [ COMMCRS][1456]clsc_send_msg: (00000000032B8BA0) NS err (12571, 12560), transport (533, 57, 0)

The TNS errors looked promising, but the root cause turned out not to be related to ORA-12571 or ORA-12560 at all.

The cause was that the OUI looked up the second node by its fully qualified domain name (FQDN), while the command line looked it up by the short hostname.
As one or the other was not correctly administered in the DNS (another department maintained it), the OUI failed.
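The hostname side of the story can be illustrated with a small sketch. The node names below are hypothetical; the real fix was, of course, getting the DNS entries corrected so that both forms resolve consistently:

```shell
#!/bin/sh
# Sketch: the OUI resolved the node by FQDN, the command line by short
# hostname - both must agree. Hypothetical names for illustration:
short_name() {
    # strip the domain part from a fully qualified name
    printf '%s\n' "${1%%.*}"
}

FQDN="racnode2.example.com"     # hypothetical FQDN seen by the OUI
HOSTNAME="racnode2"             # short name used on the command line

if [ "$(short_name "$FQDN")" = "$HOSTNAME" ]; then
    echo "names are consistent"
else
    echo "names are inconsistent"
fi
```

When DNS serves different answers for the two forms (or one of them is simply missing), the two code paths disagree and the installer fails with the kind of unhelpful connectivity errors shown above.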

More to follow.