I'm sure most of you are now starting to regularly deploy the new Series 5 hardware. Many of us have used the NwArrayConfig script (versions 2.1/2.2) in the past, but you may encounter issues when running these older scripts on Series 5 gear, specifically with multiple DACs on Concentrators. It has been suggested to me to use the new script located in /opt/rsa/saTools. You can also acquire it by downloading the following rpm:
DO NOT use the NwArrayCfg script in the /opt/rsa/saTools directory from any release earlier than the release listed above. It will incorrectly configure the first two drives in each Packet Decoder DAC into a RAID 0 configuration.
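Before running anything, it is worth confirming which version of the script is actually on the appliance. A minimal check along these lines should work; the script filename glob and the idea that an rpm owns it are assumptions on my part, so adjust to match what is actually in the directory:

    ls -l /opt/rsa/saTools/NwArray*     # check the script's timestamp and size
    rpm -qf /opt/rsa/saTools/NwArray*   # identify which installed rpm provides the script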
This article contains the error(s) and the workaround for this particular issue.
I encountered the issue below while deploying some S5 gear for a customer. The deployment consisted of (2) S5 Decoders with (5) DACs each, and (2) S5 Concentrators with (4) DACs each. The gear was re-imaged/down-revved to 10.4.0.2.J because the customer was running version 10.4.1.4 at the time of the installation. After the gear was spun up and IP'd, the NwArrayConfig script (version 2.2) was run on each of the Decoders, and all (5) DACs on each initialized without issue; it was only the S5 Concentrators that had this problem.
I was able to successfully run the script the first time with the "--action init" option, which configured the first DAC correctly. When attempting to run the script a second time with the "--action add" option, it failed with the following error message:
Failed!: The number of disk group sizes (3) is not valid for configuring a NextGen device. Please verify there are only 1 or 2 disk group sizes being presented to the appliance and rerun this script.
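For reference, the two invocations described above looked roughly like this. The exact script filename/extension is an assumption here; substitute whatever is present in /opt/rsa/saTools:

    cd /opt/rsa/saTools
    ./NwArrayConfig --action init   # first run: configures the first DAC
    ./NwArrayConfig --action add    # subsequent runs: add each remaining DAC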
I ran the nwraidutil.pl script to check the status and saw the following. Note (in ORANGE) that it appeared to have configured the second DAC, but a df -Ph showed that it in fact had not. Toward the end of the nwraidutil.pl output it also showed physical disk issues (also in ORANGE).
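The status checks themselves are easy to reproduce. A minimal sketch, assuming nwraidutil.pl lives alongside the array script in /opt/rsa/saTools:

    /opt/rsa/saTools/nwraidutil.pl   # dump RAID controller / virtual disk status
    df -Ph                           # verify whether the new DAC volume is actually mounted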
It was at this point that we discovered the script was failing because of a conflict associated with the /dev/sdb physical volume. We came up with the following workaround to "trick" the script into running properly. See in RED below…
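The workaround itself is shown below (in RED), so I won't duplicate it here. As context for spotting the same conflict on your own gear, the physical volume in question can be inspected with the standard LVM tools:

    pvs                   # list all LVM physical volumes and their volume groups
    pvdisplay /dev/sdb    # show detail for the PV the script is tripping over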