Before You Install or Upgrade CDH4 on a Cluster
When starting, stopping, or restarting CDH components, always use the service(8) command rather than running the scripts in /etc/init.d directly. This matters because service sets the current working directory to / and strips most environment variables (passing only LANG and TERM), creating a predictable environment in which to administer the service. If you run the scripts in /etc/init.d directly, any environment variables you have set remain in force and can produce unpredictable results. (If you install CDH from packages, service is installed as part of the Linux Standard Base (LSB).)
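The effect of this environment sanitizing can be illustrated with env -i, which approximates what service(8) passes to an init script. This is a sketch only; the variable name MY_CUSTOM_VAR is purely illustrative:

```shell
# A custom variable set in your login shell:
export MY_CUSTOM_VAR=surprise

# Like service(8): only LANG and TERM are passed, so the variable is gone.
env -i LANG="$LANG" TERM="$TERM" sh -c 'echo "sanitized: MY_CUSTOM_VAR=${MY_CUSTOM_VAR:-unset}"'
# prints: sanitized: MY_CUSTOM_VAR=unset

# Like running a script in /etc/init.d directly: the variable leaks through.
sh -c 'echo "direct: MY_CUSTOM_VAR=${MY_CUSTOM_VAR:-unset}"'
# prints: direct: MY_CUSTOM_VAR=surprise
```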
- Upgrading from CDH3: If you are upgrading from CDH3, you must first uninstall CDH3, then install CDH4; see Upgrading from CDH3 to CDH4.
Before you install CDH4 on a cluster, take the following steps to prepare your system:
- Verify you are using a supported operating system for CDH4. See CDH4 Requirements and Supported Versions.
- If you haven't already done so, install the Oracle Java Development Kit. For instructions and recommendations, see Java Development Kit Installation.
On SLES 11 platforms, do not install or attempt to use the IBM Java version bundled with the SLES distribution; Hadoop will not run correctly with it. Install the Oracle JDK by following the directions in Java Development Kit Installation.
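As a quick sanity check, you can inspect which JVM is on the PATH before installing CDH4. This sketch assumes java is already installed, and that Oracle's JVM banner contains "HotSpot" while IBM's reports "J9" (the message strings here are illustrative):

```shell
# Warn if the JVM on the PATH is not Oracle's HotSpot VM (e.g. IBM J9 on SLES 11).
jvm_banner=$(java -version 2>&1)
if echo "$jvm_banner" | grep -qi 'HotSpot'; then
    echo "OK: Oracle/HotSpot JVM detected"
else
    echo "WARNING: non-HotSpot JVM detected; Hadoop will not run correctly" >&2
fi
```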