Thursday, 31 May 2012

AWS - Migrate Linux AMI (EBS) using CloudyScripts

In a typical Amazon Web Services (AWS) environment, Amazon Machine Images (AMIs) are available only within a single region; they cannot be moved from one region to another. They can, however, be used across the different Availability Zones of the same region.

For this purpose, you can use a third-party tool called CloudyScripts.

CloudyScripts is a collection of tools to help you program against infrastructure clouds.

The web-based tool is self-explanatory and regularly updated. If you find any bug, do not hesitate to email the owners right away.

Go to the CloudyScripts "Copy AMI to a different region" tool

Cloud Computing: US Intelligence, Big Data, and the Cloud at Cloud Expo NY | Cloud Computing Journal

cloud computing: Simple Workflow Service - Amazon Adding One Enterp...


Amazon has announced a new orchestration service called Simple Workflow Service. I would encourage you to read the announcement on Werner's blog where he explains the need, rationale, and architecture.

cloud computing: 4 Big Data Myths - Part II

This is the second and last part of this two-post series on Big Data myths. If you haven't read the first part, check it out in my previous post...


cloud computing: 4 Big Data Myths - Part I

It was cloud then and it's Big Data now. Every time there's a new disruptive category it creates a lot of confusion. These categories are not well-defined. They just catch on. What hurts the most is the myths. This is the first part of my two-part series to debunk Big Data myths...


Wednesday, 30 May 2012

AWS EBS-Backed Instance Backup & Restore

Starting with the 2009-10-31 API, Amazon Web Services (AWS) has a new type of Amazon Machine Image (AMI) that stores its root device as an Amazon Elastic Block Store (EBS) volume. These AMIs are referred to as Amazon EBS-backed. When an instance of this type of AMI launches, an Amazon EBS volume is created from the associated snapshot, and that volume becomes the root device. You can create an AMI that uses an Amazon EBS volume as its root device with Windows or Linux/UNIX operating systems.

These instances can be easily backed up. You can modify the original instance to suit your particular needs and then save it as an EBS-backed AMI. If you later need the modified version of the instance, you can simply launch multiple new instances from the backed-up AMI and are ready to go.

The following are the steps to back up an AWS EBS instance into an AWS AMI and restore it from one. Brief steps for deleting an AMI backup are also noted for reference.

EBS-instance to EBS-backed AMI

  • Go to the AWS Management Console and, in the My Instances pane, select the instance to be backed up.
  • Right-click the instance and select the Create Image (EBS AMI) option.

  • In the Create Image dialog box, enter a proper AMI Name and Description, then click the Create This Image button.

  • Image creation will be in progress. This will take some time depending on the number and size of volumes attached to the instance. Click the View pending image link; it will take you to the AMIs pane.

  • The AMI will be in the pending state. Note that this AMI is private to the account and not available for public AWS use.
  • If you select Snapshots from the Navigation pane, you can see that the EBS volumes attached to the instance are backed up as snapshots too.

  • Once the backup is done, the AMI will be in the available state.
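If you prefer the command line, the Create Image step can be scripted. A sketch assuming the AWS CLI is installed and credentials are configured (the instance id below is a placeholder):

```shell
# Create an EBS-backed AMI from a running instance.
# --no-reboot avoids rebooting the instance, at the risk of an
# inconsistent filesystem; omit it to let AWS stop the instance first.
create_backup_ami() {
  local instance_id="$1" name="$2"
  aws ec2 create-image --instance-id "$instance_id" --name "$name" --no-reboot
}

# Example with a placeholder instance id:
# create_backup_ami i-0123456789abcdef0 nightly-backup
```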

Restore from backup AMI into instance

If the running instance needs to be restored, use the latest backup AMI. To launch an instance from this AMI, right-click the AMI and select the Launch Instance option. The Launch Instance wizard will be displayed; perform the usual configuration, and a new instance will be created containing all the data and configuration made before the backup.
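The restore can likewise be scripted. A sketch assuming the AWS CLI; the AMI id and instance type below are placeholders:

```shell
# Launch a replacement instance from a backup AMI (AWS CLI assumed).
restore_from_ami() {
  local ami_id="$1" instance_type="${2:-m1.small}"
  aws ec2 run-instances --image-id "$ami_id" --instance-type "$instance_type" --count 1
}

# Example with a placeholder AMI id:
# restore_from_ami ami-12345678 m1.small
```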

Delete AMI & Snapshots:

  • To delete an AMI, right-click it and select De-register AMI.

  • Remember, deleting an AMI doesn't delete the EBS volume snapshots. Click Snapshots in the Navigation pane, search for and select the snapshot(s) to be deleted, then right-click and select the Delete Snapshot option.
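Both cleanup steps can be combined into one small script. A sketch with the AWS CLI (both ids are placeholders); note the order, de-register the AMI first and then delete its snapshot(s):

```shell
# De-register a backup AMI and delete its leftover EBS snapshot.
cleanup_backup() {
  local ami_id="$1" snapshot_id="$2"
  aws ec2 deregister-image --image-id "$ami_id"
  aws ec2 delete-snapshot --snapshot-id "$snapshot_id"
}

# Example with placeholder ids:
# cleanup_backup ami-12345678 snap-12345678
```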



Friday, 25 May 2012

Microsoft SharePoint on the AWS Cloud

Amazon Web Services (AWS) provides services & tools for deploying Microsoft® SharePoint® workloads on its cloud infrastructure platform. This white paper discusses general concepts regarding how to use these services and provides detailed technical guidance on how to configure, deploy, and run a SharePoint Server farm on AWS.

Deploy SharePoint quickly at lower total cost on AWS Cloud. Learn how

Thursday, 24 May 2012

Why Zynga moved from public cloud to hybrid cloud

One more related read:

Why Zynga loves the hybrid cloud
By Michal Lev-Ram, writer. April 9, 2012, 5:00 AM ET

How Big Will The Internet Be In 2015?



Install and Configure MySQL on CentOS

MySQL is the world's most popular open source database.

MySQL Community Edition is the freely downloadable version.

Commercial customers have the flexibility of choosing from multiple editions to meet specific business and technical requirements. For more details please refer to the MySQL official website.


On any CentOS server with open internet access, run the command below to install MySQL Community Edition:

yum install mysql-server mysql php-mysql


Alternatively, download the server and client RPM files from the MySQL website, depending on the platform (OS) and architecture (32/64-bit).

Install both RPM files using the command below:

rpm -ivh <<rpm_filenames>>

For example:

rpm -ivh mysql-server-version.rpm mysql-client-version.rpm


Once installed, run the below commands to configure MySQL Server:

1. Set the MySQL service to start on boot

chkconfig --levels 235 mysqld on

2. Start the MySQL service

service mysqld start

3. By default the root user has no password, so to log into MySQL use the command:

mysql -u root

4. To exit the MySQL console, enter the command below:

exit

To set the root user password for all local domains, log in and run the commands below:

SET PASSWORD FOR 'root'@'localhost' = PASSWORD('<<new-password>>');

SET PASSWORD FOR 'root'@'localhost.localdomain' = PASSWORD('<<new-password>>');

SET PASSWORD FOR 'root'@'' = PASSWORD('<<new-password>>');

(Replace <<new-password>> with actual password)


Alternatively, run the command below at the Linux shell:

mysqladmin -u root password '<<new-password>>'

(Replace <<new-password>> with actual password)

Once the password is set, log in to MySQL using the command below:

mysql -u root -p

Once you enter the above command, you will be prompted for the root password.


To add a new user for MySQL login, use the SQL query below. Remember, this query must be run from the MySQL prompt against the mysql database (run USE mysql; first), and the change takes effect only after running FLUSH PRIVILEGES;.

for localhost:

INSERT INTO user (Host, User, Password, Select_priv, Insert_priv, Update_priv, Delete_priv, Create_priv, Drop_priv, Reload_priv, Shutdown_priv, Process_priv, File_priv, Grant_priv, References_priv, Index_priv, Alter_priv, Show_db_priv, Super_priv, Create_tmp_table_priv, Lock_tables_priv, Execute_priv, Repl_slave_priv, Repl_client_priv, Create_view_priv, Show_view_priv, Create_routine_priv, Alter_routine_priv, Create_user_priv, Event_priv, Trigger_priv, ssl_type, ssl_cipher, x509_issuer, x509_subject, max_questions, max_updates, max_connections, max_user_connections) VALUES ('localhost', '<<USERNAME>>', password('<<PASSWORD>>'), 'Y','Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'N', 'N', '', '', '', '', 0, 0, 0, 0);

for anyhostname:

INSERT INTO user (Host, User, Password, Select_priv, Insert_priv, Update_priv, Delete_priv, Create_priv, Drop_priv, Reload_priv, Shutdown_priv, Process_priv, File_priv, Grant_priv, References_priv, Index_priv, Alter_priv, Show_db_priv, Super_priv, Create_tmp_table_priv, Lock_tables_priv, Execute_priv, Repl_slave_priv, Repl_client_priv, Create_view_priv, Show_view_priv, Create_routine_priv, Alter_routine_priv, Create_user_priv, Event_priv, Trigger_priv, ssl_type, ssl_cipher, x509_issuer, x509_subject, max_questions, max_updates, max_connections, max_user_connections) VALUES ('%', '<<USERNAME>>', password('<<PASSWORD>>'), 'Y','Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'N', 'N', '', '', '', '', 0, 0, 0, 0);

Replace <<USERNAME>> and <<PASSWORD>> with actual username and password respectively.

Note they must be enclosed in single quotes.
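Rather than inserting rows into the user table by hand, recent MySQL versions offer CREATE USER and GRANT, which are simpler and less error-prone. A sketch wrapped in a small shell function (the database name appdb and the privilege set are examples to adapt; mysql -p will prompt for the root password):

```shell
# Create a MySQL user and grant it full privileges on one database.
# Sketch only: adjust the host and the privilege list to your needs.
create_mysql_user() {
  local user="$1" pass="$2" db="$3" host="${4:-localhost}"
  mysql -u root -p -e "CREATE USER '${user}'@'${host}' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${user}'@'${host}';
FLUSH PRIVILEGES;"
}

# Example with placeholder credentials:
# create_mysql_user appuser apppass appdb localhost
```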


In case you want to drop a user, use the commands below:

DROP USER '<<username>>'@'localhost';

DROP USER '<<username>>'@'localhost.localdomain';

(Replace <<username>> with actual username)

For more help and commands, refer to the MySQL reference documentation.

Tuesday, 22 May 2012

Amazon CloudSearch-Information Retrieval as a Service

The idea of using computers to search for relevant pieces of information was popularized in the article “As We May Think” by Vannevar Bush in 1945.
As We May Think predicted (to some extent) many kinds of technology invented after its publication, including hypertext, personal computers, the Internet, the World Wide Web, speech recognition, and online encyclopedias such as Wikipedia: “Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.”
According to Wikipedia, Information retrieval (IR) is the area of study concerned with searching for documents, for information within documents, and for metadata about documents, as well as that of searching structured storage, relational databases, and the World Wide Web. There is overlap in the usage of the terms data retrieval, document retrieval, information retrieval, and text retrieval, but each also has its own body of literature, theory, praxis, and technologies.
Purpose of Information Retrieval: To find the desired content quickly and efficiently by simply consulting the index.
The "News & Announcements" section of the AWS newsletter brings a new surprise every month in terms of Amazon's offerings and the way the company keeps expanding its domain. AWS users are no longer surprised that something new is coming; the surprise lies in which domain Amazon will target next. In April 2012, AWS came up with a new offering: Amazon CloudSearch.
Amazon CloudSearch offers a way to integrate search into websites and applications, whether they're customer-facing or for use behind the corporate firewall. It's the same search technology that powers search on Amazon.com.
Amazon CloudSearch is a fully-managed search service in the cloud that allows customers to easily integrate fast and highly scalable search functionality into their applications. Amazon CloudSearch effortlessly scales as the amount of searchable data increases or as the query rate changes, and developers can change search parameters, fine tune search relevance, and apply new settings at any time without having to upload the data again.
According to Amazon Web Services Blog,
“CloudSearch hides all of the complexity and all of the search infrastructure from you. You simply provide it with a set of documents and decide how you would like to incorporate search into your application.
You don’t have to write your own indexing, query parsing, query processing, results handling, or any of that other stuff. You don’t need to worry about running out of disk space or processing power, and you don’t need to keep rewriting your code to add more features.
With CloudSearch, you can focus on your application layer. You upload your documents, CloudSearch indexes them, and you can build a search experience that is custom-tailored to the needs of your customers.”


Configuration Service: The configuration service enables you to create and configure search domains. Each domain encapsulates a collection of data you want to search.
  • Indexing Options specify which fields to include in the index
  • Text Options specify words (stopwords) to ignore during indexing
  • Rank Expressions determine how search results are ranked
Document Service: The document service lets you make changes to a domain's searchable data.
Search Service: The search service handles search requests for a domain.
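To give a concrete sense of the Document Service, document updates are uploaded as a JSON batch in CloudSearch's Search Data Format (SDF). A minimal sketch; the field names and ids here are made up for illustration:

```json
[
  {
    "type": "add",
    "id": "doc_1",
    "version": 1,
    "lang": "en",
    "fields": {
      "title": "Getting Started with CloudSearch",
      "content": "Fully-managed search in the cloud."
    }
  },
  { "type": "delete", "id": "doc_0", "version": 2 }
]
```

Each "add" entry supplies the searchable fields; a "delete" with a higher version removes a previously added document.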



Benefits

  • Offloads the administrative burden of operating and scaling a search platform
  • No need to worry about hardware provisioning, data partitioning, or software patches; these are taken care of by the service provider
  • Pay-as-you-go pricing with no up-front expenses

Pricing Dimensions

  • Search instances
  • Document batch uploads
  • Index Documents requests
  • Data transfer


Search Instance Type            US East Region
Small Search Instance           $0.12 per hour
Large Search Instance           $0.48 per hour
Extra Large Search Instance     $0.68 per hour
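As a quick sanity check on the hourly rates above, here is a small helper to estimate the monthly cost of one always-on search instance (assuming a 30-day month):

```shell
# Estimate the monthly cost of one search instance running 24x7.
monthly_cost() {
  # $1 = hourly rate in dollars; assumes a 30-day month (720 hours)
  awk -v rate="$1" 'BEGIN { printf "%.2f", rate * 24 * 30 }'
}

# A small instance at $0.12/hour comes to about $86.40/month:
# monthly_cost 0.12
```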

Video Tutorials

Introducing Amazon CloudSearch
To see a summary of Amazon CloudSearch features, please watch this video.
Introducing Amazon CloudSearch
Building a Search Application Using Amazon CloudSearch
To see how to use Amazon CloudSearch to develop a search application, including uploading and indexing a large public data set, setting up index fields, customizing ranking, and embedding search in a sample application, please watch this video.
Building a Search Application Using Amazon CloudSearch

SOURCE : Amazon CloudSearch-Information Retrieval as a Service.

Sunday, 20 May 2012

Getting Started with Amazon Web Services EBS Volumes

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Amazon EC2 enables “compute” in the cloud.

Amazon Elastic Block Store (EBS) provides block level storage volumes for use with Amazon EC2 instances. EBS provides highly available, highly reliable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance. It persists independently from the life of an instance. These EBS volumes are created in a particular Availability Zone and can be from 1 GB to 1 TB in size.

Follow the steps below to create, attach, and mount EBS volumes to launched EC2 instances:

Create the EBS Volume

Log in to the AWS Management Console and follow the steps below for each extra volume to be attached to an instance. For example, let's create and attach a 6 GB EBS volume (for Oracle alert logs and traces) to the database server.

• Choose “Volumes” on the left hand control panel:

• In the right-hand pane under EBS Volumes, click on ‘Create Volume’

• In the Create Volume dialog box that appears:
Enter the size (6 GB in our example), keep the Availability Zone the same as that of the database instance, select No Snapshot, and click 'Create'.

• This will create an EBS volume; once creation is complete, it will be listed with the status available.

Attach Volume

• Select a volume and click on button to Attach Volume

• Select the instance to which the EBS volume is to be attached, and specify the device name for the volume.
Here the instance is the database server and the device is /dev/sdf

• Once attached, the volume's status will be shown as in-use.

Mount the Volume

• Execute the following commands in the EC2 instance's (database server) Linux shell. As this is a new volume (with no data), we will have to format it first.
Run the command:

mkfs -t ext3 /dev/sdf

(Replace /dev/sdf with the device attached in the previous step)

• Make a directory to mount the device.

mkdir /mnt/disk1

• Mount the device in newly created directory

mount /dev/sdf /mnt/disk1

(Replace the device and directory as required)

• By default volumes will not be attached to the instance on reboot. To attach these volumes to given mount point every time on reboot, execute the following command

echo "/dev/sdf /mnt/disk1 ext3 noatime 0 0" >> /etc/fstab

(Replace the device and mount directory as required)

Check the attached volume using the command: df -h
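Because a missing space in /etc/fstab can leave an instance unbootable, it can help to generate the entry with a small helper rather than typing it inline. A sketch; ext3 and the noatime option mirror the steps above:

```shell
# Print a correctly spaced fstab entry for an EBS volume.
fstab_entry() {
  local device="$1" mountpoint="$2"
  printf '%s %s ext3 noatime 0 0\n' "$device" "$mountpoint"
}

# Append it to fstab (run as root):
# fstab_entry /dev/sdf /mnt/disk1 >> /etc/fstab
```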

Unmounting the volume

From the Elastic Block Storage Feature Guide: A volume must be unmounted inside the instance before being detached. Failure to do so will result in damage to the file system or the data it contains.

umount /mnt/disk1

Remember to cd out of the volume first, otherwise you will get an error message:

umount: /mnt/disk1: device is busy

Hope the above steps help you get into action in minutes.

In case you get stuck at any point, do comment below. I will be glad to help. :)

Friday, 18 May 2012

Install JAVA on Linux using rpm files

Steps for installing JAVA (JDK 6) on linux using rpm files:

1. Log into the Linux shell and become the root user by running the command

su -

2. Change directory.

cd /opt

3. Search the official Java download site for newer versions to download.

You can download to any directory you choose; it does not have to be the directory where you want to install the JDK. Before you download the file, note its byte size provided on the download page. Once the download has completed, compare that size to the size of the downloaded file to make sure they are equal.

To download, use one of the commands below, depending on the server's architecture (32/64-bit):
64 bit:

32 bit:

4. Make sure that execute permissions are set on the self-extracting binary.
Enter ls -la to see the permissions for the file.

5. Run below command to grant execute permission to the file:

chmod a+x <<name-of-rpm-file-downloaded-earlier>>
For e.g.:
chmod +x jdk-6u25-linux-x64-rpm.bin\?e\=1306317438\&h\=294de0d36f54e28dd65fc8370e3c406d

6. Change directory to the location where you would like the files to be installed. The next step installs the JDK into the current directory.

7. Execute the downloaded file, prepended by the path to it.
For example, if the file is in the current directory, prepend it with "./" :

./jdk-6u25-linux-x64-rpm.bin

8. The binary code license is displayed, and you are prompted to agree to its terms.

9. Check if java is installed using command

java -version

The Java version should be displayed. You may also want to run commands like java or javac to check that the installation is correct.

10. Execute the command below to test whether the JAVA_HOME environment variable is set.

echo $JAVA_HOME

It must display the location where Java is installed.
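If JAVA_HOME is not set, you can set it yourself. A sketch; the exact install path depends on the JDK version, and /usr/java/default is the symlink the Sun/Oracle JDK RPMs typically create:

```shell
# Append these lines to ~/.bashrc or /etc/profile.d/java.sh so they
# persist across logins.
export JAVA_HOME=/usr/java/default   # assumed symlink created by the JDK rpm
export PATH="$JAVA_HOME/bin:$PATH"
```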

11.  Delete the bin and rpm files if you want to save disk space.

rm -rf sun*


Installation and Setup of S3fs on Amazon Web Services

A FUSE-based file system backed by Amazon S3.
S3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket as a local filesystem. It doesn't store anything on the Amazon EC2 instance itself, but lets you access the data on S3 from the EC2 instance as if a network drive were attached to it.
The s3fs project is backed by Amazon's Simple Storage Service. Amazon offers an open API to build applications on top of this service, which several companies have done using a variety of interfaces (web, rsync, FUSE, etc.).
These steps are specific to an Ubuntu Server.
  1. Launch an Ubuntu Server on AWS EC2. (Recommended AMI – ami-4205e72b, username: ubuntu)
  2. Log in to the server using WinSCP / PuTTY.
  3. Type the command below to update the package lists on the server.
sudo apt-get update
4. Type the command below to upgrade the installed packages. If any message is prompted, answer 'y' or 'OK' as applicable.
sudo apt-get upgrade
Once the upgrade is complete, install the necessary libraries for FUSE with the following command:
sudo aptitude install build-essential libcurl4-openssl-dev libxml2-dev libfuse-dev comerr-dev libfuse2 libidn11-dev libkadm55 libkrb5-dev libldap2-dev libselinux1-dev libsepol1-dev pkg-config fuse-utils sshfs
If any msg is prompted, say ‘y’ or ‘OK’ as applicable.

5. Once all the packages are installed, download the s3fs source (Revision 177 as of this writing) from the Google Code project:
6. Untar and install the s3fs binary (run each command individually):
tar xzvf s3fs-r177-source.tar.gz
cd ./s3fs
sudo make
sudo make install
7. In order to use the allow_other option (see below) you will need to modify the fuse configuration:
sudo vi /etc/fuse.conf
Uncomment the following line in the conf file (to uncomment a line, remove the '#' symbol):

user_allow_other
Save the file using the key sequence: Esc, then :wq
8. Now you can mount an S3 bucket. Create a directory using the command:
sudo mkdir -p /mnt/s3
Mount the bucket to the created directory
sudo s3fs bucketname -o accessKeyId=XXX -o secretAccessKey=YYY -o use_cache=/tmp -o allow_other /mnt/s3
Replace the XXX above with your real Amazon Access Key and YYY with your real Secret Key.
The command also includes options to cache the bucket's files locally (in /tmp) and to allow other users to manipulate files in the mount.
Now any files written to /mnt/s3 will be replicated to your Amazon S3 bucket.
(WinSCP – verify the mounted directory)
Check the wiki documentation for more options available to s3fs, including how to save your Access Key and Secret Key in /etc/passwd-s3fs.
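Since keys passed on the command line show up in shell history and in ps output, the credentials-file approach mentioned above is worth adopting. A sketch of writing one; the accessKeyId:secretAccessKey format is what s3fs documents for /etc/passwd-s3fs:

```shell
# Write an s3fs credentials file (format: accessKeyId:secretAccessKey)
# and restrict it to the owner; s3fs refuses world-readable key files.
write_s3fs_creds() {
  local access_key="$1" secret_key="$2" file="${3:-/etc/passwd-s3fs}"
  printf '%s:%s\n' "$access_key" "$secret_key" > "$file"
  chmod 600 "$file"
}

# Example with placeholder keys (run as root to write under /etc):
# write_s3fs_creds AKIAEXAMPLE wJalrEXAMPLEKEY /etc/passwd-s3fs
```

With the file in place, the s3fs mount command no longer needs the accessKeyId and secretAccessKey options.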

Hello world!

Welcome to thecrystalclouds!

I'm new to blogging and this is my very first post!!! Being in the cloud industry for more than 2 years, I have gained immense hands-on, practical knowledge to work things around the cloud!!! So, I have started writing this blog to share my experiences and tech bits with the world.

Feel free to comment and suggest changes. I'm always open for some productive discussions.

Let's make the clouds crystal clear and fly freely in the vast space of the universe ! ;)

Happy blogging!