
Wednesday 3 October 2012

Get Started With Oracle Applications Now With Our New Test Drive Program


AWS has just launched Oracle Test Drive Labs.
The purpose of the Oracle Test Drive program is to give customers a quick and easy way to explore the benefits of running Oracle software on AWS server infrastructure.
These labs have been developed by Oracle and AWS partners and are provided free of charge for educational and demonstration purposes.
Each Test Drive lab includes up to 5 hours of complimentary AWS server time to complete the lab, and you can return to try any or all of the Test Drive labs at any time, so feel free to experiment and explore!
Please note that some labs have prerequisites. Review them and acquire the required accounts or software before proceeding with the labs.


For example, Oracle Secure Backup to S3 requires an Oracle Technology Network (OTN) account.
The labs span Oracle Database and infrastructure products, Oracle Applications, and Oracle Fusion Middleware.
You can select from nearly a dozen labs, which include but are not limited to:


  • Oracle Data Guard Disaster Recovery
  • Oracle Secure Backup to S3
  • Siebel on AWS


Read the post below by Jeff for a sample demo of backing up an Oracle database to AWS using the Oracle Secure Backup product.


One of the key advantages that customers and partners are telling us they really appreciate about AWS is its unique ability to cut down the time required to evaluate new software stacks. These "solution appliances" can now be easily deployed on AWS and evaluated by customers in hours or days, rather than in weeks or months, as is the norm with the previous generation of IT infrastructure.
With this in mind, AWS has teamed up with leading Oracle ecosystem partners on a new initiative called the Oracle Test Drive program.







Friday 28 September 2012

Get Started With the vCloud Service Evaluation Beta Today!


The Wait Is Over – Get Started With the vCloud Service Evaluation Beta Today!


Good news – the waitlist for the vCloud Service Evaluation Beta has been removed! This means that users can now sign up today and get a public vCloud account in 15 minutes or less.

Announced last month, the vCloud Service Evaluation provides a quick, easy and low-cost way for you to learn about the advantages of a vCloud through hands-on testing and experimentation. All you need to sign up is a credit card and you can get your own public vCloud up and running in minutes.


The vCloud Service Evaluation has all the basics you need, including a catalog of useful VM templates, virtual networking, persistent storage, external IP addresses, firewalls, load balancers, the vCloud API, and more. A variety of pre-built content templates are also available (at no charge) through the vCloud Service Evaluation, including WordPress, Joomla!, Sugar CRM, LAMP stack, Windows Server, etc.

For a limited time, you can also use the promo code “VMworld50” for a $50 credit towards your vCloud environment.

Looking for support? Technical How-To Guides available on vCloud.VMware.com are perfect for new vCloud users looking for implementation assistance.




In addition, signing up for the vCloud Service Evaluation gives you access to the vCloud Service Evaluation Community, where users can ask questions and get answers directly from others in the vCloud community.




Your own vCloud is just a few clicks away – sign up for the vCloud Service Evaluation Beta (don’t forget to use the promo code, “VMworld50”) and set up your own vCloud today!

Re-blogged from VMware Blogs and yoyoclouds.com.

Installing AWS Command Line Tools from Amazon Downloads

A well-written blog post on installing the AWS command line tools from Amazon downloads, by Eric Hammond. Some useful extracts from the blog:

When you need an AWS command line toolset not provided by Ubuntu packages, you can download the tools directly from Amazon and install them locally. Unfortunately, Amazon does not have one single place where you can download all the command line tools for the various services, nor are all of the tools installed in the same way, nor do they all use the same format for accessing the AWS credentials.

The following steps show how to install and configure the AWS command line tools provided by Amazon [...]

Prerequisites

Install required software packages:

sudo apt-get update
sudo apt-get install -y openjdk-6-jre ruby1.8-full libxml2-utils unzip cpanminus build-essential

Create a directory where all AWS tools will be installed:
 
sudo mkdir -p /usr/local/aws

Now we’re ready to start downloading and installing all of the individual software bundles that Amazon has released and made available in scattered places on their web site and various S3 buckets.
Download and Install AWS Command Line Tools

These steps should be done from an empty temporary directory so you can afterwards clean up all of the downloaded and unpacked files.

Note: Some of these download URLs always get the latest version and some tools have different URLs every time a new version is released. Click through on the tool link to find the latest [Download] URL.

EC2 API command line tools:
wget --quiet http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
unzip -qq ec2-api-tools.zip
sudo rsync -a --no-o --no-g ec2-api-tools-*/ /usr/local/aws/ec2/

EC2 AMI command line tools:
wget --quiet http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip
unzip -qq ec2-ami-tools.zip
sudo rsync -a --no-o --no-g ec2-ami-tools-*/ /usr/local/aws/ec2/
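
Once unpacked, the tools will not run until the shell knows where to find them and Java. Here is a minimal sketch of that setup, assuming the /usr/local/aws layout used above and an Ubuntu OpenJDK path; the credential values are placeholders:

# Add to ~/.bashrc (or /etc/profile.d/aws.sh) so every shell picks these up
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk     # adjust to your JVM path
export EC2_HOME=/usr/local/aws/ec2               # where the tools were rsynced
export PATH=$PATH:$EC2_HOME/bin

# The EC2 API tools can read credentials from these environment variables
export AWS_ACCESS_KEY=your-access-key-id
export AWS_SECRET_KEY=your-secret-access-key

# Quick smoke test: list the available EC2 regions
ec2-describe-regions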

Tuesday 21 August 2012

Amazon CloudSearch - Start Searching in One Hour for Less Than $100 / Month


An extract from AWS Evangelist Jeff Barr's CloudSearch blog post, with more information about how you can start searching within an hour for less than $100 a month...

Continuing along in our quest to give you the tools that you need to build ridiculously powerful web sites and applications in no time flat at the lowest possible cost, I'd like to introduce you to Amazon CloudSearch. If you have ever searched Amazon.com, you've already used the technology that underlies CloudSearch. You can now have a very powerful and scalable search system (indexing and retrieval) up and running in less than an hour.

You, sitting in your corporate cubicle, your coffee shop, or your dorm room, now have access to search technology at a very affordable price. You can start to take advantage of many years of Amazon R&D in the search space for just $0.12 per hour (I'll talk about pricing in depth later).


What is Search?

Search plays a major role in many web sites and other types of online applications. The basic model is seemingly simple. Think of your set of documents or your data collection as a book or a catalog, composed of a number of pages. You know that you can find the desired content quickly and efficiently by simply consulting the index.

Search does the same thing by indexing each document in a way that facilitates rapid retrieval. You enter some terms into a search box and the site responds (rather quickly if you use CloudSearch) with a list of pages that match the search terms.

As is the case with many things, this simple model masks a lot of complexity and might raise a lot of questions in your mind. For example:
  1. How efficient is the search? Did the search engine simply iterate through every page, looking for matches, or is there some sort of index?
  2. The search results were returned in the form of an ordered list. What factor(s) determined which documents were returned, and in what order (commonly known as ranking)? How are the results grouped?
  3. How forgiving or expansive was the search? Did a search for "dogs" return results for "dog?" Did it return results for "golden retriever," or "pet?"
  4. What kinds of complex searches or queries can be used? Does a search for "dog training" return the expected results? Can you search for "dog" in the Title field and "training" in the Description?
  5. How scalable is the search? What if there are millions or billions of pages? What if there are thousands of searches per hour? Is there enough storage space?
  6. What happens when new pages are added to the collection, or old pages are removed? How does this affect the search results?
  7. How can you efficiently navigate through and explore search results? Can you group and filter the search results in ways that take advantage of multiple named fields (often known as faceted search)?
Needless to say, things can get very complex very quickly. Even if you can write code to do some or all of this yourself, you still need to worry about the operational aspects. We know that scaling a search system is non-trivial. There are lots of moving parts, all of which must be designed, implemented, instantiated, scaled, monitored, and maintained. As you scale, algorithmic complexity often comes into play; you soon learn that algorithms and techniques which were practical at the beginning aren't always practical at scale.


What is Amazon CloudSearch?

Amazon CloudSearch is a fully managed search service in the cloud. You can set it up and start processing queries in less than an hour, with automatic scaling for data and search traffic, all for less than $100 per month.

CloudSearch hides all of the complexity and all of the search infrastructure from you. You simply provide it with a set of documents and decide how you would like to incorporate search into your application.

You don't have to write your own indexing, query parsing, query processing, results handling, or any of that other stuff. You don't need to worry about running out of disk space or processing power, and you don't need to keep rewriting your code to add more features.

With CloudSearch, you can focus on your application layer. You upload your documents, CloudSearch indexes them, and you can build a search experience that is custom-tailored to the needs of your customers.


How Does it Work?

The Amazon CloudSearch model is really simple, but don't confuse simple with simplistic -- there's a lot going on behind the scenes!

Here's all you need to do to get started (you can perform these operations from the AWS Management Console, the CloudSearch command line tools, or through the CloudSearch APIs):
  1. Create and configure a Search Domain. This is a data container and a related set of services. It exists within a particular Availability Zone of a single AWS Region (initially US East).
  2. Upload your documents. Documents can be uploaded as JSON or XML that conforms to our Search Document Format (SDF). Uploaded documents will typically be searchable within seconds. You can, if you'd like, send data over an HTTPS connection to protect it while it is in transit. (A sketch of an SDF batch upload appears after this list.)
  3. Perform searches.
There are plenty of options and goodies, but that's all it takes to get started.
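
For a sense of what the upload in step 2 looks like, here is a hedged sketch against the 2011-02-01 API version; the document endpoint shown is made up (copy the real one from the console), and the field names are hypothetical:

# sdf-batch.json contains a Search Document Format batch with one "add" operation:
# [{"type": "add", "id": "movie1", "version": 1, "lang": "en",
#   "fields": {"title": "Dog Training Basics", "genre": "documentary"}}]

# POST the batch to the domain's document service endpoint
curl -X POST \
  -H 'Content-Type: application/json' \
  --data-binary @sdf-batch.json \
  'http://doc-mydomain-abc123.us-east-1.cloudsearch.amazonaws.com/2011-02-01/documents/batch'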

Amazon CloudSearch applies data updates continuously, so newly changed data becomes searchable in near real-time. Your index is stored in RAM to keep throughput high and to speed up document updates. You can also tell CloudSearch to re-index your documents; you'll need to do this after changing certain configuration options, such as stemming (converting variations of a word to a base word, such as "dogs" to "dog") or stop words (very common words that you don't want to index).
Amazon CloudSearch has a number of advanced search capabilities including faceting and fielded search:

Faceting allows you to categorize your results into sub-groups, which can be used as the basis for another search. You could search for "umbrellas" and use a facet to group the results by price, such as $1-$10, $10-$20, $20-$50, and so forth. CloudSearch will even return document counts for each sub-group.
Fielded searching allows you to search on a particular attribute of a document. You could locate movies by genre or actor, or products within a certain price range.
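
As a concrete (hypothetical) sketch of both features against the 2011-02-01 search endpoint, with a made-up domain and field names: the bq parameter carries a boolean, fielded query, and the facet parameter requests sub-group counts:

# "dog" in the title field AND "training" in the description field,
# with facet counts computed over the genre field
curl 'http://search-mydomain-abc123.us-east-1.cloudsearch.amazonaws.com/2011-02-01/search?bq=(and+title:%27dog%27+description:%27training%27)&facet=genre&return-fields=title'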

 
Search Scaling
Behind the scenes, CloudSearch stores data and processes searches using search instances. Each instance has a finite amount of CPU power and RAM. As your data expands, CloudSearch will automatically launch additional search instances and/or scale to larger instance types. As your search traffic expands beyond the capacity of a single instance, CloudSearch will automatically launch additional instances and replicate the data to the new instance. If you have a lot of data and a high request rate, CloudSearch will automatically scale in both dimensions for you.

Amazon CloudSearch will automatically scale your search fleet up to a maximum of 50 search instances. We'll be increasing this limit over time; if you have an immediate need for more than 50 instances, please feel free to contact us and we'll be happy to help.

The net-net of all of this automation is that you don't need to worry about having enough storage capacity or processing power. CloudSearch will take care of it for you, and you'll pay only for what you use.

Pricing Model

The Amazon CloudSearch pricing model is straightforward:

You'll be billed based on the number of running search instances. There are three search instance sizes (Small, Large, and Extra Large) at prices ranging from $0.12 to $0.68 per hour (these are US East Region prices, since that's where we are launching CloudSearch).
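
(As a quick sanity check on the headline number: a single Small search instance running around the clock comes to roughly $0.12/hour × 24 hours × 30 days ≈ $86 per month, comfortably under $100.)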

There's a modest charge for each batch of uploaded data. If you change configuration options and need to re-index your data, you will be billed $0.98 for each Gigabyte of data in the search domain.
There's no charge for in-bound data transfer, data transfer out is billed at the usual AWS rates, and you can transfer data to and from your Amazon EC2 instances in the Region at no charge.

Advanced Searching

Like the other Amazon Web Services, CloudSearch allows you to get started with a modest effort and to add richness and complexity over time. You can easily implement advanced features such as faceted search, free text search, Boolean search expressions, customized relevance ranking, field-based sorting and searching, and text processing options such as stopwords, synonyms, and stemming.

CloudSearch Programming

You can interact with CloudSearch through the AWS Management Console, a complete set of Amazon CloudSearch APIs, and a set of command line tools. You can easily create, configure, and populate a search domain through the AWS Management Console.
Here's a tour of the console, starting from the welcome screen:
  • You start by creating a new Search Domain.
  • You can then load some sample data. It can come from local files, an Amazon S3 bucket, or several other sources; when choosing an S3 bucket, you can also specify an optional prefix to limit which documents will be indexed.
  • You can also configure your initial set of index fields and create access policies for the CloudSearch APIs.
  • Your search domain will be initialized and ready to use within twenty minutes; processing your documents is the final step in the initialization process.
  • After your documents have been processed, you can perform some test searches from the console.
  • The console also provides you with full control over a number of indexing options, including stopwords, stemming, and synonyms.

CloudSearch in Action
Some of our early customers have already deployed some applications powered by CloudSearch. Here's a sampling:
  • Search Technologies has used CloudSearch to index Wikipedia (see the demo).
  • NewsRight is using CloudSearch to deliver search for news content, usage and rights information to over 1,000 publications.
  • ex.fm is using CloudSearch to power their social music discovery website.
  • CarDomain is powering search on their social networking website for car enthusiasts.
  • Sage Bionetworks is powering search on their data-driven collaborative biological research website.
  • SmugMug is using CloudSearch to deliver search on their website for over a billion photos.

SOURCE

    AWS Direct Connect - New Locations and Console Support

    On 13th August, AWS announced new locations and console support for AWS Direct Connect. A great article by Jeff...

    Did you know that you can use AWS Direct Connect to set up a dedicated 1 Gbps or 10 Gbps network connection from your existing data center or corporate office to AWS?

    New Locations

    Today we are adding two additional Direct Connect locations so that you have even more ways to reduce your network costs and increase network bandwidth throughput. You also have the potential for a more consistent experience.
    If you have your own equipment running at one of the Direct Connect locations, you can use Direct Connect to optimize the connection to AWS. If your equipment is located somewhere else, you can work with one of our APN Partners supporting Direct Connect to establish a connection from your location to a Direct Connect location, and from there on to AWS.

    Console Support

    Up until now, you needed to fill in a web form to initiate the process of setting up a connection. In order to make the process simpler and smoother, you can now start the ordering process and manage your Connections through the AWS Management Console.
    Here's a tour of the console:
    • You can establish a new connection by selecting the Direct Connect tab in the console.
    • After you confirm your choices, you can place your order with one final click.
    • You can see all of your connections in a single (global) list, and you can inspect the details of each connection.
    • You can then create a Virtual Interface to your connection. The interface can be connected to one of your Virtual Private Clouds, or it can connect to the full set of AWS services.
    • You can even download a router configuration file tailored to the brand, model, and version of your router.
     
    Get Connected
    And there you have it! Learn more about AWS Direct Connect and get started today.

    SOURCE
     

    Thursday 26 July 2012

    Creating A Local Yum Repository on CentOS


    Reducing the cost of IT without reducing the functionality of your systems is one of the major obstacles to overcome. One of these costs is bandwidth.

    One of the first bandwidth-saving tips any organization should know is the importance of creating a local YUM repository on your LAN. Not only do you decrease the time it takes to download and install updates, you also decrease bandwidth usage. This saving will definitely please the suits of any organization.

    This “How To” shows you a simple yet effective way of setting up your local YUM server and client; a minimal sketch follows.
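
    As a preview of what the guide covers, here is a minimal sketch of the idea; the package names are standard, but the paths and the yumserver.example.lan hostname are placeholders:

    # On the server: gather the RPMs once, then publish them over HTTP
    yum install -y createrepo httpd
    mkdir -p /var/www/html/centos/updates
    # ...copy or rsync the packages you want to serve into that directory...
    createrepo /var/www/html/centos/updates
    service httpd start

    On each client, create /etc/yum.repos.d/local.repo pointing at the LAN mirror:

    [local-updates]
    name=Local CentOS updates
    baseurl=http://yumserver.example.lan/centos/updates
    enabled=1
    gpgcheck=0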

    Read more here.

    Monday 4 June 2012

    yoyoclouds: Update CentOS



    There are basically two ways of updating a CentOS machine: the first is by using the GUI, and the second is via the command line...
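
    For the command-line route, the core commands are:

    yum check-update
    yum update

    (The first lists available updates without installing anything; the second downloads and applies them.)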

    Read more here ...

    Saturday 2 June 2012

    Seeding Torrents with Amazon S3 and s3cmd on Ubuntu

    Again a nice post. Hope it's useful for some of you out there...

    Amazon Web Services is such a huge, complex service with so many products and features that sometimes very simple but powerful features fall through the cracks when you’re reading the extensive documentation.

    One of these features, which has been around for a very long time, is the ability to use AWS to seed (serve) downloadable files using the BitTorrent™ protocol. You don’t need to run EC2 instances and set up software. In fact, you don’t need to do anything except upload your files to S3 and make them publicly available.

    Any file available for normal HTTP download in S3 is also available for download through a torrent. All you need to do is append the string ?torrent to the end of the URL and Amazon S3 takes care of the rest.

    Steps

    Let’s walk through uploading a file to S3 and accessing it with a torrent client using Ubuntu as our local system. This approach uses s3cmd to upload the file to S3, but any other S3 software can get the job done, too.
    1. Install the useful s3cmd tool and set up a configuration file for it. This is a one-time step:
      sudo apt-get install s3cmd
      s3cmd --configure

      The configure phase will prompt for your AWS access key id and AWS secret access key. These are stored in $HOME/.s3cfg, which you should protect. You can press [Enter] for the encryption password and GPG program. I prefer “Yes” for using the HTTPS protocol, especially if I am using s3cmd from outside of EC2.
    2. Create an S3 bucket and upload the file with public access:
      bucket=YOURBUCKETNAME
      filename=FILETOUPLOAD
      basename=$(basename $filename)
      s3cmd mb s3://$bucket
      s3cmd put --acl-public $filename s3://$bucket/$basename
    3. Display the URLs which can be used to access the file through normal web download and through a torrent:
      cat <<EOM
      web:     http://$bucket.s3.amazonaws.com/$basename
      torrent: http://$bucket.s3.amazonaws.com/$basename?torrent
      EOM

    Notes

    1. The above process makes your file publicly available to anybody in the world. Don’t use this for anything you wish to keep private.
    2. You will pay standard S3 network charges for all downloads from S3 including the initial torrent seeding. You do not pay for network transfers between torrent peers once folks are serving the file chunks to each other.
    3. You cannot throttle the rate or frequency of downloads from S3. You can turn off access to prevent further downloads, but monitoring accesses and usage is not entirely real time.
    4. If your file is not popular enough for other torrent peers to be actively serving it, then every person who downloads it will transfer the entire content from S3’s torrent servers.
    5. If people know what they are doing, they can easily remove “?torrent” and download the entire file directly from S3, perhaps resulting in a higher cost to you. As a work-around, download the ?torrent URL, save the torrent file, and upload it back to S3 as a .torrent file (see the sketch below). Share the torrent file itself, not the ?torrent URL; since nobody will know the URL of the original file, they can only download it via the torrent. You don't even need to share the .torrent file using S3.
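
    A sketch of that work-around, reusing the $bucket and $basename variables from the steps above:

    wget -O "$basename.torrent" "http://$bucket.s3.amazonaws.com/$basename?torrent"
    s3cmd put --acl-public "$basename.torrent" s3://$bucket/$basename.torrent

    (The first command saves the auto-generated torrent metadata as a local .torrent file; the second re-uploads it for sharing, though any distribution channel works.)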
    SOURCE

    Wednesday 30 May 2012

    AWS EBS-Backed Instance Backup & Restore

    Starting with the 2009-10-31 API, Amazon Web Services (AWS) has a new type of Amazon Machine Image (AMI) that stores its root device as an Amazon Elastic Block Store (EBS) volume. They refer to these AMIs as Amazon EBS-backed. When an instance of this type of AMI launches, an Amazon EBS volume is created from the associated snapshot, and that volume becomes the root device. You can create an AMI that uses an Amazon EBS volume as its root device with Windows or Linux/UNIX operating systems.

    These instances can be easily backed up. You can modify the original instance to suit your particular needs and then save it as an EBS-backed AMI. If in the future you need the modified version of the instance, you can simply launch new instances from the backed-up AMI and be ready to go.

    The following steps back up an AWS EBS instance into an AWS AMI and restore it from one. Brief steps for deleting an AMI backup are also noted for reference.


    EBS-instance to EBS-backed AMI

    • Go to AWS Management Console and in the My Instances Pane, select the instance which has to be backed up.
    • Right click the instance and select option Create Image (EBS AMI).

    • In the Create Image dialog box, give proper AMI Name and Description. Click on Create This Image button.
     

    • The image creation will be in progress. This will take some time, depending upon the number and size of volumes attached to the instance. Click on the View pending image link. It will take you to the AMIs pane.

    • The AMI will be in pending state. It is important to note that this AMI is private to the account and not available for AWS public use.
     
    • If you select Snapshots from the Navigation pane, you can see that the EBS volumes attached to the instance are backed up as snapshots too.

    • Once the backup is done, the AMI will be in the available state.
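
    If you prefer the command line, the EC2 API tools can create the same backup. A sketch, with a made-up instance ID and name:

    ec2-create-image i-1a2b3c4d --name "dbserver-backup-2012-05-30" --description "pre-change backup"

    (By default the instance is rebooted during image creation so that the filesystem is captured in a consistent state.)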
     


    Restore from backup AMI into instance


    In case the running instance needs to be restored, use the latest backup AMI. To launch an instance from this AMI, right-click the AMI and select the Launch Instance option. The Launch Instance Wizard will be displayed; perform the usual configurations and a new instance will be created containing all the data and configurations made before the backup.


    Delete AMI & Snapshots:

    • To delete any AMI, right-click it and select De-register AMI.

    • Remember, deleting an AMI doesn’t delete the EBS volume snapshots. Click on Snapshots from the Navigation pane, search for and select the snapshot(s) to be deleted, then right-click the snapshot(s) and select the Delete Snapshot option.
     



    Thursday 24 May 2012

    Install and Configure MySQL on CentOS

    MySQL is the world's most popular open source database.

    MySQL Community Edition is the freely downloadable version.

    Commercial customers have the flexibility of choosing from multiple editions to meet specific business and technical requirements. For more details please refer to the MySQL official website.

    INSTALL :


    On any CentOS server with internet access, run the command below to install MySQL Community Edition:

    yum install mysql-server mysql php-mysql

    OR

    Download the server and client rpm files from the MySQL website, depending upon the platform (OS) and architecture (32/64-bit).

    Install both rpm files using the command below:

    rpm -ivh <<rpm_filenames>>

    Example:

    rpm -ivh mysql-server-version.rpm mysqlclient9-version.rpm


    CONFIGURE :


    Once installed, run the below commands to configure MySQL Server:

    1. Set the MySQL service to start on boot:

    chkconfig --levels 235 mysqld on

    2. Start the MySQL service

    service mysqld start

    3. By default the root user has no password, so to log into MySQL use the command:

    mysql -u root

    4. To exit the MySQL console, enter the command below:

    exit;

    SET PASSWORD FOR ROOT :


    To set the root user password for all local domains, log in and run the commands below:

    SET PASSWORD FOR 'root'@'localhost' = PASSWORD('<<new-password>>');

    SET PASSWORD FOR 'root'@'localhost.localdomain' = PASSWORD('<<new-password>>');

    SET PASSWORD FOR 'root'@'127.0.0.1' = PASSWORD('<<new-password>>');

    (Replace <<new-password>> with actual password)

    OR

    run the command below at the Linux shell:

    mysqladmin -u root password '<<new-password>>'

    (Replace <<new-password>> with actual password)

    Once the password is set, log in to MySQL using the command below:

    mysql -u root -p

    Once you enter the above command, you will be prompted for the root password.

    ADD NEW USER :




    To add a new user for MySQL login, use the SQL query below. Remember, this query must be run from the MySQL prompt against the mysql system database (run USE mysql; first).

    for localhost:

    INSERT INTO user (Host, User, Password, Select_priv, Insert_priv, Update_priv, Delete_priv, Create_priv, Drop_priv, Reload_priv, Shutdown_priv, Process_priv, File_priv, Grant_priv, References_priv, Index_priv, Alter_priv, Show_db_priv, Super_priv, Create_tmp_table_priv, Lock_tables_priv, Execute_priv, Repl_slave_priv, Repl_client_priv, Create_view_priv, Show_view_priv, Create_routine_priv, Alter_routine_priv, Create_user_priv, Event_priv, Trigger_priv, ssl_type, ssl_cipher, x509_issuer, x509_subject, max_questions, max_updates, max_connections, max_user_connections) VALUES ('localhost', '<<USERNAME>>', password('<<PASSWORD>>'), 'Y','Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'N', 'N', '', '', '', '', 0, 0, 0, 0);

    for any hostname:

    INSERT INTO user (Host, User, Password, Select_priv, Insert_priv, Update_priv, Delete_priv, Create_priv, Drop_priv, Reload_priv, Shutdown_priv, Process_priv, File_priv, Grant_priv, References_priv, Index_priv, Alter_priv, Show_db_priv, Super_priv, Create_tmp_table_priv, Lock_tables_priv, Execute_priv, Repl_slave_priv, Repl_client_priv, Create_view_priv, Show_view_priv, Create_routine_priv, Alter_routine_priv, Create_user_priv, Event_priv, Trigger_priv, ssl_type, ssl_cipher, x509_issuer, x509_subject, max_questions, max_updates, max_connections, max_user_connections) VALUES ('%', '<<USERNAME>>', password('<<PASSWORD>>'), 'Y','Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'N', 'N', '', '', '', '', 0, 0, 0, 0);

    Replace <<USERNAME>> and <<PASSWORD>> with actual username and password respectively.

    Note they must be enclosed in single quotes.
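
    Also note: after a direct INSERT into the user table, run FLUSH PRIVILEGES; so the server reloads the grant tables, otherwise the new account will not work. A simpler, less error-prone alternative (a sketch; replace the username and password placeholders) is to let MySQL maintain the grant tables itself via CREATE USER and GRANT, run from the MySQL prompt:

    CREATE USER '<<USERNAME>>'@'localhost' IDENTIFIED BY '<<PASSWORD>>';

    GRANT ALL PRIVILEGES ON *.* TO '<<USERNAME>>'@'localhost';

    FLUSH PRIVILEGES;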


    DROP ANY USER :


    In case you want to drop any user, use the commands below:

    DROP USER '<<username>>'@'localhost';

    DROP USER '<<username>>'@'localhost.localdomain';

    (Replace <<username>> with actual username)




    For more help and commands, refer --> http://www.yolinux.com/TUTORIALS/LinuxTutorialMySQL.html

    Sunday 20 May 2012

    Getting Started with Amazon Web Services EBS Volumes

    Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Amazon EC2 enables “compute” in the cloud.

    Amazon Elastic Block Store (EBS) provides block level storage volumes for use with Amazon EC2 instances. EBS provides highly available, highly reliable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance. It persists independently from the life of an instance. These EBS volumes are created in a particular Availability Zone and can be from 1 GB to 1 TB in size.

    Follow the steps below to create, attach, and mount EBS volumes on launched EC2 instances:

    Create the EBS Volume


    Log into the AWS Management Console and follow the steps below for each extra volume to be attached to an instance. For example, let’s create and attach a 6 GB EBS volume (for Oracle alert logs and traces) to the database server.

    • Choose “Volumes” on the left-hand control panel:



    • In the right-hand pane under EBS Volumes, click on ‘Create Volume’



    • In the Create Volume dialog box that appears, enter the size (6 GB in this example), keep the availability zone the same as that of the database instance, select No Snapshot, and click on ‘Create’.



    • This will create an EBS volume; once creation is complete, it will be shown in the volume list with status available.



    Attach Volume


    • Select the volume and click on the Attach Volume button.



    • Select the instance to which the EBS volume is to be attached, and specify the device name for the volume.
    Here the instance is the database server and the device is /dev/sdf



    • Once attached, the volume will be displayed with status in-use.



    Mount the Volume


    • Execute the following commands in the EC2 instance’s (database server’s) Linux shell. As this is a new volume (with no data), we first have to format it.
    Run command:

    mkfs -t ext3 /dev/sdf

    (Replace /dev/sdf with the device attached in the previous step)

    • Make a directory to mount the device.


    mkdir /mnt/disk1

    • Mount the device on the newly created directory:

    mount /dev/sdf /mnt/disk1

    (Replace the device and directory as required)

    • By default, volumes will not be mounted after the instance reboots. To mount the volume at the given mount point on every reboot, add an entry to /etc/fstab:

    echo "/dev/sdf /mnt/disk1 ext3 noatime 0 0" >> /etc/fstab

    (Replace the device and mount point as required)
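
    To check the new fstab entry without rebooting, unmount the volume and remount everything listed in /etc/fstab:

    umount /mnt/disk1
    mount -a

    (If df -h shows /mnt/disk1 again after mount -a, the entry is correct.)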

    Check the attached volume by using the command: df -h


    Unmounting the volume


    From the Elastic Block Storage Feature Guide: A volume must be unmounted inside the instance before being detached. Failure to do so will result in damage to the file system or the data it contains.

    umount /mnt/disk1

    Remember to cd out of the volume, otherwise you will get an error message

    umount: /mnt/disk1: device is busy

    Hope the above steps help you get into action in minutes.

    In case you get stuck at any point, do comment below. I will be glad to help. :)

    Friday 18 May 2012

    Install JAVA on Linux using rpm files

    Steps for installing Java (JDK 6) on Linux using rpm files:

    1. Log into the Linux shell and become the root user by running the command:

    su -

    2. Change directory.

    cd /opt

    3. Please search at http://www.oracle.com/technetwork/java/javase/downloads/index.html for newer versions to download.

    You can download to any directory you choose; it does not have to be the directory where you want to install the JDK. Before you download the file, notice its byte size provided on the download page on the web site. Once the download has completed, compare that file size to the size of the downloaded file to make sure they are equal.

    To download use one of the below commands, depending on the server's architecture (32/64 bit) :
    64 bit:
    wget http://download.oracle.com/otn-pub/java/jdk/6u31-b04/jdk-6u31-linux-x64-rpm.bin

    32 bit:
    wget http://download.oracle.com/otn-pub/java/jdk/6u31-b04/jdk-6u31-linux-i586-rpm.bin



    4. Make sure that execute permissions are set on the self-extracting binary.
    Enter ls -la to see the permissions for the file.


    5. Run the command below to grant execute permission to the file:

    chmod a+x <<name-of-rpm-file-downloaded-earlier>>
     
    For e.g.:
    chmod +x jdk-6u25-linux-x64-rpm.bin\?e\=1306317438\&h\=294de0d36f54e28dd65fc8370e3c406d

    6. Change directory to the location where you would like the files to be installed. The next step installs the JDK into the current directory.

    7. Execute the downloaded file, prepended by the path to it.
    For example, if the file is in the current directory, prepend it with "./"  :
    ./<<name-of-rpm-file-downloaded-earlier>>
    For e.g.:

    ./jdk-6u25-linux-x64-rpm.bin\?e\=1306317438\&h\=294de0d36f54e28dd65fc8370e3c406d



    8. The binary code license is displayed, and you are prompted to agree to its terms.



    9. Check if Java is installed using the command:

    java -version



    The Java version must be displayed correctly. You may also want to run commands like java or javac to check that the installation is correct.

    10. Execute the command below to test whether the JAVA_HOME environment variable is set.

    echo $JAVA_HOME

    It must display the location where java is installed.
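
    If nothing is displayed, you can set it yourself. A sketch, assuming the default RPM install location for JDK 6 update 31 (check /usr/java/ for the actual directory name on your system):

    export JAVA_HOME=/usr/java/jdk1.6.0_31
    export PATH=$JAVA_HOME/bin:$PATH

    (To make this permanent for all users, put the two lines above in /etc/profile.d/java.sh)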

    11.  Delete the bin and rpm files if you want to save disk space.

    rm -rf sun*