Showing posts with label Infrastructure as a Service (IaaS). Show all posts

Thursday 27 September 2012

Elastic Detector : Elastic Vulnerability Assessment

 

SecludIT has developed a new approach to vulnerability assessment that uses the elasticity of IaaS: Elastic Vulnerability Assessment (EVA).
Elastic Detector is SecludIT's fully automated security event detection tool for Amazon EC2.
It helps administrators and users of Amazon EC2-based infrastructures continuously identify holes in security groups and applications, dramatically reducing the risk of external and internal attacks.
It is delivered as SaaS or as a virtual appliance (currently only running in the US East Region).
Unlike existing tools, you don't need to install any additional software, such as agents, and you don't need to configure any monitors up front.
If you want to know more about Elastic Detector, watch the video below or try the service for free at elastic-detector.secludit.com.


AWS Announcement : High Performance Provisioned IOPS Storage For Amazon RDS

 
After recently announcing the EBS Provisioned IOPS offering, which allows you to specify both volume size and volume performance in terms of the number of I/O operations per second (IOPS), AWS has now announced High Performance Provisioned IOPS Storage for Amazon RDS.
 
You can now create an RDS database instance and specify your desired level of IOPS in order to get more consistent throughput and performance.

Amazon RDS Provisioned IOPS is immediately available for new database instances in the US East (N. Virginia), US West (N. California), and EU West (Ireland) Regions, and AWS plans to launch it in other AWS Regions in the coming months.
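As a rough illustration, creating a Provisioned IOPS database instance from the command line might look like the sketch below. It assumes the RDS Command Line Toolkit grew an `--iops` option alongside the usual flags; the flag names, instance identifier, class, and credentials are all assumptions to verify against your toolkit version, so the script only assembles and prints the command rather than calling the API:

```shell
#!/bin/sh
# Assemble (but do not run) an RDS create command with Provisioned IOPS.
# Instance identifier, class, and credentials are placeholders.
DB_ID="mydb"
STORAGE_GB=100   # allowed range per the announcement: 100 GB - 1 TB
IOPS=1000        # allowed range for MySQL/Oracle: 1,000 - 10,000

CMD="rds-create-db-instance $DB_ID \
  --engine mysql \
  --db-instance-class db.m1.large \
  --allocated-storage $STORAGE_GB \
  --iops $IOPS \
  --master-username admin \
  --master-user-password secret"

# Print the command we would run (a dry run)
echo "$CMD"
```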
 
AWS is rolling this out in two phases. Below is an extract from the announcement on the AWS Blog by Jeff.
 
 
    
                   We are rolling this out in two stages. Here's the plan:
 
  • Effective immediately, you can provision new RDS database instances with 1,000 to 10,000 IOPS, and with 100GB to 1 TB of storage for MySQL and Oracle databases. If you are using SQL Server, the maximum IOPS you can provision is 7,000 IOPS. All other RDS features including Multi-AZ, Read Replicas, and the Virtual Private Cloud, are also supported.
  • In the near future, we plan to provide you with an automated way to migrate existing database instances to Provisioned IOPS storage for the MySQL and Oracle database engines. If you want to migrate an existing database instance to Provisioned IOPS storage immediately, you can export your data and re-import it into a new database instance equipped with Provisioned IOPS storage.

We expect database instances with RDS Provisioned IOPS to be used in demanding situations. For example, they are a perfect host for I/O-intensive transactional (OLTP) workloads.
We recommend that customers running production database workloads use Amazon RDS Provisioned IOPS for the best possible performance. (By the way, for mission critical OLTP workloads, you should also consider adding the Amazon RDS Multi-AZ option to improve availability.)


Check out the video with Rahul Pathak of the Amazon RDS team to learn more about this new feature and how some AWS customers are using it:




Responses from AWS customers:

  • AWS customer Flipboard uses RDS to deliver billions of page flips each month to millions of mobile phone and tablet users. Sang Chi, Data Infrastructure Architect at Flipboard told us:
"We want to provide the best possible reading and content delivery experience for a rapidly growing base of users and publishers. This requires us not only to use a high performance database today but also to continue to improve our performance in the future. Throughput consistency is critical for our workloads. Based on results from our early testing, we are very excited about Amazon RDS Provisioned IOPS and the impact it will have on our ability to scale. We’re looking forward to scaling our database applications to tens of thousands of IOPS and achieving consistent throughput to improve the experience for our users."
  • AWS customer Shine Technologies uses RDS for Oracle to build complex solutions for enterprise customers. Adam Kierce, their Director said:
"Amazon RDS Provisioned IOPS provided a turbo-boost to our enterprise class database-backed applications. In the past, we have invested hundreds of days in time consuming and costly code based performance tuning, but with Amazon RDS Provisioned IOPS we were able to exceed those performance gains in a single day. We have demanding clients in the Energy, Telecommunication, Finance and Retail industries, and we fully expect to move all our Oracle backed products onto AWS using Amazon RDS for Oracle over the next 12 months. The increased performance of Amazon's RDS for Oracle with Provision IOPS is an absolute game changer, because it delivers more (performance) for less (cost)."
 

Tuesday 18 September 2012

Amazon VPC - New Additions

 
AWS has added three new features/options to the Amazon Virtual Private Cloud (VPC) service.
 
 
Below is an extract from the two blog posts written by Jeff on these additions:
 
 
The Amazon Virtual Private Cloud (VPC) gives you the power to create a private, isolated section of the AWS Cloud. You have full control of network addressing. Each of your VPCs can include subnets (with access control lists), route tables, and gateways to your existing network and to the Internet.
 
You can connect your VPC to the Internet via an Internet Gateway and enjoy all the flexibility of Amazon EC2 with the added benefits of Amazon VPC. You can also setup an IPsec VPN connection to your VPC, extending your corporate data center into the AWS Cloud. Today we are adding two options to give you additional VPN connection flexibility:
  1. You can now create Hardware VPN connections to your VPC using static routing. This means that you can establish connectivity using VPN devices that do not support BGP, such as Cisco ASA and Microsoft Windows Server 2008 R2. You can also use Linux to establish a Hardware VPN connection to your VPC. In fact, any IPSec VPN implementation should work.
  2. You can now configure automatic propagation of routes from your VPN and Direct Connect links (gateways) to your VPC's routing tables. This will make your life easier as you won’t need to create static route entries in your VPC route table for your VPN connections. For instance, if you’re using dynamically routed (BGP) VPN connections, your BGP route advertisements from your home network can be automatically propagated into your VPC routing table.
If your VPN hardware is capable of supporting BGP, this is still the preferred way to go as BGP performs a robust liveness check on the IPSec tunnel. Each VPN connection uses two tunnels for redundancy; BGP simplifies the failover procedure that is invoked when one VPN tunnel goes down.

Tuesday 21 August 2012

All about AWS EBS Provisioned IOPS : feature and resources

AWS has recently announced EBS Provisioned IOPS feature, a new Elastic Block Store volume type for running high performance databases in the cloud. Provisioned IOPS are designed to deliver predictable, high performance for I/O intensive workloads, such as database applications, that rely on consistent and fast response times. With Provisioned IOPS, you can flexibly specify both volume size and volume performance, and Amazon EBS will consistently deliver the desired performance over the lifetime of the volume.
Simple comparison between the standard volumes and provisioned IOPS volumes:

Amazon EBS Standard volumes
  • Offer cost effective storage for applications with moderate or bursty I/O requirements.
  • Deliver approximately 100 IOPS on average with a best effort ability to burst to hundreds of IOPS.
  • Are also well suited for use as boot volumes, where the burst capability provides fast instance start-up times.
  • $0.10 per GB-month of provisioned storage
  • $0.10 per 1 million I/O requests

Amazon EBS Provisioned IOPS volumes
  • Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O intensive workloads such as databases.
  • Amazon EBS currently supports up to 1000 IOPS per Provisioned IOPS volume, with higher limits coming soon.
  • Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time.
  • $0.125 per GB-month of provisioned storage
  • $0.10 per provisioned IOPS-month
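The pricing above makes the trade-off easy to estimate. The sketch below applies those published rates to an illustrative 500 GB volume (the example sizes and request counts are made up; the rates are the ones listed):

```shell
#!/bin/sh
# Back-of-the-envelope monthly cost comparison from the prices above.
# Standard:         $0.10/GB-month + $0.10 per 1 million I/O requests.
# Provisioned IOPS: $0.125/GB-month + $0.10 per provisioned IOPS-month.
standard_cost() {   # args: size_gb, millions_of_io_requests
  awk -v gb="$1" -v mio="$2" 'BEGIN { printf "%.2f\n", gb*0.10 + mio*0.10 }'
}
piops_cost() {      # args: size_gb, provisioned_iops
  awk -v gb="$1" -v iops="$2" 'BEGIN { printf "%.2f\n", gb*0.125 + iops*0.10 }'
}

standard_cost 500 250   # 500 GB, ~250M I/O requests/month -> 75.00
piops_cost    500 1000  # 500 GB provisioned at 1000 IOPS  -> 162.50
```

As expected, you pay a premium for the consistent performance: here roughly double, dominated by the per-IOPS charge.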
AWS has compiled some interesting resources for the users:

Our recent release of the EBS Provisioned IOPS feature (blog post, explanatory video, EBS home page) has met with a very warm reception. Developers all over the world are already making use of this important and powerful new EC2 feature. I would like to make you aware of some other new resources and blog posts to help you get the most from your EBS volumes.
  • The EC2 FAQ includes answers to a number of important performance and architecture questions about Provisioned IOPS.
  • The EC2 API tools have been updated and now support the creation of Provisioned IOPS volumes. The newest version of the ec2-create-volume tool supports the --type and --iops options. For example, the following command will create a 500 GB volume with 1000 Provisioned IOPS:
    $ ec2-create-volume --size 500 --availability-zone us-east-1b --type io1 --iops 1000
  • Eric Hammond has written a detailed migration guide to show you how to convert a running EC2 instance to an EBS-Optimized EC2 instance with Provisioned IOPS volumes. It is a very handy post, and it also shows off the power of programmatic infrastructure.
  • I have been asked about the applicability of existing EC2 Reserved Instances to the new EBS-Optimized instances. Yes, they apply, and you pay only the additional hourly charge. Read our new FAQ entry to learn more.
  • I have also been asked about the availability of EBS-Optimized instances for more instance types. We intend to support other instance types based on demand. Please feel free to let us know what you need by posting a comment on this blog or in the EC2 forum.
  • The folks at CloudVertical have written a guide to understanding new AWS I/O options and costs.
  • The team at Stratalux wrote a very informative blog post, Putting Amazon's Provisioned IOPS to the Test. Their conclusion:
    "Based upon our tests PIOPS definitely provides much needed and much sought after performance improvements over standard EBS volumes. I’m glad to see that Amazon has heeded the calls of its customers and developed a persistent storage solution optimized for database workloads."
We have also put together a new guide to benchmarking Provisioned IOPS volumes. The guide shows you how to set up and run high-quality, repeatable benchmarks on Linux and Windows using the fio, Oracle Orion, and SQLIO tools. The guide will walk you through the following steps:
  • Launching an EC2 instance.
  • Creating Provisioned IOPS EBS volumes.
  • Attaching the volumes to the instance.
  • Creating a RAID from the volumes.
  • Installing the appropriate benchmark tool.
  • Benchmarking the I/O performance of your volumes.
  • Deleting the volumes and terminating the instance.
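The core of those steps can be sketched in shell for Linux. The instance, Availability Zone, device names, and fio job parameters below are illustrative placeholders, and the functions are defined but deliberately not run (they need the EC2 API tools, root access, mdadm, and fio on an EBS-Optimized instance):

```shell
#!/bin/sh
# Sketch of the benchmarking workflow: six 1000-IOPS volumes,
# a RAID 0 stripe, and a random-read fio job. Placeholders throughout.

create_piops_volumes() {
  # Create six 100 GB volumes, each provisioned for 1000 IOPS
  i=0
  while [ "$i" -lt 6 ]; do
    ec2-create-volume --size 100 --availability-zone us-east-1b \
      --type io1 --iops 1000
    i=$((i + 1))
  done
}

build_raid_and_benchmark() {
  # Stripe the attached volumes (assumed at /dev/xvdf..xvdk) into
  # RAID 0, then drive 16k random reads against the array
  mdadm --create /dev/md0 --level=0 --raid-devices=6 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk
  fio --name=piops-test --filename=/dev/md0 --direct=1 \
      --rw=randread --bs=16k --iodepth=32 --numjobs=32 \
      --runtime=300 --time_based --group_reporting
}

# A RAID 0 set of six 1000-IOPS volumes should deliver in aggregate:
aggregate_iops=$((6 * 1000))
echo "$aggregate_iops"
```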
Since I like to try things for myself, I created six 100 GB volumes, each provisioned for 1000 IOPS.
Then I booted up an EBS-Optimized EC2 instance, built a RAID, and ran fio. In the AWS Management Console's CloudWatch charts after the run, each volume was delivering 1000 IOPS, as provisioned.
Here's an excerpt from the results:
fio_test_file: (groupid=0, jobs=32): err= 0: pid=23549: Mon Aug 6 14:01:14 2012
read : io=123240MB, bw=94814KB/s, iops=5925 , runt=1331000msec
clat (usec): min=356 , max=291546 , avg=5391.52, stdev=8448.68
lat (usec): min=357 , max=291547 , avg=5392.91, stdev=8448.68
clat percentiles (usec):
| 1.00th=[ 418], 5.00th=[ 450], 10.00th=[ 478], 20.00th=[ 548],
| 30.00th=[ 596], 40.00th=[ 668], 50.00th=[ 892], 60.00th=[ 1160],
| 70.00th=[ 3152], 80.00th=[10432], 90.00th=[20864], 95.00th=[26752],
| 99.00th=[29824], 99.50th=[30336], 99.90th=[31360], 99.95th=[31872],
| 99.99th=[37120]
Read the benchmarking guide to learn more about running the benchmarks and interpreting the results.

SOURCE for resources : http://aws.typepad.com/aws/2012/08/ebs-provisioned-iops-some-interesting-resources.html

Saturday 2 June 2012

Seeding Torrents with Amazon S3 and s3cmd on Ubuntu

Again a nice post. Hope it's useful for some of you out there...

Amazon Web Services is such a huge, complex service with so many products and features that sometimes very simple but powerful features fall through the cracks when you’re reading the extensive documentation.

One of these features, which has been around for a very long time, is the ability to use AWS to seed (serve) downloadable files using the BitTorrent™ protocol. You don’t need to run EC2 instances and set up software. In fact, you don’t need to do anything except upload your files to S3 and make them publicly available.

Any file available for normal HTTP download in S3 is also available for download through a torrent. All you need to do is append the string ?torrent to the end of the URL and Amazon S3 takes care of the rest.
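That rule is simple enough to capture in one line of shell (the bucket and file names below are hypothetical):

```shell
#!/bin/sh
# Given any public S3 object URL, the torrent URL is the same URL
# with "?torrent" appended.
s3_torrent_url() {
  echo "${1}?torrent"
}

s3_torrent_url "http://mybucket.s3.amazonaws.com/big-file.iso"
```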

Steps

Let’s walk through uploading a file to S3 and accessing it with a torrent client using Ubuntu as our local system. This approach uses s3cmd to upload the file to S3, but any other S3 software can get the job done, too.
  1. Install the useful s3cmd tool and set up a configuration file for it. This is a one-time step:
    sudo apt-get install s3cmd
    s3cmd --configure
    The configure phase will prompt for your AWS access key ID and AWS secret access key. These are stored in $HOME/.s3cfg, which you should protect. You can press [Enter] for the encryption password and GPG program. I prefer “Yes” for using the HTTPS protocol, especially if I am using s3cmd from outside of EC2.
  2. Create an S3 bucket and upload the file with public access:
    bucket=YOURBUCKETNAME
    filename=FILETOUPLOAD
    basename=$(basename $filename)
    s3cmd mb s3://$bucket
    s3cmd put --acl-public $filename s3://$bucket/$basename
  3. Display the URLs which can be used to access the file through normal web download and through a torrent:
    cat <<EOM
web: http://$bucket.s3.amazonaws.com/$basename
torrent: http://$bucket.s3.amazonaws.com/$basename?torrent
EOM

Notes

  1. The above process makes your file publicly available to anybody in the world. Don’t use this for anything you wish to keep private.
  2. You will pay standard S3 network charges for all downloads from S3 including the initial torrent seeding. You do not pay for network transfers between torrent peers once folks are serving the file chunks to each other.
  3. You cannot throttle the rate or frequency of downloads from S3. You can turn off access to prevent further downloads, but monitoring accesses and usage is not entirely real time.
  4. If your file is not popular enough for other torrent peers to be actively serving it, then every person who downloads it will transfer the entire content from S3’s torrent servers.
  5. If people know what they are doing, they can easily remove “?torrent” and download the entire file directly from S3, perhaps resulting in a higher cost to you. As a work-around, just download the ?torrent URL, save the torrent file, and upload it back to S3 as a .torrent file. Share the torrent file itself, not the ?torrent URL. Since nobody will know the URL of the original file, they can only download it via the torrent. You don't even need to share the .torrent file using S3.
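The work-around in note 5 can be sketched as a small script. The bucket and file names are placeholders, s3cmd is assumed to be configured as in step 1, and the function is defined but not invoked here since it needs network access:

```shell
#!/bin/sh
# Fetch the auto-generated torrent once, store it back in the bucket
# as a .torrent object, and share only that object.
fetch_and_reseed_torrent() {   # args: bucket, object_key
  bucket="$1"; key="$2"
  wget -O "${key}.torrent" "http://${bucket}.s3.amazonaws.com/${key}?torrent"
  s3cmd put --acl-public "${key}.torrent" "s3://${bucket}/${key}.torrent"
}

# The derived name of the shareable torrent object:
key="big-file.iso"
echo "${key}.torrent"
```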
SOURCE

Thursday 31 May 2012

AWS - Migrate Linux AMI (EBS) using CloudyScripts


In a typical Amazon Web Services (AWS) environment, Amazon Machine Images (AMIs) are available only within a certain region. These AMIs cannot be moved from one region to another, though they are shared across the different Availability Zones of the same region.

For this purpose, you can use a third-party tool called CloudyScripts.

CloudyScripts is a collection of tools to help you program infrastructure clouds.

The web-based tool is self-explanatory and regularly updated. In case you find any bug, do not hesitate to email the owners right away.


Go to the CloudyScripts Copy AMI to different region tool.


cloud computing: Simple Workflow Service - Amazon Adding One Enterp...:

Amazon has announced a new orchestration service called Simple Workflow Service. I would encourage you to read the announcement on Werner's blog, where he explains the need, rationale, and architecture.

Wednesday 30 May 2012

AWS EBS-Backed Instance Backup & Restore

Starting with the 2009-10-31 API, Amazon Web Services (AWS) has a new type of Amazon Machine Image(AMI) that stores its root device as an Amazon Elastic Block Store(EBS) volume. They refer to these AMIs as Amazon EBS-backed. When an instance of this type of AMI launches, an Amazon EBS volume is created from the associated snapshot, and that volume becomes the root device. You can create an AMI that uses an Amazon EBS volume as its root device with Windows or Linux/UNIX operating systems.

These instances can be easily backed up. You can modify the original instance to suit your particular needs and then save it as an EBS-backed AMI. Hence, if in the future you need the modified version of the instance, you can simply launch multiple new instances from the backed-up AMI and are ready to go.

Following are the steps to back up or restore an AWS EBS instance into/from an AWS AMI. Brief steps for deleting an AMI backup are also noted for reference.


EBS-instance to EBS-backed AMI

  • Go to the AWS Management Console and, in the My Instances pane, select the instance to be backed up.
  • Right click the instance and select option Create Image (EBS AMI).

  • In the Create Image dialog box, enter a proper AMI name and description. Click the Create This Image button.
 

  • The image creation will be in progress. This will take some time depending upon the number and size of volumes attached to the instance. Click on the View pending image link. It will take you to the AMIs pane.

  • The AMI will be in the pending state. It is important to note that this AMI is private to the account and is not available for AWS public use.
 
  • If you select Snapshots from the Navigation pane, you can see that the EBS volumes attached to the instance will be backed up as snapshots too.

  • Once the backup is done, the AMI will be in the available state.
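For scripted backups, the console's Create Image action has a command-line counterpart in the classic EC2 API tools, ec2-create-image. The instance ID and names below are placeholders, and the function is defined but not invoked here since it needs the EC2 API tools and credentials (verify the -n/-d flags against your tools version):

```shell
#!/bin/sh
# CLI counterpart of the Create Image (EBS AMI) console action.
backup_instance() {   # args: instance_id, image_name
  ec2-create-image "$1" -n "$2" -d "EBS-backed backup of $1"
}

# Example of the command that backup_instance would run:
example="ec2-create-image i-12345678 -n web-backup-2012-05-30"
echo "$example"
```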
 


Restore from backup AMI into instance


If the running instance needs to be restored, use the latest backup AMI. To launch an instance from this AMI, right-click the AMI and select the Launch Instance option. The Launch Instance Wizard will be displayed; perform the usual configurations and a new instance will be created containing all the data and configurations done before backup.


Delete AMI & Snapshots:

  • To delete an AMI, right-click it and select De-register AMI.

  • Remember, deleting an AMI doesn't delete the EBS volume snapshots. Click on Snapshots from the Navigation pane, then search for and select the snapshot(s) to be deleted. Right-click the snapshot(s) and select the Delete Snapshot option.
 



Friday 25 May 2012

Microsoft SharePoint on the AWS Cloud

Amazon Web Services (AWS) provides services and tools for deploying Microsoft® SharePoint® workloads on its cloud infrastructure platform. This white paper discusses general concepts regarding how to use these services and provides detailed technical guidance on how to configure, deploy, and run a SharePoint Server farm on AWS.

Deploy SharePoint quickly and at lower total cost on the AWS Cloud. Learn how.