Configure AWS Lambda programmatically with Boto3

Serverless computing has generated a lot of interest in the development community, and AWS Lambda is the AWS serverless offering. In this post, we will look at configuring AWS Lambda programmatically with boto3. We will configure a Lambda function that connects to a Postgres database on an EC2 instance in a private VPC using SQLAlchemy, which requires packaging SQLAlchemy's dependencies along with the Lambda function.

By default, the environment where a Lambda function runs already has boto3 (the AWS Python SDK) installed, so importing boto3 in your code works without any extra packaging. But a statement like “import sqlalchemy” fails with the Python error “Unable to import module sqlalchemy”. This article shows how to get such dependencies working.

Step 1: Write the lambda function

First, we need to write the Python code that will run as the Lambda function. The code below creates a Python file and zips it. Note that the code you want to execute must contain a method called lambda_handler, which serves as the entry point where Lambda starts execution.
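A minimal sketch of this step; the handler body is a placeholder, but the file name and the lambda_handler entry point are the parts that matter:

```python
import zipfile

# Minimal handler; the file name and function name must match the
# Handler value used later ("lambda_function.lambda_handler").
lambda_code = '''\
def lambda_handler(event, context):
    print("Lambda started with event:", event)
    return {"status": "ok"}
'''

with open("lambda_function.py", "w") as f:
    f.write(lambda_code)

# Lambda expects the code to be uploaded as a zip archive.
with zipfile.ZipFile("lambda_function.zip", "w") as z:
    z.write("lambda_function.py")
```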

One quick aside: SQLAlchemy complains about not being able to find the right provider when connecting to Postgres from Lambda, so you may have to specify “postgresql+psycopg2” as part of the connection string. Don't forget to open the port in the security group and allow incoming connections in the pg_hba.conf file of the Postgres server. This can be a cause of a lot of heartache.
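For illustration, a connection string with the explicit dialect+driver prefix might look like this; the host, port, and credentials are hypothetical:

```python
# Hypothetical host, port, and credentials; substitute your own values.
user, password = "dbuser", "dbpass"
host, port, dbname = "10.0.1.25", 5432, "mydb"

# The explicit "postgresql+psycopg2" prefix tells SQLAlchemy exactly
# which dialect and DBAPI driver to load, avoiding the provider error.
conn_string = f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{dbname}"
# engine = sqlalchemy.create_engine(conn_string)
```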
Also, remember that if the system on which you package the code is Windows, you are in trouble: pip will pick Windows-specific builds of the drivers. You need to run the packaging on a Lambda-compatible OS (read: Amazon Linux or a compatible distribution); otherwise, you will end up with a lot of weird runtime errors.

Step 2: Create the lambda function

Now use the boto3 client API create_function to map the Lambda to the above code. If you take a close look, Handler='lambda_function.lambda_handler' means that Lambda will look for the Python file lambda_function.py and, inside it, for the lambda_handler function. You also need to pass the ARN of an IAM role with the right permissions for this API to work.
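A sketch of this step; the function name, runtime, and role ARN are placeholders you would replace with your own:

```python
import io
import zipfile

def create_pg_lambda(role_arn, region="us-east-1"):
    """Package lambda_function.py and register it with AWS Lambda.

    role_arn is hypothetical here; it must be an IAM role that Lambda
    can assume (with VPC access for the Postgres connection).
    """
    import boto3  # preinstalled on Lambda; pip install boto3 locally

    # Build the deployment zip in memory from the Step 1 file.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("lambda_function.py",
                   "def lambda_handler(event, context):\n"
                   "    return {'status': 'ok'}\n")

    client = boto3.client("lambda", region_name=region)
    return client.create_function(
        FunctionName="pg-poller",                  # hypothetical name
        Runtime="python3.9",                       # pick your target runtime
        Role=role_arn,
        Handler="lambda_function.lambda_handler",  # file_name.function_name
        Code={"ZipFile": buf.getvalue()},
        Timeout=30,
    )
```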

Step 3: Set the frequency of lambda 

Set the rule and the frequency at which you want the Lambda to run. This can be a simple expression like rate(1 minute) or rate(5 minutes), or a cron expression. Also set the rule target and grant CloudWatch Events permission to invoke the Lambda. In this case, we configure the rule to run every minute.
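The rule, target, and permission can be wired up roughly as follows; the rule name is an arbitrary choice:

```python
def schedule_lambda(function_arn, rule_name="run-every-minute"):
    """Create a CloudWatch Events rule that triggers the Lambda every minute."""
    import boto3

    events = boto3.client("events")
    lam = boto3.client("lambda")

    # The schedule can also be a cron expression, e.g. "cron(0 12 * * ? *)".
    rule = events.put_rule(
        Name=rule_name,
        ScheduleExpression="rate(1 minute)",
        State="ENABLED",
    )
    # Point the rule at the Lambda function.
    events.put_targets(Rule=rule_name,
                       Targets=[{"Id": "1", "Arn": function_arn}])
    # Allow CloudWatch Events to invoke the function.
    lam.add_permission(
        FunctionName=function_arn,
        StatementId=rule_name + "-event",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule["RuleArn"],
    )
```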

Step 4: Additional Package Deployment

Do this step if your Python code needs additional packages. To create deployment folders, you can use a script such as create_deployment.py along with requirements.txt. When the script runs, it picks the packages to install from requirements.txt, runs pip install -t into a folder, pulls all the packages, and zips them. If you need a particular version, you can pin it (e.g. sseclient==0.0.11), or omit the version to pull the latest.
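A possible shape for create_deployment.py; the paths and names are assumptions:

```python
import os
import subprocess
import sys
import zipfile

def build_deployment(requirements="requirements.txt",
                     build_dir="deployment", out="deployment.zip"):
    """pip-install the requirements into a folder and zip it up.

    Run this on Amazon Linux (or a compatible OS) so that compiled
    wheels such as psycopg2 match the Lambda runtime.
    """
    os.makedirs(build_dir, exist_ok=True)
    # -t installs the packages into build_dir instead of site-packages.
    subprocess.check_call([sys.executable, "-m", "pip", "install",
                           "-r", requirements, "-t", build_dir])

    with zipfile.ZipFile(out, "w") as z:
        for root, _, files in os.walk(build_dir):
            for name in files:
                path = os.path.join(root, name)
                # Store paths relative to build_dir so imports resolve.
                z.write(path, os.path.relpath(path, build_dir))
    return out
```

Your lambda_function.py goes into the same zip, so the packages sit alongside it on the Lambda path.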

Final Step: Putting it together

Now, putting everything together yields the expected output in the AWS CloudWatch logs, with no import-module error this time. You can adapt this to whichever packages your development needs.

Hope this saves you a few hours jumping through hoops that weren't really apparent at the onset!

References:

AWS Lambda Functions Made Easy

AWS Lambda: Programmatically scheduling a CloudWatchEvent


Is Cost Optimization on the Cloud essential?

Cloud computing ushered in a new era of low-cost computing and rapid development, with managed services taking care of the undifferentiated heavy lifting. As many enterprises have found out, though, the road to the promised land has many challenges and barriers. When does Cost Optimization in the Cloud become relevant? Enterprises typically follow a Cloud Transformation Maturity Model, and we can look to it for a clue.

Stages of Cloud Adoption: 

Enterprises typically follow the five stages of cloud adoption. Smaller enterprises and startups often follow an abridged version of the plan or skip entire steps, taking an opinionated approach right from the beginning based on their teams' prior experience.

Evaluation and Prototyping:

This is the stage when there is an initiative, usually at the CTO-office level, to start the migration to the cloud. The focus at this stage is on high-level goals and the evaluation of multiple cloud vendors to find a good fit at a strategic level. The initiatives are well funded at this stage (compared to the scale of operations). The enterprise shortlists one or more vendors and builds small POCs to evaluate the platform, the tooling around it, how much customization is possible, and the global reach of the provider. There is no real application development at this stage. Based on the evaluation, they choose a vendor, prepare a roadmap, and identify pilot applications to move to the cloud.

Pilot Migration:

They identify one of the teams to migrate its application to the cloud. The focus at this point is to lift and shift the application from on-premise to the cloud; the team is not yet skilled in cloud best practices. Once the application starts working in the cloud, they look at non-functional aspects such as privacy and security. This typically takes a lot of time and effort, as the enterprise is initially (rightly) very cautious with threat assessments. The R&D team also realizes that there is more to moving to the cloud from a non-functional perspective and starts factoring in that time and effort. At this stage, they also formulate extra guidelines in terms of security and privacy policies to follow during development, such as security by design and privacy by design. Cost is typically not a focus at this point.

Taking a single application to production; adoption by multiple development teams:

The team which started development looks at taking the application live on the cloud, and other R&D teams start the process of either new development or moving their applications. The key thing to note is that while there is focus on getting security and privacy right, there isn't much focus on, or appreciation of, the shifts needed to make cloud adoption successful. This is typically when organizations feel the growing pains of the cloud. Since there is no clear-cut guidance, each team takes a slightly different approach. The very flexibility that motivated the company to choose the cloud provider, because it suits a variety of scenarios, now brings all-around chaos. The IT team suddenly finds itself overwhelmed by different types of requests, and the lack of coherent policies leads to an explosion in costs. This leads to intense scrutiny of cloud costs, and the teams look for better ways of Cost Optimization.

Cloud Optimization:

The teams formulate a governance mechanism for how and when to use a set of resources and put in place a set of tools to automate the actions it prescribes. The teams also reorganize to best manage their applications in a self-service way and institutionalize the use of automation and DevOps. Building cost-aware applications becomes a focus for the teams, since they own the applications and their management. There is a broad realization that while the cloud can deliver massive cost savings, it takes a significant change in how they architect their applications, how they think about their infrastructure (e.g. programmable and disposable), and how the teams manage the applications. Cost Optimization in the cloud requires a mix of tactical measures to curb wasted expenditure, a governance mechanism, and re-architecting the applications to be Cloud Native.

Being Cloud Native:

By this stage, the organization is well versed in Cloud Native practices and starts application development using Cloud Native best practices. The organization institutionalizes cost optimization, and it becomes part of the regular DevOps process. There is a high level of automation and continuous data-driven monitoring, analysis, and remediation. This keeps cloud costs in check, and the organization realizes the true gains of moving to the cloud.

Cost Optimization:

Enterprises typically see a need for cost optimization once they are in the third stage of the maturity cycle. As the scale of cloud adoption increases, formulating a set of policies and practices, and having a toolset to automate many of the activities, is essential to manage the complexities of cloud adoption.

As the excellent AWS Cloud Transformation Maturity Model white paper from Amazon indicates,

[Figure: AWS Cloud Transformation Maturity Model. Courtesy: AWS Cloud Transformation Maturity Model white paper]

one of the key transformations is to “Implement a continuous cost optimization process – Either the designated resources on a CCoE or a group of centralized staff from IT Finance must be trained to support an ongoing process using AWS or third-party cost-management tools to assess costs and optimize savings“.

and the outcome that measures the organizational maturity as optimized is

Optimized cost savings – Your organization has an ongoing process and a team focused on continually reviewing AWS usage across your organization, and identifying cost-reduction opportunities.

Cost optimization thus becomes an essential and continuous activity on the cloud.


Visit us at www.insisiv.com to know more.


Considerations for using EC2 Reserved Instances

AWS Reserved Instances offer a great way to cut the cost of your EC2 instances in exchange for an upfront commitment for a fixed term of 1 or 3 years. However, some pre-work helps to check whether they are right for your usage patterns.

What are Reserved Instances:

Reserved Instances are billing constructs that provide discounts for reserving EC2 capacity (or capacity of some select AWS services) based on

  1. Instance Attributes
  2. Term Commitment
  3. Payment options

Depending on the options selected, you can get a 30-70% discount on On-Demand instance pricing. While this can result in noticeable savings, since you are making reservations upfront, you need to evaluate several factors before buying RIs, and then watch how the RIs perform to make sure the investment has an ideal return.

Anecdotally, many companies have trouble with effective RI utilization in their first few years on AWS. Fortunately, a lot of tools and guidance are available, and AWS itself keeps making RIs more flexible to help the investment succeed.

When does using Reserved Instances make sense?

Although there are cost advantages to using RIs, we need to dig a little deeper to understand when using RIs in the overall deployment strategy is a good fit.

Is your application (or parts of it) always on?

An RI is a 24×7 capacity commitment for either 1 or 3 years, irrespective of the utilization level. So it is better suited for resources that run 24×7, such as the control nodes of Hadoop clusters or the primary instances hosting a self-managed database in the cloud. Remember that only running On-Demand instances are charged, so if your application needs to run only, say, 10 hours a day, purchasing an RI for it would mean a longer break-even or may end up costing more.

Do you know your steady-state capacity?

Architecting cloud-native applications to scale up and down based on demand improves the cost efficiency of the applications and the overall ROI. For every application, there typically exists a steady-state, minimum capacity. RIs are most suitable for steady-state pools running for the complete length of the term commitment.

For example, let us say you have a high-availability Hadoop cluster consisting of 1 name node, 1 secondary name node, and 10 worker nodes, with the data residing in S3 and running 24×7. Occasionally, if there is a spike in demand, the cluster grows to 18 nodes and scales back down once the peak demand subsides. In this case, the candidates for RIs are the name node, the secondary name node, and the 10 worker nodes.

With the recent changes, RI discounts apply across the entire instance family for Linux-based instances, without any changes, if no AZ is specified. For example, if you have purchased an m4.4xlarge RI for one year and you are running 1 m4.4xlarge, 4 m4.2xlarge, and 4 m4.xlarge, then the credits of the m4.4xlarge can be used for 2 m4.2xlarge or 4 m4.xlarge when the m4.4xlarge is not in use. While this was possible earlier, you had to change it explicitly.
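The size-flexibility arithmetic uses AWS's published normalization factors (small = 1, medium = 2, large = 4, doubling from there); a quick sketch:

```python
# Normalization factors AWS publishes for size-flexible RIs.
UNITS = {"large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32}

ri_units = UNITS["4xlarge"]     # one m4.4xlarge RI is worth 32 units

# While the m4.4xlarge is not running, the same 32 units cover either pool:
print(ri_units // UNITS["2xlarge"])   # 2 x m4.2xlarge
print(ri_units // UNITS["xlarge"])    # 4 x m4.xlarge
```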

Do you have visibility into the length of the term commitment?

RIs are a commitment for the entire duration of the term. If you are certain your application will run for either 1 or 3 years, then you should consider using RIs.

Have you finished right-sizing the applications?

While right-sizing applications is a continual activity in the cloud world, you should do at least one iteration of right-sizing before making capacity reservations for a particular instance type.

Break Even Point for 1 yr RI EC2 instance.

Given the pace of change, if you have reasonable visibility for one year and understand the steady-state requirements, a 1-yr convertible instance can offer a discount of up to 30% with a break-even of around 6-8 months, and represents a good balance of risk and reward.

For a 3-yr term commitment, you need to evaluate a bit more and understand the limitations of RIs to make sure your application is not going to run into them.

Additional considerations for a 3-year RI commitment

New Introductions:

Technology changes rapidly in the cloud, and there are constant additions to the EC2 instance families with better price/performance; case in point, the C5 family gives about 25% better price/performance than C4 instance types. Locking into a specific instance type for 3 years can saddle you with assets that may not give the expected returns when there are improvements in the tech stack (e.g. the introduction of EBS-Optimized instances, Enhanced Networking, and the Nitro KVM hypervisor) or new pricing models (the massive drop in dedicated instance/dedicated host pricing a few years back, or changes to the RI models themselves).

Price Cuts:

As the scale and adoption of AWS increase, there are price cuts for existing instances, or new instance types arrive with better price/performance. Making a long-term commitment means losing out on these price cuts.

Changing Application Requirements:

As application usage changes, applications need to scale up or down to a different instance type. This holds especially true over a 2-3 year period. You need to consider whether the 30-40% upfront discount for 3 years is worth it.

For example, a lower-performing instance type doesn't have much impact if it is part of a large instance pool for Hadoop, ECS, Cloud Foundry, or auto-scaling groups. But applications that are still in development, or that cannot change instance types easily (e.g. DB instances), may not lend themselves well to a 3-year period.

Business Commitment:

Business requirements and the intended use of applications change rapidly. If there are decisions to sunset a product in 1-2 years, it is not financially viable to make an upfront capacity commitment.

Governance Policy:

RIs are best suited when there is a governance policy in place that limits the instance types or families in use. This helps in the better use of RIs, especially when business groups each run tens of instances.

Automated workload management:

Currently, AWS applies RI credits sequentially.

For example, for an m4.4xlarge RI, if two m4.4xlarge instances run simultaneously for 30 minutes and are then shut down for the next 30 minutes, the RI discount is applied to one m4.4xlarge for the first 30 minutes only and to nothing in the next 30 minutes; that is wasted capacity. For some types of workloads, e.g. batch processing, it is possible to schedule jobs so that there is a fair distribution of steady-state load on the instances and better utilization of the RI instance hours.

Reserved Instances are not suitable for all types of applications, the most obvious limitation being the term commitment. If you have already decided to use EC2 instances and understand the steady-state load of your application(s), RIs make sense.

If your application satisfies these requirements, then a 3-year commitment RI can break even around 18-21 months.

Payment Options

Price for m4.4xlarge in the US East (Ohio) region

As you can see, there is about a 30% discount for the most flexible RI option, no-upfront convertible instances, over On-Demand instances for a one-year period, assuming 24×7 usage for the whole year. At this rate, you would need to use it for about 9 months to break even, leaving 3 months of savings. In other words, you get one m4.4xlarge essentially free for every 4 m4.4xlarge you run for a year! Not bad, considering the flexibility they offer. If you are willing to make an upfront payment, an extra discount of up to 6% is possible. While standard RIs offer about 10% more, they are suitable for mature applications with well-defined load patterns, or when capacity pooling is available for your application.
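To make the break-even arithmetic concrete, here is a sketch with an illustrative (not actual) On-Demand rate; only the ~30% discount figure comes from the pricing above:

```python
# Illustrative numbers: the hourly rate is hypothetical; the ~30% discount
# is the figure for a 1-yr no-upfront convertible RI.
on_demand_hourly = 0.80
ri_hourly = on_demand_hourly * 0.70      # ~30% RI discount
hours_per_month = 730                    # AWS billing convention

# Total cost of the full 1-year commitment:
ri_year_cost = ri_hourly * hours_per_month * 12

# Months of On-Demand usage that would cost the same as the 1-yr RI:
break_even_months = ri_year_cost / (on_demand_hourly * hours_per_month)
print(round(break_even_months, 1))       # about 8.4, i.e. roughly 9 months
```

Note the break-even depends only on the discount (12 × 0.70 = 8.4 months), not on the hourly rate itself.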

What you wish you knew before it is too late:

Contrary to what the name suggests, Reserved Instances do not, by themselves, guarantee capacity: only an RI scoped to a specific Availability Zone carries a capacity reservation, and otherwise RIs only put you first in line ahead of On-Demand and Spot Instance requests. While not seen in practice, it is, at least in theory, possible not to get capacity when needed even with reservations in place. It is important to take this into consideration during the FMEA analysis of your applications.

An AWS RI Marketplace exists where you can sell the rest of the term if you no longer need the RI, so it is possible to recoup part of the cost. Anecdotally, however, note that the RI secondary market is not very liquid, and sellers often have to offer a significant markdown, more so for older instance types. There are no publicly available statistics on RI Marketplace transactions.

You need a US bank account to list your RIs in the Marketplace. As of May 2018, other locations are not yet supported. For sellers outside of the US, this can be a limiting factor in selling RIs.

Your RI will keep incurring charges as long as the sale does not happen; merely listing it in the RI Marketplace does not count.

RI discounts apply to a particular account or across linked accounts. While this is a good thing in terms of utilization, for enterprises it may come as a surprise that one group purchases the RI but the discount is applied to instances in a different group or linked account. It is possible to disable RI sharing in the billing preferences; however, you need some amount of housekeeping in segregating the AWS accounts to achieve this.

Unused RI time is a wasted expense and the single biggest contributor to ineffective RI utilization. If there isn't a strong governance/automation mechanism in place, remediation can be a lengthy process.

What can you change once you have purchased an RI?

As per AWS documentation,

Requirements and Restrictions for Modification

Not all attributes of a Reserved Instance can be modified, and restrictions may apply:

  1. Change Availability Zones within the same region (all Windows and Linux): no additional limitations.
  2. Change the scope from Availability Zone to Region and vice versa (all Windows and Linux): if you change the scope from Availability Zone to Region, you lose the capacity reservation benefit. If you change the scope from Region to Availability Zone, you lose Availability Zone flexibility and instance size flexibility (if applicable). For more information, see How Reserved Instances Are Applied.
  3. Change the network platform between EC2-VPC and EC2-Classic (all Windows and Linux): only applicable if your account supports EC2-Classic.
  4. Change the instance size within the same instance type: supported on Linux, except for RedHat and SUSE Linux due to licensing differences (for RedHat and SUSE pricing, see Amazon EC2 Reserved Instance Pricing). Not supported on Windows. Some instance types are not supported because there are no other sizes available; see Modifying the Instance Size of Your Reservations.
  5. Change Tenancy (all Windows and Linux): no additional limitations.

From <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html>

What can't you change in an RI?

The region and the instance family cannot be changed, along with the OS-specific limitations noted above.

RI utilization and RI recommendations in AWS:

AWS Cost Explorer has dashboards to better understand RI utilization. The sample RI utilization graph below shows around 75% utilization, i.e., 25% of the RI capacity is wasted.

The utilization of Reserved Instances

Next, reduce wasted capacity by looking at the current coverage of the RIs; in this sample, there is headroom to increase RI coverage.

Coverage of Reserved Instances 

Cost Explorer also provides recommendations on RI purchases based on past EC2 usage. Starting March 2018, these are available programmatically as well, at a rate of $0.01 per request to the Cost Explorer API.
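As a sketch, the headline utilization figure can be pulled programmatically like this (remember that each Cost Explorer request is billed):

```python
def ri_utilization(start, end):
    """Fetch monthly RI utilization from the Cost Explorer API.

    Dates are 'YYYY-MM-DD' strings. Each GetReservationUtilization
    request is billed ($0.01 at the time of writing).
    """
    import boto3  # the Cost Explorer endpoint lives in us-east-1

    ce = boto3.client("ce", region_name="us-east-1")
    resp = ce.get_reservation_utilization(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
    )
    # Total.UtilizationPercentage is the figure shown on the dashboard.
    return resp["Total"]["UtilizationPercentage"]
```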

Some third-party tools, like insisive cloud, offer this service as well, with varying sophistication. Whichever method you use, you can assume that as your AWS adoption increases, you will need some automated mechanism to get effective RI utilization.

Reserved Instances are an important tool in your arsenal for tackling AWS costs. Using RIs as part of a mix with the other available pricing models, based on application usage considerations, yields the best value for the investment.

Use a mixed model of RI, OnDemand, and Spot for optimal efficiency and pricing.

Continuously measuring, analyzing, and taking corrective actions will help you realize your investment in RIs. Consider using services like insisive cloud, which use data-driven recommendations to help you make the best use of your current RI investments effortlessly.

About insisive cloud:

insisive cloud helps enterprises reduce their AWS cloud spend by up to 60% through automation, data-driven insights, continuous cost monitoring, and optimization using industry best practices.

Sign up for a free trial at www.insisiv.com/signup


EC2 Tenancy Model and Cost considerations

Tenancy indicates how the physical machine hosts EC2 instances. The default is the shared tenancy model.


This means that any number of virtual machines (EC2 instances) can share the resources of a physical host; the true number of virtual machines on a physical host is not published. From a cost perspective, this is the most cost-effective model (when there are no per-socket or per-core licensing requirements), as the resources are better utilized and the cost per vCPU/GB is much smaller.

A note about Noisy Neighbours and VM escape Bugs:

In the shared tenancy model, since the co-resident EC2 instances on a particular host are not configurable, it is possible to share the host with EC2 instances that are very busy, a.k.a. 'noisy neighbours'. This may result in non-uniform performance from an EC2 instance. Unfortunately, there are no metrics to detect the noisy neighbour problem easily. A few anecdotal techniques that seem to ease the problem are:

  1. Stopping and starting instances: this operation may move the instance to a different physical host.
  2. Using larger instances, which may mean that the physical host doesn't have enough capacity left to host other instances. However, if you can't utilize the larger instance, you will end up paying a higher cost.
  3. With the recent advances in hardware and virtualization models, using EBS-Optimized instances and Enhanced Networking (with SR-IOV) can help guarantee a certain amount of disk and network bandwidth.
  4. Building elasticity into the application. While it won't end a noisy neighbour problem, the application will be insulated from its effects.

VM escape Bugs:

A few vulnerabilities were found up until 2016, especially in the PV virtualization model; there haven't been many publicized security vulnerabilities in the HVM and KVM virtualization models in the past year. If there are strict performance SLAs (especially over an extended time horizon) for your application and the simple remedial measures are not feasible, or if there are per-core licensing requirements, then look at the other tenancy models.

Dedicated Instances:

Dedicated instances offer some benefits over shared instances in terms of virtual machine neighbours. Dedicated instances run on physical hosts dedicated to a single account; however, other EC2 instances from the same account may share the same physical host, and there is no control over which ones do. These instances are useful if there are compliance requirements about sharing VMs with third parties: since the other instances on the physical host are also from the same account, the issue of (potential) VM escape bugs from untrusted third parties does not arise. The noisy neighbour issue, however, is not addressed by dedicated instances.
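For completeness, tenancy is selected per instance at launch time; a sketch using boto3 (the AMI ID is hypothetical):

```python
def launch_dedicated(ami_id, instance_type="m4.large"):
    """Launch an EC2 instance with dedicated tenancy.

    ami_id is hypothetical; Tenancy may be 'default' (shared),
    'dedicated', or 'host' (a Dedicated Host).
    """
    import boto3

    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        # Single-account hardware; omit for the default shared tenancy.
        Placement={"Tenancy": "dedicated"},
    )
```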

The chart below shows how many instances of a particular instance type can be hosted on a physical host.

Dedicated Host Attributes (instances per host, by size):

c3 (2 sockets, 20 physical cores): large ×16, xlarge ×8, 2xlarge ×4, 4xlarge ×2, 8xlarge ×1
c4 (2 sockets, 20 physical cores): large ×16, xlarge ×8, 2xlarge ×4, 4xlarge ×2, 8xlarge ×1
c5 (2 sockets, 36 physical cores): large ×36, xlarge ×18, 2xlarge ×8, 4xlarge ×4, 9xlarge ×2, 18xlarge ×1
p2 (2 sockets, 36 physical cores): xlarge ×16, 8xlarge ×2, 16xlarge ×1
g2 (2 sockets, 16 physical cores): 2xlarge ×4, 8xlarge ×1
g3 (2 sockets, 36 physical cores): 4xlarge ×4, 8xlarge ×2, 16xlarge ×1
m3 (2 sockets, 20 physical cores): medium ×32, large ×16, xlarge ×8, 2xlarge ×4
m4 (2 sockets, 24 physical cores): large ×22, xlarge ×11, 2xlarge ×5, 4xlarge ×2, 10xlarge ×1
m5 (2 sockets, 48 physical cores): large ×48, xlarge ×24, 2xlarge ×12, 4xlarge ×6, 12xlarge ×2, 24xlarge ×1
d2 (2 sockets, 24 physical cores): xlarge ×8, 2xlarge ×4, 4xlarge ×2, 8xlarge ×1
r3 (2 sockets, 20 physical cores): large ×16, xlarge ×8, 2xlarge ×4, 4xlarge ×2, 8xlarge ×1
r4 (2 sockets, 36 physical cores): large ×32, xlarge ×16, 2xlarge ×8, 4xlarge ×4, 8xlarge ×2, 16xlarge ×1
h1 (2 sockets, 36 physical cores): 2xlarge ×8, 4xlarge ×4, 8xlarge ×2, 16xlarge ×1
i2 (2 sockets, 20 physical cores): xlarge ×8, 2xlarge ×4, 4xlarge ×2, 8xlarge ×1
i3 (2 sockets, 36 physical cores): large ×32, xlarge ×16, 2xlarge ×8, 4xlarge ×4, 8xlarge ×2, 16xlarge ×1
x1 (4 sockets, 72 physical cores): 16xlarge ×2, 32xlarge ×1
x1e (4 sockets, 72 physical cores): xlarge ×32, 2xlarge ×16, 4xlarge ×8, 8xlarge ×4, 16xlarge ×2, 32xlarge ×1

Possible EC2 instances per physical host

Notice that fewer instances are possible for larger instance types. So potentially, if you move to larger instance types, the issues with untrusted VMs go away, as only one instance can be hosted on the physical host (e.g. m4.10xlarge, r4.16xlarge).

Consider the m4.10xlarge. The On-Demand pricing is:

m4.10xlarge: 40 vCPU, 124.5 GiB memory, EBS-only storage, $2 per hour

The dedicated instance hourly price for the same instance type is also $2 per hour, plus the $2 per hour per-region dedicated fee charged while at least one dedicated instance is running in the region (roughly US$17,500 per year at 24×7 usage).

However, looking at the chart above, only one m4.10xlarge can be launched on a physical host, which is also reflected in the identical hourly pricing of shared tenancy On-Demand and dedicated instances. So if you keep a shared instance instead of a dedicated instance, you avoid the entire dedicated surcharge without any side effects currently*.

However, consider the same for the m4.large. The On-Demand price is $0.10 per hour and the dedicated instance price is $0.11 per hour:

m4.large: 2 vCPU, 6.5 GiB memory, EBS-only storage, $0.11 per hour (dedicated) vs $0.10 per hour (On-Demand)

That difference of $0.01 per hour works out to roughly US$88 per instance per year at 24×7 usage, in addition to the per-region fee.

But notice that you can launch up to 22 m4.large instances on a host. Here there is much more to gain by choosing dedicated instances if there are licensing, compliance, or performance requirements.

A Note about Compliance requirements:

For a large enterprise with stricter regulatory requirements, where the price of non-compliance can be very high, dedicated instances offer a viable route to cloud adoption with minimal disruption to hosting models compared to on-premise, while still getting the benefits of elasticity and pay-as-you-go pricing.

As of May 2017, it is no longer required to use dedicated hosts/instances for HIPAA compliance.

This could mean non-trivial savings by using the shared tenancy model with Spot Instances instead of dedicated instances/hosts while remaining HIPAA compliant for a variety of workloads. However, you need to protect the compliant resources via VPCs and other mechanisms.


Dedicated Hosts:

Dedicated hosts can offer significant advantages for applications with licensing requirements tied to sockets or cores. They are also quite useful if you want fine-grained control over which instances are placed on which physical hosts, to keep affinity to a certain host, etc. This can give potential benefits in terms of consistent performance, as instance placement can be controlled and the shared resources pose less of a risk.

Dedicated hosts are the most expensive in terms of per hour charges compared to the other two types of tenancy.

*Please note that the number of instances per physical host is subject to change without prior notice; it is not guaranteed to be constant. Applications should not make any assumptions about how many EC2 instances are hosted on a physical host.

Using a cost-management solution like insisive cloud that enforces policy-based rules, you can check whether you follow the best practices in using the right tenancy for your application.

Sign up for a risk-free 14-day trial at www.insisiv.com.