
Access Dictionary Keys As Object Attributes

You access Python dictionary keys using the square-bracket syntax my_dict['key']. For example:

>>> my_dict = {'food': 'idly'}
>>> my_dict['food']
'idly'

Sometimes, you might want to access the dictionary keys using the attribute syntax: my_dict.food. If you do, this is what happens:

>>> my_dict.food

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'dict' object has no attribute 'food'

How can you solve this? Easy.

$ pip install attrdict
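With attrdict installed, you wrap the dictionary in an AttrDict and read keys as attributes. If you'd rather avoid a third-party dependency, here's a minimal stdlib sketch of the same idea (the class name mirrors the library's, but this is a simplified illustration, not the library's actual implementation):

```python
class AttrDict(dict):
    """A dict whose keys can also be read as attributes (minimal sketch)."""

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails,
        # so real dict methods like .keys() still work.
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)


my_dict = AttrDict({'food': 'idly'})
```

Now both my_dict['food'] and my_dict.food return 'idly'.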

How Many Seconds Are There Till End Of Month?

(datetime.datetime(now.year, now.month, calendar.monthrange(now.year, now.month)[1], 23, 59, 59) - now).total_seconds()

There's a lot going on in that one-liner. Let's break it down.

The two key Python modules we need to calculate the number of seconds till the end of the month are datetime and calendar.

calendar.monthrange(year, month) returns a tuple. The tuple's second element is the number of days in the month.

We create two date objects: one for the current moment and one for the last second of the month.
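Spelled out step by step, the one-liner amounts to something like this sketch (it assumes "end of month" means the last second of the last day):

```python
import calendar
import datetime

now = datetime.datetime.now()

# monthrange() returns (weekday of first day, number of days in month);
# the second element gives us the last day of the current month.
last_day = calendar.monthrange(now.year, now.month)[1]

# The final second of the current month.
end_of_month = datetime.datetime(now.year, now.month, last_day, 23, 59, 59)

# Subtracting two datetimes yields a timedelta; total_seconds() flattens it.
seconds_left = (end_of_month - now).total_seconds()
```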


Writing A Python Script To Send Files To Amazon S3

Amazon Simple Storage Service, or Amazon S3, is a storage service with a web API. I use Amazon S3 to store backups of my blog and other sites. I wrote a simple Python script to handle file uploads to S3.

In order to use Amazon S3, first create a bucket using your Amazon AWS account. As the name suggests, a bucket is a container. You can create buckets from the AWS Management Console.

The script we're going to write takes two input parameters:

  1. Local path of the file to upload
  2. Target S3 path
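A minimal sketch of such a script, using boto3 (the original post likely used the older boto library; the s3://bucket/key URI convention and the helper names here are my assumptions, not necessarily what the original script did):

```python
import argparse


def parse_s3_uri(uri):
    """Split an s3://bucket/key URI into (bucket, key)."""
    if not uri.startswith('s3://'):
        raise ValueError('expected an s3:// URI, got %r' % uri)
    bucket, _, key = uri[len('s3://'):].partition('/')
    return bucket, key


def upload(local_path, s3_uri):
    """Upload a local file to the given S3 location."""
    import boto3  # AWS SDK for Python; pip install boto3
    bucket, key = parse_s3_uri(s3_uri)
    boto3.client('s3').upload_file(local_path, bucket, key)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Upload a file to Amazon S3')
    parser.add_argument('local_path', help='local path of the file to upload')
    parser.add_argument('s3_path', help='target, e.g. s3://my-bucket/backups/blog.tar.gz')
    args = parser.parse_args()
    upload(args.local_path, args.s3_path)
```

Credentials come from the usual AWS configuration (environment variables or ~/.aws/credentials), so the script itself never touches secrets.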

Using Cookie Jar With urllib2

A while ago, we discussed how to scrape information from websites that don't offer information in a structured format like XML or JSON. We noted that urllib and lxml are indispensable tools in web scraping. While urllib enables us to connect to websites and retrieve information, lxml helps convert HTML, broken or not, to valid XML and parse it. In this post, I will demonstrate how to retrieve information from web pages that require a login session.
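The core of the approach is to attach a cookie jar to the opener, so the session cookie the site sets at login is sent back on every later request. A minimal sketch, shown with Python 3's http.cookiejar and urllib.request (the modern equivalents of Python 2's cookielib and urllib2); the login URL and form field names are hypothetical:

```python
import http.cookiejar
import urllib.parse
import urllib.request

# The jar stores cookies the server sets (e.g. the session id),
# and the opener replays them on subsequent requests automatically.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))


def login(base_url, username, password):
    """POST credentials; the session cookie lands in the jar."""
    data = urllib.parse.urlencode({'username': username, 'password': password})
    return opener.open(base_url + '/login', data.encode('ascii'))


def fetch(url):
    """Fetch a page within the logged-in session."""
    return opener.open(url).read()
```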


Web Scraping With lxml

More and more websites are offering APIs nowadays. Previously, we've talked about XML-RPC and REST. But even though web services are growing exponentially, there are still a lot of websites out there that offer information only in an unstructured format, government websites especially. If you want to consume information from those websites, web scraping is your only choice.

What is web scraping?

Web scraping is a technique used in programs that mimic a human browsing the website. In order to scrape a website in your programs, you need tools to

  • Make HTTP requests to websites
  • Parse the HTTP response and extract content
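The two steps above can be sketched with urllib for the request and lxml for the parsing. Here the HTML is inlined so the sketch runs without a network connection; the div class and headings are made up for illustration:

```python
import lxml.html

# In a real scraper this string would come from
# urllib.request.urlopen(url).read()
html = """
<html><body>
  <div class="listing">
    <h2>First item</h2>
    <h2>Second item</h2>
</body></html>
"""

# fromstring() is tolerant of broken HTML -- note the unclosed <div> above.
tree = lxml.html.fromstring(html)
headings = [h.text_content() for h in tree.xpath('//div[@class="listing"]/h2')]
```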

Make Your Own Script Appender In Mako Templates

In a recently started Pylons project, I wanted to build an easy script-appending facility in Mako templates.

The requirement:

  • base.mako contains the layout of the web page. Many templates inherit base.mako. Here's a snippet from base.mako
        <title>Some title</title>
  • my_page.mako inherits base.mako. From within my_page.mako we want to be able to append script tags in the head section of the web page.
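One way to meet the requirement (a sketch; the def name head_tags and the file layout are my assumptions, not necessarily what the project used) is to declare an overridable def in base.mako and fill it in from the child template:

```mako
## --- base.mako ---
<html>
  <head>
    <title>Some title</title>
    ## Child templates override head_tags() to append their own scripts.
    ${self.head_tags()}
  </head>
  <body>${self.body()}</body>
</html>

## Default: no extra head tags.
<%def name="head_tags()"></%def>

## --- my_page.mako ---
<%inherit file="base.mako"/>
<%def name="head_tags()">
  <script type="text/javascript" src="/js/my_page.js"></script>
</%def>
Page content here.
```

Because ${self.head_tags()} resolves through the inheritance chain, rendering my_page.mako places its script tag inside base.mako's head section.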