Use eXtensible Markup Language – XML

This post will examine when to use eXtensible Markup Language (XML).

Most programmers would probably prefer JSON, the other common wire format, but XML does have advantages in certain circumstances.

XML is good for representing documents. For example, the newer Microsoft Word and PowerPoint file formats end in “x” (.docx, .pptx), where the “x” stands for XML.

XML
XML stands for eXtensible Markup Language.

XML is a textual representation of a tree structure with nodes. There are both simple and complex elements. Complex elements have tags within tags. Look at the picture below for an example.

XML Elements
This picture represents the difference between simple and complex elements.

Further, look at another picture for an illustration of more XML basics.

XML Basics
This picture color codes the basics of XML.

Indentation is used just for readability. In other words, white space is generally discarded.

In XML, unlike HTML, you make up the tag and attribute names so they describe the data you are working with.

XML Terminology

Indentation is often used to capture the nesting of elements.

For example:

  • In the picture below, the <a> tag has two child tags, <b> and <c>.
  • These tags are one level down from the root <a> tag. 
  • You could say <a> is the parent of <b> and <c>.
  • Also, <c> is the parent of <d> and <e>.
  • Text nodes and attribute nodes are considered children of their enclosing element.

XML as a tree

As a Python programmer, you could write code that traverses down the tags and pulls out information.
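For instance, here is a minimal sketch using Python’s built-in xml.etree.ElementTree module. The document in the string is made up to match the <a>/<b>/<c> tree described above; <b>, <d>, and <e> are simple elements, while <a> and <c> are complex elements.

import xml.etree.ElementTree as ET

data = '''<a>
  <b>text in b</b>
  <c>
    <d>text in d</d>
    <e ref="42">text in e</e>
  </c>
</a>'''

tree = ET.fromstring(data)          # parse the string into a tree of nodes
print tree.find('b').text           # the text node of <b>
print tree.find('c/d').text         # traverse down: <c>, then its child <d>
print tree.find('c/e').get('ref')   # the 'ref' attribute node of <e>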

Web Services for Data on the Web

This post will discuss common web services.

Rather than retrieving and parsing HTML documents, web services are URLs designed specifically to hand data back to your application.

Web Services

XML and JSON are the two formats commonly used by web services to send data back and forth across the internet.

The problem is finding a way to send data that different programming languages can agree on. A Python dictionary, for example, is internally different from a Java HashMap, even though these data structures serve the same purpose. A “wire protocol” is an agreed-upon format for sending a data structure from one language, such as Python, to another, such as Java.

Wire Protocol
You send data across the net using a wire protocol.

The need for this wire protocol spawned two new terms.

Serialize is the act of taking an internal data structure and creating a wire format from it.

De-Serialize is the act of taking the wire format and creating an internal data structure in a different language.

The wire protocol allows us to create sets of applications that work in different languages. Below is an example of the XML wire format.

XML Wire Format
This is an example of the XML wire format.

The next picture below is an example of the JSON wire format.

JSON Wire Format
This is an example of the JSON wire format.

XML and JSON are the two most common wire formats used for applications to exchange data.
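As a sketch of both acts, here is what serializing and de-serializing look like with Python’s built-in json library (the dictionary contents are made-up example data):

import json

# Serialize: turn an internal Python data structure into wire format (JSON text)
person = {'name': 'Chuck', 'phone': '303-4456'}
wire = json.dumps(person)
print wire                  # a JSON string, ready to send across the net

# De-serialize: parse the wire format back into an internal data structure
# (the receiving end could just as easily be Java or JavaScript)
info = json.loads(wire)
print info['name']          # Chuck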

BeautifulSoup Example as a Python Scraper

This post will give a BeautifulSoup example to demonstrate its usefulness as a Python scraper.

A problem you will encounter with HTML is that while the code might be technically correct, it could be edited in a very ugly fashion.

Even if you understand HTML, it can be hard to read if the code is ugly.

For example, there could be uneven indentations, inconsistent line spacing, or a host of other bad elements. A BeautifulSoup example will show how it can easily be used as a Python HTML parser.

BeautifulSoup
Use BeautifulSoup as a Python scraper for HTML pages.

After you download BeautifulSoup, place the BeautifulSoup.py file in the same folder as your Python programs. You can download it from the BeautifulSoup website.

The demonstrations in this post will show you how to use a BeautifulSoup example with Python 2, rather than Python 3. The concepts are very similar for both versions of Python, but installation is a bit different.

BeautifulSoup Example for Retrieving Web Pages

Thanks to BeautifulSoup, it is very easy to retrieve web pages, and print all the “href” attributes of the anchor tags. These are essentially the links that go to other web pages. The whole program to do this is shown in the picture below.

BeautifulSoup example as a Python scraper
This program takes user input of an HTML page, and prints all the anchor tags from that page.
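A minimal sketch of such a program, assuming Python 2’s urllib and the BeautifulSoup.py module file sitting next to your script, looks like this:

import urllib
from BeautifulSoup import *        # imports all routines from BeautifulSoup.py

url = raw_input('Enter - ')        # ask the user which page to fetch
html = urllib.urlopen(url).read()  # the entire HTML page as one string
soup = BeautifulSoup(html)         # an object of parsed HTML data

# soup('a') retrieves every anchor tag in the document
for tag in soup('a'):
    print tag.get('href', None)    # the link each anchor points to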

The second line in the code pictured above is crucial because it imports all routines in the BeautifulSoup.py file.

The variable “html” (which could be called anything, but calling it html makes sense) is used to hold a string consisting of the entire HTML page.

The variable “soup” becomes an object of parsed HTML data. You can then ask to retrieve certain things from this variable.

How to Print All Anchor Tags in an HTML Document

An anchor tag in HTML looks like <a> </a>, so by passing ‘a’ into the soup object you get back every anchor tag, and each tag’s “href” attribute holds the web address of the page that the anchor links to.

This BeautifulSoup example demonstrates its power as a Python scraper, using the “urllib” and “BeautifulSoup” libraries to parse HTML.


Making Sense of HTML Documents – Using Python

This post will focus on making sense of HTML documents that you retrieve from a web server – using Python.

Look at the example pictured below. It displays useful code to retrieve a web page, and print out the content.

You can see HTML tags in the document. These tags are rendered on a web page to give it structure. Learning HTML is a whole other topic.

However, what you will focus on here is parsing through the content using Python, and looking for certain elements within the content.

Retrieve HTML
The purple text is an HTML link to another web page.

In the above picture, look at the string highlighted in purple. This string represents a link to another web page.

You can create a loop that parses out these kinds of strings, puts each one in a “fhand” variable, and opens the page. A loop like this could continue until it opens and prints all the content on the internet.

Realistically, your computer would run out of memory long before the loop finished parsing through all the web links on the internet, but this concept outlines the beginning of a web crawler. A web crawler employs what is referred to as web scraping.
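Here is a minimal sketch of that crawler loop, using Python 2’s urllib and a naive regular expression to pull out links; the starting URL is a placeholder, and the page limit just keeps the sketch from running forever:

import urllib
import re

todo = ['http://www.dr-chuck.com/page1.htm']   # placeholder starting page
visited = []

while todo and len(visited) < 10:   # small limit so the sketch terminates
    url = todo.pop()
    if url in visited:
        continue
    visited.append(url)
    fhand = urllib.urlopen(url)     # open the page like a file handle
    text = fhand.read()
    print text
    # naive link extraction; a real crawler would use an HTML parser
    todo.extend(re.findall('href="(http[^"]*)"', text))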

Web Scraping

The Power of Web Scraping

Web scraping gives you great power. You are literally able to make a copy of the web, or part of it, given enough memory.

Some web servers employ shields, such as captchas, to ward off programs that scrape their sites. Determined scrapers can sometimes work around these shields. On the other hand, some servers do not care if you scrape their pages at all.

Why Scrape HTML Documents

Why Web Scrape HTML Documents

You can see that there are many reasons why you may want to scrape the web. You could write Python code that checks for new apartments on Craigslist, for example. You could write Python code to pull social data.

Web scraping provides a way to pull data when there is no application programming interface (API).

Some websites have rules regarding web scraping. Facebook, for example, does not allow it. Facebook does not display public data. You have to be logged in to see anything. So if you did try to scrape their site, your code would have to log you in first, and then Facebook could easily know it’s you scraping.

What next? See how this BeautifulSoup example makes it easy to scrape HTML.

Use Python for Web Scraping

This post will demonstrate how you write Python for web scraping.

Learning the HTTP protocol in depth is fairly complex, but it is simple to use from Python. The picture below demonstrates how to make an HTTP request in Python.

HTTP Request
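A sketch of the kind of program pictured, using Python 2’s socket library (the host and the file being requested are placeholders):

import socket
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('www.py4inf.com', 80))   # reach across the net to the server

mysock.send('GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\n\n')

while True:
    data = mysock.recv(512)   # receive up to 512 characters at a time
    if len(data) < 1:         # fewer than one character: the server is done
        break
    print data
mysock.close()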

The line starting with “mysock.connect” is what pushes the socket out across the internet, and connects it to an endpoint.

It is crucial that there is a server on the other end to connect to, or else your code will fail right there at the third line. A key difference between a socket and an ordinary file handle is that with a socket you can both send and receive data.

Because you are using the HTTP protocol, and you established the socket connection, it is your responsibility to make the first communication.

The line starting with “mysock.send” makes that first communication by sending a GET request. Once you make the GET request, you can scrape the data you want.

The while loop receives data 512 characters at a time. When fewer than 512 characters remain, you still receive them; when fewer than one character comes back, the loop ends. Running this program should return the following data:

web scraping

Make the HTTP Request Easier

You might agree that the previous example showed you that it is fairly simple to make an HTTP request with Python. Well, there is a library called “urllib” that makes it even easier.

urllib in Python

The urllib library works like an extra application layer that makes a URL seem like it is just a file.

You can see that using urllib is similar to using a handle to open and read a file.
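A minimal sketch, again with a placeholder URL:

import urllib

# urllib makes the URL act like a file handle you can open and read
fhand = urllib.urlopen('http://www.py4inf.com/code/romeo.txt')
for line in fhand:
    print line.strip()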


Make Your Python Socket Talk to the Internet

This post will show you how to make your Python socket talk to an application on another web server.

Once you establish a connection with your socket, you can use Python to browse web data. The most common protocol is HTTP (HyperText Transfer Protocol). HTTP is a set of rules that allows browsers to retrieve web documents from servers over the internet.

python socket
Use this code to establish a socket in Python.

Examining the URL

Look at the URL in your location field or address bar of your web browser. It can be broken down into three parts.

For example, consider the URL http://dr-chuck.com/page1.htm.

  1. The first part is the “http”. This tells you what protocol is being used.
  2. The second part, “dr-chuck.com”, refers to the host you want to talk to.
  3. The last part, “page1.htm”, refers to the file you want to retrieve.

Every time you click on a link to get a new page on the internet, your browser initiates a request / response cycle to GET the new page. This, in a nutshell, is the act of surfing the web.

Web Surfing
The act of surfing…the web.

Use Python to Access Web Data


This post will discuss how to use Python to access web data.

  • Become familiar with the request and response cycle that your browser does to communicate with servers.
  • Become familiar with protocols that are happening when your browser is working to access data.
  • Know how to write Python programs that can access web data.

A Brief Discussion Regarding The Internet and Networking

The picture below describes the Transmission Control Protocol (TCP). It illustrates the basic method of how information goes back and forth between your computer and destination web servers.

TCP Protocol
The TCP layer of the network architecture serves to handle peer-to-peer connections between your computer and a web server.

Focus mainly on the transport layer of this architecture. This is the peer-to-peer connection between your computer and a web server. Think of it as a telephone call over the internet.

How The TCP Layer Relates to Python

When you talk to someone on your cell phone, you do not worry about how the connection is made. You simply become aware of the connection and start talking.

Use this cell phone analogy as a metaphor when making a socket inside your computer. A socket will allow Python to access web data.

Sockets

When you talk to other applications on the internet, you have to know the specific port number of the application you wish to access. TCP port numbers allow multiple applications to exist on the same server.

You can think of port numbers as extensions within a phone number. There is an IP address, and within that are numbers for various applications that may exist on the same server.

Below is a picture of common TCP port numbers. The one you will use most often with Python is port 80.

TCP Port Numbers
The most common TCP Port Number you will use with Python is 80.

The Python Socket Library

Python has a socket library that already contains all the code you need to access web data.

There are three lines of code to use when making a socket. These three lines accomplish the following:

  1. Import the library
  2. Establish a socket.
  3. Define the end server.

Sockets in Python
Use these three lines of code when you need to make a socket.
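A sketch of those three lines, with a placeholder host name:

import socket                                               # 1. import the library
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # 2. establish a socket
mysock.connect(('www.py4inf.com', 80))                      # 3. define the end server (host, port 80)
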
For more information, get the book Introduction to Networking. You can also take an Internet History course.

Practice Regular Expressions with Python Programs

A good way to practice regular expressions is to take some of the Python programs you used before and add Python regular expressions to give them sophistication.

Consider the example line below from the mbox-short.txt file.

From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008


Look at and analyze the following Python regular expression, which will extract the email address.

re.findall('\S+@\S+', x)


Assume ‘x’ has been assigned your example line. The expression will match the ‘@’ character, then push to the left and to the right until it encounters whitespace. The ‘\S’ matches any non-whitespace character, and the ‘+’ requires one or more of them.

You can practice regular expressions by fine-tuning this further. The following will only extract email addresses from lines that start with ‘From ‘:

re.findall('^From (\S+@\S+)', x)


Only the part inside the parentheses, (\S+@\S+), is returned in the list.

What if you only want to extract the domain from the example line?

Pictured below is the fundamental Python way of coding this program.

Extract Domain
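A sketch of what such a fundamental approach might look like, using find() and string slicing on the example line:

data = 'From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008'
atpos = data.find('@')           # position of the '@' sign
sppos = data.find(' ', atpos)    # first space after the '@'
host = data[atpos + 1 : sppos]   # slice out everything in between
print host                       # uct.ac.za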

You could also code this in a fundamental way using a double split pattern.

Double Split Pattern
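A sketch of the double split pattern on the same example line:

line = 'From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008'
words = line.split()        # first split: break the line into words
email = words[1]            # the email address is the second word
pieces = email.split('@')   # second split: break the address at the '@'
print pieces[1]             # uct.ac.za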

Coding this same program with a Python regular expression would result in the following:

re.findall('@([^ ]*)', x)


Always refer to the Python regular expression guide for help with the meaning of the special characters. If you do not want a special character to act with its special meaning, prepend it with a backslash. For example, ‘\$’ matches a real dollar sign, rather than the end of a line.

Python Regular Expressions

This post should serve as a basic guide for Python regular expressions.

It is recommended to learn Python basics before you learn Python regular expressions.

Regular expressions, in general, are a language unto themselves. They can be used in many different programming languages. They involve cryptic, yet very succinct, ways of presenting solutions to programming problems. For some, regular expressions will come very naturally, but others will prefer the step-by-step method of writing code. You do not have to know regular expressions, but if you do, you may find them quite fun to work with.


You might want to bookmark a character guide for Python regular expressions.

You can see from the guide that regular expressions are a language of characters. Certain characters have a special meaning, similar to how Python reserved words have special meaning. Shown below is the module you need to import if you want to make use of Python regular expressions in your program.

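In code, that is simply:

import re   # must come before any calls to re.search() or re.findall()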

Consider these two example lines of text from the mbox-short.txt file.

X-DSPAM-Result:
X-Plane is behind schedule:

Now consider the following code:

import re
lines = open('mbox-short.txt')
for line in lines:
    line = line.rstrip()
    if re.search('^X.*:', line):   # pass both the pattern and the string
        print line

The “if” statement will catch lines that start with (‘^’) ‘X’, followed by any character (‘.’), zero or more times (‘*’), followed by a ‘:’. The ‘X’ and ‘:’ are not special characters, but the other characters do have special meaning. This if statement should catch the two example lines of text written above.

Suppose you do not want to catch a line if it has whitespace before the colon (like the second example line). You would modify the regular expression as follows:

for line in lines:
    line = line.rstrip()
    if re.search('^X-\S+:', line):
        print line

Now your if statement will only match lines that start with ‘X-‘, followed by non-whitespace (‘\S’), one or more times (‘+’), followed by a ‘:’.

Matching and Extracting Data with Python Regular Expressions

The method “re.search()” returns True or False, depending on whether the regular expression finds a match.

Use “re.findall()” if you want matching strings to be extracted.

Consider these four lines of code below, in the Python interpreter.

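The pictured session is not reproduced here, but an interpreter session along these lines (the sample sentence is made up) matches the description that follows:

>>> import re
>>> x = 'My 2 favorite numbers are 19 and 42'
>>> y = re.findall('[0-9]+', x)
>>> print y
['2', '19', '42']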

The ‘[0-9]+’ matches a digit (‘[0-9]’) occurring one or more times (‘+’). The variable ‘y’ therefore receives a Python list of the matches found in the parameter ‘x’. So, “re.findall()” extracts matching data and returns a list.

It is important to know that a regular expression match returns the largest possible matching string by default. For example:

>>> import re
>>> x = 'From: Using the : character'
>>> y = re.findall('^F.+:', x)
>>> print y
['From: Using the :']

Did you notice? The “re.findall()” did not stop at ‘From:’, because that is not the largest possible matching string. This concept is referred to as greedy matching, because it returns the largest possible match. If you wanted to stop at the first colon, then you would need to use non-greedy matching:

>>> y = re.findall('^F.+?:', x)

The ‘?’ after the ‘.+’ makes it non-greedy, so this regular expression returns the shortest possible match: ['From:'].