ITIL V3 Foundation Study Material – Know 5 Phases

A good resource for ITIL V3 Foundation study material is crucial for a successful result on the exam. ITIL was acquired by AXELOS in 2013. It is a widely adopted framework for any business that needs to align its IT services with its business objectives.

Individuals who wish to take the ITIL Foundation exam should know that AXELOS has delegated the study courses and exam vouchers to numerous accredited organizations. This is why a search for ITIL V3 Foundation study material will yield results from multiple organizations, all vying for your business to take their course or to sell you the exam voucher.

It is also worth knowing that some organizations provide package deals where the course and exam voucher can be purchased together. In some instances, these packages may be about the same price as purchasing a single exam voucher. Shop wisely, and look for reviews on courses that interest you.

Once you find a resource for ITIL V3 Foundation study material, focus on internalizing the ITIL paradigm for the IT service life-cycle. Doing so quickly will best prepare you for the exam.

The ITIL Service Life-Cycle

You should know that ITIL breaks down the service life-cycle into five phases.

These phases are:

  1. Service Strategy
  2. Service Design
  3. Service Transition
  4. Service Operation
  5. Continual Service Improvement

Within each phase, there is a set of processes. It takes a bit of memorization, but it is worthwhile to know the phase that any given process falls under. For example, demand management and financial management are two processes that fall within the service strategy phase. Incident management, problem management, and event management fall within the service operation phase.

Intimately knowing the five phases, and the sets of processes within each phase, is probably the best tip for anyone who has to learn the ITIL V3 Foundation study material. Another tip is to understand that ITIL has roots going back to a 1980s project with the UK Government’s Central Computer and Telecommunications Agency. From there, it organically grew into a globally recognized, vendor-neutral framework. Having this frame of reference should validate the time and effort it takes to learn the study material.

Internet Protocol Layer – Beauty in Simplicity

The Internet Protocol layer is one part of the four-layer architecture of the TCP/IP model. This layer is responsible for transmitting packets of information across the network; it has no concern with the responsibilities of the other layers in the model. This narrow focus allows network engineers to deal with a small piece of a very large and complex challenge. It is sometimes referred to as the Internetwork Protocol, because it deals with getting messages from network to network.

A nice feature of IP is that it does not have to be perfect. It is designed so that data can sometimes get dropped, or sent different ways, but in the end the system corrects itself and ultimately works. This layer had to introduce, and relies heavily on, the address of the destination host. This is what we call the IP address.

The IPv4 address format is four numbers separated by dots, each between zero and 255. The address is broken into two parts: the prefix is the network number, and the second part is the computer number within that network. For example, a college campus could have one network number, so the prefix in the IP address will be the same for every computer on that network. When a packet of information comes zooming across the internet for that campus, the routers only worry about the prefix, i.e., the network number. This greatly simplifies the router's job and allows routers to work very fast. Once a message reaches the destination network, it is up to that network to forward the message to the correct computer.
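As a quick sketch of this idea, the snippet below uses Python's standard ipaddress module with a made-up campus network (172.16.0.0/16) to show how a router-style decision only needs to look at the network prefix:

```python
import ipaddress

# Hypothetical campus network: every host shares the 172.16 prefix.
campus = ipaddress.ip_network("172.16.0.0/16")

def route(packet_dest: str) -> str:
    """A router only inspects the network prefix, not the full address."""
    addr = ipaddress.ip_address(packet_dest)
    if addr in campus:
        return "forward onto the campus network"
    return "forward toward another network"

print(route("172.16.4.27"))   # an on-campus host
print(route("198.51.100.9"))  # a host on some other network
```

The router never cares which of the campus's thousands of computers the packet is for; delivering it to the right host is the destination network's job.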

DHCP for Computers that Move Around

Dynamic Host Configuration Protocol (DHCP) is the technology that allows someone to take their laptop to a school, then a coffee shop, and then home, and have everything still work. The user can still send messages back and forth regardless of location, because whenever someone opens their computer at a coffee shop, or wherever, the computer sends out a message saying “Hey, I’m here, please give me a number to use on your network.” However, you may have noticed that wherever you are, your IP address often starts with 192.168. This is actually a non-routable address that you get through a technology called Network Address Translation (NAT). You only see this non-routable address; you do not see the real, unique address assigned to you by the network.
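Python's standard ipaddress module already knows which ranges are reserved for private networks (the RFC 1918 blocks such as 192.168.0.0/16), so the split between the address you see and a routable one can be checked directly. The sample addresses below are just illustrative:

```python
import ipaddress

# 192.168.x.x and 10.x.x.x are private, non-routable ranges (RFC 1918);
# NAT translates them to a public address at the edge of the network.
for ip in ["192.168.0.101", "10.0.0.5", "8.8.8.8"]:
    addr = ipaddress.ip_address(ip)
    kind = "private (behind NAT)" if addr.is_private else "public (routable)"
    print(ip, "->", kind)
```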

Time to Live Saves Internet Protocol Layer From Infinite Loops

Because routers work imperfectly with imperfect information, they can occasionally send packets of information round and round through the same subset of routers. If this process never stopped, an infinite loop would form. The router mistakenly thinks it is routing the packet correctly; it doesn’t know that it is looping the packet. This problem gets corrected with a Time to Live (TTL) field inside each packet’s IP header. TTL starts at a number, say 30, and each router that forwards the packet subtracts one from the TTL field. If TTL goes down to zero, meaning the packet has made 30 hops, then the packet gets thrown out and a notification is sent back to the sending computer to inform it that there was a problem. The computer can then send the packet out again until it successfully hops its way across the internet. If the sending computer wants to find out exactly when and where the packet got thrown out, it can run a program called Traceroute to diagnose the problem.
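The TTL mechanism can be sketched in a few lines of Python. This is a toy simulation of the bookkeeping, not real router code:

```python
def forward(packet: dict) -> bool:
    """Simulate one router hop: decrement TTL, drop the packet at zero.

    In real IP, the dropping router also sends an ICMP Time Exceeded
    message back to the sender, which is what Traceroute relies on.
    """
    packet["ttl"] -= 1
    return packet["ttl"] > 0  # False means the packet was thrown out

# A packet stuck in a routing loop between misconfigured routers:
packet = {"dest": "198.51.100.9", "ttl": 30}
hops = 0
while forward(packet):
    hops += 1
print(f"packet dropped after {hops} successful hops")
```

Without the counter, the loop above would literally never terminate, which is exactly the failure mode TTL exists to prevent.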

The simplicity of how routers work is one reason why the TCP/IP model succeeded. Routers don’t have to worry about the order of packets, and they don’t have to store information; they just forward packets according to their best guess. They don’t have to be perfect. This allowed the internet to be scalable and to grow quickly.

Network Infrastructure Evolution

Network infrastructure evolution begins with the store-and-forward networking model. This model was how early internet adopters (1960s–1980s) would send messages back and forth between host computers. While being able to send a message across a network infrastructure was a revolutionary computing breakthrough, big deficiencies did not go unnoticed. With this model, messages were sent one at a time, through a series of hops from one computer to the next. When a message was received by an intermediary computer, it was stored there, then forwarded to the next computer once the line was open. A big problem was that a long message would clog the system and drastically slow down the delivery of other messages waiting in the queue. Another problem was that there was no built-in method for dynamically addressing outages in the network.
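That clog can be illustrated with a toy queue simulation, assuming one time unit per character transmitted over a single line:

```python
from collections import deque

# Store-and-forward: an intermediary holds each whole message and sends
# messages one at a time, so a long message delays everything behind it.
outbox = deque([("long report", 1000), ("short note", 10)])

clock = 0
delivered = {}
while outbox:
    name, size = outbox.popleft()
    clock += size              # the line is busy for the whole message
    delivered[name] = clock
print(delivered)  # the 10-character note waits behind the 1000-character report
```

The short note takes 1010 time units to arrive even though its own transmission needs only 10 of them.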

The idea of packet switching led to a shared network infrastructure.

After more than 20 years of research into ways to address the problems of store-and-forward networking, the idea of packets emerged. With packet switching, a message is broken into small packets, which get sent out on the internet to find their way. These packets also traverse a series of hops. However, because messages are broken into smaller packets, resources for transmitting data are shared far more effectively. Further, packets of the same message are not required to take the same series of hops to reach their final destination. The packets themselves carry no fixed route; the receiving host detects when all the packets of a message have arrived and reassembles them into the complete message.
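A minimal sketch of that split-and-reassemble idea, with simple sequence numbers standing in for the real header fields:

```python
import random

def packetize(message: str, size: int = 4):
    """Break a message into small, numbered packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Packets can arrive in any order; sequence numbers restore the message."""
    return "".join(data for _, data in sorted(packets))

packets = packetize("packet switching shares the network")
random.shuffle(packets)     # packets take different routes, arrive out of order
print(reassemble(packets))  # → packet switching shares the network
```

No matter how the network scrambles the delivery order, the numbering is enough to rebuild the original message at the destination.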

This notion of packet switching led to the shared network infrastructure that we use in our TCP/IP networks today. With it, the network of big computers evolved into a shared network of small routers whose main purpose is to forward packets. Moreover, any single router became less critical than a single computer was in store-and-forward networking. In that model, one computer played a critical role in the reliability of the whole network. With many more routers set up everywhere with the sole purpose of forwarding packets, it mattered far less if one router went offline; other paths remained available for the packet to be routed through.

The TCP/IP layered network model

Reliability, however, remained a big problem. The way you solve a big problem is to break it down into a set of smaller problems and then focus on solving each one. Breaking down this problem led to the layered network model. There were several variations in how many layers the problem got broken into, but the model that became most popular is the TCP/IP (Internet Protocol Suite) model.

The TCP/IP model consists of four layers. They are Application, Transport, Internet, and Link. So to solve the whole problem of internet reliability, you can focus on one layer at a time. Each layer presents a difficult problem in itself, but it is manageable.

When discussing the evolution of our shared network infrastructure, it must be noted that there is also a seven-layer OSI model. The Open Systems Interconnection model competed with the TCP/IP model as the preferred model for building out the internet. TCP/IP won the mindshare, but the OSI model remains valid.

Definition for Open Source Software is Linux

If you searched the definition for open source software, it would make sense to find a description of Linux.

Linus Torvalds is the creator of Linux – the definition for open source software.

Linus Torvalds is the software engineer who wrote Linux. It started as a personal project and grew to become the largest community-driven computing effort ever recorded. Linux is considered an open-source version of Unix. The file system is hierarchical, with the top node referred to as the root. Additionally, processes, devices, and network sockets are all represented by file-like objects, which means they can be worked with as if they were regular files.

Linux is a multitasking, multiuser operating system. Its built-in networking and service processes are known as daemons in the UNIX world. To understand the power and popularity of Linux, just consider that it powers roughly 80% of financial transactions and 90% of supercomputers.

What probably earns it the definition for open source software is that it is a collaborative effort. Technical skills and willingness are all you need to contribute. The Linux kernel is about 15 million lines of code, and a major new kernel release comes out every two to three months. This rate of development is unmatched in the industry. Thousands of developers contribute to its evolution, but Linus Torvalds has ultimate authority over new releases.

Arguably, the most important decision Torvalds ever made was to release Linux under the GPL license. This gave people the freedom to use, change, and share Linux.

The Linux Community

If you work in Linux, then at some point you will want to engage with the Linux community: you can post queries on relevant discussion forums, subscribe to discussion threads, and even join local Linux groups.

The popularity of Linux at the enterprise level helped create an ecosystem of enterprise support, with contributions coming from major tech companies. IBM is recognized as one notable contributor.

Linux users connect with each other in the following ways:

  • Linux User Groups (both local and online)
  • Internet Relay Chat (IRC) software (such as Pidgin and XChat)
  • Online communities and discussion boards
  • Newsgroups and mailing lists
  • Community events (such as LinuxCon and ApacheCon)

The most powerful resource for the Linux community is the site hosted by the Linux Foundation, which offers many discussion threads, tutorials, and tips.