
Test Driven Development

What is TDD?

In the traditional model of software development, developers unit test a software component after they finish coding it. When I started writing code for a living, my first project manager asked me to write down the unit test cases in a spreadsheet, run them against the component I had just finished developing, and mark each test case pass or fail.

Test-Driven Development is a software development process in which a developer first writes a unit test in code before writing the component code, then runs it against the component being built, called the System Under Test (SUT). The test fails at first (symbolized by RED). The developer then implements the SUT and re-runs the tests until they pass (GREEN), and finally REFACTORS the new code as needed to meet acceptable standards. TDD advocates repeating this RED, GREEN, REFACTOR cycle in short iterations to develop software components. Kent Beck is credited with developing and advocating the TDD process to produce quality software. The RED, GREEN, REFACTOR cycle is shown in the diagram below.

[Diagram: the Red, Green, Refactor cycle]

Unit Tests

TDD centers on the idea of unit testing, and automated unit testing in particular. A unit test verifies a small unit of functionality in a component. Unit tests are written and run by developers; they are distinct from the tests done by a testing team, from integration testing, and from user acceptance testing. Among these, unit tests are the cheapest to write, run, and maintain.

In Test-Driven Development a developer creates automated unit tests that define code requirements, then immediately writes the code itself. The tests contain assertions that are either true or false. Passing tests confirm correct behavior as developers evolve and refactor the code.

Test-Driven Development Life Cycle

The following sequence is based on Kent Beck's book Test-Driven Development: By Example.

1. Add a test: In test-driven development, each new feature begins with writing a test. This test must fail at first, because it is written before the feature has been implemented.

2. Run all tests and see if the new one fails: This validates that the test harness is working correctly and that the new test does not mistakenly pass without requiring any new code.

3. Write some code: The next step is to write some code that will cause the test to pass.

4. Run the automated tests and see them succeed: If all test cases now pass, the programmer can be confident that the code meets all the tested requirements.

5. Refactor code: Now the code can be cleaned up as necessary. By re-running the test cases, the developer can be confident that code refactoring is not damaging any existing functionality.

6. Repeat: Starting with another new test, the cycle is then repeated to push forward the functionality.
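
To make the cycle concrete, here is a minimal sketch of one RED/GREEN pass using NUnit; the Calculator class and its Add method are hypothetical names chosen for illustration:

    using NUnit.Framework;

    [TestFixture]
    public class CalculatorTests
    {
        // Step 1: written first, this test fails (RED) because Calculator.Add doesn't exist yet.
        [Test]
        public void Add_TwoNumbers_ReturnsSum()
        {
            var calculator = new Calculator();
            Assert.AreEqual(5, calculator.Add(2, 3));
        }
    }

    // Step 3: the simplest code that makes the test pass (GREEN); refactor afterwards if needed.
    public class Calculator
    {
        public int Add(int a, int b)
        {
            return a + b;
        }
    }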

Unit Testing Frameworks 

Unit testing infrastructure has been developed by the open source community and by software vendors to facilitate automated unit testing. A number of code-driven unit testing frameworks exist for various programming languages. From a .NET perspective the popular frameworks are:

NUnit: The most popular open source unit testing framework.
xUnit.net: Another popular open source unit testing framework.
MSTest: Ships out of the box with Visual Studio Professional and above.

The diagram below depicts the interaction between a unit testing framework and unit tests. A unit testing framework provides a test runner (an exe) that executes all the unit tests, and shared libraries that the developer's unit tests reference. These common libraries provide the classes, methods, and attributes needed to identify tests and write assert statements.

[Diagram: interaction between the unit testing framework and unit tests]

A snapshot of the NUnit test runner (exe) showing passed (GREEN) and failed (RED) test results.

[Screenshot: NUnit test runner showing passed and failed tests]

Importance of Refactoring in TDD

The common theme of Test-Driven Development is Red, Green, Refactor. Why is refactoring important in TDD? Why should one bother with refactoring? Isn't creating a failing test and then passing it sufficient?

In software development, completing a component or a feature shouldn't mean just writing code to get the required functionality working. The life of a software component spans multiple years. Developers should write code that is maintainable, readable, scalable and extensible.

These qualities are difficult to achieve when writing code for the first time. The code should be revisited and improved to increase its readability and maintainability. Imagine you have to write an email or an article that a lot of people are going to read: how often does one get it right on the first draft? It takes multiple iterations to get it right. Similarly, in software development, code has to be revisited multiple times to improve it. This is what refactoring means in software development.

Test Doubles 

When learning TDD, we should also learn the importance of test doubles. Let's say that we are building a component (SUT) using Test-Driven Development, and the component is a Prescription class that fills prescriptions. Using TDD, we start with a failing unit test for the FillRx method. Now we want to code the Prescription class, but FillRx depends on a Patient object, a Prescriber object and a Drug object. How do we pass these dependency objects to the SUT? How do we use TDD to develop the Prescription class?

This is where test doubles come into the picture. Test doubles are like movie doubles, who perform stunts in place of the real actors. A test double can be as simple as an integer passed as a substitute for a value the SUT expects, or as complex as a configurable mock object that closely resembles the data and behavior of the dependency the SUT expects. These test doubles are passed as references to the SUT in unit tests.

The different types of test doubles and what they do are listed below:

DUMMY: The simplest type of test double; it can be a plain integer or string passed in merely to satisfy a parameter, and it is never actually used by the test.

STUB: A minimal implementation of a class that implements just the methods and interfaces the SUT needs, typically returning canned answers. It doesn't hold state.

FAKE: A bit more sophisticated; it contains a somewhat more complete implementation and holds state.

SPY: A double that records information about its interactions with the SUT so that the information is available to assert on in the tests.

MOCK: Mock objects are the most sophisticated test doubles. Implementing one by hand is not trivial, but many mock object libraries exist that let us simply configure the results to return and the interactions expected with the SUT. Popular .NET examples include TypeMock and Rhino Mocks.

Depending on configuration, a mock can behave as a dummy, stub, fake or spy. It's important to understand test doubles and their usage when following the Test-Driven Development process.
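
To make this concrete, here is a minimal sketch of a hand-rolled stub for the prescription example above; the IDrugInventory interface and its IsInStock method are invented for illustration and are not part of any real library:

    using NUnit.Framework;

    // Hypothetical dependency of the SUT, expressed as an interface.
    public interface IDrugInventory
    {
        bool IsInStock(string drugName);
    }

    // STUB: a minimal implementation that returns a canned answer.
    public class DrugInventoryStub : IDrugInventory
    {
        public bool IsInStock(string drugName) { return true; }
    }

    public class Prescription
    {
        private readonly IDrugInventory inventory;

        public Prescription(IDrugInventory inventory)
        {
            this.inventory = inventory;
        }

        public bool FillRx(string drugName)
        {
            return inventory.IsInStock(drugName);
        }
    }

    [TestFixture]
    public class PrescriptionTests
    {
        [Test]
        public void FillRx_DrugInStock_Succeeds()
        {
            // The stub stands in for the real inventory dependency.
            var sut = new Prescription(new DrugInventoryStub());
            Assert.IsTrue(sut.FillRx("Aspirin"));
        }
    }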

Benefits of TDD

The diagram below shows the benefits of Test-Driven Development and also how unit tests should be written using TDD.

[Diagram: benefits of unit testing with TDD]

When TDD is used as the software development process to build a component (SUT), the green layer around the SUT in the diagram represents how the unit tests should be written. If a change introduced to the SUT breaks existing functionality or other features (depicted as the curve in the third picture), well-written unit tests catch the broken behavior (depicted in red) immediately. This way the developer catches bugs introduced by new code early, increasing the quality of the code and avoiding the much higher cost of finding the same bug at a later stage, such as during testing or in production.

Other Benefits of TDD are:

· An always accessible regression harness
· Higher quality code with fewer defects
· Simpler integration of components with other components
· Well-written unit tests can serve as living documentation: each test can be read as a requirement
· TDD produces well-crafted code, helps drive the design, and tends toward a SOLID design
· Lower cost compared to other types of testing, such as integration and system testing
· The test harness serves as a security blanket for the code when additional features are added, and it finds problems early in the development cycle

Conclusion

Test-Driven Development is a powerful software development process for building high quality software. It's relatively difficult to follow and requires discipline and a different way of thinking, but once you learn TDD it offers a number of advantages and improves the design of the component as well. The most important thing is the quality of the unit tests: the unit tests written as part of TDD should test the behavior of the SUT (which is what Behavior-Driven Development is all about) rather than simply testing the functions in its classes.


Design Principles and Patterns

In this post, I want to list the important design principles that I have learned and try to apply while designing and developing software. So what are design principles? I wasn't aware of design principles during my initial years in software development. I had heard about design patterns, though: my manager asked me to read up on different patterns on dofactory while preparing for a client interview. I looked up that website and found a number of patterns listed, separated into different categories. Though I read about these patterns, honestly I didn't understand much about them, what they do, or why we need to use them.

Later in my career, while working on a big program on which I learned a lot about software design and development, I realized that all these design patterns are different ways to enforce the fundamental design principles. I will try to list the design principles that I learned.

Cohesion and Coupling: In the 1970s the programming paradigm changed from code full of jumps ("GOTO" statements) and returns, known as spaghetti code, to subroutines, block structures and while loops: Structured Programming. Cohesion and coupling are the core principles of Structured Programming.

Cohesion, as the name suggests, indicates that a software entity, be it a class, function or module, should have closely related responsibilities; the tasks it performs should closely relate to each other. A good software design has high cohesion. High cohesion increases maintainability and decreases dependency (coupling).

Coupling measures the level of dependency between two software entities. Simply put, two components A and B are coupled if you can't change A without changing B. Good software design has low coupling between components. Low coupling increases the maintainability and reusability of software.

Separation of Concerns (SoC): Separation of Concerns is a principle that helps achieve low coupling and high cohesion. SoC was introduced by Edsger Dijkstra in 1974. It is the process of separating a computer program into distinct features (concerns) that overlap in functionality as little as possible. SoC suggests focusing on one concern at a time.

Progress towards SoC is achieved through modular programming (separating code into modules) and encapsulation (information hiding). Modules have their own interfaces to communicate with other modules and hide their internal implementation details.

SoC is a generic principle, and different programming paradigms have supported it in different ways. Procedural Programming (PP, expressed in C and other languages) supported SoC using functions and procedures. Object-oriented programming supports SoC using classes. However, SoC is not limited to code; the concept applies to many aspects of software engineering: you can apply SoC while designing a module or while defining an architecture. A number of patterns are a direct manifestation of this fundamental principle.

Object-Oriented Design (OOD) Principles: One major programming paradigm shift was from Procedural Programming (PP) to Object-Oriented Programming, in which real world entities and their interactions are represented as objects.

The Gang of Four (GoF), in their book “Design Patterns: Elements of Reusable Object-Oriented Software”, listed two fundamental OOD principles:

1. Program to an interface, not an implementation: This principle is really about dependency relationships. For example, suppose you have a class called Customer, one of its methods is UpdateAddress(), and the Customer class uses a Logger class to log the address update. Your Customer class is then dependent on the Logger class, as sketched below.
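
A minimal sketch of the tightly coupled version described above (the Logger class and its Log method are illustrative names):

    // Tightly coupled: Customer instantiates and depends on the concrete Logger class.
    public class Logger
    {
        public void Log(string message)
        {
            System.Console.WriteLine(message);
        }
    }

    public class Customer
    {
        private readonly Logger logger = new Logger();

        public void UpdateAddress(string newAddress)
        {
            // ... update the address ...
            logger.Log("Address updated to " + newAddress);
        }
    }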

 

In this implementation the Customer class is tightly coupled to the Logger class: if the Logger class breaks, the Customer class breaks too. This principle advocates defining interfaces and programming against them. Applying the principle, the above implementation is refactored to:

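A sketch of the interface-based refactoring, continuing the same illustrative names:

    // Customer now depends only on the ILogger abstraction, not on a concrete class.
    public interface ILogger
    {
        void Log(string message);
    }

    public class Logger : ILogger
    {
        public void Log(string message)
        {
            System.Console.WriteLine(message);
        }
    }

    public class Customer
    {
        private readonly ILogger logger;

        public Customer(ILogger logger)
        {
            this.logger = logger;
        }

        public void UpdateAddress(string newAddress)
        {
            // ... update the address ...
            logger.Log("Address updated to " + newAddress);
        }
    }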

2. Favor object composition over class inheritance: This principle is really about code reuse. So what's the problem with class inheritance, and why should we favor composition? Class inheritance also achieves code reuse, but with inheritance the derived class has visibility into the parent class's state, and the derived class breaks if the parent class breaks. Also, when you derive from a class, you must ensure that the derived class doesn't alter the behavior of the parent class and can be used interchangeably with the base class. With object composition, a class that needs to reuse some functionality declares and initializes the reused class as a private member, as sketched below.

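A minimal sketch of composition (the AddressValidator class is an invented example):

    // Composition: Customer reuses validation logic by holding a private
    // AddressValidator instead of inheriting from it.
    public class AddressValidator
    {
        public bool IsValid(string address)
        {
            return !string.IsNullOrEmpty(address);
        }
    }

    public class Customer
    {
        private readonly AddressValidator validator = new AddressValidator();

        public void UpdateAddress(string newAddress)
        {
            if (validator.IsValid(newAddress))
            {
                // ... update the address ...
            }
        }
    }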

SOLID Principles:
SOLID (Single responsibility, Open-closed, Liskov substitution, Interface segregation and Dependency inversion) is an acronym for basic object-oriented design principles promoted by Robert C. Martin (Uncle Bob). The SOLID principles reinforce the basic OOD principles.

Single Responsibility Principle (SRP): SRP states that every object should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility.
Responsibilities are axes of change: requirement changes map to responsibility changes. If a class has more responsibilities, it has more reasons to change, and those responsibilities become coupled to one another.
As an example, consider a module that compiles and prints a report. Such a module can change for two reasons: first, the content of the report can change; second, the format of the report can change. These two things change for very different causes, one substantive and one cosmetic. The single responsibility principle says that these two aspects of the problem are really two separate responsibilities and should therefore live in separate classes or modules. It would be bad design to couple two things that change for different reasons at different times, as in the sketch below.
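
A minimal sketch of the report example under SRP (the class and method names are illustrative):

    // Each class has exactly one reason to change.
    public class ReportCompiler
    {
        public string Compile()
        {
            return "report content"; // changes when the content requirements change
        }
    }

    public class ReportPrinter
    {
        public void Print(string report)
        {
            System.Console.WriteLine(report); // changes when the format requirements change
        }
    }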

Open/Closed Principle (OCP): OCP states that a software entity (a function or class) should be open for extension but closed for modification. Bertrand Meyer proposed this design principle. As per this principle, classes should be conceived in such a way that they never need to change: closed for modification.
So how does one handle a change when it is required? You add new code rather than touching the old code. In practical terms, OCP is achieved by having classes that can change implement a fixed interface, and having callers of those classes work against the interface (remember the first basic principle), as in the sketch below.
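
A minimal sketch of OCP using a hypothetical discount example; new behavior arrives as a new IDiscountPolicy implementation, while PriceCalculator itself stays untouched:

    // New discount behaviors are added by writing new implementations of
    // IDiscountPolicy; the PriceCalculator class never changes.
    public interface IDiscountPolicy
    {
        decimal Apply(decimal price);
    }

    public class NoDiscount : IDiscountPolicy
    {
        public decimal Apply(decimal price) { return price; }
    }

    public class SeasonalDiscount : IDiscountPolicy
    {
        public decimal Apply(decimal price) { return price * 0.9m; }
    }

    public class PriceCalculator
    {
        public decimal FinalPrice(decimal price, IDiscountPolicy policy)
        {
            return policy.Apply(price);
        }
    }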

Liskov Substitution Principle (LSP): LSP states that when a class is derived from an existing one, the derived class can be used in any place where the parent class is accepted. Languages which support polymorphism enable LSP, but extra care should be taken when designing base classes and using virtual methods.

Interface Segregation Principle (ISP): ISP states that once an interface has become too 'fat' it needs to be split into smaller, more specific interfaces so that clients of the interface only know about the methods that pertain to them. In a nutshell, no client should be forced to depend on methods it does not use: break a big interface into smaller ones so that callers are aware only of the smaller interfaces useful to them, decoupling those clients from all the methods they don't need. A sketch follows.
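
A minimal sketch of ISP using a hypothetical printer example:

    // A fat printer interface split into role-specific interfaces; a client
    // that only prints no longer depends on scanning.
    public interface IPrint
    {
        void Print(string document);
    }

    public interface IScan
    {
        string Scan();
    }

    public class MultiFunctionPrinter : IPrint, IScan
    {
        public void Print(string document) { /* ... */ }
        public string Scan() { return "scanned page"; }
    }

    public class SimplePrinter : IPrint
    {
        public void Print(string document) { /* ... */ }
    }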

Dependency Inversion Principle (DIP): DIP states that high-level modules should not depend upon low-level modules; both should depend upon abstractions. Abstractions should not depend upon details; details should depend upon abstractions. Let's take an example to understand DIP. Say we have a checkout manager that processes payment information and updates inventory.

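A sketch of the coupled version; the manager class names follow the example above, while the method names are illustrative:

    // CheckoutManager depends directly on the concrete lower-level managers.
    public class PaymentProcessingManager
    {
        public void ProcessPayment(decimal amount) { /* ... */ }
    }

    public class InventoryManager
    {
        public void UpdateInventory(string productId) { /* ... */ }
    }

    public class CheckoutManager
    {
        private readonly PaymentProcessingManager payments = new PaymentProcessingManager();
        private readonly InventoryManager inventory = new InventoryManager();

        public void Checkout(string productId, decimal amount)
        {
            payments.ProcessPayment(amount);
            inventory.UpdateInventory(productId);
        }
    }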

In the above code, CheckoutManager depends directly on PaymentProcessingManager and InventoryManager. The dependency is depicted in the diagram below:
[Diagram: CheckoutManager depends directly on PaymentProcessingManager and InventoryManager]

Now, DIP states that high-level modules should not depend upon low-level modules; both should depend upon abstractions. So PaymentProcessingManager and InventoryManager will implement interfaces, CheckoutManager will depend on those interfaces, and the lower-level managers will depend on the same interfaces. Below is code that implements the DIP principle.

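A sketch of the inverted version; the interface names are illustrative:

    // Both the high-level CheckoutManager and the low-level managers now
    // depend on abstractions.
    public interface IPaymentProcessor
    {
        void ProcessPayment(decimal amount);
    }

    public interface IInventoryManager
    {
        void UpdateInventory(string productId);
    }

    public class PaymentProcessingManager : IPaymentProcessor
    {
        public void ProcessPayment(decimal amount) { /* ... */ }
    }

    public class InventoryManager : IInventoryManager
    {
        public void UpdateInventory(string productId) { /* ... */ }
    }

    public class CheckoutManager
    {
        private readonly IPaymentProcessor payments;
        private readonly IInventoryManager inventory;

        public CheckoutManager(IPaymentProcessor payments, IInventoryManager inventory)
        {
            this.payments = payments;
            this.inventory = inventory;
        }

        public void Checkout(string productId, decimal amount)
        {
            payments.ProcessPayment(amount);
            inventory.UpdateInventory(productId);
        }
    }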

And the dependency diagram will change to:

[Diagram: CheckoutManager and the lower-level managers both depend on the interfaces]

The SOLID principles are different ways to achieve the basic goals of low coupling and high cohesion.

Don't Repeat Yourself (DRY): This is another popular principle, which simply states: do not repeat things. The same functionality shouldn't be duplicated. The principle was formulated by Andy Hunt and Dave Thomas in their book The Pragmatic Programmer. When the DRY principle is applied successfully, a modification of any single element of a system does not require changes in other, logically unrelated elements. This is also known as the single source of truth principle.

Keep It Simple, Stupid (KISS): a design principle articulated by Kelly Johnson. The KISS principle states that most systems work best if they are kept simple rather than made complex; therefore simplicity should be a key goal in design, and unnecessary complexity should be avoided.

Design Patterns: I started off with the story of how I was introduced to design patterns and how, initially, they didn't make much sense to me. Let's now see what a design pattern is: a design pattern is a known, well-established core solution to a family of concrete problems that may show up during implementation. Design patterns provide solutions to problems that come up during software implementation; they cater to specific scenarios and originate from real-world experience.

What design patterns really do is show an implementation for specific problems that follows the above design principles. Design patterns should never be applied dogmatically. I worked with a technical lead who read about some patterns and wanted to implement them whether they were really required or not; that's not the right way to use patterns. Using design patterns doesn't guarantee the success of a project.

Conclusion: What design principles and patterns really do is give guidance on how to build good, maintainable and testable software. Remember the fundamental principles of low coupling and high cohesion: all the other principles state a way to achieve them and aid in designing software. Patterns and principles are purely a software engineering concern; users don't care how many principles you followed or how many patterns you used. All users care about is whether the software helps them do their work.


Presentation Patterns: MVC, MVP, PM, MVVM

In this blog post, I will explain the different presentation patterns: why we need them and how to use them.

Why do we need these patterns?

Why do we need these patterns in the first place? One can certainly build software applications without using any of them, but by using these patterns we achieve the Separation of Concerns design principle, which improves the maintainability of the application. Another important reason these patterns became popular is that implementing them improves the testability of the application through automated unit tests. We all know how difficult it is to write unit tests for the UI tier; these patterns address some of those difficulties and provide a way to increase an application's testability.

As the name suggests, these patterns apply only to the presentation tier. Model View Controller (MVC) was the first of them, developed by Trygve Reenskaug in 1979 for Smalltalk applications. It was developed to structure complete applications, not only the presentation layer: in those days there were no UI controls, everything had to be drawn from scratch, and the program had to handle interaction with input devices such as the keyboard. The presentation layer has changed a lot since then, and so has the definition of the pattern. Today's MVC pattern definition doesn't exactly match the original, and a number of variations of the pattern have been adopted.

I will explain the classic MVC pattern first, then introduce a web variant of it known as Model 2, then move on to MVP and its two variations, and finally cover Presentation Model (PM) and its variant MVVM. The diagram below, from Dino Esposito and Andrea Saltarello's book "Architecting Applications for the Enterprise", depicts the main presentation patterns and their variations.

[Diagram: the main presentation patterns and their variations]

The Classic Model-View-Controller Pattern (MVC):

The following diagram depicts the structure of the MVC pattern:

[Diagram: structure of the classic MVC pattern]

In the MVC pattern, a model-view-controller triad exists for each object that can be manipulated by the user. Let's see what each of these does.

Model: The model is the data required to display in the view. It can sometimes be the exact data entities retrieved from the business layer, or a variation of them. The model encapsulates the business tier.

View: The view displays data to the user. In the MVC pattern the view should be simple and free of business logic. The view invokes methods on the controller depending on user actions. In MVC the view monitors the model for any state change and displays the updated model; model and view interact with each other according to the Observer pattern.

Controller: The controller is invoked by the view; it interacts with the model and performs actions that update the model. The controller has no idea what its updates to the model produced in the view. The controller's role is often misunderstood in MVC: it doesn't mediate between the view and the model, and it's not responsible for updating the view. It simply processes the user action and updates the model; it's the view's job to query the changed model and render it. The only time the controller comes into the picture is when a new view has to be rendered.

An easier way of understanding the interaction between model, view and controller is through a sequence diagram, which I took from Dino Esposito's excellent book.

[Sequence diagram: classic MVC interactions]

In the classic MVC pattern, model and view are bound according to the rules of the Observer pattern. This is one of the major drawbacks of classic MVC, and the classic pattern is no longer in use today. Model 2 is a popular variant of MVC used in web applications.

MVC Pattern for Web Applications (Model 2) 

The classic MVC pattern was designed before the web existed, primarily for desktop applications (we are talking about the 70's :-)), but the loose definition of MVC made way for different variations. One of the most popular is Model 2, a pattern originally created for Java Server Pages (JSP) that owes much of its popularity to the Struts framework. It's the same pattern implemented by the more recent ASP.NET MVC framework in the .NET technology stack.

In the Model 2 pattern, all web requests go to a front controller, implemented as an HTTP interceptor (an HTTP module in ASP.NET), which figures out the appropriate controller from the structure of the incoming request URL and dispatches the request to it. The controller invokes a method that affects the model.

The following diagram depicts the structure of the Model 2 pattern:

[Diagram: structure of the Model 2 pattern]

The main difference between classic MVC and Model 2 is that there is no direct contact between view and model. The model in this pattern is not your typical business entities or business layer; it's more of a view model that captures the state of the view. The controller is the one that talks to the business logic layer and updates the model. The relationship between view and model is indirect.

The sequence diagram below depicts the Model 2 interactions:

[Sequence diagram: Model 2 interactions]

Model 2 is the most popular variant of the MVC pattern applied to web applications; Model 2 is MVC adapted to the web.
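
As a hedged illustration of what a Model 2 controller looks like in ASP.NET MVC (the CustomerController class and its view model are invented names):

    using System.Web.Mvc;

    // View model: captures only the state the view needs.
    public class CustomerViewModel
    {
        public string Name { get; set; }
    }

    public class CustomerController : Controller
    {
        // The front controller (ASP.NET routing) dispatches /Customer/Details here.
        public ActionResult Details()
        {
            // In a real application this data would come from the business layer.
            var model = new CustomerViewModel { Name = "John Doe" };
            return View(model); // the controller hands the model to the view for rendering
        }
    }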

Model View Presenter (MVP)

The classic MVC pattern has two drawbacks:

  1. The model needs to communicate state changes to the view.
  2. The view has complete knowledge of the model; there is no explicit contract between view and model, which means the view is not as passive as it should be.

The Model View Presenter (MVP) pattern evolved from MVC and tries to address the above concerns. MVP was originally developed at Taligent in the 90's. The original MVP is also no longer in use today; according to Martin Fowler, you never use MVP as such, but rather its Passive View or Supervising Controller variants, or both.

The MVP pattern neatly separates the model from the view and breaks the direct relationship between them. The core of MVP is the interaction between view and presenter. The view exposes a contract (an interface in .NET) through which the presenter interacts with the view. When the user interacts with the view, the view invokes a method on the presenter, and the presenter performs the required task on the model and then updates the view through the contract.

The sequence diagram below depicts the MVP pattern in action.

[Sequence diagram: the MVP pattern in action]

Model: The model in MVP represents the business entities, domain model, or object model of the business tier.

View: In the MVP pattern, the view is lightweight. It should contain only UI elements and shouldn't be aware of the model. That is the ideal scenario, though; building a truly passive view is quite complex in practice, hence implementations of MVP fall into two categories:
1. Passive View:
         The view is truly passive and lightweight.
         The view doesn't know the model.

The triad diagram below depicts the Passive View pattern; a minimal code sketch follows it.

[Diagram: the Passive View triad]
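
A minimal Passive View sketch; the ICustomerView contract and CustomerPresenter are invented names:

    // Contract the view exposes to the presenter.
    public interface ICustomerView
    {
        string CustomerName { set; }
    }

    public class CustomerPresenter
    {
        private readonly ICustomerView view;

        public CustomerPresenter(ICustomerView view)
        {
            this.view = view;
        }

        // Called by the view when the user requests a customer.
        public void LoadCustomer()
        {
            // In a real application the presenter would query the model here.
            view.CustomerName = "John Doe"; // presenter pushes state into the passive view
        }
    }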

2. Supervising Controller:
        The view is active; it binds to the model using data binding or simple code in the view.
        The view knows the model just enough for simple data binding; the presenter handles the more complex presentation logic.

The triad diagram below depicts the Supervising Controller pattern in action.

[Diagram: the Supervising Controller triad]

Presenter:
Why presenter? Why the name change? The classic MVP triad diagram looks similar to the MVC diagram, the noticeable difference being that the controller is replaced with a presenter, but this is not just a name change. The presenter in MVP presents user actions to the backend system and, after getting the response, presents the response back to the user. The controller in MVC, by contrast, doesn't mediate between the model and the view and doesn't update the view; it only mediates between user actions and the model.

MVP became quite popular in the .NET world. Though building UI applications with the MVP pattern takes significant effort, it pays off in large-scale enterprise applications; it is probably overkill for small applications.

Presentation Model (PM)
Martin Fowler developed the Presentation Model pattern for the presentation layer. So what's the difference between MVP and PM? PM adheres to the same fundamental principle, Separation of Concerns; it differs in the way the model is defined and in the tasks the presenter performs.

This pattern is well suited for rich UI applications and fits the latest advances in UI technologies: Presentation Model works well for WPF and Silverlight applications. MVVM is the variation of the PM pattern implemented in WPF and Silverlight.

Let's see how the interaction diagram looks for the Presentation Model pattern:
[Diagram: Presentation Model interactions]

In MVP the presenter talks to the view through a contract (an interface in .NET), but in PM the view doesn't implement any interface. Instead, the view's elements are bound directly to properties on the model. In PM the view is passive, and the presenter goes by the name Presentation Model.

Model: Here the model is not your typical business entities or business objects; it represents the state of the view and may contain properties specific to UI elements. Once the model is constructed, the view is ready for rendering.

View: The view is lightweight and simple, containing only UI-specific elements. Any events raised by the user are transmitted to the presenter (the Presentation Model), which updates the model with the results it gets and then orders the view to render.
Presenter: The presenter in PM receives events from the view, processes them, and updates the model as in MVP or MVC. The difference is that in PM the presenter holds the model object and is responsible for updating its state and calling the view to render once the model is updated.

Model View ViewModel (MVVM)
In 2005, John Gossman, an architect at Microsoft, unveiled the Model-View-ViewModel (MVVM) pattern on his blog. MVVM is identical to Fowler's Presentation Model in that both patterns feature an abstraction of a view, which contains the view's state and behavior. Fowler introduced Presentation Model as a means of creating a UI platform-independent abstraction of a view, whereas Gossman introduced MVVM as a standardized way to leverage core features of WPF and Silverlight to simplify the creation of user interfaces. MVVM is a specialization of the more general PM pattern, tailor-made for the WPF and Silverlight platforms to leverage core features such as data binding, commands and templates.

This diagram, taken from MSDN, depicts the MVVM pattern in action.

[Diagram: the MVVM pattern, from MSDN]

View: The view in MVVM is similar to the view in PM. It contains only the UI elements. Interaction between the view and the ViewModel happens through data binding, commands, and change notifications implemented via the INotifyPropertyChanged interface.
ViewModel: The ViewModel is the equivalent of the PresentationModel in the PM pattern; it encapsulates the presentation logic and data for the view. The ViewModel holds the state of the view and uses commands, data binding and notifications to communicate with the view.
Model: The model is the business logic layer of the application.

When you use the MVVM pattern for WPF or Silverlight, the view doesn't have the event handlers that are so common in UI code. All user actions are bound to commands, which are defined in the ViewModel and invoke the necessary logic to update the model. This improves the unit testability of MVVM applications.
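
A minimal ViewModel sketch assuming WPF; the CustomerViewModel class and the RelayCommand helper are illustrative names, while INotifyPropertyChanged and ICommand are the actual interfaces WPF data binding and commanding use:

    using System;
    using System.ComponentModel;
    using System.Windows.Input;

    // Illustrative ICommand helper, commonly hand-rolled in MVVM code bases.
    public class RelayCommand : ICommand
    {
        private readonly Action execute;
        public RelayCommand(Action execute) { this.execute = execute; }
        public event EventHandler CanExecuteChanged;
        public bool CanExecute(object parameter) { return true; }
        public void Execute(object parameter) { execute(); }
    }

    public class CustomerViewModel : INotifyPropertyChanged
    {
        private string name;

        public event PropertyChangedEventHandler PropertyChanged;

        // The view's TextBox binds to this property; change notification
        // keeps the UI in sync with the ViewModel.
        public string Name
        {
            get { return name; }
            set
            {
                name = value;
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("Name"));
            }
        }

        // A button in the view binds to this command instead of an event handler.
        public ICommand LoadCommand { get; private set; }

        public CustomerViewModel()
        {
            LoadCommand = new RelayCommand(() => Name = "John Doe");
        }
    }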

Conclusion: MVC, MVP, PM and MVVM are all different ways of implementing the Separation of Concerns (SoC) principle. The different variants show how the pattern changed along with UI technologies, in both desktop and web applications. As long as you understand the fundamental SoC principle, you will easily understand these patterns.

Cloud Computing: An Overview

Cloud Computing refers to delivering software, storage, or computation as a service rather than as a product. In today's typical non-cloud environment, organizations run applications on their own hardware and data centers, or buy software products and install them in their data centers. What cloud computing offers is the ability to get computing, storage and applications on demand, paying only for usage.

Cloud Computing is the next major revolution in Information Technology and has the potential to change how organizations run their IT. The main driver for Cloud Computing is cost: organizations can potentially reduce their IT costs by leveraging it, and it suits certain business scenarios particularly well. I will mention the different aspects of Cloud Computing and briefly describe them.

The different types of cloud computing can be categorized into:
1. Software as a Service (SaaS)
2. Infrastructure as a Service (IaaS)
3. Platform as a Service (PaaS)
4. Private Clouds

1. SaaS: Software as a Service (SaaS) is offering an application as a service. The application is hosted by the vendor on its own infrastructure, and users access it over the internet. Users typically pay only for their usage, or on a per-user, per-month basis.

[Diagram: Software as a Service]

Salesforce.com started offering its CRM application in the cloud as a service to organizations, and it's one of the most popular SaaS applications today. Other examples of popular SaaS applications are Google Apps from Google and Office 365 from Microsoft.

Benefits:
1. Organizations need not build and maintain the application; it is readily available.
2. Usage-based pricing and easier upgrades.
Risks:
1. Dependence on the SaaS vendor for availability and data security.
2. Performance concerns and limited customization.

2. IaaS: Infrastructure as a Service (IaaS) is the most popular category in Cloud Computing; today, when people refer to Cloud Computing, the majority mean this category. IaaS provides infrastructure as a service: organizations can request computing and storage on demand, run their applications on these resources, and pay only for usage.

[Diagram: Infrastructure as a Service]

Amazon is the pioneer in this category and is credited with inventing IaaS. With IaaS, organizations can request any number of virtual machines (VMs) from the IaaS vendor, use these VMs to run their applications, elastically increase or decrease the number of VMs depending on their requirements, and pay the vendor only for usage. This is very helpful for startups and big companies that want to try out an idea quickly, for social networking companies that need to scale with load, and for big data companies that need computational and storage resources without owning costly physical infrastructure.

Benefits:
1. No upfront investment in infrastructure; use only the resources required.
2. Usage-based pricing for storage, computing and network.
3. Nearly identical to an on-premises environment, with no vendor lock-in.
Risks:
1. Dependence on the IaaS vendor for availability.
2. Performance might decrease since the application is not hosted on the organization's own infrastructure.

3. PaaS: Platform as a Service offers a platform in the cloud instead of just virtual machines. PaaS abstracts the infrastructure away and presents the user with a readily usable platform. With IaaS, the IT/dev team is responsible for setting up the environment to run the application: managing the VMs and setting up the database and load balancer. PaaS instead offers computing and storage out of the box as a platform, which results in fewer errors and less work setting up VMs with the required software.

[Diagram: Platform as a Service]

Windows Azure from Microsoft is the most popular offering in this category; App Engine from Google and Elastic Beanstalk from Amazon are other products in this category.
Benefits:
1. The platform manages the underlying infrastructure, resulting in less work for IT/dev teams.
2. Applications can be developed faster since administration is handled by the platform.
Risks:
1. The platform is less familiar than the existing environment.
2. Might result in vendor lock-in.

4. Private Clouds: Private clouds bring cloud technology onto an organization's own premises. By using private clouds, organizations can take advantage of existing infrastructure while also addressing data security and availability concerns.

[Diagram: Private Clouds]

A private cloud enables an organization to use its own infrastructure efficiently and to automate the elastic provisioning of VMs. A typical scenario for a private cloud is building test labs on demand.
Benefits:
1. Reduced costs by reusing existing infrastructure.
2. Addresses data security and availability concerns.
Risks:
1. The organization must learn private cloud technology to configure and manage the cloud.

Conclusion: Cloud Computing is the next major change in IT. It's here, and it's important. The different aspects, SaaS, IaaS, PaaS and Private Clouds, are useful in different scenarios. Organizations may use some or all of them depending on their requirements.