Sunday, January 12, 2014

Popularity (Usability) Index for Programming Languages

According to Oxford Dictionaries, the word "popular" means "liked or admired by many people or by a particular person or group". I know folks who code just for a living, without any love for it. So, instead of the term "Popularity", I will use "Usability" - a "Usability Index for Programming Languages". 

When a programmer decides to learn a new language, these indices can be checked to find out whether the new skill will be helpful in future. Also, when starting a new project, one may not want to use a programming language which is slowly dying for various reasons. 

There are different methodologies used by these indices to rank programming languages:
  • Number of times the name of a language has been searched, or the number of times a language has been referred to, on the internet.
  • Number of lines of code written in a particular language available in public (e.g. in repositories like GitHub, Bitbucket, Google Code etc.).
  • Number of open source projects available in public repositories.
  • Number of questions posted on question-and-answer sites like Stack Overflow.
  • Number of books sold for a particular language.
  • Number of advertisements posted for a language on different job portals.

None of the above methodologies may actually represent real-world usage of a language (for example, data related to corporate or private projects is not publicly available), and none of them can identify the best programming language. But a combination of all of them should give some idea about the current trend.
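One crude way to combine such signals is to normalize each source's counts and average the shares. The sketch below is a toy illustration in Java (all the language names, sources and counts are made up for the example, not real index data):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CombinedIndex {
    // Each language maps to counts from several sources
    // (e.g. search-engine hits, Stack Overflow questions).
    public static Map<String, Double> rank(Map<String, double[]> counts) {
        int sources = counts.values().iterator().next().length;
        double[] max = new double[sources];
        for (double[] v : counts.values())
            for (int i = 0; i < sources; i++) max[i] = Math.max(max[i], v[i]);
        Map<String, Double> score = new LinkedHashMap<>();
        for (Map.Entry<String, double[]> e : counts.entrySet()) {
            double s = 0;
            for (int i = 0; i < sources; i++) s += e.getValue()[i] / max[i];
            score.put(e.getKey(), s / sources); // average of normalized shares
        }
        return score;
    }

    public static void main(String[] args) {
        Map<String, double[]> counts = new LinkedHashMap<>();
        counts.put("Java",   new double[]{100, 80});
        counts.put("Python", new double[]{90, 100});
        counts.put("COBOL",  new double[]{10, 2});
        System.out.println(rank(counts));
    }
}
```

Real indices weight their sources very differently; the point is only that a blend of several noisy signals is more telling than any single one.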

And here are the different indices available:

1. TIOBE Programming Community Index 

This programming language ranking is prepared based on the query "+ <language> programming" sent to 25 search engines (including Google, Yahoo, YouTube, Amazon, Baidu, Blogger, Facebook, LinkedIn, Bing, Stack Overflow, Twitter, Wikipedia, WordPress etc.). It covers around 229 programming languages (as of January 2014). As per the portal, this index does not consider the number of lines of code for a language; instead, it is based on the number of skilled programmers, available courses and third-party vendors. The index shows the trend from 2002 till date and gets updated once a month. 

In the past, there were controversies around the methodology followed for this index, and as a result it has been fine-tuned over time. 
There are suggestions to use queries such as "programming with <language>", "<language> development" and "<language> coding" in addition to "<language> programming" (not yet implemented). 

Currently the team is working on adding queries in natural languages other than English (to start with they are working on Chinese search engine Baidu). 

2. PYPL PopularitY of Programming Language Index

How frequently a tutorial for a language is searched for on Google - that is the basis of this index. An example query string is "python tutorial". Since "python" has other meanings too, searching only for "python" (as performed by other indices) may lead to inconsistent results. The data is collected from Google Trends. Right now, it covers only 10 languages. As indicated on the portal, the approach is different from that of the TIOBE index:

The TIOBE Index is a lagging indicator. It counts the number of web pages with the language name. Objective-C programming has over 20 million pages on the web, while C programming has only 11 million. This explains why Objective-C has a high TIOBE ranking. But who is reading those Objective-C web pages? Hardly anyone, according to Google Trends data. Objective-C programming is searched 30 times less than C programming. In fact, the use of programming by the TIOBE index is misleading...

3. RedMonk Programming Language Rankings

This index is generated based on two things:
  1. Number of projects on GitHub for a language.
  2. Number of questions tagged with a language on Stack Overflow.
As expected, this approach has its own set of limitations (and RedMonk accepts that fact). In addition to GitHub, there are other repositories (e.g. Bitbucket, Google Code etc.) used by developers, but RedMonk does not consider those. Also, GitHub is popular among developers for their personal projects; it may not include the projects they actually work on as a part of their job. 

The Programming Language Popularity Chart uses a similar approach. In addition to #2 (number of questions tagged with a language on Stack Overflow), it considers the number of lines committed for a language on GitHub (instead of the number of projects on GitHub).

4. The Transparent Language Popularity Index

This is an open source, fully automatic, free tool for measuring the popularity of languages (historical data is available on request). It searches for the string "+ <language> programming" in Google, Yahoo, Bing, Google Blogs, Amazon, YouTube and Wikipedia, and languages are ranked accordingly. Being open source (the tool is available here), the data is freely available, so that anyone can reproduce and verify the ranking using this tool.

Since Yahoo has stopped returning the number of search results, this index will stop using that search engine in future.

5. langpop

This portal presents the trends based on:
  1. Number of pages returned for the string "<language> programming" by Google.
  2. Number of files found with a specified extension (e.g. ".java") by Google.
  3. Number of jobs posted on Craigslist, as returned by Google.
  4. Data returned by GitHub, and
  5. Number of times the name of a language is mentioned (in the title) on websites such as Lambda the Ultimate and Slashdot. This particular methodology (#5) highlights the languages people are talking about, but may not be actually using.    

6. Ohloh

Ohloh is different from code/project hosting sites like GitHub, Bitbucket etc. It's a free, public directory of open source projects. It provides a search service, so one can search for open source code irrespective of where the code actually lives. It measures activity across almost 600,000 open source projects.

On this portal, one may select a number of languages and compare them based on:

  1. Number of commits per month.
  2. Number of contributors who have contributed at least one line of code per month.
  3. Number of lines of code changed per month.
  4. Number of projects with at least one line of code changed per month.
Trends are displayed from 2005 onwards.

7. Job Advertisements

Job Tractor, Trendy Skills, Indeed etc. rank languages based on the demand for them in the job market. Job Tractor searches Twitter for job postings for different languages. Trendy Skills searches advertisements in popular job portals for the skills and technologies employers are looking for. Indeed itself is a job portal covering 50 countries and 26 languages; there, one can check job trends for different skill sets (e.g. languages), companies etc.

8. Sale of technical books

Every year, O’Reilly publishes a series of articles (here, here and here) related to the "computer book market" (based on Nielsen BookScan's weekly top 3,000 titles sold and data from Amazon). As a part of this effort, O'Reilly tries to gauge programming language rankings. The assumption here is that employers as well as individual programmers buy books based on their current need (the job for which they are being paid) and interest. The programming language used for the examples in a book identifies the language of the book. For example, "Head First Design Patterns" is considered to be a Java book, as its code snippets are written in Java. Although Nielsen BookScan covers book markets in the UK, Ireland, Australia, New Zealand, South Africa, Italy, India, the US and Spain, the reports published by O'Reilly seem to concentrate only on the US (I am not very sure about it).

These studies were made in the past and are not relevant for 2013/14.

References:

1. Wikipedia : Measuring programming language popularity

2. 5 Ways to Tell Which Programming Languages are Most Popular
3. The Rise And Fall of Languages in 2013
4. TIOBE or not TIOBE – “Lies, damned lies, and statistics”

Friday, May 10, 2013

Fundamentals of good API design

Do you write reusable OO code? Code that is used by other modules within the same product? Or code that is exposed to the outside world and used to develop third party applications? If the answer to either question is yes, then you are an API developer. Either you write private APIs (used internally by other modules) or public APIs (used to develop other applications). Here are the fundamental principles you should keep in mind while designing an API:

1. Understand the requirement: 

Once you get the requirement from the customer (a real "external" customer, or the developer sitting next to you working on another module), break it up into small use cases. That will help you start thinking about the first few basic APIs. 

2. Think and discuss: 
  • Are you missing any use case? At this stage, the answer is probably yes. Revisit the requirements. Think twice. That way you will be able to identify a few more APIs. 
  • Once you have the preliminary list ready, write it down and send it to others (as many as possible). Discuss. This is the most important part of API design. 
  • Listen carefully to what others are saying/suggesting. Remember, once you write and expose your API to the outside world, you can't change it. So, it's better to debate, decide and then develop.
3. Not too many, not too few: 

At the end of step 2, you might have a long list, possibly one API per use case. Number them. Do you really need that many APIs? Is it possible to combine the functionality of a few of them into one? That way you may end up writing a more generic API. However, your API should not try to do too many things at once. Remember, all of us hate complexity. 

4. Name them: 
  • A meaningful, easy-to-understand, self-explanatory name helps the user to use the API even without reading the documentation. A complicated name indicates that you are trying to do too many things (which is not good design).  
  • It should be consistent with the names of other existing APIs. Otherwise the user may get confused.  
5. Signature: 
  • Design the signature in such a way that it can be extended in future (use generics, enums, var-args etc.). 
  • The way an API needs to be used should be very obvious. A user (developer) always hates surprises!
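To make the point about extensible signatures concrete, here is a small hypothetical sketch (the Mailer API and all its names are invented for illustration): an enum parameter can gain new constants later, and var-args can accept more values later, both without breaking existing callers.

```java
import java.util.EnumSet;
import java.util.List;

public class Mailer {
    // New options can be added to this enum later without touching callers.
    public enum Option { HIGH_PRIORITY, READ_RECEIPT }

    // Var-args: callers may pass one recipient today and many tomorrow.
    public static String send(String subject, EnumSet<Option> options, String... recipients) {
        return "To " + List.of(recipients) + " [" + options + "]: " + subject;
    }

    public static void main(String[] args) {
        System.out.println(send("Hello", EnumSet.of(Option.HIGH_PRIORITY), "alice@example.com"));
        System.out.println(send("Hi all", EnumSet.noneOf(Option.class),
                                "bob@example.com", "carol@example.com"));
    }
}
```

Compare this with a fixed signature like send(String subject, boolean highPriority, String recipient): adding a third option there would force a new overload and break the mental model of every existing user.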
6. Prototype it:  

Are you sure that your thoughts can be converted into real code? Write a prototype. 

7. Implement: 
  • Not sure about a validation or a piece of functionality? Don't add it. A less restrictive approach is always better to start with. 
  • While coding, consider not only the behavior, but also the performance. 
  • Hide implementation details. Don't expose exceptions which talk about the internal implementation. An API should throw exceptions which are relevant to its external behavior, not to its internal behavior.
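As a sketch of that last point, consider a hypothetical UserStore API (all names here are invented for illustration): internally it may fail with SQLException, but callers only ever see an exception that speaks the API's own language.

```java
// Hypothetical repository API: callers see UserStoreException, never SQLException.
import java.sql.SQLException;

class UserStoreException extends Exception {
    UserStoreException(String message, Throwable cause) { super(message, cause); }
}

public class UserStore {
    public String findUser(String id) throws UserStoreException {
        try {
            return queryDatabase(id); // internal detail
        } catch (SQLException e) {
            // Translate: the API contract talks about user lookup, not JDBC.
            throw new UserStoreException("Could not load user " + id, e);
        }
    }

    // Stand-in for a real database call, so the example is self-contained.
    private String queryDatabase(String id) throws SQLException {
        if (id.isEmpty()) throw new SQLException("bad query");
        return "user-" + id;
    }

    public static void main(String[] args) throws UserStoreException {
        System.out.println(new UserStore().findUser("42")); // user-42
    }
}
```

If the persistence layer later moves from JDBC to something else, the API's thrown exceptions do not change.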
8.  Document it: 
  • Document every small thing. Do it religiously. Document each and every contract, limitation, and pre- & post-condition. 
  • But don't over-explain. Keep it short and crisp. Exposing implementation details might be confusing. You may need to change the implementation in future, so it's better not to talk about it.
9. Test it:  

This is where your API will be used for the first time. 

10. Use it: 

Is it possible to consume the API even before exposing it to the outside world? If yes, please do it. This is your last chance to get things right. Once exposed, you will never be able to change the external behavior. However, it will take some time (read: a few releases) for the internal implementation to become stable. Remember, Rome wasn't built in a day!

Further reading:  

Saturday, August 25, 2012

Promotion : Architexa

I am very excited about a new development tool suite from an MIT spin-off startup, Architexa! I am using it regularly and it's awesome. 

Architexa offers various tools for Java development. So far, I have tried their tool for generating UML diagrams. I know you must be thinking, "What's new in that? There are thousands of tools which do the same." But this tool suite is developed with one particular goal: it helps you read and understand any legacy code quickly. Yes, all of us developers inherit code. It might be proprietary code or open source, but whatever the case, we spend a good amount of time understanding existing code. Architexa helps us exactly there. 

I have used their Eclipse plugin to generate class diagrams, sequence diagrams and layered diagrams for various modules of an approximately 10-year-old large, complex, legacy code base. Believe me, it was quick and easy. I have used Rational Rose earlier, and I always felt that Rational Rose is a subject of its own. While fixing a bug or developing a new feature, we often need a tool that is quick and easy to adopt. Architexa does that!

IMO, the following are the advantages of the Architexa tool suite over others:
  1. Quick to learn and brilliantly designed; within an hour or so anyone can learn the tool on their own. A few short videos and brief documentation are available. Go through them once, and that's it - you are ready to go.
  2. You can decide the complexity of each generated diagram. You decide the content based on need.
  3. Layered diagram: this is something new. At least, Architexa claims that no other tool offers this feature. I really loved it. It helps to visualize the dependencies between different packages, so that cyclic/unwanted dependencies among packages can be avoided at an early stage.
  4. It's fast. It doesn't make your Eclipse hang.
  5. As mentioned earlier, for individual users like you and me, it's free. 

BUT, yes, there are still some buts! 
  1. While generating the layered diagram, the tool takes a good amount of time to find out the dependencies between packages. It may vary from a few seconds to a few minutes depending on the size of the code base. 
  2. For a complex class diagram, it sometimes becomes difficult to follow the lines (relationships) between entities. The user interface has some room for improvement. 
  3. Also, there are other minor issues (real bugs!) which I am going to report on their user forum. For example, with Eclipse Helios (on Windows as well as Linux), I am not able to connect two classes while creating a new class diagram.

Sunday, December 19, 2010

Inject Dependencies - Manually

In another post, “Do I Really Need a Singleton?”, I wrote about the problems introduced by the Singleton design pattern. When the single unique instance is being accessed through the getInstance() method, Singleton acts as a global variable in disguise and introduces tight coupling and unwanted dependencies. I have received two immediate questions from my readers:
  1. Should a Singleton be used only with Dependency Injection Framework?
  2. If accessing a Singleton through getInstance() creates tight coupling, then creating an instance of any other class through new also causes tight coupling. So, how should an object be created while maintaining loose coupling?
As per my understanding, Dependency Injection is the answer to both questions. But that does not mandate the use of a framework. Dependency Injection is a concept first and a framework second. When the application in question is small, we can always meet our needs by injecting dependencies manually, without using any framework like Spring.
In any Java application, we repeatedly encounter two events:
  • Object creation
  • Interaction between the objects - The business logic
But usually we mix up both of them, which leads to tight coupling and unwanted dependencies, which in turn makes maintenance as well as unit testing a pain. Let me try to explain it using a very simple example:
class MyClass {

 private A a; //A is an interface
 private B b; //B is an interface

 //Object creation within the constructor
 MyClass() {
    a = new AImpl(); //AImpl is the concrete impl of A
    b = new BImpl(); //BImpl is the concrete impl of B
 }

 //Application logic lies within this method
 public void doSomething() {
    //Do A specific thing
    //Do B specific thing
    C c = new CImpl(); //Object creation within the method
    //Do C specific thing
 }
}
The problems with this class are:
  1. It has not separated object creation from the business logic, resulting in tight coupling.
  2. Here, "programming to the implementation" has been done, not to the interface. Tomorrow, if different implementations of A, B or C are required, the code inside the class has to be changed.
  3. Testing MyClass would require testing A, B and C first.

Let me try to fix the problem:

class MyClass {

 private A a;
 private B b;
 private C c;

 MyClass(A a, B b, C c) {
    //Only assignment
    this.a = a;
    this.b = b;
    this.c = c;
 }

 //Application logic
 public void doSomething() {
    //Do A specific thing
    //Do B specific thing
    //Do C specific thing
 }
}

//The Factory
class MyFactory {
 public MyClass createMyClass() {
    return new MyClass(new AImpl(), new BImpl(), new CImpl());
 }
}

class Main {
 public static void main(String args[]) {
    MyClass mc = new MyFactory().createMyClass();
    mc.doSomething();
 }
}
What has been achieved here:

1. The constructor does not have a new: 
Objects are not created within the constructor of MyClass. The constructor is simply used for field (a, b, c) assignments. The constructor asks for its dependencies as parameters, but does not create them (and that is the simplest definition of dependency injection). However, simple collection objects like ArrayList and HashMap, or value/leaf objects like Person/Employee (i.e. objects which in turn do NOT create other objects), CAN be created within the constructor. The constructor should not be used for any other operation like I/O, thread creation etc.

As a thumb rule, any object should hold references ONLY to those other objects it directly needs to get its work done (this is the Law of Demeter). For example, if MyClass needs some other class called X, MyClass' constructor should directly ask for X. It should NOT ask for some other factory F which can return an instance of X. Violation of the Law of Demeter would result in an unwanted dependency between MyClass and F. So, if you find more than one dot (.) operator, be careful - something suspicious is happening there.

2. Factory (MyFactory) is taking care of object creation and wiring: 
All the new operators (90%-99% of them) should belong to the factory. It should take care of the entire object graph creation for the application, and also of relating (wiring) different objects based on their declared dependencies (e.g. MyClass needs A, B, C etc.). It should not contain anything more - no other logic (no I/O, thread creation etc.).

Tomorrow, if C starts depending on something else called D, only C and the factory would be impacted, not the entire object graph (C would have to introduce an overloaded constructor, and the factory would have to incorporate the object instantiation plus the object wiring related changes). 

For a large application, of course, there may be multiple factories. Here, the thumb rule is that one factory should instantiate all the objects with the same life span.

3. Object creation is separate from the business logic: 
MyClass is now a business logic holder. It does not have any new. It does not even have any knowledge about the concrete implementations it is using for the business logic (i.e. it knows about A but not about AImpl - "program to an interface, not to an implementation").

You must have started wondering: I began this discussion in the context of Singleton. How does manual dependency injection take care of a Singleton? How does it create a Singleton (minus the tight coupling, hidden dependencies etc.) and access it when needed? Surprisingly, we already have three Singletons in our example - AImpl, BImpl and CImpl. If the factory takes care of creating only one instance of a class (by invoking new only once), it's a Singleton, isn't it? The factory may then pass that unique instance, in the form of dependencies, to all the other objects that need it.
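That idea can be sketched in a few lines (the class names below are invented for illustration): the factory invokes new exactly once and hands the same instance to everyone who asks for it.

```java
// Sketch: the factory calls new AImpl() exactly once and injects the same
// instance into every object that declares a dependency on A.
interface A { String name(); }
class AImpl implements A { public String name() { return "the one A"; } }

class Service1 { final A a; Service1(A a) { this.a = a; } }
class Service2 { final A a; Service2(A a) { this.a = a; } }

class AppFactory {
    private final A a = new AImpl(); // the only new AImpl() in the app
    Service1 createService1() { return new Service1(a); }
    Service2 createService2() { return new Service2(a); }
}

public class SingletonDemo {
    public static void main(String[] args) {
        AppFactory f = new AppFactory();
        // Both services share the same instance - a Singleton without getInstance().
        System.out.println(f.createService1().a == f.createService2().a); // true
    }
}
```

No static state, no getInstance(): the "singleton-ness" lives entirely in the factory.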

4. So, where are we? MyClass, the business logic holder, needs A, B and C for its business. It does not create them, but asks for them (the dependencies). The factory (MyFactory) creates those dependencies and wires them to MyClass. But who creates the factory? Of course, the main method (the application launcher :-)). Let me repeat the story: the main method first instantiates the factory, the factory in turn instantiates the object graph, each object declares its dependencies, and finally the main method itself sets the ball rolling - it launches the application by invoking doSomething() of MyClass, i.e. the objects start talking to each other, executing the usual business.

Let me repeat it once more: create the factory, create the application using the factory, and then start the application! For a large-scale application, the same thing can be achieved with a Dependency Injection framework like Spring, Google Guice etc. Of course, they come with a lot of other benefits in addition. But for a small to medium scale application, dependency injection can be hand-crafted, making the app loosely coupled, more maintainable and, of course, unit test friendly.
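The unit-testing benefit is easy to demonstrate with a sketch (a simplified MyClass is repeated here so the example is self-contained, and the recording-stub trick is just one common approach): since MyClass only knows the interfaces A, B and C, a test can pass in hand-written stubs instead of the real implementations.

```java
// Stub implementations make MyClass testable without AImpl/BImpl/CImpl.
interface A { void doA(); }
interface B { void doB(); }
interface C { void doC(); }

class MyClass {
    private final A a; private final B b; private final C c;
    MyClass(A a, B b, C c) { this.a = a; this.b = b; this.c = c; }
    public void doSomething() { a.doA(); b.doB(); c.doC(); }
}

public class MyClassTest {
    public static void main(String[] args) {
        StringBuilder calls = new StringBuilder();
        // Hand-written stubs that simply record what was invoked.
        MyClass mc = new MyClass(() -> calls.append("A"),
                                 () -> calls.append("B"),
                                 () -> calls.append("C"));
        mc.doSomething();
        System.out.println(calls); // ABC
    }
}
```

Had MyClass called new AImpl() itself, there would be no way to slip these stubs in; asking for dependencies is what makes the class testable.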

Further Reading:

1. This post is strongly influenced by the writings of Misko Hevery published in his personal blog as well as in Google Testing Blog. Some more interesting articles can be found here, here and here.

2. Dependency Injection Demystified


Sunday, December 5, 2010

Problems with Checked Exception in Java

Recap: Exception Hierarchy In Java

In Java, we have three types of Throwables:
        1. Error and its subclasses
        2. RuntimeException and its subclasses
        3. And the rest (non-runtime exceptions)

Exceptions that lie in the third category are called checked exceptions. If a method throws a checked exception, the compiler makes sure that the API user handles it, either by using a try-catch block or by propagating it to the higher layer by adding a throws clause to the method definition. The throwables belonging to the first and second categories are unchecked in nature: the compiler does not force the API user to handle them.

The Thumb Rule

As an API author (somewhere I read that every method is an API: if it is a public method, it is exposed to the world; if it is a private or package-private method, it is an API for internal usage. I liked the concept), you should use a checked exception only when you believe that the API user can recover from the exceptional situation. If it is not recoverable, a runtime exception (an unchecked throwable) should be used.
As a rule, errors are generally reserved for JVM related issues (e.g. OutOfMemoryError, StackOverflowError etc.). So, if a custom exception needs to be written which, you believe, should always be recoverable, extend it from Exception. Otherwise, derive any custom unchecked exception from RuntimeException. Hence,
  •     Expected + Recoverable -> Checked Exception
  •     Unexpected + Not Recoverable -> Runtime Exception
Problems with Checked Exception

1. As mentioned earlier, a method should throw a checked exception if the API author believes that the API user can recover from the exceptional situation. But in real life, whether an exception is recoverable or not depends on the situation - for the same piece of code, an exception may be recoverable in one context and not recoverable in another. Let's take up an example. I work on a telecom subscriber provisioning system. There is a method which validates the cell phone numbers of subscribers.

  • If an operator tries to create a new subscriber through the UI by providing an invalid number, the number validation method should throw back a checked exception, because the situation is recoverable: the operator would be asked to re-enter a valid phone number.
  • If the operator searches for the number of a particular subscriber, the number is read from the DB and then validated using the same number validation method. If the number is found to be invalid, the method should throw an unchecked exception, because this time it is NOT recoverable: the subscriber data has got corrupted in the DB.
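The two bullets above can be sketched as code (a simplified, hypothetical validator; the real provisioning system's classes are of course different). The same check is surfaced as a checked exception on the UI path and an unchecked one on the DB path:

```java
// Sketch: the same validation, surfaced differently depending on context.
class InvalidNumberException extends Exception {          // checked: recoverable
    InvalidNumberException(String m) { super(m); }
}
class CorruptDataException extends RuntimeException {     // unchecked: not recoverable
    CorruptDataException(String m) { super(m); }
}

public class NumberValidator {
    // Toy rule for the example: a valid number is exactly ten digits.
    static boolean isValid(String number) {
        return number != null && number.matches("\\d{10}");
    }

    // UI path: the operator can re-enter the number, so throw checked.
    static void validateFromUi(String number) throws InvalidNumberException {
        if (!isValid(number)) throw new InvalidNumberException("Please re-enter: " + number);
    }

    // DB path: the data is already corrupt; nothing to recover, so throw unchecked.
    static void validateFromDb(String number) {
        if (!isValid(number)) throw new CorruptDataException("Corrupt subscriber number: " + number);
    }

    public static void main(String[] args) {
        System.out.println(isValid("9876543210")); // true
    }
}
```

The awkwardness is visible even in the sketch: one validation rule needs two wrapper methods just to satisfy the checked/unchecked split.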
By throwing a checked exception, the API author is forcing the API user to handle it. If there is no way to "actually" handle the exception (i.e. recover from the exceptional situation), what will the user do? In such a situation, most of us do one of the following:
  • Swallow the exception by providing an empty catch block (this is nothing but a bug)
  • Provide a catch block just to log the occurrence of the exception (this is much the same) 
  • Catch the exception and re-throw it.
This is true in 90% of the cases. Of course, there is a so-called better solution - catch the exception, convert it to a more meaningful checked exception and pass the ball to a higher level. But this may lead to another problem, as indicated in point 3 below. The ideal solution, I believe, is to catch the checked exception and convert it to an unchecked exception. Although, in Java, the concept of try-catch was introduced to handle exceptional conditions, it's very rare to find a catch block where the exception is really handled.
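That "ideal solution" - wrapping a checked exception in an unchecked one - might look like this hypothetical sketch (ConfigLoader and its fake file handling are invented for illustration; for the I/O case, Java 8 even ships a ready-made UncheckedIOException):

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public class ConfigLoader {
    // The caller cannot do anything useful about a missing config file,
    // so the checked IOException is wrapped into an unchecked one.
    static String load(String path) {
        try {
            return readFile(path);
        } catch (IOException e) {
            throw new UncheckedIOException("Cannot read config: " + path, e);
        }
    }

    // Stand-in for real file I/O, so the example is self-contained.
    private static String readFile(String path) throws IOException {
        if (!path.endsWith(".conf")) throw new IOException("no such file");
        return "key=value";
    }

    public static void main(String[] args) {
        System.out.println(load("app.conf")); // key=value
    }
}
```

Callers of load() are no longer forced into empty catch blocks; those who genuinely can recover may still catch the unchecked exception.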

The bottom line is:
  • Things may go beyond the API author's "belief" and quickly become messy.
  • Most of the time, we use Java's (checked) exception handling machinery just to log the exception (and then raise a bug and fix it).
2. In my application, I wrote an API which throws one checked exception, E1, and published the API to the world. Now, because of some new features, the implementation of the method has to change, and it has to throw two new exceptions, say E2 and E3. If E2 and E3 are checked exceptions, all the client code using the API till date will break. The ways out are:   
  • Catch E2 and E3, then convert them to E1. I believe that defeats the purpose of exceptions.
  • Publish a new version of the API.
3. try-catch is not the only way to handle a checked exception. If the exception cannot be handled immediately, it may be propagated to the higher layer by declaring a throws clause; higher up the stack, the exception can be handled in a better way. This too may be problematic in some cases. Say, for a large, layered application, a new API is being developed. It internally invokes three other APIs, and each of those three methods in turn throws three checked exceptions. If the new API is not in a position to handle any of those exceptions immediately (inside the method code), how many exceptions would be thrown back? NINE. Won't it be difficult for the client to use the API?

4. Another use case of checked exceptions is worth mentioning. Say, for a UI application, all the methods in all the classes throw the same user-defined checked exception called GuiBaseException. Different instances of the same exception bubble up from different layers of the code in different exceptional situations, and all of them get handled by the same exception handler at the topmost layer. With this approach, everybody is happy - the API author is happy because he believes that "recoverable" exceptions should always be checked, and the API user is happy because he is handling them without any pain or effort (and without making the code dirty by introducing try and catch here and there). A wonderful solution, but it completely defeats the very idea of exception handling!

Then What?

Though a checked exception conveys a warning message to the user - "Hey! You need to pay attention" - in most cases that warning is ignored.
With the usage of ONLY unchecked exceptions (i.e. by avoiding checked exceptions), problems 1, 2 and 3 can really be addressed. Nobody is going to force the API user to handle the exception. Everything would be properly documented. The API user would decide whether to handle it or not, depending on the context.

OK, this one-liner may not be a foolproof solution. In fact, I am not trying to give a solution either. The aim of this post is just to capture the problems with checked exceptions. Accept it or not, checked exceptions are problematic. Java did an experiment by introducing this concept with a lot of great expectations, but, as per my understanding, over time it has proved to be a failure.
C# does not have checked exceptions, and the same is true for JVM-based languages like Groovy and Scala.

Dear Reader, what do you feel about it? Which camp are you in - Checked or Unchecked? :-)

Further Reading:

Sunday, November 28, 2010

Load Testing of GUI using JMeter

What is JMeter?

JMeter is a Java-based open source application which may be used for load/traffic testing of web applications.

Quote from the JMeter home page:

Apache JMeter may be used to test performance both on static and dynamic resources (files, Servlets, Perl scripts, Java Objects, Data Bases and Queries, FTP Servers and more). It can be used to simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance or to test your server/script/object behavior under heavy concurrent load.

What am I trying to do with JMeter?

I have used JMeter for load testing of a GUI. I wanted to record some regular transactions done in the GUI - like logging in, browsing to different pages, displaying data (i.e. searching data from the DB), modifying data and finally logging out - and then, once this test case is recorded, simulate a scenario where multiple users perform the same set of transactions simultaneously. My target was to find out the maximum possible number of simultaneous sessions in my GUI.

How does JMeter do this?

  1. Initially, JMeter needs to be configured to sit between the browser and the target server (on which the GUI is running); it then records all the HTTP requests sent by the browser to the server. If the target GUI accepts only HTTPS requests, then at this phase the browser should send HTTP requests (NOT HTTPS) to JMeter; JMeter records those requests and finally encrypts and sends them to the server. The reverse happens for the HTTPS response - JMeter receives the responses from the server, decrypts them, records them and sends the HTTP responses back to the browser. The logical entity responsible for this HTTP message recording is called the HTTP Proxy Server (described later).
  2. Once the recording is done, JMeter can be configured to start a large number of simultaneous threads - each thread represents one user sending/receiving the same set of requests/responses recorded in the first step. From this point on, it's between JMeter and the server; the browser is no longer used. Not surprisingly, JMeter maintains a separate HTTP session for each user. The component responsible for this is called the thread group.
  3. The result can be checked by using something called "listeners" - these come in different flavors.
How to use JMeter?

  1. Download JMeter from the here. 
  1. In case the server accessed and the desktop on which JMeter is launched reside in the same network run jmeter.bat from $JMETER_DIR/bin or just type “jmeter” in the command prompt.
  1. If a proxy sits in between the JMeter and the server, issue the following command from the command prompt: jmeter -H <my.proxy.server> -P <port> -u <username> -a <password>
  1. Add a Thread Group to the Test Plan by right-clicking on Test Plan and Add -> Thread Group. Number of threads indicates the number of users to be simulated. Ramp up period is the time over which all the sessions will start uniformly one by one.
  5. Add an HTTP Cookie Manager to enable session tracking by right-clicking anywhere in the Test Plan hierarchy and selecting Add -> Config Element -> HTTP Cookie Manager. Set the cookie policy to "compatibility".
  6. Select the Thread Group, right-click and choose Add -> Config Element -> HTTP Request Defaults. Set "Server Name or IP" to the server's IP, leave the port number and path empty, and set the protocol to HTTP.
  7. Add a Recording Controller to the Thread Group by right-clicking on Thread Group and selecting Add -> Logic Controller -> Recording Controller. Give it a suitable name; all recorded HTTP requests will be saved under this controller.
  8. Select the WorkBench, right-click and choose Add -> Non-Test Elements -> HTTP Proxy Server. In the Port field, enter 9090 (any free port will do). In "Target Controller", select the Recording Controller created in the previous step. As mentioned earlier, the HTTP Proxy Server is the element that records the HTTP requests, and 9090 is the port on which it listens.
  9. In the same window, check "Attempt HTTPS Spoofing". This ensures that while recording the test case, JMeter encrypts the HTTP request before sending it to the server and decrypts the HTTPS response after receiving it back.
  10. Add URL patterns to exclude, so that requests for static resources are not recorded (typical patterns are of the form .*\.gif, .*\.css and .*\.js).

  11. Add a Gaussian Random Timer under the HTTP Proxy Server: right-click on the HTTP Proxy Server and select Add -> Timer -> Gaussian Random Timer. Put in the following values:
Deviation (in milliseconds) -> 500.0
Constant Delay Offset (in milliseconds) -> ${T}

A Gaussian random timer should be used if you want JMeter to record the think time the human user takes between clicking different links across the GUI (the ${T} variable holds the recorded gap). While executing the test case, the HTTP requests are then sent one after another with the same time gaps. If the timer is not used, all the requests are pumped to the server at once, and surely nobody wants to test such an unrealistic scenario.
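The delay such a timer produces can be sketched as the recorded think time (the constant offset) plus normally distributed jitter with the configured deviation. The clamp to zero below is a simplifying assumption of this sketch to keep delays non-negative; JMeter's exact handling of negative samples may differ:

```python
# Sketch of a Gaussian random timer: offset (recorded think time) plus
# normally distributed jitter. Clamping at zero is an assumption here.
import random

def gaussian_delay_ms(offset_ms, deviation_ms, rng=random):
    return max(0.0, offset_ms + rng.gauss(0, deviation_ms))

rng = random.Random(7)
# Offset 300 ms stands in for a recorded think time; deviation 500 ms
# matches the setting above.
delays = [gaussian_delay_ms(300, 500, rng) for _ in range(5)]
print([round(d) for d in delays])  # five non-negative delays
```

With a deviation of 0 the timer degenerates to a constant delay equal to the offset, which is why replaying with ${T} as the offset reproduces the recorded pacing.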

  12. Add listeners to the Thread Group by right-clicking on Thread Group and selecting Add -> Listener -> View Results in Table, View Results Tree and Aggregate Report. The different listeners present the results (requests sent, responses received, timestamps, success/failure rates etc.) in different forms to help you analyze them.
  13. Go back to the HTTP Proxy Server. We are now ready to record the test case, so click the Start button.
  14. Open the browser and configure its proxy settings to point to JMeter's proxy server. For Internet Explorer, click Tools and select Internet Options, open the Connections tab, click LAN Settings, check 'Use a proxy server for your LAN...' and set the address to 'localhost' and the port to '9090' (i.e. the port on which JMeter's proxy is configured to run). The browser will now send its requests to JMeter, which will record them and forward them to the server.

  15. In the browser's address bar, type the following:
http://<Test Machine’s IP>/<Link>/
Note that even if the server supports only secure connections, we do not use "https" here. Since JMeter has already been configured for HTTPS spoofing, the browser sends plain HTTP requests to JMeter, and JMeter encrypts them before forwarding them to the server.

  16. Log in to the GUI normally and do some operations: navigate around, perform actions etc. as per your use case.
  17. Click the Stop button in the HTTP Proxy Server once the recording is finished. All the requests will be stored under the Recording Controller. If the HTTP requests are not visible there, you must have missed something in the previous steps; go back, fix it and record again.
  18. Finally, run the test case by hitting Ctrl+R. A green box in the upper right corner indicates that the test is running, and the number beside it is the count of active threads (simulated users) at that instant. The listeners give detailed information about the test case while it runs. The box turns gray once execution stops; go back to the listeners for the results.
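As an aside, the ramp-up schedule from the Thread Group step can be sketched in a few lines of Python (the function name is illustrative): with N threads and a ramp-up period of R seconds, JMeter starts one thread every R/N seconds.

```python
def ramp_up_start_times(num_threads, ramp_up_seconds):
    # JMeter spaces thread starts uniformly across the ramp-up period:
    # thread i starts at i * (ramp_up / num_threads) seconds
    interval = ramp_up_seconds / num_threads
    return [i * interval for i in range(num_threads)]

print(ramp_up_start_times(10, 50))
# → [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0]
```

This is why a ramp-up of 0 hits the server with all threads at once, while a ramp-up equal to the test duration means the last users barely start before the test ends; something in between is usually what you want.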
Troubleshooting:
  1. If requests are failing with errors and you are not sure whether they are actually reaching the server, take a look at access.log (for Apache Tomcat, the location of this log is configurable). access.log captures every request that reaches the server along with the corresponding HTTP response code.
  2. If you encounter the exception "Address already in use: connect", the problem lies with Windows: you need to increase the number of ports the OS can use for outbound TCP/IP connections. To fix it, run regedit and navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Select the Parameters key and create a new DWORD value named MaxUserPort with Value Data 65534 (Base: Decimal). Then restart the system.
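For reference, the registry steps above should be equivalent to importing a .reg file like the following (0xFFFE is 65534 in hex); back up the registry first, and note that the change requires administrator rights and a restart to take effect:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:0000fffe
```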

Further Reading:
  1. The Official JMeter User's Manual
  2. JMeter Proxy Step By Step