Jenkins : Scheduling Jenkins Jobs for a specific time

Jenkins uses a cron expression; the fields are:

  1. MINUTES: minutes within the hour (0-59)
  2. HOURS: hours within the day (0-23)
  3. DAYMONTH: day of the month (1-31)
  4. MONTH: month of the year (1-12)
  5. DAYWEEK: day of the week (0-7), where both 0 and 7 are Sunday

A few versions back, Jenkins added a new symbol, H (extract from the Jenkins documentation):

To allow periodically scheduled tasks to produce even load on the system, the symbol H (for “hash”) should be used wherever possible.

For example, using 0 0 * * * for a dozen daily jobs will cause a large spike at midnight. In contrast, using H H * * * would still execute each job once a day, but not all at the same time, better using limited resources.

Note also that:

The H symbol can be thought of as a random value over a range, but it actually is a hash of the job name, not a random function, so that the value remains stable for any given project.

Example 1 : H H(3-4) * * * : a job that runs once a day, every day of every month, at a hashed minute of a hashed hour between 3 and 4 am.

Example 2 : H(30-45) 3 * * * : a job that runs once a day at 3 am, at a hashed minute between 30 and 45.

Example 3 : */5 * * * * : if you want to schedule your build every 5 minutes, this will do the job.

Example 4 : 0 8 * * * : schedules your build every day at 8:00 am.
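One more pattern worth knowing, following the same H convention (shown here as an extra example):

Example 5 : H/15 * * * * : runs roughly every 15 minutes, with the exact offset hashed from the job name, so a dozen such jobs will not all fire at the same instant.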

Have fun as you work!

 

See Also: Parameterised Scheduler Plugin


Regards,

VJ

Techie Juice : Intelligent Test Automation

In part one of this series of articles, we looked at software testing, the basics of test automation, the types of test automation, and the myths and realities surrounding it.

In this second article, Saket Godase looks at why test automation projects get shelved and how intelligent test automation techniques can be used to make a test automation project a success story.

Using Intelligent Test Automation Techniques

Abstract
If you have been on a test automation project, it’s very likely that you have heard one or more of these comments: “What??? Only 45% of the manual test cases have been automated?” “I thought this would be compatible with all the browsers?” “Why can’t I run this on different versions of the Windows OS?” “I thought there would not be any need for manual testing after we completed the automation.” “You mean we will have to upgrade the scripts each time the product changes? I thought it was a one-time job.”

These statements illustrate several problems that plague test automation projects:

  • A single tester performing the dual role of manual and automation testing. This keeps automation from getting the time and focus it needs.
  • Unrealistic expectations from test automation, the most common being that test automation is a simple activity requiring only record and playback.
  • A lack of experience in the testing team, made worse by high turnover.
  • An unhealthy shift of focus from testing the product to automating the testing for its own sake. Many find automating the testing more interesting than testing the product; the outcome in such cases rarely contributes to the test effort.

In this article we will try to address issues faced by the test automation team and attempt to present a few feasible solutions. We will look at why test automation projects get shelved regularly and how the Generic Approach and the “Neural Networks” approach could be the answer to your test automation woes.

Why do test automation projects get shelved?

There are primarily two reasons why test automation projects get shelved. The first and foremost is the unrealistic promises dished out by the people managing test automation projects. People are often under the wrong impression that test automation is a piece of cake: the typical approach is to record and play back the test scripts without giving any thought to the selection of tool(s), the automation architecture, or the feasibility of the entire exercise. The second, and equally important, is the inability of test automation to effectively replace manual testing.

Misconceptions surrounding test automation

Some of the issues that we discussed in the previous article were…

  • Test Automation tools are difficult to use.
  • Test Automation is very easy. Just record the scripts at any given point of time and replay them whenever you want to.
  • An unreasonably long time span is required to train the manual test team in usage of the tool(s).

While a more detailed explanation of these can be found in “An Introduction to Software Test Automation”, this list can be further extended…

  • Automating manual test cases even before unit testing is complete. Although this might seem strange and practically impossible, it is in fact quite common, thanks to an overzealous test automation team.
  • Constructing test automation suites on the fly, without bothering to correlate the test automation scripts to the existing state of the product or the test cases.
  • When automation scripts fail or terminate abnormally, they are often unable to resume from the point where they left off. The scripts must then be rerun from scratch, resulting in a loss of valuable testing time and effort.

Inability of test automation to effectively replace manual testing

In most organizations, test automation is not considered a feasible alternative to manual testing. Usually, it is a marketing gimmick used to entice customers, or a prop supporting the organization’s claims of being a company that works on cutting-edge technology. In reality, with each passing day the test automation software is shelved and the focus shifts back to manual testing.
Reading this, it is only natural for a few questions to pop up in one’s mind. Is test automation worth the investment? Can test automation ever replace manual testing?
Test automation can be a feasible alternative to manual testing, provided that the architecture of the test automation suite is good enough and the scripts possess a certain amount of intelligence. Successful test automation is not rocket science; it just requires the right blend of planning, technical know-how and innovation.

The following section will address these issues and attempt to provide a feasible solution for the same.

Intelligent Test Automation

Test automation in its current form is not an alternative to manual testing. It is crude, ineffective and possesses no inbuilt intelligence whatsoever. If test automation is ever to replace manual testing (even to a reasonable extent), it must be adaptive and intelligent. These are the two main qualities that separate a manual tester from a test automation script. Let’s now look at two complementary approaches that attempt to bridge this gap between manual and automated testing.

Generic Test Automation

I’m sure that on reading these words, the first question that comes to mind is: Generic scripts? Is this guy reinventing the wheel? No, I am not. Writing generic scripts does not merely mean clubbing common functionality together; it means writing scripts that are essentially adaptive in nature. This is illustrated by the following example.

Consider a scenario in which you have been given the task of automating the manual test cases used for testing a website. The website consists of half a dozen modules, and you need to automate test cases for all of them. In the traditional approach, using the functional decomposition method, the scripts are usually written in the following manner.

Figure 1: Traditional approach to test automation


Using the generic approach the same module(s) could be automated in the following manner.

Figure 2: Generic approach to test automation


The above two figures clearly illustrate the differences between the two approaches to test automation. The traditional approach is analogous to an automated assembly line requiring manual intervention at a dozen places. This rather defeats the purpose of automation, ultimately resulting in lower productivity and efficiency. The generic approach, on the other hand, derives all the benefits of test automation, resulting in less manual intervention and a marked increase in productivity and efficiency. Like any other methodology, the generic approach has its own share of disadvantages; however, these are more research areas than limitations.
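To make the contrast concrete, here is a minimal sketch of a generic, data-driven script (my own illustration, not from the original article; the file testcases.csv and its url,expected-status format are assumptions made for the example):

# One generic script; the test cases live in data, not in code.
# Each line of testcases.csv looks like: http://example.com/login,200
while IFS=, read -r url expected; do
  status=$(curl -s -o /dev/null -w '%{http_code}' "$url")   # fetch, keep only the HTTP status
  if [ "$status" = "$expected" ]; then
    echo "PASS $url"
  else
    echo "FAIL $url (got $status, expected $expected)"
  fi
done < testcases.csv

Adding a module or a test case then means adding a line of data rather than writing and maintaining another script.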

Advantages of Generic Approach:

  1. A single script can be used for the majority of the test cases, barring a few exceptions.
  2. Error handling is much more robust in these scripts, allowing unattended execution of the test scripts.
  3. Turnaround time for such scripts is extremely fast.
  4. Only the test data and not the scripts need to be updated for future releases of the same product.
  5. This is a one-time job in the true sense.

Disadvantages of Generic Approach:

  1. Technical and highly skilled personnel are required to create and maintain the scripts.
  2. Creation of the initial framework and libraries can be time consuming.
  3. In case of complex products, embedding business logic in such scripts may prove to be a bit tedious. Such scripts are more suitable for sanity checks.
  4. Though these scripts are generic and highly reusable, they still depend on manual input and lack the intelligence of a manual testing team.

Thus, we can see that “Generic Test Automation” not only reduces the scripting effort, it also offers the test automation team a chance to deviate from the traditional approach, resulting in greater scope for innovation. Such scripts are truly adaptive in nature, as they are constantly learning on the job, and are ideal for regression testing products across a large number of builds. The only thing these scripts lack is “intelligence”. Though they are adaptive and bring us a step closer to our goal of replacing manual testing with automated testing, they are not intelligent enough to replace manual testing completely. The following section outlines a complementary approach in an attempt to bridge this gap further.

Test Automation using Neural Networks

Before we dwell on the usage of neural networks in test automation, let us take a quick look at neural networks themselves. Please keep in mind that this article is about the usage of neural networks in test automation, not about neural networks as such; the sole purpose of this section is to provide a brief introduction to them.

Neural Networks

A Neural Network is a computer technique modeled on the supposed structure and operation of the brain and is involved in processing information, making rational decisions and initiating behavioral responses.

In other words, a neural network is computer software and/or hardware that attempts to simulate a model of the neural cells in animals and humans, with the sole purpose of acquiring the intelligence embedded in those cells. The biggest strength of a neural network is its ability to learn by example. Already used in many commercial applications for pattern recognition, neural networks could be of tremendous use in test automation.

Essentially, a neural network is nothing but a group of neurons connected together. Think of a neuron as a program, or better still a class, that accepts one or more inputs and produces one output. A typical neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron is taught what value to output for given input value(s). In the using mode, when the neuron recognizes an input it was trained on, the associated trained output becomes the current output. If the neuron does not recognize the input as part of its training, it applies firing rules to determine the output. Firing rules are nothing but a set of instructions that help the neuron react sensibly to all inputs, irrespective of whether they were part of its training. Some of the popular neural networks are the Forward Connection Neural Network, the Hopfield Network, Brain-State-in-a-Box, the Kohonen Network and the Back Propagation Network. In a Back Propagation Network, training samples are presented to the network and the output errors are propagated backwards to adjust the weights.
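As a toy illustration of the input/weight/output idea (my own sketch, not from the original article; the weights and threshold are made-up numbers), here is a single “neuron” in plain shell and awk:

# A single neuron: three inputs, one weight per input, a threshold firing rule.
echo "1 0 1" | awk '{
  w1 = 0.5; w2 = -0.3; w3 = 0.8       # weight = relevance of each input
  t  = 0.6                            # firing threshold
  s  = $1*w1 + $2*w2 + $3*w3          # weighted sum of the inputs
  out = (s > t) ? 1 : 0               # fire only if the sum clears the threshold
  print "weighted sum = " s ", output = " out
}'

For the input 1 0 1 the weighted sum is 1.3, which clears the 0.6 threshold, so the neuron “fires” and outputs 1. Training a real network amounts to adjusting those weights automatically from examples.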

In order to bridge the gap between manual and automated testing, it is quite important for the automation scripts to possess a certain amount of intelligence. Naturally, this intelligence has to be fed into the script(s), and the easiest and possibly most efficient method is to use test automation scripts in conjunction with neural networks.
Apart from being an active area of research, neural networks are used today in a variety of commercial applications, including character recognition, image recognition and stock forecasting. Neural networks are also a concept, and therefore tool- and technology-independent, so they can be used effectively regardless of the software test automation tool or technology involved. Secondly, neural network programs are typically modular in nature, so the same set of functions and procedures can be plugged in across various scripts. The remainder of this section provides an insight into the usage of neural networks in test automation.

Neural Networks in Test Automation

Neural networks can be used very effectively in test automation, provided a strong architecture is in place, complemented by robust and generic scripts. The following section will explain the application of neural networks with respect to the previous test scenario.

Figure 3: Usage of neural networks in test automation


Traditionally, a neuron consists of three parts: input(s), weights assigned to each of these inputs, and an output. Mapping this structure onto the above architecture, it can be observed that neural networks can be embedded in each layer of the test automation suite:

  • The test data acts as an input to the neural network / neuron(s).
  • Each input has a value assigned to it, termed the weight of the input. These weights are nothing but real numbers which describe the relevance, or importance, of a particular input to the hidden neuron or the output neuron. The greater the weight assigned to an input, the more important the value of that input is to the neuron that receives it. Weights can also be negative, which implies that the input inhibits, rather than activates, a specific neuron.
  • The output of the test scripts will be determined by the inputs along with their weights. In some neural networks there are additional layers between the input and the output layers, called hidden layers. In the above architecture, a part of the common libraries can constitute the hidden layers. These hidden layers have the freedom to form their own representation of the input data before it is passed on to the output layers, making such networks / programs very complicated and equally powerful.
  • Each test case / script in a test automation suite can be mapped to a neuron. Ultimately, all these scripts constitute the neural network.

Import of Neural Networks in Test Automation:

Provided they are used correctly, neural networks would address some of the most common issues in test automation.

  1. The most important of these is verifying the correctness of the test result. Currently, this is the biggest drawback of any test automation script. Existing test automation scripts are unable to verify the correctness of the test results and hence require manual intervention time and again. The inherent intelligence and pattern recognition abilities of neural networks would solve this issue to a large extent. Such a test automation suite would be fed with huge amounts of test data. Over a period of time, the test scripts would be able to distinguish between the different test data and be in a position to verify the correctness of the output, based on the input data. In this case, the main functionality of the test automation suite would be:
    1. Monitoring discrepancies between the expected and the actual output while performing regression testing.
    2. Adapting to subsequent releases of the software and updating the output of the test results accordingly (after the initial learning is complete).
  2. Differentiating between genuine errors and “false alarms” and vice versa. This can be very useful in extremely complex software where errors and intricate functionalities are often transposed. An appropriate combination of hidden layer(s) and firing rules is the right tool to handle such a scenario. As stated previously, the hidden layers are free to construct their own representations of the input, whereas the firing rules help a neural network (automation script) react in a sensible manner to any input that it did not encounter in its training program. This can help the automation suite to logically differentiate between functionality and defect. An appropriate combination of logic, intelligence and channeled learning would make such an automation suite a far superior tester.
  3. Traversing paths and examining possibilities previously untouched. It is common knowledge that it is impossible for any manual tester to build and execute every possible test case; in other words, software is never completely tested in the true sense. Scripts with built-in artificial intelligence would learn and adapt continuously. Over a period of time they might build upon the existing test cases, and ultimately, 100% software testing might become a reality.

Advantages of Neural Networks:

  1. These scripts, if designed, developed and implemented correctly, can substitute for manual testing.
  2. The inherent parallel nature and real time response of neural networks makes them extremely robust.
  3. Training neural networks does not require domain knowledge, only the correct training data.
  4. Unlike the traditional approach, neural networks are not based on a fixed set of rules. Hence, these can be used in complex scenarios involving a lot of dynamics.
  5. Such scripts have a high degree of fault tolerance.

Disadvantages of Neural Networks:

  1. Technical and highly skilled personnel are required to create and maintain the scripts. Resources with programming experience are better suited to this kind of automation.
  2. Teaching and mentoring neural networks can be a time consuming activity.

Thus, we can see that implementing neural networks in test automation not only retains all the advantages of generic test automation, it also trains the automation scripts to think intelligently. Such scripts not only know what to do, they also know how to go about it without manual intervention. Requiring minimal manual intervention, such scripts are capable of replacing manual testing to a large extent.

Conclusion

In this article we discussed two complementary methodologies for test automation: generic test automation and test automation using neural networks. Each has its own distinct advantages and a few disadvantages, but the advantages far outweigh them. Both methodologies take test automation scripts a step towards becoming something much more than dumb workhorses performing mundane tasks. Successful test automation is not just about writing scripts; it is about writing intelligent and adaptive scripts that can provide output comparable to manual testing. Sadly, test automation still has a long way to go.

Resources

Courtesy : http://www.indicthreads.com/1336/using-intelligent-test-automation-techniques/

Techie News : HCL Technologies to replace employees doing simple software testing with domain experienced staff

BENGALURU: HCL Technologies expects a drastic change in its employee structure over the next couple of years as automation, artificial intelligence (AI) and other disruptive technologies increasingly make low-level engineers doing repetitive manual tasks redundant.

The country’s fourth-largest software exporter, which employs over 95,000 engineers, believes it will have an “hour glass structure” with more engineers with “domain work experience”, replacing the current “pyramid structure” where a lot of employees do simple software testing and provide information technology support.

“The volume of work which is being done at the lowest layer of pyramid is getting automated,” said C Vijayakumar, corporate vice-president for infrastructure services delivery at HCL Technologies. “People acquiring new skills in new technology areas are the future. So, there will be flattening of the pyramid,” he told ET in an interview last month. That means the Noida-based software major will eventually have more engineers with niche skills instead of its current army of engineers for information technology support.

Just a few months ago, ET had reported that Wipro, the country’s third-largest software exporter, is working on a similar model and has started a three-year exercise to become a lean and agile company.

Wipro, which employs over 150,000 employees, aims to slim down by about a third without resorting to mass layoffs, four executives familiar with the development had told ET. The Bengaluru-based firm had declined to comment.

HCL Technologies’ acknowledgement of a structural transformation is a pointer to how artificial intelligence and automation are affecting the way large software exporters offer IT support to their clients.

Some analysts say that more Indian firms talking about AI and automation suggests adoption of these technologies is starting to mature and that their impact will be “hugely disruptive”. “Suffice it to say this will be an uneven process,” said Thomas Reuner, principal analyst at Ovum, a London-based IT research firm. “The consensus appears to be that automation and AI will be leveraged to reduce FTEs (full-time equivalents, which indicate the workload of a full-time employee during a fixed period) for lower value activities and the focus will shift toward hiring for higher value tasks.” Analysts say it’s important for these big companies to lead well-rounded “discussions on how these technologies will impact governance, testing and most importantly hiring”.

To be sure, HCL Technologies (with revenues of $5.37 billion) and Wipro making more investments in building IP-led automation platforms puts the spotlight on companies looking to increase their “nonlinear revenues”, that is, generating more business without a corresponding increase in headcount.

“We are seeing early trends which makes it certain that in the future we (will) need more people with domain experience,” said an executive at HCL Technologies, who requested not to be identified. “It is difficult to say what we will be like in three or five years (headcount wise). But certainly, we won’t be this big,” the person said.

A spokesman for the company declined to comment, as HCL Technologies is in a quiet period before it declares its third-quarter numbers at the end of the month.

For over a decade now, the IT industry, which employs more than three million people in the country, has been a driving force for creating jobs in the formal economy.

While the pace of hiring by these big companies is slowing, new opportunities are also opening up in the form of jobs at startups. However, there seems to be a division among the big four IT companies.

The top two, TCS and Infosys, which together employ more than 460,000 people, have maintained that they don’t expect automation and AI to replace any jobs.

Wipro and HCL, which together have over 250,000 employees, on the contrary, seem to be early movers in trying to cope with structural changes in the technology business.

At HCL Technologies, early signs do suggest that the company is already cutting excess flab in areas where experts believe more work is being taken over by machines.

In the year ended September 2014, the company increased the headcount in its support function by less than 5% to 8,493 from 8,091 a year earlier, while the number of people in technical functions increased over 10% to 87,029 from 79,105 during the same period.

Courtesy : Economic Times , Jan 23 , 2015

Jenkins – Migration from SVN to Git.

Hi all,

I’ve been working on migrating one of our Jenkins projects from SVN to a Git repository as the Source Code Manager (SCM). These are the steps I needed to perform to complete the migration:

  • Install the ‘Git Plugin’ on the Jenkins master setup. For more information on the plugin, go to https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin
  • Install the latest ‘git’ package on the master as well as the slave.

Both my master and slave machines are Linux based, so by default a sudo apt-get install git or a sudo yum install git gets us git version 1.7.1. But Jenkins requires a git package newer than 1.7.10, so I installed the latest version, 2.1.3. For this we need to get the source files and build the package manually. The following steps install Git 2.1.3 for you:

1. Make sure Git is version 1.7.10 or higher, for example 1.7.12 or 1.8.4:

git --version

2. If not, install it from source. First remove the system Git:

yum -y remove git

3. Install the prerequisite packages for compiling Git:

yum install zlib-devel perl-CPAN gettext curl-devel expat-devel gettext-devel openssl-devel

4. Download and extract it (the kernel.org URL below is the standard home of Git release tarballs):

mkdir /tmp/git && cd /tmp/git
curl -O https://www.kernel.org/pub/software/scm/git/git-2.1.3.tar.gz
tar -xzf git-2.1.3.tar.gz

5. Configure & install:

cd git-2.1.3/
./configure
make
make prefix=/usr/local install

6. Make sure Git is in your $PATH:

which git
You might have to log out and log in again for the $PATH change to take effect.
Once the git packages are installed, we need to configure the Jenkins master for the git installation.
To do this, go to Jenkins Home > Manage Jenkins > Configure System and enter the git installation location in the Git section of the page:
[Screenshot: the Git installation entry on the Configure System page]
Once this is done, select ‘Git’ as the SCM in your Jenkins job and fill in the fields such as the repository URL, branch specifier, etc.
I set the ‘Additional Behaviours’ field to ‘Wipe out repository and force clone’. This can be set according to your requirements.
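Roughly speaking, that behaviour amounts to the following on the build node (a sketch only; <repo-url> stands in for your actual repository):

rm -rf "$WORKSPACE"                # discard the old checkout completely
git clone <repo-url> "$WORKSPACE"  # fresh, forced clone on every build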
This did it for me. 🙂
Let me know if this works for you as well, and whether you ran into roadblocks anywhere.
Regards,
VJ

In the shell, what is “2>&1”?

1 is stdout. 2 is stderr.

Here is one way to remember this construct (although it is not entirely accurate): at first, 2>1 may look like a good way to redirect stderr to stdout. However, it will actually be interpreted as “redirect stderr to a file named 1”. The & indicates that what follows is a file descriptor and not a filename. So the construct becomes: 2>&1.
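A couple of quick illustrations (standard shell behaviour; out.log is just an example file name):

$ ls /nonexistent > out.log 2>&1    # stdout goes to out.log, then stderr is sent to the same place
$ ls /nonexistent 2>&1 | wc -l      # stderr is merged into stdout, so the pipe sees the error message

Order matters here: writing ls /nonexistent 2>&1 > out.log instead would point stderr at the terminal (where stdout was aimed at that moment) and only then redirect stdout to the file.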

Courtesy: Stackoverflow

Stronger Fundamentals : tee command in shell scripts

Tee Command Usage Examples

The tee command is used to store and view (both at the same time) the output of another command.

It writes to STDOUT and to a file at the same time, as shown in the examples below.

Example 1: Write output to stdout, and also to a file

The following command displays output only on the screen (stdout).

$ ls

The following command writes the output only to the file and not to the screen.

$ ls > file

The following command (with the help of tee command) writes the output both to the screen (stdout) and to the file.

$ ls | tee file

Example 2: Write the output to two commands

You can also use tee command to store the output of a command to a file and redirect the same output as an input to another command.

The following command will take a backup of the crontab entries, and pass the crontab entries as input to the sed command, which will do the substitution. After the substitution, the result is installed as the new crontab.

$ crontab -l | tee crontab-backup.txt | sed 's/old/new/' | crontab -

Misc Tee Command Operations

By default the tee command overwrites the file. You can instruct tee to append to the file instead using the -a option, as shown below.

$ ls | tee -a file

You can also write the output to multiple files as shown below.

$ ls | tee file1 file2 file3
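One more handy idiom, included here as an extra example: redirections such as > are performed by your own (unprivileged) shell, but tee itself can run under sudo, so piping into sudo tee is the usual way to write to a root-owned file (the file path below is just an illustration).

$ echo 'nameserver 8.8.8.8' | sudo tee -a /etc/resolv.conf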

***********************************************************************************************************************************************************

Courtesy: http://linux.101hacks.com/

Stronger Fundamentals: The null Device in Unix Systems

/dev/null is a simple device (implemented in software and not corresponding to any hardware device on the system).

  • /dev/null looks empty when you read from it.
  • Writing to /dev/null does nothing: data written to this device simply “disappear.”

Often a command’s standard output is silenced by redirecting it to /dev/null, and this is perhaps the null device’s commonest use in shell scripting:

command > /dev/null
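To discard error output as well, you can combine this with the 2>&1 construct described earlier:

command > /dev/null 2>&1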

Another common use, which you may see in guides, is emptying files with cat /dev/null > file; this uses /dev/null differently. cat /dev/null outputs the “contents” of /dev/null, which is to say its output is blank. > messages (or > wtmp) causes this blank output to be redirected to the file on the right side of the > operator.

Since messages and wtmp are regular files (rather than, for example, device nodes), they are turned into blank files (i.e., emptied).

You could use any command that does nothing and produces no output, to the left of >.

An alternative way to clear these files would be to run:

echo -n > messages
echo -n > wtmp

The -n flag is required, or echo writes a newline character.

(This always works in bash. And the default sh in every GNU/Linux distribution and other popularly used Unix-like systems supports the -n flag in its echo builtin. But jlliagre is right that echo -n should be avoided in a truly portable shell script, as it is not required to work. Maybe that is why some guides teach the cat /dev/null way instead.)

The echo -n way is equivalent in its effects but arguably is a better solution, in that it’s simpler.
cat /dev/null > file opens three “files”:

  • The cat executable (usually /bin/cat), a regular file.
  • The /dev/null device.
  • file

In contrast, echo -n > file opens only file (echo is a shell builtin).

Although this might be expected to improve performance, that is not the real benefit, at least not when just running a couple of these commands by hand. Instead, the benefit is that it is easier to understand what is going on.

Redirection and the trivial (blank/empty) command.

As jlliagre has pointed out, this can be shortened further by simply omitting the command to the left of > altogether. While you cannot omit the right side of a > or >> expression, the blank command is valid (it is the command you run when you just press Enter at an empty prompt), and in omitting the left side you are just redirecting the output of that command.

  • Note that this output does not contain a newline. When you press Enter at a command prompt, whether or not you have typed anything, the shell (running interactively) prints a newline before running the issued command. This newline is not part of the command’s output.

Redirecting from the blank command (instead of from cat /dev/null or echo -n) looks like:

>messages

>wtmp
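A closely related and fully portable variant uses the shell’s no-op builtin, :, as the trivial command:

: > messages
: > wtmp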

******************************************************************************************************************************************************

Courtesy: http://askubuntu.com/questions/514748/what-does-dev-null-mean-in-a-shell-script