Shell Scripting – Grep

Grep searches the named input FILEs (or standard input if no files are named, or if a single hyphen (-) is given as the file name) for lines containing a match to the given PATTERN. By default, grep prints the matching lines.

More info can be seen in the grep manual page (man grep).

Eg : grep "STRING" /path/to/file | cut -d'"' -f2

        Search for a string in a file. The "pipe" (|) connects the stdout of one command to the stdin of another; cut splits each line on the delimiter " and selects the second field (-f2).
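As a runnable sketch of the pipe above (the file name, contents, and search string here are made up purely for illustration):

```shell
# Create a throwaway file with quoted values (sample data for illustration).
printf 'name="alice"\nrole="admin"\n' > /tmp/grep_demo.txt

# Search for a string, then split on the double-quote delimiter
# and take the second field.
grep 'name' /tmp/grep_demo.txt | cut -d'"' -f2
```

The first field (-f1) would be everything before the first quote (`name=`), so -f2 yields the quoted value itself.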

Selenium Error: StaleElementReferenceException

Most automation tools depend on the concept of a page having finished loading. With AJAX and Web 2.0 this has become a grey area. META tags can refresh the page and JavaScript can update the DOM at regular intervals.

For Selenium this means that a StaleElementReferenceException can occur. A StaleElementReferenceException occurs if I find an element, the DOM gets updated, and then I try to interact with the element.

Actions like:

driver.findElement(By.id("foo")).click();

are not atomic. Just because it was all entered on one line, the code generated is no different than:

By fooID = By.id("foo");
WebElement foo = driver.findElement(fooID);
foo.click();

If JavaScript updates the page between the findElement call and the click call then I’ll get a StaleElementReferenceException. It is not uncommon for this to occur on modern web pages. It will not happen consistently, however. The timing has to be just right for this bug to occur.

Generally speaking, if you know the page has Javascript which automatically updates the DOM, you should assume a StaleElementException will occur. It might not occur when you are writing the test or running it on your local machine but it will happen. Often it will happen after you have 5000 test cases and haven’t touched this code for over a year. Like most developers, if it worked yesterday and stopped working today you’ll look at what you changed recently and never find this bug.

So how do I handle it? I use the following click method:

public boolean retryingFindClick(By by) {
        boolean result = false;
        int attempts = 0;
        while(attempts < 2) {
            try {
                driver.findElement(by).click();
                result = true;
                break;
            } catch(StaleElementReferenceException e) {
                // element went stale between find and click; retry
            }
            attempts++;
        }
        return result;
}

This will attempt to find and click the element. If the DOM changes between the find and click, it will try again. The idea is that if it failed and I try again immediately the second attempt will succeed. If the DOM changes are very rapid then this will not work. At that point you need to get development to slow down the DOM change so this works or you need to make a custom solution for that particular project.

The method takes as input a locator for the element you want to click. If it makes it past the click call, it returns true. All other failures return false.

Personally, I would argue this should always work. If the developers are refreshing the page too quickly then it will be overloading the browser on the client machine.
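The retry pattern above can be sketched outside Selenium with a plain Java stand-in; the flakyClick action and the exception it throws below are simulated (they are not Selenium APIs), solely to show how the retry loop absorbs a one-time failure:

```java
public class RetryDemo {
    static int calls = 0;

    // Simulated action: fails the first time, succeeds afterwards,
    // mimicking a DOM update between findElement and click.
    static void flakyClick() {
        calls++;
        if (calls == 1) {
            throw new IllegalStateException("stale element");
        }
    }

    // Same shape as retryingFindClick: try, catch, retry once.
    static boolean retryingClick() {
        int attempts = 0;
        while (attempts < 2) {
            try {
                flakyClick();
                return true;
            } catch (IllegalStateException e) {
                // action failed; fall through and retry
            }
            attempts++;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(retryingClick()); // prints "true"
    }
}
```

The first call throws, the catch swallows it, and the second attempt succeeds; only two consecutive failures would return false.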


Migration from Selenium RC to Webdriver – Executing Javascript APIs

Executing Javascript Doesn’t Return Anything

WebDriver’s JavascriptExecutor will wrap all JS and evaluate it as an anonymous function. This means that you need to use the “return” keyword:

String title = selenium.getEval("browserbot.getCurrentWindow().document.title");

becomes:

String title = (String) ((JavascriptExecutor) driver).executeScript("return document.title;");

My Life On February 23, 2030

I just woke up at my optimal REM sleep time calculated by bedroom sensors. My bed reads my brain waves all night and sensors in the room monitor the amount of oxygen that my lungs converted into carbon dioxide. I go to the bathroom and anything leaving my body is instantly analyzed and uploaded to my personal medical data cloud.

My breakfast food has just been 3D printed from ingredients genetically modified to decrease my cholesterol and glucose to optimal levels. My ham and cheese omelette tastes delicious and no animals were killed; it has become forbidden in most countries to kill any live animal. No need to be vegan to avoid killing anymore.

I have an appointment with Elon Musk to offer him an investment in my electric plane startup and even though business meetings are mostly done via hologram representations, I still really enjoy in-person meetings. My self-driving Tesla takes me to the nearest hyperloop station where I can get to L.A. in twenty minutes, so it’s not a big deal to travel anymore. It has become really expensive to have your own private car as governments only want self-driving cars everywhere — these have reduced deaths from auto accidents by 95%. Private cars might soon be entirely forbidden as they cause too many problems. They’re a very expensive luxury for the time being.

There is no traffic anywhere as there are very few cars. There is very little parking space in cities as most cars are self driving. Uber has replaced all drivers with self driving cars, which are pretty much always moving. I remember when we used to have those idle parked cars everywhere. Tens of millions of jobs in the car industry have disappeared, car and truck drivers, car insurance companies, car dealers and repair stations, all gone. You can cross any street without even looking as the self driving cars’ sensors became so good cars just stop automatically as they “see” you about to cross the street. A lot of the data needed to create the machine learning algorithms behind self-driving cars was created by people in once-poor countries through an NGO called Samasource, which closed its doors when extreme poverty was eradicated in the last decade.

As I get to Elon’s office, I get my messages projected on the latest version of Google Glass that Tony Fadell has managed to fit in a contact lens. Voice recognition has become so good that nobody types anything anymore — you can just say what you want to answer. Since Tony and I are good friends, he let me test the latest beta version of his mind-reading software update for Glass. Now I just think my reply and it shows up instantly on my retina and I can just think “send it” to get it to anyone I want. Nobody has a smartphone anymore, though I remember what it was like to use my thumbs to send text messages.

I look around and remember how we used to see so many ads everywhere — they completely disappeared. Marketing is now only highly personalized and targeted to your specific needs. I touch a door handle and it senses my hands are a little dry; as I opted-in for personalized marketing, I get an offer to try a new hand cream by Laxmi. I accept and it is instantly delivered to me by an Amazon drone. Drones are so small and silent you barely see them anymore. Laxmi has become very successful by distributing most of the profits of their beauty products to people who would otherwise be poor. Most businesses that don’t have a social or environmental mission have died out, as nobody wants to buy their products.

There is no more hunger in the world as we can 3D print pretty much any food. Extreme poverty disappeared when governments around the world signed onto the Universal Floor movement, led by the Gates Foundation and funded by the Billionaire’s Pledge that became famous decades ago. I give 5% of my income automatically as a “voluntary tax” to the Universal Floor Foundation, which is one of the most popular charities on earth and posts their results in real time to my Glass feed. We also eradicated illiteracy — I remember when education became universal and free through technology. In large cities, people still send their kids to school but anywhere else free hologram teachers are always available.

I just arrived at Elon’s office and he invites me to space for the afternoon! We take the latest spaceship resulting from the joint venture Richard Branson and Elon Musk launched a few years ago, GalaktiX. I can see the Earth from above for the first time, a planet that thanks to advances in technology is verdant and blue, despite the old threat of global warming. Space travel is amazing and has become much more affordable — a few hours’ trip to space will cost only about one thousand dollars when they launch this new product.

I feel a little sick when we come back. I don’t even have to call my virtual doctor as she was already warned by the results of my live body analysis sensors. I am testing this new under-the-skin fitbit nanodevice — it communicates with Google Glass and sends data continuously to my doctor. The prescription is delivered on a drone and I feel better in a few hours. Time to head back home.

On my way back, I get a notification that someone is trying to deliver something at my home. The dropcams identify a trusted FedEx person and I only have to think about opening the garage door so they can put the huge delivery inside. It is a 1982 mechanical Haunted House pinball machine my father gave me for Christmas — these throwbacks to the analog past mean a lot to me.

With all the 3D printing and automation, handmade products have become the most in-demand objects. A huge number of jobs are constantly being created around anything made entirely by humans with raw materials. Artisanal producers are all the rage — there is even a quality label that has become the new status symbol: “Certified All-Human Made”. Art explodes as people have much more free time and can make a good living out of it. I have learned to play the guitar, and use my Glass interface to practice while I ride back up to San Francisco.

Back home the first thing I do is meditate for an hour entirely disconnected, a practice I started 15 years ago. Creating space, reconnecting with my body and my mind, slowing down when everything is fast and disconnecting when everything is more connected has become as important for me as taking a shower.


The Conscious Web: When the Internet of Things Becomes Artificially Intelligent


Founder and CEO, Family Online Safety Institute

When Stephen Hawking, Bill Gates and Elon Musk all agree on something, it’s worth paying attention.

All three have warned of the potential dangers that artificial intelligence or AI can bring. The world’s foremost physicist, Hawking said that the full development of AI could “spell the end of the human race.” Musk, the tech entrepreneur who brought us PayPal, Tesla and SpaceX, described artificial intelligence as our “biggest existential threat” and said that playing around with AI was like “summoning the demon.” Gates, who knows a thing or two about tech, puts himself in the “concerned” camp when it comes to machines becoming too intelligent for us humans to control.

What are these wise souls afraid of? AI is broadly described as the ability of computer systems to ape or mimic human intelligent behavior. This could be anything from recognizing speech, to visual perception, making decisions and translating languages. Examples run from Deep Blue, which beat chess champion Garry Kasparov, to the supercomputer Watson, which outguessed the world’s best Jeopardy player. Fictionally, we have “Her,” the movie that depicts the protagonist, played by Joaquin Phoenix, falling in love with his operating system, seductively voiced by Scarlett Johansson. And coming soon, “Chappie” stars a stolen police robot who is reprogrammed to make conscious choices and to feel emotions.

An important component of AI, and a key element in the fears it engenders, is the ability of machines to take action on their own without human intervention. This could take the form of a computer reprogramming itself in the face of an obstacle or restriction. In other words, to think for itself and to take action accordingly.

Needless to say, there are those in the tech world who have a more sanguine view of AI and what it could bring. Kevin Kelly, the founding editor of Wired magazine, does not see the future inhabited by HALs – the homicidal computer on board the spaceship in “2001: A Space Odyssey.” Kelly sees a more prosaic world that looks more like Amazon Web Services, a cheap, smart utility which is also exceedingly boring simply because it will run in the background of our lives. He says AI will enliven inert objects in the way that electricity did over a hundred years ago. “Everything that we formerly electrified, we will now cognitize.” And he sees the business plans of the next 10,000 start-ups as easy to predict: “Take X and add AI.”

While he acknowledges the concerns about artificial intelligence, Kelly writes, “As AI develops, we might have to engineer ways to prevent consciousness in them – our most premium AI services will be advertised as consciousness-free.” (my emphasis). And this from the author of a book called, “What Technology Wants”.

Running parallel to the extraordinary advances in the field of AI is the even bigger development of what is loosely called, The Internet of Things or IoT. This can be broadly described as the emergence of countless objects, animals and even people who have uniquely identifiable, embedded devices that are wirelessly connected to the Internet. These “nodes” can send or receive information without the need for human intervention. There are estimates that there will be 50 billion connected devices by 2020. Current examples of these “smart” devices include Nest thermostats, wifi-enabled washing machines and the increasingly connected cars with their built-in sensors that can avoid accidents and even park for you.

The US Federal Trade Commission is sufficiently concerned about the security and privacy implications of the Internet of Things that it has conducted a public workshop and released a report urging companies to adopt best practices, “bake in” procedures to minimize data collection, and ensure consumers’ trust in the new networked environment.

Tim O’Reilly, coiner of the phrase “Web 2.0”, sees the Internet of Things as the most important online development yet. He thinks the name is misleading – the IoT will simply mean giving people greater access to human intelligence; it is “really about human augmentation” and we will shortly “expect our devices to anticipate us in all sorts of ways”. He uses the “intelligent personal assistant”, Google Now, to make his point.

So what happens when these millions of embedded devices connect to artificially intelligent machines? What does AI + IoT = ? Will it mean the end of civilization as we know it? Will our self-programming computers send out hostile orders to the chips we’ve added to our everyday objects? Or is this just another disruptive moment, similar to the harnessing of steam or the splitting of the atom? An important step in our own evolution as a species, but nothing to be too concerned about?

The answer may lie in some new thinking about consciousness. As a concept, as well as an experience, consciousness has proved remarkably hard to pin down. We all know that we have it (or at least we think we do), but scientists are unable to prove that we have it or, indeed, exactly what it is and how it arises. Dictionaries describe consciousness as the state of being awake and aware of our own existence. It is an “internal knowledge” characterized by sensation, emotions and thought.

Just over 20 years ago, an obscure Australian philosopher named David Chalmers created controversy in philosophical circles by raising what became known as the Hard Problem of Consciousness. He asked how the grey matter inside our heads gave rise to the mysterious experience of being. What makes us different than, say, a very efficient robot, one with, perhaps, artificial intelligence? And are we humans the only ones with consciousness?

Some scientists propose that consciousness is an illusion, a trick of the brain. Still others believe we will never solve the consciousness riddle. But a few neuroscientists think we may finally figure it out, provided we accept the remarkable idea that computers or the Internet might one day become conscious.

In an extensive Guardian article, the author Oliver Burkeman writes that Chalmers and others have put forth a notion that all things in the universe might be or potentially be conscious “providing the information it contains is sufficiently interconnected and organized.” So could an iPhone or a thermostat be conscious? And, if so, could we have a “Conscious Web”?

Back in the earliest days of the web, the author, Jennifer Cobb Kreisberg wrote an influential piece entitled, “A Globe, Clothing Itself with a Brain.” In it she described the work of a little known Jesuit priest and paleontologist, Teilhard de Chardin, who fifty years earlier described a global sphere of thought, the “living unity of a single tissue” containing our collective thoughts, experiences and consciousness.

Teilhard called it the “noosphere” (noo is Greek for mind). He saw it as the evolutionary step beyond our geosphere (physical world) and biosphere (biological world). The informational wiring of a being, whether it is made up of neurons or electronics, gives birth to consciousness. As the diversification of nervous connections increases, de Chardin argued, evolution is led towards greater consciousness. Or as John Perry Barlow, Grateful Dead lyricist, cyber advocate and Teilhard de Chardin fan said, “With cyberspace, we are, in effect, hard-wiring the collective consciousness”.

So, perhaps we shouldn’t be so alarmed. Maybe we are on the cusp of a breakthrough not just in the field of artificial intelligence and the emerging Internet of Things, but also in our understanding of consciousness itself. If we can resolve the privacy, security and trust issues that both AI and the IoT present, we might make an evolutionary leap of historic proportions. And it’s just possible Teilhard’s remarkable vision of an interconnected “thinking layer” is what the web has been all along.


Python – Accessing Command Line Arguments

Python provides a getopt module that helps you parse command-line options and arguments.

$ python test.py arg1 arg2 arg3

The Python sys module provides access to any command-line arguments via sys.argv. This serves two purposes:

  • sys.argv is the list of command-line arguments.
  • len(sys.argv) is the number of command-line arguments.

Here sys.argv[0] is the program, i.e., the script name.


Consider the following script (saved, for example, as test.py):


import sys

print('Number of arguments:', len(sys.argv), 'arguments.')
print('Argument List:', str(sys.argv))

Now run the above script as follows:

$ python test.py arg1 arg2 arg3

This will produce the following result:

Number of arguments: 4 arguments.
Argument List: ['test.py', 'arg1', 'arg2', 'arg3']

NOTE: As mentioned above, the first element is always the script name, and it is counted in the number of arguments.
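The getopt module mentioned at the top of this section can parse option flags out of an argument list like sys.argv[1:]. A minimal sketch — the -o/--output and -h/--help option names here are made up purely for illustration:

```python
import getopt

# Parse a hypothetical argument vector: -o takes a value, -h does not.
# In a real script, argv would be sys.argv[1:].
argv = ['-o', 'out.txt', 'positional']
opts, args = getopt.getopt(argv, 'ho:', ['help', 'output='])

print(opts)   # [('-o', 'out.txt')]
print(args)   # ['positional']
```

getopt returns parsed (flag, value) pairs first, followed by the remaining positional arguments.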


Additional notes from VJ:

If arguments are provided along with the python file, then within the python file they can be accessed like this:

sys.argv[1]  -> for 2nd argument
sys.argv[2] -> for 3rd argument

Note: sys.argv[0] is the python script name as mentioned above