Here’s how Evernote moved 3 petabytes of data to Google’s cloud

Evernote decided last year that it wanted to move away from running its own data centers and start using the public cloud to operate its popular note-taking service. On Wednesday, it announced that the lion’s share of the work is done, save for some remaining user attachments.

The company signed up to work with Google, and as part of the migration process, the tech titan sent a team of engineers (in one case, bearing doughnuts) over to work with its customer on making sure the process was a success.

Evernote wanted to take advantage of the cloud to help with features based on machine learning that it has been developing. It also wanted to leverage the flexibility that comes from not having to run a data center.

The move is part of a broader trend of companies moving their workloads away from data centers that they own and increasingly using public cloud providers. While the transition required plenty of work and adaptation, Evernote credited Google for pitching in to help with the migration.

Why move to the cloud?

There was definitely plenty of work to do. Evernote’s backend was built on the assumption that its application would be running on the company’s twin California data centers, not in a public cloud. So why go through all the work?

Many of the key drivers behind the move will be familiar to cloud devotees. Evernote employees had to spend time maintaining the company’s data center, doing things like replacing hard drives, moving cables and evaluating new infrastructure options.

While those functions were key to maintaining the overall health and performance of the Evernote service, they weren’t providing additional value to customers, according to Ben McCormack, the company’s vice president of operations.

“We were just very realistic that with a team the size of Evernote’s operations team, we couldn’t compete with the level of maturity that the cloud providers have got…on provisioning, on management systems, et cetera,” McCormack said. “We were always going to be playing catch-up, and it’s just a crazy situation to be in.”

When Evernote employees thought about refreshing a data center, one of the key issues they encountered was that they didn’t know what they would need from a data center in five years, McCormack said.

Evernote had several public cloud providers it could choose from, including Amazon Web Services and Microsoft Azure, which are both larger players in the public cloud market. But McCormack said the similarities between the company’s current focus and Google’s areas of expertise were important to the choice. Evernote houses a large amount of unstructured data, and the company is looking to do more with machine learning.

“You add those two together, Google is the leader in that space,” McCormack said. “So effectively, I would say, we were making a strategic decision and a strategic bet that the areas that are important to Evernote today, and the areas we think will be important in the future, are the same areas that Google excels in.”

Machine learning was a highlight of Google’s platform for Evernote CTO Anirban Kundu, who said that higher-level services offered by Google help provide the foundation for new and improved features. Evernote has been driving toward a set of new capabilities based on machine learning, and Google services like its Cloud Machine Learning API help with that.

While cost is often touted as a benefit of cloud migrations, McCormack said that it wasn’t a primary driver of Evernote’s migration. While the company will be getting some savings out of the move, he said that cost wasn’t a limitation for the transition.

The decision to go with Google over another provider like AWS or Azure was driven by the technology team at Evernote, according to Greg Chiemingo, the company’s senior director of communications. He said in an email that CEO Chris O’Neill, who was at Google for roughly a decade before joining Evernote, came in to help with negotiations after the decision was made.

How it happened

Once Evernote signed its contract with Google in October, the clock was ticking. McCormack said that the company wanted to get the migration done before the new year, when users looking to get their life on track hammer the service with a flurry of activity.

Before the start of the year, Evernote needed to migrate 5 billion notes and 5 billion attachments. Because of metadata, like thumbnail images, included with those attachments, McCormack said that the company had to migrate 12 billion attachment files. Not only that, but the team couldn’t lose any of the roughly 3 petabytes of data it had. Oh yeah, and the Evernote service needed to stay up the entire time.

McCormack said that one of the Evernote team’s initial considerations was figuring out what core parts of its application could be entirely lifted and shifted into Google’s cloud, and what components would need to be modified in some way as part of the transition.

Part of the transformation involved reworking the way that the Evernote service handled networking. It previously used UDP Multicast to handle part of its image recognition workflow, which worked well in the company’s own data center where it could control the network routers involved.

But that same technology wasn’t available in Google’s cloud. Kundu said Evernote had to rework its application to use a queue-based model leveraging Google’s Cloud Pub/Sub service, instead.
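The shape of that change can be illustrated with a toy in-process sketch, where a shared queue stands in for a Cloud Pub/Sub topic and subscription (the job and worker names are invented, and this is not Evernote's actual code): instead of a multicast datagram reaching every recognition node, each job is published once and pulled by exactly one idle worker.

```python
import queue
import threading

# A shared work queue stands in for a Pub/Sub topic + subscription:
# the producer publishes once, and each job is pulled by exactly one worker.
jobs = queue.Queue()
results = []
results_lock = threading.Lock()

def recognition_worker(worker_id):
    """Pull image jobs until a sentinel (None) arrives."""
    while True:
        job = jobs.get()
        if job is None:          # shutdown signal
            jobs.task_done()
            break
        with results_lock:
            results.append((worker_id, job))
        jobs.task_done()

workers = [threading.Thread(target=recognition_worker, args=(i,)) for i in range(3)]
for w in workers:
    w.start()

# "Publish" ten image-recognition jobs, then one sentinel per worker.
for image_id in range(10):
    jobs.put(f"image-{image_id}")
for _ in workers:
    jobs.put(None)

jobs.join()
for w in workers:
    w.join()

print(len(results))  # every job was processed exactly once -> 10
```

The key behavioral difference from multicast is visible here: a queue delivers each message to one consumer, so the workers divide the load instead of all receiving every image.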

Evernote couldn’t just migrate all of its user data over and then flip a switch directing traffic from its on-premises servers to Google’s cloud in one fell swoop. Instead, the company had to rearchitect its backend application to handle a staged migration with some data stored in different places.
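A staged migration of that kind typically hinges on a routing layer that knows, per shard, where the data currently lives. Here is a minimal sketch of the idea (the shard names and lookup scheme are invented for illustration, not Evernote's actual design):

```python
# Hypothetical shard-location table; entries flip as shards are migrated.
shard_location = {
    "shard-001": "on_prem",
    "shard-002": "gcp",      # already migrated
    "shard-003": "on_prem",
}

def backend_for(user_id: int) -> str:
    """Route a request to wherever the user's shard currently lives."""
    shard = f"shard-{(user_id % 3) + 1:03d}"
    return shard_location[shard]

def mark_migrated(shard: str) -> None:
    """Flip a shard's location once its data is verified in the cloud."""
    shard_location[shard] = "gcp"

print(backend_for(1))   # user 1 -> shard-002 -> gcp
mark_migrated("shard-001")
print(backend_for(0))   # user 0 -> shard-001 -> now gcp
```

Because every request consults the table, shards can move one at a time while the service keeps answering from both environments.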

The good news is that the transition didn’t require changes to the client. Kundu said that was key to the success of Evernote’s migration, because not all of the service’s users upgrade their software in a timely manner.

Evernote’s engagement with Google engineers was a pleasant surprise to McCormack. The team was available 24/7 to handle Evernote’s concerns remotely, and Google also sent a team of its engineers over to Evernote’s facilities to help with the migration.

Those Google employees were around to help troubleshoot any technical challenges Evernote was having with the move. That sort of engineer-to-engineer engagement is something Google says is a big part of its approach to service.

For one particularly important part of the migration, Google’s engineers came on a Sunday, bearing doughnuts for all in attendance. More than that, however, McCormack said that he was impressed with the engineers’ collaborative spirit.

“We had times when…we had written code to interface with Google Cloud Storage, we had [Google] engineers who were peer-reviewing that code, giving feedback and it genuinely felt like a partnership, which you very rarely see,” McCormack said. “Google wanted to see us be successful, and were willing to help across the boundaries to help us get there.”

In the end, it took roughly 70 days for the whole migration to take place from the signing of the contract to its final completion. The main part of the migration took place over a course of roughly 10 days in December, according to McCormack.

Lessons learned

If there was one thing Kundu and McCormack were crystal clear about, it’s that even the best-laid plans require a team that’s willing to adapt on the fly to a new environment. Evernote’s migration was a process of taking certain steps, evaluating what happened, and modifying the company’s approach in response to the situation they were presented with, even after doing extensive testing and simulation.

Furthermore, they also pointed out that work on a migration doesn’t stop once all the bytes are loaded into the cloud. Even with extensive testing, the Evernote team encountered new constraints working in Google’s environment once it was being used in production and bombarded with activity from live Evernote users.

For example, Google uses live migration techniques to move virtual machines from one host to another in order to apply patches and work around hardware issues. While that happens incredibly quickly, the Evernote service under full load had some problems with it, which required (and still require) optimization.

Kundu said that Evernote had tested live migration prior to making the switch over to GCP, but that wasn’t enough.

When an application is put into production, user behavior and load on it might be different from test conditions, Kundu said. “And that’s where you have to be ready to handle those edge cases, and you have to realize that the day the migration happens or completes is not the day that you’re all done with the effort. You might see the problem in a month or whatever.”

Another key lesson, in McCormack’s opinion, is that the cloud is ready to handle any sort of workload. Evernote evaluated a migration roughly once every year, and it was only about 13 months ago that the company felt confident a cloud transition would be successful.

“Cloud has reached a maturity level and a breadth of features that means it’s unlikely that you’ll be unable to run in the cloud,” McCormack said.

That’s not to say it doesn’t require effort. While the cloud does provide benefits that Evernote wasn’t going to get from running its own data centers, the company still had to cede control of its environment and be willing to lose some of the telemetry it was used to getting from a private data center.

Evernote’s engineers also did a lot of work on automating the transition. Moving users’ attachments over from the service’s on-premises infrastructure to Google Cloud Storage is handled by a pair of bespoke automated systems. The company used Puppet and Ansible for migrating the hundreds of shards holding user note data.

The immediate benefits of a migration

One of the key benefits of Evernote’s move to Google’s cloud is the company’s ability to provide reduced latency and improved connection consistency to its international customers. Evernote’s backend isn’t running in a geographically distributed manner right now, but Google’s worldwide networking investments provide an improvement right away.

“We have seen page loading times reducing quite significantly across some parts of our application,” McCormack said. “I wouldn’t say it’s everywhere yet, but we are starting to see that benefit of the Google power and the Google reach in terms of bridging traffic over their global fiber network.”

Right now, the company is still in the process of migrating the last of its users’ attachments to GCP. When that’s done, however, the company will be able to tell its users that all the data they have in the service is encrypted at rest, thanks to the capabilities of Google’s cloud.

From an Evernote standpoint, the company’s engineers have increased freedom to get their work done using cloud services. Rather than having to deal with provisioning physical infrastructure to power new features, developers now have a whole menu of options when it comes to using new services for developing features.

“Essentially, any GCP functionality that exists, they’re allowed to access, play with — within constraints of budget, obviously — and be able to build against.”

In addition, the cloud provides the company with additional flexibility and peace of mind when it comes to backups, outages and failover.

What comes next?

Looking further out, the company is interested in taking advantage of some of Google’s existing and forthcoming services. Evernote is investigating how it can use Google Cloud Functions, which lets developers write snippets of code that then run in response to event triggers.
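In the Python runtime, a background-triggered function of that kind is just a callable that receives the event payload. A minimal local sketch (the function name is invented, and the event shape is heavily simplified compared with a real Cloud Storage event):

```python
def on_note_attachment(event, context=None):
    """React to a (simplified) storage event, e.g. a newly uploaded attachment.

    Real Cloud Storage events carry many more fields; only `name` and
    `size` are used here for illustration.
    """
    name = event["name"]
    size = int(event.get("size", 0))
    if size == 0:
        return f"skipped empty object {name}"
    return f"processed {name} ({size} bytes)"

# Locally we can invoke it directly with a fake event:
print(on_note_attachment({"name": "note-42.png", "size": "2048"}))
```

The appeal for a team like Evernote's is that the platform handles invocation and scaling; the developer only writes the body of the function.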

Evernote is also alpha testing some Google Cloud Platform services that haven’t been released or revealed to the public yet. Kundu wouldn’t provide any details about those services.

In a similar vein, Kundu wouldn’t go into details about future Evernote functionality yet. However, he said that there are “a couple” of new features that have been enabled as a result of the migration.



Minimum Viable Technology (MVT) — Move Fast & Keep Shipping

Technology teams can be the biggest asset or the worst bottleneck for a growing company, depending on the strategy they take. In the name of future-proofing engineering, technology teams can become a hurdle to the company’s goals. You can see the “hidden frustration” in Bezos’s words below:

Engineers should be fast acting cowboys instead of calm clear-headed computer scientists — Jeff Bezos, Founder & CEO, Amazon

Rampant Problem in the Industry: When the task is to build a bike, the product and technology teams plan for a product which can later run on a motor, seat four people, sail the sea and even fly in the future. This hypothetical building of castles in the air distracts focus from the real problem to be fixed. This is what Bezos suggests refraining from, as it wastes resources and agonizingly delays the time to market.

Being defensive, the Product/Technology teams usually build a cannon for killing a bird.

The Minimum Viable Product (MVP) philosophy evolved to avoid this “unnecessary over-thinking and over-preparation” problem, which plagued products in all companies. It encouraged building the minimum required at a certain point in time and then iterating and improving it going forward. The MVP approach enables a much-needed strategy of fast experimentation: fail fast and invest where needed.

No such philosophy evolved for technology. Therefore, the decades-old defensive and paranoid philosophy still prevails (it was much needed during the older 1–2-year-long waterfall releases). This becomes a competitive disadvantage for startups, which are usually fighting for survival or growing fast.

The fundamental problem is that engineers blindly copy large companies’ strategies, considering them to be the standard. Corporates and startups differ widely in their needs around scale, brand, speed, the impact of a feature, the loss caused by a bug, etc. Startups enjoy more freedom to make mistakes, and they should exploit that to their benefit.

Strategies used in big companies are more often irrelevant and even detrimental to a small growing company’s interests.

Minimum Viable Technology: The solution to the above problems is to build the minimum technology that makes the product and its foreseeable further iterations viable. Make it live as soon as possible, then iterate and improve it based on real usage learnings. Every company is at a different stage of evolution: something that is MVT for a big company can be over-engineering for a startup.

If the task is to kill a bird, we should build a catapult/small-gun to begin with. If that becomes successful and there is a need to kill more or bigger animals, then bigger-guns/cannons should be built as required.

There is nothing so useless as doing efficiently that which should not be done at all. ~ Peter Drucker

Startups experiment a lot, and only a few of those experiments stand the test of time. As per the 80–20 rule, only the successful 20% should get deeper technology investments.

Principles of Minimum Viable Technology (MVT):

  • Most decisions can be reversed or fixed easily. Choose wisely by bucketing each decision properly as reversible or irreversible, and judiciously decide how much to prepare for each case. (Read Jeff Bezos’ two types of decisions.)

It’s important to internalise how irreversible, fatal, or non-fatal a decision may be. Very few can’t be undone. — Dave Girouard

  • Build MVT — fast and cost-effective. Build the minimum technology that makes the product and its foreseeable iterations viable. Lean towards operational familiarity when choosing technology rather than falling for the latest buzzword (chasing buzzwords is a sure sign of inexperience and of not having been in the trenches before).
  • Embrace change with open heart — iterate and rebuild as needed: Never try to force fit newer realities into the older model itself. Be ready to re-factor or throw away and rebuild where justified.
  • Keep fundamentals right & a rule of thumb: There is a fine line between under-engineering and the MVT approach, and it has to be walked carefully. Fundamentals have to be well deliberated and clear. Don’t rush into execution without thinking things through completely; otherwise it will lead to more resource waste later. Thinking has to be complete, and deliberate choices must be made to cut scope. The rule of thumb is: discuss the ideal solution on the board, then decide what to take out of scope to make it MVT.
  • Speed and Quality can go hand in hand: Never justify the bad quality of your work by using the speed of execution as excuse.

MVT is for scope reduction, not for quality reduction.

  • MVP/MVT is applicable to every iteration/release: People relate MVP to the first release of a product only. In fact, it applies at every stage. The MVP/MVT needs to be chosen from the remaining tasks at every stage; at no stage is it OK to waste time and resources.
  • Deep understanding, conviction and confidence are needed for MVT. Both the MVP and MVT approaches are about taking bold calls like “out of these tasks, only this much is enough to win this stage of the game,” while the defensive traditional approach says “we can’t win or sustain if we do not do most of the known tasks.”

Move Fast. Keep Shipping!!

* The term “Minimum Viable Technology – MVT” is coined by the author.


Courtesy: LinkedIn

A Manager’s Guide to NoSQL

Article by Erik Weibust

Software design and development has undergone tremendous change over the last 30 years. Once a particular change captures the interest and imagination of the community, innovation accelerates, becomes self-propelled, and change turns exponential. One such development in the last 5 years has been the emergence of NoSQL database technology.

Software applications have become highly interactive with various delivery platforms and infrastructure. A modern application has to support millions of concurrent users and the data requirements have shifted from just application data to usage and analytics data. Application behavior has changed from static data capture and display, to dynamic, context-driven applications. With the above changes, relational database technology has lagged behind in innovation. Database providers have relied on 30 year old technological concepts and have applied multiple band-aids to the existing platforms to meet modern requirements.

Glossary of a few terms you need to know as you read on:

Database Schema is a well-defined, strict representation of a real-world domain (such as the elements of a shopping application) within a database. All items to be stored in a database schema are expected to conform to the rules and constraints set by the schema design, and no single item can vary from the definition.

Database Replication is the process of sharing data between the primary and one or more redundant databases to improve reliability, fault-tolerance, or accessibility. Typically, data is immediately copied over to the backup location, upon write, so as to be available for recovery and/or read-only resources.
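The mechanics of that definition fit in a few lines of Python. In this toy sketch (class names are invented; real databases replicate asynchronously or via logs, not like this), the primary copies every write to its replicas at write time, so any replica can serve reads or take over on failure:

```python
class Replica:
    """Toy redundant copy that can serve read-only traffic."""
    def __init__(self):
        self.data = {}

    def read(self, key):
        return self.data.get(key)

class Primary:
    """Toy primary that synchronously copies every write to its replicas."""
    def __init__(self, replicas):
        self.data = {}
        self.replicas = replicas

    def write(self, key, value):
        self.data[key] = value
        for r in self.replicas:  # "immediately copied over, upon write"
            r.data[key] = value

replicas = [Replica(), Replica()]
primary = Primary(replicas)
primary.write("note:1", "grocery list")

# Any replica can now serve the read, or take over if the primary fails.
print(replicas[0].read("note:1"))
```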

Sharding (or Horizontal partitioning) is a database design principle whereby the contents of a database table are split across physical locations by rows rather than by columns. Each partition forms part of a shard. Together, the shards provide the complete data set, but they are split logically to allow faster reads and writes.
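A hash-based version of row-level sharding can be sketched in a few lines (the four-shard layout and key format below are arbitrary choices for illustration):

```python
import hashlib

NUM_SHARDS = 4

def shard_for(row_key: str) -> int:
    """Map a row to a shard by hashing its key; each shard holds a subset of rows."""
    digest = hashlib.md5(row_key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Rows of one logical table, split horizontally across shards.
shards = {i: [] for i in range(NUM_SHARDS)}
for user_id in range(1000):
    shards[shard_for(f"user:{user_id}")].append(user_id)

# Together the shards hold the complete data set...
assert sum(len(rows) for rows in shards.values()) == 1000
# ...and a lookup only has to touch one shard.
print(len(shards[shard_for("user:7")]) < 1000)
```

Because the hash determines the shard, any node can compute where a row lives without consulting a central index.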

What is NoSQL?
NoSQL is the name given to the engineering movement that birthed these next-generation databases. NoSQL stands for “Not only SQL.” The common misunderstanding is that it stands for “No SQL,” which is not true. NoSQL databases were created to solve real-world needs that existing relational databases were unable to solve. They are non-relational, distributed, schema-less and horizontally scalable with commodity hardware.

NoSQL databases are:

  1. Schema-less: Data can be inserted without being in a particular form. The format of the data can change at any time without affecting existing data. The unique identifier is the only required value for a data element.
  2. Auto-sharding: sharding is an out-of-the-box feature by design. All NoSQL databases are built to be distributed and sharded without any further effort in the application design. They are built to support data replication, high availability and failover.
  3. Distributed Query support is available due to sharding.
  4. Maintaining a NoSQL cluster does not require complex software, or several layers of IT personnel and security measures. Of course, that does not mean reduced security of your data.
  5. Caching is built-in and low-latency is the expectation. Caching is transparent to application developers and the infrastructure teams.
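Point 1 above is the easiest to demonstrate concretely. In this toy sketch of a schema-less document store (a plain dict standing in for a real NoSQL engine), two records of the same collection have completely different shapes, which in a relational table would require a schema migration:

```python
import uuid

store = {}  # id -> document; no table definition, no fixed columns

def insert(doc: dict) -> str:
    """Store any dict; the generated id is the only required element."""
    doc_id = str(uuid.uuid4())
    store[doc_id] = doc
    return doc_id

# Two "rows" of the same collection with completely different fields.
a = insert({"title": "Trip notes", "tags": ["travel"]})
b = insert({"title": "Receipt", "amount": 12.50, "currency": "USD"})

print(store[a]["tags"])
print("amount" in store[b])
```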

In relation to Gartner’s Hype Cycle, NoSQL is perhaps at the Slope of Enlightenment stage, with tremendous strides made in the last 2 years towards maturity for some of the NoSQL offerings.

Gartner's Hype Cycle

What are my Options?
There are many options to consider when choosing a NoSQL solution. They are mostly open source and schema-less. The key distinguishing factor between NoSQL databases is their design decision on how they handle data storage.

  • Key-value Storage: Membase, Redis, Riak
  • Graph Storage: Neo4j, InfoGrid, Bigdata
  • Wide-column Storage: Cassandra, Hadoop
  • Document Storage: MongoDB, CouchDB
  • Eventually Consistent Key-Value Storage: Amazon Dynamo, Voldemort
  • NewSQL: Almost relational, much simpler and more easily scalable than an RDBMS. Examples are VoltDB, ScaleDB

How do I get buy-in from the team (above and below me)?
As with most organizations, new (or what is considered latest/greatest) technology is met with apprehension at best and suspicion at worst. The best and proven way to introduce something into the organization is to build prototypes of real-world scenarios, highlighting the advantages specific to your organization.

The most common place to introduce a NoSQL engine in your organization is most likely through building an application-logging prototype. With technology such as a NoSQL database, which is more of an infrastructure element, it is important to demonstrate business continuity with the new technology compared to existing technologies; thus demonstrating minimal risk to business stakeholders. It is likely that your developers may have already heard of this technology and are highly interested and motivated to use NoSQL databases. It is up to you to educate yourself on the new technology, and then educate your organization on the benefits of NoSQL based on the results of your prototype. Lastly, you can make the point that NoSQL is not an invention waiting to be implemented. Rather, it grew out of necessity for companies like Google and Amazon who built it, used it, and then open-sourced for the community at-large.

Next Steps
For more details, look up each NoSQL option listed above. We will also publish follow-up blog posts on selected NoSQL databases in the coming weeks. The follow-ups will be in-depth reviews of the selected NoSQL databases, with sample data and use cases for each.


Infrastructure as code tops IT’s DevOps challenges

IT operations pros have some work to do to automate the infrastructure underpinning DevOps initiatives.

While cultural barriers are some of the most daunting DevOps challenges, IT operations practitioners say that capturing infrastructure as code is the most significant technical hurdle to supporting modern application development practices.

Even though configuration management tools such as Puppet and Chef that enable infrastructure as code have been on the market for years, the concept can still be difficult for some IT pros to grasp.
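The concept itself is small enough to sketch: declare the desired state in version-controlled code, then repeatedly converge the actual state toward it, applying only the difference. A toy illustration in Python (not Puppet or Chef syntax; the package names and versions are invented):

```python
# Desired state, as it would be declared in version-controlled code.
desired = {"nginx": "1.24", "openssl": "3.0"}

# Actual state of a server, as discovered at run time.
actual = {"nginx": "1.18", "curl": "8.0"}

def converge(desired: dict, actual: dict) -> list:
    """Return the actions needed to make `actual` match `desired` (idempotent)."""
    actions = []
    for pkg, version in desired.items():
        if actual.get(pkg) != version:
            actions.append(f"install {pkg}=={version}")
            actual[pkg] = version
    return actions

print(converge(desired, actual))  # ['install nginx==1.24', 'install openssl==3.0']
print(converge(desired, actual))  # [] -- second run changes nothing
```

The idempotence of the second run is the point: the same declaration can be applied safely again and again, which is what makes infrastructure reviewable and repeatable like application code.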

Not everyone has yet bought into the concept of taking a traditional rack-and-stack infrastructure, with management of IPs in Excel spreadsheets, and automating it through Bash scripts and Ruby code, according to Pauly Comtois, vice president of DevOps for a multinational media company.

“A lot of our customer organizations barely have operations automated in any way,” echoed Nirmal Mehta, senior lead technologist for the strategic innovation group at Booz Allen Hamilton Inc., a consulting firm based in McLean, Va., who works with government organizations to establish a DevOps culture.

“It’s 2016, and we should be able to automate those deployments,” he said. “Once you do that, you can start to use the exact same tools to manage the infrastructure that you use for your application code.”

A big reason why companies have been slow to automate their operations is that infrastructure as code work can be more easily discussed than done — legacy applications often weren’t designed with tools such as Chef or Puppet in mind.

Third-party software that runs on Windows isn’t conducive to automation via the command line, Comtois pointed out. “What makes that really technically challenging is when that piece of software also happens to be critical to the workflow of that organization, so I can’t just go in and rip it out and replace it with something else.”

These issues can be overcome, but “some transitions are more painful than others,” he said.

Security teams also have to be brought on board with managing infrastructure as code, according to Mehta.

“Infrastructure as code and configuration management make compliance a lot easier, but that also means that compliance is no longer a thing that you do once a year,” he said. “It gets enveloped in the DevOps process [just like] any piece of code needs to go through.”

For the foreseeable future, the majority of IT operations’ time will be spent transitioning manual processes into infrastructure as code, or automated steps that follow the same pipeline that application code does, according to Mehta.


Infrastructure as code benefits

So why go through the technical headaches to establish infrastructure as code?

According to experienced DevOps practitioners, it’s the only way to create an automated IT infrastructure that adequately supports automated application development testing and release cycles.

“In our environment, Jenkins makes many calls into Ansible to build stuff and deploy and configure it,” said Baron Schwartz, founder and CEO of VividCortex, a database monitoring SaaS provider based in Charlottesville, Va. “Whatever we want to be automated, we have CircleCI calling a Web service that pokes Jenkins, which runs Ansible — it sounds like a Rube Goldberg machine, but it works well.”

Even things the VividCortex team wants to kick off manually use a chat bot to call into Jenkins and kick off a build job with Ansible, Schwartz said.

Getting IT ops staffs used to the concept of infrastructure as code is key to securing their buy-in as DevOps is more broadly rolled out in an environment, according to Caedman Oakley, DevOps evangelist for Ooyala Inc., a video processing service headquartered in Mountain View, Calif.

“Operations doesn’t want to see things change unless [it] know[s] what controls are in place,” Oakley said. “Everything being written in a Chef recipe or in cookbooks means [operations] can see what the change was and … knows exactly who did the change and why it’s happening — and that actually is the greatest opener to adoption on the operations side.”

Ultimately infrastructure as code simplifies infrastructure management, Oakley said.

“Operations can just go manage the infrastructure now, and don’t have to worry about figuring out why one server is slightly different from another,” he said.  “You can just fire up an instance any way you want to.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at or follow @PariseauTT on Twitter.



Key Parameters while choosing an Automation Testing Tool

Hey all,

Hi Techies! How are you doing? Everything alright? How is the software industry treating you?

I hope it is keeping you all on the tips of your toes, as always. At least here in India it is! The industry is evolving at a blazing pace. There are start-ups popping up in every nook and corner. There are huge rounds of funding for those which show promise, and all my buddies working in such start-ups are earning big bucks. Older, more established companies are either withering away or reinventing themselves to stay relevant and competitive.

Software development has moved on from the ‘steam engine phase’ to the ‘bullet train phase’. Cloud technologies like AWS and Azure are enabling companies to set up shop quicker, cheaper and lighter, and to scale their systems more easily. Release times are getting shorter and shorter. To aid these faster release cycles, there is a new set of software ninjas called DevOps engineers who are bringing a paradigm shift to the way the industry has worked up until now. There is a cultural and ideological change happening, and the conventional boundaries of Dev, QA and Support are slowly fading away for the better.

Coming straight to the subject of this post…

There is a huge need for software automation of various tasks and processes in today’s industry. Each company needs a robust automation framework in place to match the speed of software development and test out the iterative builds. A quick, reliable feedback loop on code quality enables moving to a ‘Continuous Delivery’ model where builds are deployment-ready. It is close to impossible to test the quality of every single release with manual testing alone; I’m sure you’ll agree with me on this. Hence automation is the pressing need of the hour.

Automation requirements are massive in the industry: from UI automation for desktop applications, browsers and mobiles, to API-level automation for backend architecture, to machine-level automation for various tasks. There are a vast number of automation tools out there in the market which can be used for different automation requirements. In my 5 years of experience in the software industry, I’ve come across many instances where I’ve had to select a tool for a type of automation, and I’ve always been spoilt for choice. I had very little time to decide on the tool and design my automation framework. To add to this, an engineer is always expected to deliver fast, tangible results. Simply put, you have the opportunity to make it or break it.

It becomes very important that you make a quick but well-balanced decision while selecting your tool. Otherwise, sooner or later you will be in a situation where you have to scrap the tool you selected, for more reasons than one. The tools all look similar in the beginning, but with careful reading and research you will be able to identify the variations in each tool and the limitations of each one.

In this article I would like to share a few key parameters to keep in mind while selecting an automation tool for your software quality assurance needs:

1. Ease of setup: Setup must be straightforward, and all dependencies must be handled internally during installation of the tool.

2. Ease of implementation: The APIs provided by the tool must be simple to understand, and there must be good documentation for beginners.

3. Execution time of the scripts: Each command in a script must execute quickly, and the overall time needed to run a test suite must stay within acceptable limits.

4. Reliability: The tool must not give different results for the same test. Execution times and results must be consistent when run day to day. The tool must also behave as its documentation describes; any deviation from that is a sign of unreliability.

5. Big developer/user community: A very important parameter. The bigger the community of users, the faster you will be able to resolve any roadblocks.

6. Support for issues/upgrades: There must be frequent updates to the tool with bug fixes, which shows that an active development team is working to make it more and more robust. Check the release notes too, if possible, to get an idea of which issues have been fixed and how quickly.

7. Integration with Continuous Integration: The tool must have hooks for integrating with CI tools like Jenkins, Bamboo and Travis. Without them, you have to create your own hooks to plug the automation scripts into these CI tools, which again takes time and effort.

8. Ease of maintenance: The tool must have a solid overall framework that keeps maintenance work manageable. For example, an updated version of the tool must not break existing workflows. Search for such issues online; if they occur frequently, that is a warning sign.

9. Open source: I would prefer to select a tool that is open-source. Period.

10. Choice of language: Lastly, select a tool that supports a programming language you are familiar with, so that script development is faster for you.
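Parameters 3 and 4 (execution time and reliability) can be checked empirically before you commit to a tool. Here is a minimal Python sketch of that idea; `sample_test` is a hypothetical stand-in for a real check you would run through your candidate tool:

```python
import time

def check_consistency(test_fn, runs=5):
    """Run the same test several times. Identical outcomes across runs
    suggest deterministic behaviour (reliability), and the worst-case
    duration gives a rough bound on execution time."""
    results, durations = [], []
    for _ in range(runs):
        start = time.perf_counter()
        results.append(test_fn())
        durations.append(time.perf_counter() - start)
    consistent = len(set(results)) == 1
    return consistent, max(durations)

# Hypothetical stand-in for a real automated check
# (e.g. a UI assertion or an API response validation).
def sample_test():
    return 2 + 2 == 4

consistent, worst = check_consistency(sample_test)
print(consistent)  # True for a deterministic test
```

A flaky candidate tool will show up quickly under this kind of repeated run, long before it burns you in a nightly suite.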

So there it is! These are the key parameters which came to my mind while selecting automation tools for my workflows. Hope it helps!

As you venture out to select your automation tool, it is best to create a table with all these parameters and carefully evaluate each of your prospective tools against them. Believe me, this definitely helps your decision-making process, and you will come out with a well-balanced, level-headed decision.
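That evaluation table can even be turned into a tiny script. A sketch in Python follows; the tool names, weights and 1–5 scores are purely illustrative, not real ratings of any product:

```python
# Weight each parameter by how much it matters to your project.
PARAMETERS = {
    "ease_of_setup": 2,
    "execution_time": 3,
    "reliability": 5,
    "community": 4,
    "ci_integration": 4,
    "open_source": 3,
}

# Hypothetical scores (1-5) for two shortlisted tools.
tools = {
    "Tool A": {"ease_of_setup": 4, "execution_time": 3, "reliability": 5,
               "community": 5, "ci_integration": 4, "open_source": 5},
    "Tool B": {"ease_of_setup": 5, "execution_time": 4, "reliability": 3,
               "community": 2, "ci_integration": 3, "open_source": 1},
}

def weighted_score(scores):
    """Sum of (parameter weight x tool score) across all parameters."""
    return sum(PARAMETERS[p] * scores[p] for p in PARAMETERS)

best = max(tools, key=lambda t: weighted_score(tools[t]))
for name, scores in tools.items():
    print(name, weighted_score(scores))
print("Pick:", best)  # With these illustrative numbers: Tool A
```

Weighting matters here: with these numbers, Tool B wins on ease of setup but loses overall because reliability and community carry heavier weights.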

Until next time… From the Silicon Valley of India… Love to all 🙂




Avoiding the “years of experience” trap

The quality of your technical experience matters. It isn’t a simple product of time. It is easy to fall into the trap of focusing on some narrow aspect of technology and getting comfortable with what you know: becoming an in-house expert on a specific vendor’s offering, collecting a paycheck, and never bothering to sharpen your tools, much less add new ones to the toolbox.

You’ve probably heard of the “years of experience fallacy” before. You know, the guy who’s been doing the exact. same. thing. for the last 10 years and hasn’t advanced much in his knowledge or his career. The tech industry equivalent of a factory worker who spends a lifetime pulling the same lever, until one day they discover they are obsolete and a robot is taking their job.

All experience is not created equal.

The world of web development moves forward at light speed. What’s popular today will be obsolete tomorrow. We all know this — the last few decades of history have made this clear.

Unfortunately, it’s way too easy to get left behind. We’re busy… we’re busy with our jobs, our clients, our family, and our friends. Finding time to keep your skills up to date is hard, and many of the resources out there are dense, boring, and take hours and hours of digging to find what you’re looking for.

And that’s why John and I created Egghead — we wanted to provide easy-to-digest, bite-size morsels of knowledge that will keep you current and keep you in high demand. We’ve both experienced this firsthand (we were both Adobe Flex/Flash developers… ugh) and had to scramble when a vendor effectively cut the throat of our careers, terminating a platform and sending legions of developers scrambling to discover “what’s next”.

Years of experience can be a glorious thing that uplifts and propels you forward. But if you wake up one morning and discover that your years of experience are obsolete, it is total BS. It hurts.

Joel Hooks
Co-founder, Egghead


India’s IT Party is over. Reinvent yourself or suffer

India’s IT industry is unlikely to remain the amazing job engine that it has been. For the past two decades, the fastest way to increase your income has been to land a job with an IT company. The industry has provided a ticket to prosperity for millions of young Indians; children of security guards, drivers, peons and cooks catapulted themselves and their families firmly into the middle class in a single generation by landing a job in a BPO. Hundreds of engineering colleges mushroomed overnight, churning out over a million graduates a year to feed the insatiable demand of India’s IT factories.

This party is coming to an end. A combination of slowing demand, rising competition and technological change means that companies will hire far fewer people. And this is not a temporary blip; this is the new normal. Wipro’s CEO has bravely admitted that automation can displace a third of all jobs within three years, while Infosys CEO Sikka aims to increase revenue per employee by 50%. Even NASSCOM, the chronically optimistic industry association, admits that companies will hire far fewer people. Not only will the lines of new graduates waiting for job offers grow rapidly longer every year, but so too will the lines of the newly unemployed, as all companies focus more on utilization, employee productivity and performance. Employees doing tasks that can be automated, the armies of middle managers who supervise them, and all those with mediocre performance reviews and without hot skills are living on borrowed time.

So what do you do if you are a member of one of these endangered species? What constitutes good career advice in these times? I’d say the first thing is to embrace reality and recognize that the game has changed for good. The worst thing to do is to be wishful and wait for the good times to return. They won’t. But there are still lots of opportunities. What’s happening in the industry is ‘creative destruction’. New technologies are destroying old jobs but creating many new ones. There is an insatiable demand for developers of mobile and web applications, for data engineers and scientists, for cybersecurity expertise. So for anyone who is a quick learner, anyone with real expertise, there will be abundant opportunities.

There has also never been a better time for anyone with an iota of entrepreneurial instinct. India is still a supply-constrained economy, so there is room to start every kind of business: beauty parlour, bakery, catering, car washing, mobile/electronics repair, laundry, housekeeping, tailoring. For entrepreneurs with a social conscience, there is a massive need for social enterprises that deliver affordable healthcare, education and financial services. Not only are there abundant opportunities, but startups are “in” and there is no shame at all in failure. The ranks of angel investors are swelling, and it has never been so easy to get funded. There is even a website that provides step-by-step instructions to would-be entrepreneurs.

For those who prefer a good old-fashioned job, there are abundant jobs in old-economy companies, which are struggling to find every kind of talent: accountants, manufacturing and service engineers, sales reps. Technology is enabling the emergence of new ‘sharing services’ such as Uber or Ola that enable lucrative self-employment; it is not uncommon to find cab drivers who make 30,000 to 40,000 rupees a month.

My main point should be clear. While India may have a big challenge overall in creating enough jobs for its youthful population, at the individual level there is no shortage of opportunities. The most important thing is a positive attitude. The IT boom was a tide that lifted all boats, even the most mediocre ones. However, this has bred an entitlement mentality and a lot of mediocrity.

To prosper in the new world, two things will really matter. The first is the right attitude. This means a hunger to succeed; being proactive in seeking opportunities rather than waiting until you are fired or for something to drop into your lap; a willingness to take risks and the tenacity to work hard and make something a success; humility; frugality. The second is the ability to try and learn new things. The rate of change in our world is astonishing; whatever skills we have will be largely irrelevant in a decade. People are also living much longer. So the ability to learn new things, develop new competencies and periodically reinvent ourselves is a crucial one. Sadly, too many of us have no curiosity and no interest in reading or learning. The future will not be kind to such people.

“The snake which cannot cast its skin has to die.”- Friedrich Nietzsche

(First published as an opinion piece in Times of India)



Is the Web Browser Dying?

Mar 8, 2015

The web isn’t dying, but in checking its pulse, we could be worried about the wrong patient. Maybe the real patient is the web browser.

Wired magazine proclaimed the death of the web browser way back in 1997, when push technology was going to take over the world, and again there was a panic in 2010. Now it’s The Wall Street Journal’s turn in a recent article by Christopher Mims proclaiming the rise of the native app.

Mountains of data tell us that, in aggregate, we are spending more time in apps and less time surfing the Web. We’re in love with apps, and they’ve taken over. On phones, 86% of our time is spent in apps and just 14% on the Web, according to mobile-analytics company Flurry.

This might seem like just a shift in percentages. But consider: in the old days, we printed out directions from the website MapQuest that were often wrong or confusing. Today we call up Waze or Google Maps on our phones and are routed around traffic in real time. For those who remember the old way, this is a miracle.

Everything about apps feels like a win for users—they are faster and easier to use than what came before. But underneath all that convenience is something sinister: the end of the very openness that allowed Internet companies to grow into some of the most powerful or important companies of the 21st century.

Take that most essential of activities for e-commerce: accepting credit cards. When e-commerce made its debut on the Web, a merchant had to pay a few percentage points in transaction fees. But Apple takes 30% of every transaction conducted within an app sold through its App Store, and “very few businesses in the world can withstand that haircut,” says Chris Dixon, a venture capitalist at Andreessen Horowitz.

App stores, which are shackled to particular operating systems and devices, are walled gardens where Apple, Google, Microsoft and Amazon get to set the rules. For a while, that meant Apple banned Bitcoin, an alternative currency that many technologists believe is the most revolutionary development on the Internet since the hyperlink. Apple regularly bans apps that offend its politics or taste, or that compete with its own software and services.

But the problem with apps runs much deeper than the ways they can be controlled by centralized gatekeepers. The Web was invented by academics whose goal was sharing information. Tim Berners-Lee was just trying to make it easy for scientists to publish data they were putting together at CERN, home of the world’s biggest particle accelerator.

No one involved knew they were giving birth to the biggest creator and destroyer of wealth anyone had ever seen. So, unlike with app stores, there was no drive to control the early Web. Standards bodies arose—like the United Nations, but for programming languages. Companies that would have liked to wipe each other off the map were forced, by the very nature of the Web, to come together and agree on revisions to the common language for Web pages.

The result: Anyone could put up a Web page or launch a new service, and anyone could access it. Google was born in a garage. Facebook was born in Mark Zuckerberg ’s dorm room.

But app stores don’t work like that. The lists of most-downloaded apps now drive consumer adoption of those apps. Search on app stores is broken.

The Web was intended to expose information. It was so devoted to sharing above all else that it didn’t include any way to pay for things—something some of its early architects regret to this day, since it forced the Web to survive on advertising.

The Web wasn’t perfect, but it created a commons where people could exchange information and goods. It forced companies to build technology that was explicitly designed to be compatible with competitors’ technology. Microsoft’s Web browser had to faithfully render Apple’s website. If it didn’t, consumers would use another one, such as Firefox or Google’s Chrome, which has since taken over.

Today, as apps take over, the Web’s architects are abandoning it. Google’s newest experiment in email nirvana, called Inbox, is available for both Android and Apple’s iOS, but on the Web it doesn’t work in any browser except Chrome. The process of creating new Web standards has slowed to a crawl. Meanwhile, companies with app stores are devoted to making those stores better than — and entirely incompatible with — app stores built by competitors.

The contrary and more positive view comes from John Gruber of Daring Fireball, who asserts: “The rise of native apps has brought more innovation, rather than diminished it. If you expand your view of ‘the Web’ from merely that which renders inside the confines of a Web browser to instead encompass all network traffic sent over HTTP/S, the explosive growth of native mobile apps is just another stage in the growth of the Web.”

Gruber is asserting a definition of the web that suits his argument. It’s a good argument, and fair in its analysis; however, what does it say about the issue of control?

We shall wait and see. In the meantime, I am looking to spread my buying online and decrease my ‘Apple app’ spend, because a 30% cut just ain’t right 🙂


Jeffrey Favaloro


My Life On February 23, 2030

I just woke up at my optimal REM sleep time, calculated by bedroom sensors. My bed reads my brain waves all night, and sensors in the room monitor the amount of oxygen my lungs converted into carbon dioxide. I go to the bathroom, and anything leaving my body is instantly analyzed and uploaded to my personal medical data cloud.

My breakfast food has just been 3D printed from ingredients genetically modified to decrease my cholesterol and glucose to optimal levels. My ham and cheese omelette tastes delicious and no animals were killed; it has become forbidden in most countries to kill any live animal. No need to be vegan to avoid killing anymore.

I have an appointment with Elon Musk to offer him an investment in my electric plane startup, and even though business meetings are mostly done via hologram representations, I still really enjoy in-person meetings. My self-driving Tesla takes me to the nearest hyperloop station, where I can get to L.A. in twenty minutes, so travel is not a big deal anymore. It has become really expensive to have your own private car, as governments only want self-driving cars everywhere — these have reduced deaths from auto accidents by 95%. Private cars might soon be forbidden entirely, as they cause too many problems. They are a very expensive luxury for the time being.

There is no traffic anywhere, as there are very few cars. There is very little parking space in cities, as most cars are self-driving. Uber has replaced all drivers with self-driving cars, which are pretty much always moving. I remember when we used to have those idle parked cars everywhere. Tens of millions of jobs in the car industry have disappeared: car and truck drivers, car insurance companies, car dealers and repair stations, all gone. You can cross any street without even looking, as the self-driving cars’ sensors became so good that cars just stop automatically when they “see” you about to cross the street. A lot of the data needed to create the machine learning algorithms behind self-driving cars was created by people in once-poor countries through an NGO called Samasource, which closed its doors when extreme poverty was eradicated in the last decade.

As I get to Elon’s office, I get my messages projected on the latest version of Google Glass that Tony Fadell has managed to fit in a contact lens. Voice recognition has become so good that nobody types anything anymore — you can just say what you want to answer. Since Tony and I are good friends, he let me test the latest beta version of his mind-reading software update for Glass. Now I just think my reply and it shows up instantly on my retina and I can just think “send it” to get it to anyone I want. Nobody has a smartphone anymore, though I remember what it was like to use my thumbs to send text messages.

I look around and remember how we used to see so many ads everywhere — they completely disappeared. Marketing is now only highly personalized and targeted to your specific needs. I touch a door handle and it senses my hands are a little dry; as I opted-in for personalized marketing, I get an offer to try a new hand cream by Laxmi. I accept and it is instantly delivered to me by an Amazon drone. Drones are so small and silent you barely see them anymore. Laxmi has become very successful by distributing most of the profits of their beauty products to people who would otherwise be poor. Most businesses that don’t have a social or environmental mission have died out, as nobody wants to buy their products.

There is no more hunger in the world as we can 3D print pretty much any food. Extreme poverty disappeared when governments around the world signed onto the Universal Floor movement, led by the Gates Foundation and funded by the Billionaire’s Pledge that became famous decades ago. I give 5% of my income automatically as a “voluntary tax” to the Universal Floor Foundation, which is one of the most popular charities on earth and posts their results in real time to my Glass feed. We also eradicated illiteracy — I remember when education became universal and free through technology. In large cities, people still send their kids to school but anywhere else free hologram teachers are always available.

I just arrived at Elon’s office, and he invites me to space for the afternoon! We take the latest spaceship resulting from GalaktiX, the joint venture Richard Branson and Elon Musk launched a few years ago. I can see the Earth from above for the first time, a planet that, thanks to advances in technology, is verdant and blue despite the old threat of global warming. Space travel is amazing and has become much more affordable — a few hours’ trip to space will cost only about one thousand dollars when they launch this new product.

I feel a little sick when we come back. I don’t even have to call my virtual doctor as she was already warned by the results of my live body analysis sensors. I am testing this new under-the-skin fitbit nanodevice — it communicates with Google Glass and sends data continuously to my doctor. The prescription is delivered on a drone and I feel better in a few hours. Time to head back home.

On my way back, I get a notification that someone is trying to deliver something to my home. The dropcams identify a trusted FedEx person, and I only have to think about opening the garage door so they can put the huge delivery inside. It is a 1982 mechanical Haunted House pinball machine my father gifted me at Christmas — these throwbacks to the analog past mean a lot to me.

With all the 3D printing and automation, handmade products have become the most in-demand objects. A huge number of jobs are constantly being created around anything made entirely by humans from raw materials. Artisanal producers are all the rage — there is even a quality label that has become the new status symbol: “Certified All-Human Made”. Art is exploding as people have much more free time and can make a good living from it. I have learned to play the guitar, and I use my Glass interface to practice while I ride back up to San Francisco.

Back home the first thing I do is meditate for an hour entirely disconnected, a practice I started 15 years ago. Creating space, reconnecting with my body and my mind, slowing down when everything is fast and disconnecting when everything is more connected has become as important for me as taking a shower.