Error Handling – Core Design Decision

Error handling in software is critical.
We often under-engineer our implementations of it.
Handling a few generic error messages is the easy part.

But,
1. How can the software recover gracefully from these errors?
2. How do we keep the customer experience from degrading after an error?
3. How is the error logged and iterated upon with an intelligent fix?

These are the core questions that come to my mind when designing a clean error-handling implementation in software development.
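To make these questions concrete, here is a minimal, hypothetical Python sketch (the payment-gateway scenario, names and retry policy are all made up): retries plus a graceful fallback address questions 1 and 2, and structured logging addresses question 3.

    import logging
    import time

    logger = logging.getLogger("payments")

    class PaymentGatewayTimeout(Exception):
        """Hypothetical error raised when an upstream gateway does not respond."""

    def charge_card(order_id: str) -> str:
        """Placeholder for a call that can fail transiently."""
        raise PaymentGatewayTimeout(f"gateway timed out for order {order_id}")

    def charge_with_recovery(order_id: str, retries: int = 3) -> str:
        """Retry transient failures, then degrade gracefully instead of crashing."""
        for attempt in range(1, retries + 1):
            try:
                return charge_card(order_id)
            except PaymentGatewayTimeout as exc:
                # Question 3: log enough context to diagnose and ship an intelligent fix.
                logger.warning("charge failed", extra={"order_id": order_id,
                                                       "attempt": attempt,
                                                       "reason": str(exc)})
                time.sleep(attempt)  # simple backoff before retrying
        # Questions 1 and 2: recover gracefully and protect the customer experience
        # by queueing the order for later processing instead of showing a raw error.
        return f"order {order_id} accepted; payment will be retried in the background"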

#software #design #errorhandling #builditbetter

3 Sneaky Cyber Security Threats to watch out for in 2022.

2022 seems to be an interesting time in the cyber security landscape, as the number of cyber crimes is increasing at an alarming rate. Three sneaky threats to watch out for are:

Magecart Attack

Magecart is a type of data skimming used by attackers to capture sensitive information. Attackers are termed ‘threat actors’ in the cyber security domain and, from here on in this article, we will refer to them that way.

In Magecart attacks, threat actors capture sensitive information such as email addresses, passwords and credit card details through malicious code they implant in websites. They sell this stolen data on the dark web. These attacks mostly target consumer-facing browsers and apps.

Credential Stuffing Attack

In this type of attack, threat actors use a list of compromised user credentials to breach multiple systems. Many users reuse usernames and passwords across multiple platforms, and their accounts can potentially be compromised with this method. The attacks are usually carried out with the help of a well-automated system of software bots. Statistically, about 0.1% of breached credentials result in a successful login on a new service. Sadly, even now, many users keep the same password on multiple platforms, making them prime targets for these sophisticated threat actors.

Password Spraying Attack

Password spraying, as the name suggests, ‘sprays’ a single password across multiple usernames on a platform to gain unauthorized access to it. Unlike brute-force attacks, which try many passwords against a single username, this attack uses a password only once with a username before moving on to the next username. This neatly avoids locking an account out due to multiple failed login attempts, so the threat actor remains undetected by the system and continues to prowl for vulnerable accounts.
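For illustration only, here is a tiny Python sketch of one possible detection heuristic (the threshold and the IP address are made up). Per-account lockout counters never trip during spraying, because each account sees only one or two attempts; counting distinct usernames attempted from a single source exposes the pattern instead.

    from collections import defaultdict
    from typing import Iterable, Tuple

    def spraying_suspects(failed_logins: Iterable[Tuple[str, str]],
                          min_accounts: int = 20) -> set:
        """Flag source IPs whose failed logins span many *different* usernames."""
        accounts_per_ip = defaultdict(set)
        for source_ip, username in failed_logins:
            accounts_per_ip[source_ip].add(username)
        return {ip for ip, users in accounts_per_ip.items() if len(users) >= min_accounts}

    # One source tries the same password against 1,000 usernames, once each.
    events = [("203.0.113.7", f"user{i}") for i in range(1000)]
    print(spraying_suspects(events))  # -> {'203.0.113.7'}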

Here’s how Evernote moved 3 petabytes of data to Google’s cloud

Article by

Evernote decided last year that it wanted to move away from running its own data centers and start using the public cloud to operate its popular note-taking service. On Wednesday, it announced that the lion’s share of the work is done, save for some last user attachments.

The company signed up to work with Google, and as part of the migration process, the tech titan sent a team of engineers (in one case, bearing doughnuts) over to work with its customer on making sure the process was a success.

Evernote wanted to take advantage of the cloud to help with features based on machine learning that it has been developing. It also wanted to leverage the flexibility that comes from not having to run a data center.

The move is part of a broader trend of companies moving their workloads away from data centers that they own and increasingly using public cloud providers. While the transition required plenty of work and adaptation, Evernote credited Google for pitching in to help with the migration.

Why move to the cloud?

There was definitely plenty of work to do. Evernote’s backend was built on the assumption that its application would be running on the company’s twin California data centers, not in a public cloud. So why go through all the work?

Many of the key drivers behind the move will be familiar to cloud devotees. Evernote employees had to spend time maintaining the company’s data center, doing things like replacing hard drives, moving cables and evaluating new infrastructure options.

While those functions were key to maintaining the overall health and performance of the Evernote service, they weren’t providing additional value to customers, according to Ben McCormack, the company’s vice president of operations.

“We were just very realistic that with a team the size of Evernote’s operations team, we couldn’t compete with the level of maturity that the cloud providers have got…on provisioning, on management systems, et cetera,” McCormack said. “We were always going to be playing catch-up, and it’s just a crazy situation to be in.”

When Evernote employees thought about refreshing a data center, one of the key issues that they encountered is that they didn’t know what they would need from a data center in five years, McCormack said.

Evernote had several public cloud providers it could choose from, including Amazon Web Services and Microsoft Azure, which are both larger players in the public cloud market. But McCormack said the similarities between the company’s current focus and Google’s areas of expertise were important to the choice. Evernote houses a large amount of unstructured data, and the company is looking to do more with machine learning.

“You add those two together, Google is the leader in that space,” McCormack said. “So effectively, I would say, we were making a strategic decision and a strategic bet that the areas that are important to Evernote today, and the areas we think will be important in the future, are the same areas that Google excels in.”

Machine learning was a highlight of Google’s platform for Evernote CTO Anirban Kundu, who said that higher-level services offered by Google help provide the foundation for new and improved features. Evernote has been driving toward a set of new capabilities based on machine learning, and Google services like its Cloud Machine Learning API help with that.

While cost is often touted as a benefit of cloud migrations, McCormack said that it wasn’t a primary driver of Evernote’s migration. While the company will be getting some savings out of the move, he said that cost wasn’t a limitation for the transition.

The decision to go with Google over another provider like AWS or Azure was driven by the technology team at Evernote, according to Greg Chiemingo, the company’s senior director of communications. He said in an email that CEO Chris O’Neill, who was at Google for roughly a decade before joining Evernote, came in to help with negotiations after the decision was made.

How it happened

Once Evernote signed its contract with Google in October, the clock was ticking. McCormack said that the company wanted to get the migration done before the new year, when users looking to get their life on track hammer the service with a flurry of activity.

Before the start of the year, Evernote needed to migrate 5 billion notes and 5 billion attachments. Because of metadata, like thumbnail images, included with those attachments, McCormack said that the company had to migrate 12 billion attachment files. Not only that, but the team couldn’t lose any of the roughly 3 petabytes of data it had. Oh yeah, and the Evernote service needed to stay up the entire time.

McCormack said that one of the Evernote team’s initial considerations was figuring out what core parts of its application could be entirely lifted and shifted into Google’s cloud, and what components would need to be modified in some way as part of the transition.

Part of the transformation involved reworking the way that the Evernote service handled networking. It previously used UDP Multicast to handle part of its image recognition workflow, which worked well in the company’s own data center where it could control the network routers involved.

But that same technology wasn’t available in Google’s cloud. Kundu said Evernote had to rework its application to use a queue-based model leveraging Google’s Cloud Pub/Sub service, instead.
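The article does not show Evernote's actual implementation, but as a rough sketch of the queue-based model, publishing a job to a Cloud Pub/Sub topic with Google's Python client looks something like this (the project, topic and message fields are placeholders):

    # Illustrative only, not Evernote's code: enqueue an image-recognition job
    # on a Pub/Sub topic instead of announcing it over UDP multicast.
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("example-project", "image-recognition-jobs")

    def enqueue_recognition_job(note_id: str, attachment_url: str) -> None:
        # Pub/Sub payloads are bytes; attributes carry small pieces of metadata.
        future = publisher.publish(
            topic_path,
            data=attachment_url.encode("utf-8"),
            note_id=note_id,
        )
        future.result()  # block until the service has accepted the message

    enqueue_recognition_job("note-123", "gs://example-bucket/attachments/note-123.png")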

Evernote couldn’t just migrate all of its user data over and then flip a switch directing traffic from its on-premises servers to Google’s cloud in one fell swoop. Instead, the company had to rearchitect its backend application to handle a staged migration with some data stored in different places.
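Again purely as an illustration (not Evernote's design), a staged migration generally means the read path must tolerate data living in two places at once; a minimal sketch:

    # Toy sketch: prefer the new (cloud) store, fall back to the old (on-premises) one.
    def read_attachment(attachment_id: str, new_store, old_store) -> bytes:
        data = new_store.get(attachment_id)      # already migrated?
        if data is not None:
            return data
        data = old_store.get(attachment_id)      # still on the old infrastructure
        if data is None:
            raise KeyError(f"attachment {attachment_id} not found in either store")
        return data

    # Plain dicts stand in for the two storage backends in this sketch.
    new_store = {"a1": b"migrated bytes"}
    old_store = {"a2": b"not yet migrated"}
    print(read_attachment("a2", new_store, old_store))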

The good news is that the transition didn’t require changes to the client. Kundu said that was key to the success of Evernote’s migration, because not all of the service’s users upgrade their software in a timely manner.

Evernote’s engagement with Google engineers was a pleasant surprise to McCormack. The team was available 24/7 to handle Evernote’s concerns remotely, and Google also sent a team of its engineers over to Evernote’s facilities to help with the migration.

Those Google employees were around to help troubleshoot any technical challenges Evernote was having with the move. That sort of engineer-to-engineer engagement is something Google says is a big part of its approach to service.

For one particularly important part of the migration, Google’s engineers came on a Sunday, bearing doughnuts for all in attendance. More than that, however, McCormack said that he was impressed with the engineers’ collaborative spirit.

“We had times when…we had written code to interface with Google Cloud Storage, we had [Google] engineers who were peer-reviewing that code, giving feedback and it genuinely felt like a partnership, which you very rarely see,” McCormack said. “Google wanted to see us be successful, and were willing to help across the boundaries to help us get there.”

In the end, it took roughly 70 days for the whole migration to take place from the signing of the contract to its final completion. The main part of the migration took place over a course of roughly 10 days in December, according to McCormack.

Lessons learned

If there was one thing Kundu and McCormack were crystal clear about, it’s that even the best-laid plans require a team that’s willing to adapt on the fly to a new environment. Evernote’s migration was a process of taking certain steps, evaluating what happened, and modifying the company’s approach in response to the situation they were presented with, even after doing extensive testing and simulation.

Furthermore, they also pointed out that work on a migration doesn’t stop once all the bytes are loaded into the cloud. Even with extensive testing, the Evernote team encountered new constraints working in Google’s environment once it was being used in production and bombarded with activity from live Evernote users.

For example, Google uses live migration techniques to move virtual machines from one host to another in order to apply patches and work around hardware issues. While that happens incredibly quickly, the Evernote service under full load had some problems with it, which required (and still requires) optimization.

Kundu said that Evernote had tested live migration prior to making the switch over to GCP, but that wasn’t enough.

When an application is put into production, user behavior and load on it might be different from test conditions, Kundu said. “And that’s where you have to be ready to handle those edge cases, and you have to realize that the day the migration happens or completes is not the day that you’re all done with the effort. You might see the problem in a month or whatever.”

Another key lesson, in McCormack’s opinion, is that the cloud is ready to handle any sort of workload. Evernote evaluated a migration roughly once every year, and it was only about 13 months ago that the company felt confident a cloud transition would be successful.

“Cloud has reached a maturity level and a breadth of features that means it’s unlikely that you’ll be unable to run in the cloud,” McCormack said.

That’s not to say it doesn’t require effort. While the cloud does provide benefits to Evernote that the company wasn’t going to get from running its own data center, they still had to cede control of their environment, and be willing to lose some of the telemetry they’re used to getting from a private data center.

Evernote’s engineers also did a lot of work on automating the transition. Moving users’ attachments over from the service’s on-premises infrastructure to Google Cloud Storage is handled by a pair of bespoke automated systems. The company used Puppet and Ansible for migrating the hundreds of shards holding user note data.

The immediate benefits of a migration

One of the key benefits of Evernote’s move to Google’s cloud is the company’s ability to provide reduced latency and improved connection consistency to its international customers. Evernote’s backend isn’t running in a geographically distributed manner right now, but Google’s worldwide networking investments provide an improvement right away.

“We have seen page loading times reducing quite significantly across some parts of our application,” McCormack said. “I wouldn’t say it’s everywhere yet, but we are starting to see that benefit of the Google power and the Google reach in terms of bridging traffic over their global fiber network.”

Right now, the company is still in the process of migrating the last of its users’ attachments to GCP. When that’s done, however, the company will be able to tell its users that all the data they have in the service is encrypted at rest, thanks to the capabilities of Google’s cloud.

From an Evernote standpoint, the company’s engineers have increased freedom to get their work done using cloud services. Rather than having to deal with provisioning physical infrastructure to power new features, developers now have a whole menu of options when it comes to using new services for developing features.

“Essentially, any GCP functionality that exists, they’re allowed to access, play with — within constraints of budget, obviously — and be able to build against.”

In addition, the cloud provides the company with additional flexibility and peace of mind when it comes to backups, outages and failover.

What comes next?

Looking further out, the company is interested in taking advantage of some of Google’s existing and forthcoming services. Evernote is investigating how it can use Google Cloud Functions, which lets developers write snippets of code that then run in response to event triggers.
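As a flavour of that model (illustrative only; the function and field names are invented, and the service has changed considerably since this article was written), a Pub/Sub-triggered function in the classic Python background-function style looks roughly like this:

    # Hypothetical example: runs whenever a message lands on a configured Pub/Sub topic.
    import base64

    def on_note_event(event, context):
        """`event["data"]` arrives base64-encoded; `context` describes the trigger."""
        payload = base64.b64decode(event.get("data", b"")).decode("utf-8")
        print(f"handling event {context.event_id}: {payload}")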

Evernote is also alpha testing some Google Cloud Platform services that haven’t been released or revealed to the public yet. Kundu wouldn’t provide any details about those services.

In a similar vein, Kundu wouldn’t go into details about future Evernote functionality yet. However, he said that there are “a couple” of new features that have been enabled as a result of the migration.

Courtesy: www.cio.com


Minimum Viable Technology (MVT) — Move Fast & Keep Shipping

Article by –

Technology teams can be the biggest asset or the worst bottleneck for a growing company, depending on the strategy they take. In the name of future-proofing engineering, technology teams can become a hurdle to the company's goals. You can see the 'hidden frustration' in Bezos' words below:

Engineers should be fast acting cowboys instead of calm clear-headed computer scientists — Jeff Bezos, Founder & CEO, Amazon

Rampant problem in the industry: When the task is to build a bike, the product and technology teams plan for a product that can later run on a motor, seat four people, sail the sea and even fly. This hypothetical building of castles in the air distracts focus from the real problem to be fixed. This is what Bezos is suggesting we refrain from, as it wastes resources and agonisingly delays time to market.

On the defensive, product/technology teams usually build a cannon to kill a bird.

The Minimum Viable Product (MVP) philosophy evolved to avoid this "unnecessary over-thinking and over-preparation" problem, which plagued products in all companies. It encouraged building only the minimum required at a certain point in time and then iterating and improving going forward. The MVP approach enables a much-needed strategy of fast experimentation: fail fast and invest where needed.

No such philosophy evolved for technology. Therefore, the decades-old defensive and paranoid philosophy still prevails (it was much needed during the older 1–2 year waterfall releases). This becomes a competitive disadvantage for startups, which are usually fighting for survival or growing fast.

The fundamental problem is that engineers blindly copy large companies' strategies, considering them to be the standard. Corporates and startups differ widely in their needs around scale, brand, speed, impact of a feature, loss from a bug, etc. Startups enjoy more freedom to make mistakes, and they should exploit that to their benefit.

Strategies used in big companies are more often than not irrelevant, and even detrimental, to a small growing company's interests.

Minimum Viable Technology: The solution to the above problems is to build the minimum technology that makes the product and its foreseeable further iterations viable. Make it live as soon as possible, and then iterate and improve it based on real usage learnings. Every company is at a different stage of evolution; something that is MVT for a big company can be over-engineering for a startup.

If the task is to kill a bird, we should build a catapult/small gun to begin with. If that proves successful and there is a need to kill more or bigger animals, then bigger guns/cannons should be built as required.

There is nothing so useless as doing efficiently that which should not be done at all. ~ Peter Drucker

Startups experiment a lot, and only a few of those experiments stand the test of time. As per the 80–20 rule, only the 20% that succeed should get deeper technology investments.

Principles of Minimum Viable Technology (MVT):

  • Most decisions can be reversed or fixed easily. Choose wisely by bucketing each decision properly as reversible or irreversible, and judiciously decide how much to prepare for each case. (Read Jeff Bezos' two types of decisions.)

It’s important to internalise how irreversible, fatal, or non-fatal a decision may be. Very few can’t be undone. — Dave Girouard

  • Build MVT — fast and cost-effective. Build the minimum technology that makes the product and its foreseeable iterations viable. Lean towards operational familiarity when choosing technology rather than falling for the latest buzzword (a sure sign of inexperience and of not having been in the trenches before).
  • Embrace change with an open heart — iterate and rebuild as needed: Never try to force-fit newer realities into the older model. Be ready to refactor, or to throw away and rebuild, where justified.
  • Keep the fundamentals right, and a rule of thumb: There is a fine line between under-engineering and the MVT approach, and it has to be trodden carefully. The fundamentals have to be well deliberated and clear. Don't rush into execution without thinking it through completely; otherwise it will lead to more resource waste later. Thinking has to be complete, and deliberate choices must be made to cut scope. The rule of thumb: discuss the ideal solution on the board, then decide what to take out of scope to make it MVT.
  • Speed and quality can go hand in hand: Never justify the bad quality of your work by using speed of execution as an excuse.

MVT is for scope reduction, not for quality reduction.

  • MVP/MVT applies to every iteration/release: People relate MVP to the first release of a product only. In fact, it applies at every stage; the MVP/MVT needs to be chosen from the remaining tasks at each stage. At no stage is it OK to waste time and resources.
  • Deep understanding, conviction and confidence are needed for MVT. Both the MVP and MVT approaches are about taking bold calls like: "Out of these tasks, only this much is enough to win this stage of the game." The defensive traditional approach, by contrast, says: "We can't win or sustain if we do not do most of the known tasks."

Move Fast. Keep Shipping!!

* The term “Minimum Viable Technology – MVT” is coined by the author.

 

Courtesy: LinkedIn

A Manager's Guide to NoSQL

Article by Erik Weibust

Introduction
Software design and development has undergone tremendous change over the last 30 years. Once a particular change captures the interest and imagination of the community, innovation accelerates, becomes self-propelled, and the pace of change turns exponential. One such development over the last five years has been NoSQL database technology.

Software applications have become highly interactive across various delivery platforms and infrastructure. A modern application has to support millions of concurrent users, and data requirements have shifted from just application data to usage and analytics data. Application behavior has changed from static data capture and display to dynamic, context-driven applications. Amid these changes, relational database technology has lagged behind in innovation. Database providers have relied on 30-year-old technological concepts and applied multiple band-aids to existing platforms to meet modern requirements.


Glossary of a few terms you need to know as you read on:

Database Schema is a well-defined, strict representation of a real-world domain (such as the elements of a shopping application) within a database. All items to be stored under a database schema are expected to conform to the rules and constraints set by the schema design, and no single item can vary from the definition.

Database Replication is the process of sharing data between the primary and one or more redundant databases to improve reliability, fault-tolerance, or accessibility. Typically, data is immediately copied over to the backup location, upon write, so as to be available for recovery and/or read-only resources.

Sharding (or horizontal partitioning) is a database design principle whereby the contents of a database table are split across physical locations by rows, instead of by columns (which would rely on referential integrity). Each partition forms part of a shard. Multiple shards together provide the complete data set, but the partitioned shards are split logically to ensure faster reads and writes.
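As a toy illustration of the idea (not any particular database's implementation), hash-based routing sends each row to exactly one shard, and all shards together hold the full data set:

    NUM_SHARDS = 4
    shards = {i: {} for i in range(NUM_SHARDS)}   # stand-ins for separate servers

    def shard_for(key: str) -> int:
        return hash(key) % NUM_SHARDS             # real systems use stable hash functions

    def put(key: str, row: dict) -> None:
        shards[shard_for(key)][key] = row

    def get(key: str) -> dict:
        return shards[shard_for(key)][key]        # a read touches only one shard

    put("user:42", {"name": "Asha"})
    print(get("user:42"))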


What is NoSQL?
NoSQL is the name given to the engineering movement that birthed these next-generation databases. NoSQL stands for "Not only SQL". A common misunderstanding is that it stands for "No SQL", which is not true. NoSQL databases were created to solve real-world needs that existing relational databases were unable to solve. They are non-relational, distributed, schema-less and horizontally scalable on commodity hardware.

NoSQL databases are:

  1. Schema-less: Data can be inserted without conforming to a predefined format, and the format of the data can change at any time without affecting existing data. A unique identifier is the only required value for a data element (see the sketch after this list).
  2. Auto-sharding is an out-of-the-box feature by design. All NoSQL databases are built to be distributed and sharded without any further effort in the application design. They are built to support data replication, high availability and fail-over.
  3. Distributed query support is available thanks to sharding.
  4. Maintaining a NoSQL cluster does not require complex software or several layers of IT personnel and security measures. Of course, that does not mean reduced security for your data.
  5. Caching is built in and low latency is the expectation. Caching is transparent to application developers and infrastructure teams.
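To make the schema-less point concrete, here is a small sketch using the pymongo driver; it assumes a MongoDB instance running on localhost, and the product documents are made up. Two documents with different shapes live in the same collection with no migration step:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    products = client.shop.products

    products.insert_one({"_id": "sku-1", "name": "Notebook", "price": 4.99})
    products.insert_one({"_id": "sku-2", "name": "Pen", "price": 1.50,
                         "colors": ["blue", "black"],                 # new fields, no ALTER TABLE
                         "bulk_discount": {"min_qty": 10, "pct": 15}})

    for doc in products.find({"price": {"$lt": 5}}):
        print(doc)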

On Gartner's Hype Cycle, NoSQL is perhaps at the Slope of Enlightenment stage, with tremendous strides made in the last two years towards maturity for some of the NoSQL offerings.

Gartner's Hype Cycle

What are my Options?
There are many options to consider when choosing a NoSQL solution. Most are open source and schema-less. The key distinguishing factor between NoSQL databases is their design decision on how they handle data storage.

  • Key-value Storage: Membase, Redis, Riak
  • Graph Storage: Neo4j, InfoGrid, Bigdata
  • Wide-column Storage: Cassandra, Hadoop
  • Document Storage: MongoDB, CouchDB
  • Eventually Consistent Key-Value Storage: Amazon Dynamo, Voldemort
  • NewSQL: Almost relational, but much simpler and more easily scalable than an RDBMS. Examples are VoltDB and ScaleDB.

How do I get buy-in from the team (above and below me)?
As in most organizations, new (or what is considered the latest and greatest) technology is met with apprehension at best and suspicion at worst. The best and proven way to introduce something into the organization is to build prototypes of real-world scenarios, highlighting the advantages specific to your organization.

The most common place to introduce a NoSQL engine in your organization is likely through an application-logging prototype. With technology such as a NoSQL database, which is more of an infrastructure element, it is important to demonstrate business continuity with the new technology compared to existing technologies, thus demonstrating minimal risk to business stakeholders. It is likely that your developers have already heard of this technology and are highly interested and motivated to use NoSQL databases. It is up to you to educate yourself on the new technology, and then educate your organization on the benefits of NoSQL based on the results of your prototype. Lastly, you can make the point that NoSQL is not an invention waiting to be implemented. Rather, it grew out of necessity at companies like Google and Amazon, who built it, used it, and then open-sourced it for the community at large.

Next Steps
For more details on each NoSQL option, visit www.nosql-database.org. We will also publish follow-up blog posts on selected NoSQL databases in the coming weeks here at Credera.com. The follow-ups will be in-depth reviews of the selected NoSQL databases with sample data and use cases for each.

Courtesy: www.credera.com

Infrastructure as code tops IT’s DevOps challenges

IT operations pros have some work to do to automate the infrastructure underpinning DevOps initiatives.

While cultural barriers are some of the most daunting DevOps challenges, IT operations practitioners say that capturing infrastructure as code is the most significant technical hurdle to supporting modern application development practices.

Even though configuration management tools that enable infrastructure as code, such as Puppet and Chef, have been on the market for years, the concept can still be difficult for some IT pros to grasp.

Not everyone has yet bought into the concept of taking a traditional rack-and-stack infrastructure, where IPs are managed in Excel spreadsheets and automation is done through Bash scripts and Ruby code, and capturing it as code, according to Pauly Comtois, vice president of DevOps for a multi-national media company.

“A lot of our customer organizations barely have operations automated in any way,” echoed Nirmal Mehta, senior lead technologist for the strategic innovation group at Booz Allen Hamilton Inc., a consulting firm based in McLean, Va., who works with government organizations to establish a DevOps culture.

“It’s 2016, and we should be able to automate those deployments,” he said. “Once you do that, you can start to use the exact same tools to manage the infrastructure that you use for your application code.”

A big reason why companies have been slow to automate their operations is that infrastructure as code work can be more easily discussed than done — legacy applications often weren’t designed with tools such as Chef or Puppet in mind.

Third-party software that runs on Windows isn’t conducive to automation via the command line, Comtois pointed out. “What makes that really technically challenging is when that piece of software also happens to be critical to the workflow of that organization, so I can’t just go in and rip it out and replace it with something else.”

These issues can be overcome, but “some transitions are more painful than others,” he said.

Security teams also have to be brought on board with managing infrastructure as code, according to Mehta.

“Infrastructure as code and configuration management make compliance a lot easier, but that also means that compliance is no longer a thing that you do once a year,” he said. “It gets enveloped in the DevOps process [just like] any piece of code needs to go through.”
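As a toy illustration of what "compliance enveloped in the DevOps process" can look like (the file path and rule here are assumptions, not from the article), a policy can be written as an ordinary test that runs on every change, for example under a runner such as pytest:

    from pathlib import Path

    def test_root_ssh_login_disabled():
        """Fail the pipeline if sshd still permits root login."""
        config = Path("/etc/ssh/sshd_config").read_text()
        effective = [line.split("#")[0].strip() for line in config.splitlines()]
        assert "PermitRootLogin no" in effective, "root SSH login must be disabled"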

The majority of IT operations' time for the foreseeable future will be spent transitioning manual processes into infrastructure as code, or automated steps that follow the same pipeline that application code does, according to Mehta.

Infrastructure as code benefits

So why go through the technical headaches to establish infrastructure as code?

According to experienced DevOps practitioners, it’s the only way to create an automated IT infrastructure that adequately supports automated application development testing and release cycles.

“In our environment, Jenkins makes many calls into Ansible to build stuff and deploy and configure it,” said Baron Schwartz, founder and CEO of VividCortex, a database monitoring SaaS provider based in Charlottesville, Va. “Whatever we want to be automated, we have CircleCI calling a Web service that pokes Jenkins, which runs Ansible — it sounds like a Rube Goldberg machine, but it works well.”

Even things the VividCortex team wants to kick off manually use a chat bot to call into Jenkins and kick off a build job with Ansible, Schwartz said.
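The article doesn't show the plumbing, but the "poke Jenkins" step in a chain like that can be as small as one call to Jenkins' remote build endpoint. In this sketch the URL, job name, credentials and parameters are placeholders, and the triggered job would be the one that runs Ansible:

    import requests

    JENKINS_URL = "https://jenkins.example.com"
    JOB = "deploy-api"

    def trigger_build(user: str, api_token: str, **params) -> None:
        # Jenkins exposes a per-job remote trigger endpoint; the query parameters
        # become build parameters for the job.
        resp = requests.post(
            f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
            params=params,
            auth=(user, api_token),
            timeout=10,
        )
        resp.raise_for_status()

    trigger_build("ci-bot", "not-a-real-token", GIT_SHA="abc123", ENV="staging")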

Getting IT ops staffs used to the concept of infrastructure as code is key to securing their buy-in as DevOps is more broadly rolled out in an environment, according to Caedman Oakley, DevOps evangelist for Ooyala Inc., a video processing service headquartered in Mountain View, Calif.

“Operations doesn’t want to see things change unless [it] know[s] what controls are in place,” Oakley said. “Everything being written in a Chef recipe or in cookbooks means [operations] can see what the change was and … knows exactly who did the change and why it’s happening — and that actually is the greatest opener to adoption on the operations side.”

Ultimately infrastructure as code simplifies infrastructure management, Oakley said.

“Operations can just go manage the infrastructure now, and don’t have to worry about figuring out why one server is slightly different from another,” he said. “You can just fire up an instance any way you want to.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

 

Courtesy: http://searchitoperations.techtarget.com/news/450280797/Infrastructure-as-code-tops-ITs-DevOps-challenges

Key Parameters while choosing an Automation Testing Tool

Hey all,

Hi Techies! How are you doing? Everything alright? How is the software industry treating you?

I hope it is keeping you on your toes, as always. At least here in India it is! The industry is evolving at a blazing pace. There are start-ups cropping up in every nook and corner. There are huge rounds of funding for those that show promise, and all my buddies working in such start-ups are earning big bucks. Older, more established companies are either withering away or reinventing themselves to stay relevant and competitive.

Software development has moved on from the 'steam engine' phase to the 'bullet train' phase. Cloud technologies like AWS and Azure are enabling companies to set up shop quicker, cheaper and lighter, and to scale their systems more easily. Release times are getting shorter and shorter. To aid these faster release cycles, there is a new set of software ninjas called DevOps engineers who are bringing a paradigm shift to the way the industry has worked until now. There is a cultural and ideological change happening, and the conventional boundaries of Dev, QA and Support are slowly fading away, for the better.

Coming straight to the subject of this post…

There is a huge need for software automation for various tasks and processes in today's industry. Each company needs to have a robust automation framework in place to match the speed of software development and test the iterative builds. A quick, reliable feedback loop on code quality enables moving to a 'Continuous Delivery' model where builds are deployment-ready. It is close to impossible to test the quality of every single release with manual testing alone; I'm sure you'll agree with me on this. Hence automation is the pressing need of the hour.

Automation requirements in the industry are massive – from UI automation for desktop applications, browsers and mobiles, to API-level automation for backend architecture, to machine-level automation for various tasks. There is a vast array of automation tools on the market that can be used for different automation requirements. In my 5 years of experience in the software industry, I've come across many instances where I've had to select a tool for a type of automation, and I've always been spoilt for choice. I had very little time to decide on the tool and design my automation framework. To add to this, an engineer is always expected to deliver fast, tangible results. Simply put, you have the opportunity to make it or break it.

It therefore becomes very important to make a quick but well-balanced decision when selecting your tool. Otherwise, sooner or later, you will find yourself having to scrap the tool you selected, for more reasons than one. The tools all look similar in the beginning, but with careful reading and research you will be able to identify the variations in each tool and the limitations of each one.

In this article I would like to share a few key parameters to keep in mind while selecting an automation tool for your software quality assurance needs:

1. Ease of setup: setup must be pretty straightforward, and all dependencies must be handled internally during installation of the tool.

2. Ease of implementation: the APIs provided by the tool must be simple to understand, and there must be good documentation for beginners.

3. Execution time of the scripts: each command in the script must execute fast, and the overall time needed to execute a test suite must be within acceptable limits.

4. Reliability: the tool must be reliable and must not give different results for the same test. Execution times and results must be consistent when run on a day-to-day basis. The tool must also work as per the documentation; any deviation from that is a sign of unreliability.

5. Big developer/user community: a very important parameter. The bigger the community of users, the faster you will be able to resolve any roadblocks.

6. Support for issues/upgrades: there must be frequent updates to the tool with bug fixes. This shows that there is an active development team working to make the tool more and more robust. Read the release notes too, if possible, to get an idea of the issues that have been fixed and how quickly they were fixed.

7. Integration with Continuous Integration: the tool must have hooks to integrate with CI tools like Jenkins, Bamboo, Travis, etc. Without these, a user must create their own hooks to plug the automation scripts into the CI tools, which again requires time and effort.

8. Ease of maintenance: the tool must have a good overall framework that can handle maintenance-related work. For example, an updated version of the tool must not break existing workflows. Google for such issues, and if they occur frequently, that is a warning signal.

9. Open source: I would prefer to select a tool that is open source. Period.

10. Choice of language: lastly, select a tool that supports a programming language you are familiar with, so that script development is faster for you.

So there it is! These are the key parameters which came to my mind while selecting automation tools for my workflows. Hope it helps!

As you venture out to select your automation tool, it is best to create a table with all these parameters and carefully evaluate each of your prospective tools against them. Believe me, this definitely helps your decision-making process, and you will come out with a well-balanced, level-headed decision.
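As a toy version of such a table in code (the parameters, weights, tools and scores below are entirely made up), a simple weighted score can rank the candidates:

    weights = {"ease_of_setup": 2, "reliability": 3, "community": 3,
               "ci_integration": 2, "open_source": 1}

    candidates = {
        "Tool A": {"ease_of_setup": 4, "reliability": 3, "community": 5,
                   "ci_integration": 4, "open_source": 5},
        "Tool B": {"ease_of_setup": 5, "reliability": 4, "community": 2,
                   "ci_integration": 3, "open_source": 0},
    }

    def total_score(scores: dict) -> int:
        return sum(weights[param] * scores[param] for param in weights)

    for name, scores in sorted(candidates.items(), key=lambda kv: -total_score(kv[1])):
        print(f"{name}: {total_score(scores)}")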

Until next time… From the Silicon Valley of India… Love to all 🙂

Regards,

VJ

 

Avoiding the “years of experience” trap

The quality of your technical experience matters; it isn't a simple product of time. It is easy to fall into the trap of focusing on some narrow aspect of technology and getting comfortable with what you know. Becoming an in-house expert on a specific vendor's offering. Collecting a paycheck, and never bothering to sharpen your tools, much less add new ones to the toolbox.

You’ve probably heard of the “years of experience fallacy” before. You know, the guy who’s been doing the exact. same. thing. for the last 10 years, and hasn’t advanced much in their knowledge or their career. The tech industry equivalent of a factory worker that spends a lifetime pulling the same lever, until one day they discover they are obsolete and a robot is taking their job.

All experience is not created equal.

The world of web development moves forward at light speed. What’s popular today will be obsolete tomorrow. We all know this — the last few decades of history have made this clear.

Unfortunately, it’s way too easy to get left behind. We’re busy… we’re busy with our jobs, our clients, our family, and our friends. Finding time to keep your skills up to date is hard, and many of the resources out there are dense, boring, and take hours and hours of digging to find what you’re looking for.

And that’s why John and I created Egghead — we wanted to provide easy-to-digest, bite size morsels of knowledge that will keep you current and keep you in high demand. We’ve both experienced this (we were both Adobe Flex/Flash developers… ugh), and had to scramble when a vendor effectively cut the throat of our careers, terminating a platform, sending legions of developers scrambling to discover “what’s next”.

Years of experience can be a glorious thing that uplifts you and propels you forward. But waking up one morning to discover that your years of experience are obsolete is total BS. It hurts.

Joel Hooks
Co-founder, Egghead

Similar articles: India’s IT Party is over. Reinvent yourself or suffer

India’s IT Party is over. Reinvent yourself or suffer

India's IT industry is unlikely to remain the amazing job-engine that it has been. For the past two decades, the fastest way to increase your income has been to land a job with an IT company. The industry has provided a ticket to prosperity for millions of young Indians; children of security guards, drivers, peons and cooks catapulted themselves and their families firmly into the middle class in a single generation by landing a job in a BPO. Hundreds of engineering colleges mushroomed overnight, churning out over a million graduates a year to feed the insatiable demand of India's IT factories.

This party is coming to an end. A combination of slowing demand, rising competition and technological change means that companies will hire far fewer people. And this is not a temporary blip; this is the new normal. Wipro's CEO has bravely admitted that automation can displace a third of all jobs within three years, while Infosys CEO Sikka aims to increase revenue per employee by 50%. Even NASSCOM, the chronically optimistic industry association, admits that companies will hire far fewer people. Not only will the lines of new graduates waiting for job offers grow longer every year, but so will the lines of the newly unemployed, as all companies focus more on utilization, employee productivity and performance. Employees doing tasks that can be automated, the armies of middle managers who supervise them, and all those with mediocre performance reviews and without hot skills are living on borrowed time.

So what do you do if you are a member of one of these endangered species? What constitutes good career advice in these times? I'd say the first thing is to embrace reality and recognize that the game has changed for good. The worst thing to do is to be wishful and wait for the good times to return. They won't. But there are still lots of opportunities. What's happening in the industry is 'creative destruction'. New technologies are destroying old jobs but creating many new ones. There is an insatiable demand for developers of mobile and web applications. For data engineers and scientists. For cyber security expertise. So for anyone who is a quick learner, anyone with real expertise, there will be abundant opportunities.

There has also never been a better time for anyone with an iota of entrepreneurial instinct. India is still a supply-constrained economy, so there is room to start every kind of business: beauty parlour, bakery, catering, car-washing, mobile/electronics repair, laundry, housekeeping, tailoring. For entrepreneurs with a social conscience, there is a massive need for social enterprises that deliver affordable healthcare, education and financial services. Not only are there abundant opportunities, but startups are 'in' and there is no shame at all in failure. The ranks of angel investors are swelling, and it has never been so easy to get funded. There is even a website, www.deasra.in, that provides step-by-step instructions to would-be entrepreneurs.

For those who prefer a good old-fashioned job, there are abundant jobs in old-economy companies, which are struggling to find every kind of talent: accountants, manufacturing and service engineers, sales reps. Technology is enabling the emergence of new 'sharing services' such as Uber or Ola that enable lucrative self-employment; it is not uncommon to find cab drivers who make 30,000–40,000 rupees a month.

My main point should be clear. While India may have a big challenge overall in creating enough jobs for its youthful population, at the individual level there is no shortage of opportunities. The most important thing is a positive attitude. The IT boom was a tide that lifted all boats, even the most mediocre ones. However, this has bred an entitlement mentality and a lot of mediocrity. To prosper in the new world, two things will really matter. The first is the right attitude. This means a hunger to succeed. Being proactive in seeking opportunities, not waiting until you are fired or for something to drop into your lap. A willingness to take risks and the tenacity to work hard and make something a success. Humility. Frugality. The second is the ability to try and learn new things. The rate of change in our world is astonishing; whatever skills we have will largely be irrelevant in a decade. People are also living much longer. So the ability to learn new things, develop new competencies and periodically reinvent ourselves is a crucial one. Sadly, too many of us have no curiosity and no interest in reading or learning. The future will not be kind to such people.

“The snake which cannot cast its skin has to die.”- Friedrich Nietzsche

(First published as an opinion piece in Times of India)

Courtesy: https://www.linkedin.com/pulse/indias-party-over-reinvent-yourself-suffer-ravi-venkatesan?trk=hp-feed-article-title

Similar articles: Interview on Emerging IT trends by T.K. Kurien, CEO of Wipro

Decreasing False Positives in Automated Testing

Hey all,

I attended a webinar last night on “Decreasing False Positives in Automated Testing”. The event was conducted by Sauce Labs, and the webinar was presented by Anand Ramakrishnan, QA Director, QASource.

There was a lot of good learning in it, and I would like to share, in brief, the various things that were discussed.

All attendees of the meeting were initially asked to answer a small survey question…

Q. In your organization, what is the primary objective of using automation?

Options:

  A. Save money
  B. Save time / release faster –> This option got the maximum number of votes
  C. More QA coverage

My vote, too, was for the same option. It looks like the industry as a whole is facing a similar challenge of meeting faster release times.

Moving on to the topic…

Q. What are false positives?

Ans. Tests that are marked as failures when in reality they should have passed. In other words, they are false alarms.

We had another survey question just then:

Q. What is the percentage of false positives within your respective automation tests?

Options:

  A. <5%
  B. 5-15%
  C. 15-25% –> This option got the maximum number of votes
  D. >25%

My vote was for option B. Though we are on the better side of things, we still need to reduce these false positives further.

Reasons why tests encounter false positives:

  1. Flawed automation approach
  2. Choosing the wrong framework
  3. Inadequate time to accommodate a test plan/design.
  4. Hard-coded waits/delays
  5. Less modularity of code
  6. Relying on coordinates & XPath of objects
  7. Changes in UI element id, classname, etc.
  8. Shared environment for QA as well as automation
  9. Slow performance of the application, or of a particular environment
  10. Manual intervention prior to execution of automation scripts
  11. Browser incompatibilities.

Impact of False Positives:

  • Frustration within engineering team
  • Sooner or later, test failures are ignored by stakeholders
  • Risk of overlooking potential bug
  • Babysitting of automation tests
  • Maintenance cost of automation increases

Now the most important part of the discussion.

Ways to Reduce False Positives:

  • Deploy the application on optimal configurations for automation
  • Keep tests short and simple; avoid trying to do too many things in a single test case
  • Keep tests independent, with no sequencing of tests
  • Provide unique identifiers while developing the application itself
  • Use the right locators, in decreasing priority: id > classname > CSS locator
  • Use a tear-down approach: bring the test machine to a base state before/after every test case
  • Use dynamic object synchronisation, with no hard-coded waits (see the sketch after this list)
  • Build re-execution capability into the test framework: if a test fails, re-execute it, and if it then passes, ignore the previous failure
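To illustrate a couple of these points, here is a small Selenium sketch (the URL, element IDs and credentials are placeholders, and it assumes a local Chrome driver): it uses id-based locators and dynamic waits instead of hard-coded sleeps or brittle XPath.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.com/login")
        wait = WebDriverWait(driver, 10)   # polls until the condition holds, up to 10 s
        username = wait.until(EC.visibility_of_element_located((By.ID, "username")))
        username.send_keys("qa-user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        # Wait for an observable outcome rather than sleeping a fixed number of seconds.
        wait.until(EC.presence_of_element_located((By.ID, "dashboard")))
    finally:
        driver.quit()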

Finally,

Benefits of Eliminating False Positives:

  • Will not miss potential bugs
  • Certainty of application health
  • Increase in productivity
  • Save time not babysitting
  • Decrease cost of automation

Summary: It was an awesome webinar, and it also brought perspective on how critical automation is to meeting current-day SDLC challenges, and how to make it more effective.

Your feedback, both good and bad, is always welcome.

Regards,

VJ