3 Sneaky Cyber Security Threats to Watch Out for in 2022

2022 is shaping up to be an interesting time in the cyber security landscape, as the number of cyber crimes is increasing at an alarming rate. Three sneaky threats to watch out for are:

Magecart Attack

Magecart is a type of data-skimming attack used to capture sensitive information. In the cyber security domain, attackers are termed ‘threat actors’, and from here on this article will refer to them that way.

In Magecart attacks, threat actors capture sensitive information such as email addresses, passwords and credit card details through malicious code they implant in websites, and then sell the stolen data on the dark web. These attacks mostly target consumer-facing websites and apps.
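One common mitigation on the website side is to restrict where scripts may load from with a Content-Security-Policy header, so that an injected skimmer calling out to an attacker-controlled host is refused by the browser. The sketch below illustrates the idea with Flask; the framework choice, route and policy string are assumptions for illustration only.

# Minimal sketch: send a Content-Security-Policy header that limits which
# scripts a page may execute, blunting an injected Magecart-style skimmer.
# Flask, the route and the exact policy string are illustrative assumptions.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp_header(response):
    # Allow scripts only from our own origin; inline scripts and unknown
    # third-party hosts are blocked by the browser.
    response.headers["Content-Security-Policy"] = "script-src 'self'"
    return response

@app.route("/checkout")
def checkout():
    return "checkout page"

if __name__ == "__main__":
    app.run()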

Credential Stuffing Attack

In this type of attack, threat actors use lists of compromised user credentials to breach multiple systems. Many users reuse usernames and passwords across platforms, so their accounts can be compromised wholesale with this method. The attacks are usually carried out by a well-automated fleet of software bots. Statistically, about 0.1% of breached credentials result in a successful login on a new service. Sadly, even now, many users keep the same password on multiple platforms, making them prime targets for these threat actors.
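One common countermeasure is to check passwords against known breach corpora at signup or login and force a reset when they match. The sketch below uses the Have I Been Pwned range API purely as an illustration; the API choice is an assumption, not something from this article.

# Minimal sketch: check whether a password appears in a known breach corpus
# via the Have I Been Pwned range API. Only the first five characters of the
# SHA-1 hash leave the machine (k-anonymity). Illustrative assumption only.
import hashlib
import requests

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(times_pwned("password123"))  # breached passwords return a large count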

Password Spraying Attack

Password spraying, as the name suggests, ‘sprays’ a single password across multiple usernames on a platform to gain unauthorized access. In contrast to brute-force attacks, which try many passwords against a single username, this attack uses a password only once per username before moving on to the next one. This neatly avoids accounts being locked out after repeated failed login attempts, so the threat actor stays undetected and continues to prowl for vulnerable accounts.
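Because each account sees only one or two failures, per-account lockout thresholds miss the pattern; defenders instead look for a single source touching many distinct usernames. The sketch below shows that heuristic; the thresholds and event format are illustrative assumptions.

# Rough sketch: flag source IPs whose failed logins span many distinct
# usernames within a time window, the signature of password spraying.
# The thresholds and the event format are illustrative assumptions.
from collections import defaultdict

WINDOW_SECONDS = 3600
DISTINCT_USER_THRESHOLD = 20

def find_spraying_ips(failed_logins):
    """failed_logins: list of (timestamp_seconds, source_ip, username) tuples."""
    if not failed_logins:
        return []
    latest = max(ts for ts, _, _ in failed_logins)
    users_per_ip = defaultdict(set)
    for ts, ip, user in failed_logins:
        if latest - ts <= WINDOW_SECONDS:
            users_per_ip[ip].add(user)
    return [ip for ip, users in users_per_ip.items()
            if len(users) >= DISTINCT_USER_THRESHOLD]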

How much Virtualization is too much?

This is one of the best explanations of virtualization I’ve read:

“Virtualization allows us to slice up a physical server into separate hosts, each of which can run different things. So if we want one service per host, can’t we just slice up our physical infrastructure into smaller and smaller pieces? Well, for some people, you can. However, slicing up the machine into ever increasing VMs isn’t free. Think of our physical machine as a sock drawer. If we put lots of wooden dividers into our drawer, can we store more socks or fewer? The answer is fewer: the dividers themselves take up room too! Our drawer might be easier to deal with and organize, and perhaps we could decide to put T-shirts in one of the spaces now rather than just socks, but more dividers means less overall space.”

Book: Building Microservices by Sam Newman

Here’s how Evernote moved 3 petabytes of data to Google’s cloud

Evernote decided last year that it wanted to move away from running its own data centers and start using the public cloud to operate its popular note-taking service. On Wednesday, it announced that the lion’s share of the work is done, save for some last user attachments.

The company signed up to work with Google, and as part of the migration process, the tech titan sent a team of engineers (in one case, bearing doughnuts) over to work with its customer on making sure the process was a success.

Evernote wanted to take advantage of the cloud to help with features based on machine learning that it has been developing. It also wanted to leverage the flexibility that comes from not having to run a data center.

The move is part of a broader trend of companies moving their workloads away from data centers that they own and increasingly using public cloud providers. While the transition required plenty of work and adaptation, Evernote credited Google for pitching in to help with the migration.

Why move to the cloud?

There was definitely plenty of work to do. Evernote’s backend was built on the assumption that its application would be running on the company’s twin California data centers, not in a public cloud. So why go through all the work?

Many of the key drivers behind the move will be familiar to cloud devotees. Evernote employees had to spend time maintaining the company’s data center, doing things like replacing hard drives, moving cables and evaluating new infrastructure options.

While those functions were key to maintaining the overall health and performance of the Evernote service, they weren’t providing additional value to customers, according to Ben McCormack, the company’s vice president of operations.

“We were just very realistic that with a team the size of Evernote’s operations team, we couldn’t compete with the level of maturity that the cloud providers have got…on provisioning, on management systems, et cetera,” McCormack said. “We were always going to be playing catch-up, and it’s just a crazy situation to be in.”

When Evernote employees thought about refreshing a data center, one of the key issues they encountered was that they didn’t know what they would need from a data center in five years, McCormack said.

Evernote had several public cloud providers it could choose from, including Amazon Web Services and Microsoft Azure, which are both larger players in the public cloud market. But McCormack said the similarities between the company’s current focus and Google’s areas of expertise were important to the choice. Evernote houses a large amount of unstructured data, and the company is looking to do more with machine learning.

“You add those two together, Google is the leader in that space,” McCormack said. “So effectively, I would say, we were making a strategic decision and a strategic bet that the areas that are important to Evernote today, and the areas we think will be important in the future, are the same areas that Google excels in.”

Machine learning was a highlight of Google’s platform for Evernote CTO Anirban Kundu, who said that higher-level services offered by Google help provide the foundation for new and improved features. Evernote has been driving toward a set of new capabilities based on machine learning, and Google services like its Cloud Machine Learning API help with that.

While cost is often touted as a benefit of cloud migrations, McCormack said that it wasn’t a primary driver of Evernote’s migration. While the company will be getting some savings out of the move, he said that cost wasn’t a limitation for the transition.

The decision to go with Google over another provider like AWS or Azure was driven by the technology team at Evernote, according to Greg Chiemingo, the company’s senior director of communications. He said in an email that CEO Chris O’Neill, who was at Google for roughly a decade before joining Evernote, came in to help with negotiations after the decision was made.

How it happened

Once Evernote signed its contract with Google in October, the clock was ticking. McCormack said that the company wanted to get the migration done before the new year, when users looking to get their life on track hammer the service with a flurry of activity.

Before the start of the year, Evernote needed to migrate 5 billion notes and 5 billion attachments. Because of metadata, like thumbnail images, included with those attachments, McCormack said that the company had to migrate 12 billion attachment files. Not only that, but the team couldn’t lose any of the roughly 3 petabytes of data it had. Oh yeah, and the Evernote service needed to stay up the entire time.

McCormack said that one of the Evernote team’s initial considerations was figuring out what core parts of its application could be entirely lifted and shifted into Google’s cloud, and what components would need to be modified in some way as part of the transition.

Part of the transformation involved reworking the way that the Evernote service handled networking. It previously used UDP Multicast to handle part of its image recognition workflow, which worked well in the company’s own data center where it could control the network routers involved.

But that same technology wasn’t available in Google’s cloud. Kundu said Evernote had to rework its application to use a queue-based model leveraging Google’s Cloud Pub/Sub service, instead.
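The article does not describe Evernote’s implementation, but the general shape of a queue-based handoff on Cloud Pub/Sub looks roughly like the sketch below, using Google’s Python client library. The project, topic, subscription and payload names are hypothetical.

# Rough sketch of a queue-based handoff on Google Cloud Pub/Sub, standing in
# for a multicast-style broadcast: a producer publishes work items to a topic
# and workers pull them from a subscription. All names are hypothetical.
from google.cloud import pubsub_v1

PROJECT_ID = "example-project"                  # hypothetical
TOPIC_ID = "image-recognition-jobs"             # hypothetical
SUBSCRIPTION_ID = "image-recognition-workers"   # hypothetical

def publish_job(note_id: str) -> None:
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
    future = publisher.publish(topic_path, note_id.encode("utf-8"))
    future.result()  # block until the message has been accepted

def run_worker() -> None:
    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

    def callback(message):
        note_id = message.data.decode("utf-8")
        print(f"processing note {note_id}")  # the real work would happen here
        message.ack()

    streaming_pull = subscriber.subscribe(sub_path, callback=callback)
    streaming_pull.result()  # block and process messages until interrupted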

Evernote couldn’t just migrate all of its user data over and then flip a switch directing traffic from its on-premises servers to Google’s cloud in one fell swoop. Instead, the company had to rearchitect its backend application to handle a staged migration with some data stored in different places.

The good news is that the transition didn’t require changes to the client. Kundu said that was key to the success of Evernote’s migration, because not all of the service’s users upgrade their software in a timely manner.

Evernote’s engagement with Google engineers was a pleasant surprise to McCormack. The team was available 24/7 to handle Evernote’s concerns remotely, and Google also sent a team of its engineers over to Evernote’s facilities to help with the migration.

Those Google employees were around to help troubleshoot any technical challenges Evernote was having with the move. That sort of engineer-to-engineer engagement is something Google says is a big part of its approach to service.

For one particularly important part of the migration, Google’s engineers came on a Sunday, bearing doughnuts for all in attendance. More than that, however, McCormack said that he was impressed with the engineers’ collaborative spirit.

“We had times when…we had written code to interface with Google Cloud Storage, we had [Google] engineers who were peer-reviewing that code, giving feedback and it genuinely felt like a partnership, which you very rarely see,” McCormack said. “Google wanted to see us be successful, and were willing to help across the boundaries to help us get there.”

In the end, it took roughly 70 days for the whole migration to take place from the signing of the contract to its final completion. The main part of the migration took place over a course of roughly 10 days in December, according to McCormack.

Lessons learned

If there was one thing Kundu and McCormack were crystal clear about, it’s that even the best-laid plans require a team that’s willing to adapt on the fly to a new environment. Evernote’s migration was a process of taking certain steps, evaluating what happened, and modifying the company’s approach in response to the situation they were presented with, even after doing extensive testing and simulation.

Furthermore, they also pointed out that work on a migration doesn’t stop once all the bytes are loaded into the cloud. Even with extensive testing, the Evernote team encountered new constraints working in Google’s environment once it was being used in production and bombarded with activity from live Evernote users.

For example, Google uses live migration techniques to move virtual machines from one host to another in order to apply patches and work around hardware issues. While that happens incredibly quickly, the Evernote service under full load had some problems with it, which required (and still requires) optimization.

Kundu said that Evernote had tested live migration prior to making the switch over to GCP, but that wasn’t enough.

When an application is put into production, user behavior and load on it might be different from test conditions, Kundu said. “And that’s where you have to be ready to handle those edge cases, and you have to realize that the day the migration happens or completes is not the day that you’re all done with the effort. You might see the problem in a month or whatever.”

Another key lesson, in McCormack’s opinion, is that the cloud is ready to handle any sort of workload. Evernote evaluated a migration roughly once every year, and it was only about 13 months ago that the company felt confident a cloud transition would be successful.

“Cloud has reached a maturity level and a breadth of features that means it’s unlikely that you’ll be unable to run in the cloud,” McCormack said.

That’s not to say it doesn’t require effort. While the cloud does provide benefits to Evernote that the company wasn’t going to get from running its own data center, the company still had to cede control of its environment, and be willing to lose some of the telemetry it was used to getting from a private data center.

Evernote’s engineers also did a lot of work on automating the transition. Moving users’ attachments over from the service’s on-premises infrastructure to Google Cloud Storage is handled by a pair of bespoke automated systems. The company used Puppet and Ansible for migrating the hundreds of shards holding user note data.

The immediate benefits of a migration

One of the key benefits of Evernote’s move to Google’s cloud is the company’s ability to provide reduced latency and improved connection consistency to its international customers. Evernote’s backend isn’t running in a geographically distributed manner right now, but Google’s worldwide networking investments provide an improvement right away.

“We have seen page loading times reducing quite significantly across some parts of our application,” McCormack said. “I wouldn’t say it’s everywhere yet, but we are starting to see that benefit of the Google power and the Google reach in terms of bridging traffic over their global fiber network.”

Right now, the company is still in the process of migrating the last of its users’ attachments to GCP. When that’s done, however, the company will be able to tell its users that all the data they have in the service is encrypted at rest, thanks to the capabilities of Google’s cloud.

From an Evernote standpoint, the company’s engineers have increased freedom to get their work done using cloud services. Rather than having to deal with provisioning physical infrastructure to power new features, developers now have a whole menu of options when it comes to using new services for developing features.

“Essentially, any GCP functionality that exists, they’re allowed to access, play with — within constraints of budget, obviously — and be able to build against.”

In addition, the cloud provides the company with additional flexibility and peace of mind when it comes to backups, outages and failover.

What comes next?

Looking further out, the company is interested in taking advantage of some of Google’s existing and forthcoming services. Evernote is investigating how it can use Google Cloud Functions, which lets developers write snippets of code that then run in response to event triggers.
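For context, a Python function deployed on Cloud Functions with a Pub/Sub trigger is just a handler like the sketch below; the function and payload names are hypothetical, and the platform invokes it once per event, so no server code is needed.

# Minimal sketch of a background Cloud Function fired by a Pub/Sub event
# trigger. The function and payload names are hypothetical.
import base64

def handle_note_event(event, context):
    # Pub/Sub delivers the message body base64-encoded in event["data"].
    payload = base64.b64decode(event["data"]).decode("utf-8") if "data" in event else ""
    print(f"received event {context.event_id}: {payload}")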

Evernote is also alpha testing some Google Cloud Platform services that haven’t been released or revealed to the public yet. Kundu wouldn’t provide any details about those services.

In a similar vein, Kundu wouldn’t go into details about future Evernote functionality yet. However, he said that there are “a couple” of new features that have been enabled as a result of the migration.

Courtesy: www.cio.com

Installation of MongoDB on Ubuntu

1. Import the public key used by the package management system.

The Ubuntu package management tools (i.e. dpkg and apt) ensure package consistency and authenticity by requiring that distributors sign packages with GPG keys. Issue the following command to import the MongoDB public GPG Key:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

2. Create a list file for MongoDB.

Create the /etc/apt/sources.list.d/mongodb-org-3.2.list list file using the command appropriate for your version of Ubuntu:

Ubuntu 12.04

echo "deb http://repo.mongodb.org/apt/ubuntu precise/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

Ubuntu 14.04

echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

Ubuntu 16.04

echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

3. Reload the local package database.

Issue the following command to reload the local package database:

sudo apt-get update

4. Install the MongoDB packages.

You can install either the latest stable version of MongoDB or a specific version of MongoDB.

Install the latest stable version of MongoDB.

Issue the following command:

sudo apt-get install -y mongodb-org

Install a specific release of MongoDB.

To install a specific release, you must specify each component package individually along with the version number, as in the following example:

sudo apt-get install -y mongodb-org=3.2.9 mongodb-org-server=3.2.9 mongodb-org-shell=3.2.9 mongodb-org-mongos=3.2.9 mongodb-org-tools=3.2.9

If you only install mongodb-org=3.2.9 and do not include the component packages, the latest version of each MongoDB package will be installed regardless of what version you specified.

Pin a specific version of MongoDB.

Although you can specify any available version of MongoDB, apt-get will upgrade the packages when a newer version becomes available. To prevent unintended upgrades, pin the package. To pin the version of MongoDB at the currently installed version, issue the following command sequence:

echo "mongodb-org hold" | sudo dpkg --set-selections
echo "mongodb-org-server hold" | sudo dpkg --set-selections
echo "mongodb-org-shell hold" | sudo dpkg --set-selections
echo "mongodb-org-mongos hold" | sudo dpkg --set-selections
echo "mongodb-org-tools hold" | sudo dpkg --set-selections

(Ubuntu 16.04 only) Create a systemd service file

NOTE

Follow this step ONLY if you are running Ubuntu 16.04.

Create a new file at /lib/systemd/system/mongod.service with the following contents:

[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target
Documentation=https://docs.mongodb.org/manual

[Service]
User=mongodb
Group=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf

[Install]
WantedBy=multi-user.target

Run MongoDB Community Edition

The MongoDB instance stores its data files in /var/lib/mongodb and its log files in /var/log/mongodb by default, and runs using the mongodb user account. You can specify alternate log and data file directories in /etc/mongod.conf. See systemLog.path and storage.dbPath for additional information.
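For reference, the relevant settings in /etc/mongod.conf look roughly like the following; the values shown are the defaults described above, and the snippet is a sketch rather than a complete configuration file.

storage:
  dbPath: /var/lib/mongodb
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017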

If you change the user that runs the MongoDB process, you must modify the access control rights to the /var/lib/mongodb and /var/log/mongodb directories to give this user access to these directories.

Start MongoDB.

Issue the following command to start mongod:

sudo service mongod start

Verify that MongoDB has started successfully.

Verify that the mongod process has started successfully by checking the contents of the log file at /var/log/mongodb/mongod.log for a line reading:

[initandlisten] waiting for connections on port <port>

where <port> is the port configured in /etc/mongod.conf, 27017 by default.
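One quick way to check for that line from the shell (an illustrative command, not part of the original guide):

grep "waiting for connections" /var/log/mongodb/mongod.log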

Stop MongoDB.

As needed, you can stop the mongod process by issuing the following command:

sudo service mongod stop

Restart MongoDB.

Issue the following command to restart mongod:

sudo service mongod restart

Begin using MongoDB.

To help you start using MongoDB, MongoDB provides Getting Started Guides in various driver editions. See Getting Started for the available editions.
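As a quick smoke test from a driver, something like the following sketch connects to the local instance with the Python driver (PyMongo, which must be installed separately) and round-trips a document; the database and collection names are illustrative.

# Quick smoke test with PyMongo: connect to the local mongod, insert a
# document, and read it back. Database and collection names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["test"]
result = db["greetings"].insert_one({"message": "hello, mongodb"})
print(db["greetings"].find_one({"_id": result.inserted_id}))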

Before deploying MongoDB in a production environment, consider the Production Notes document.

If you instead run the mongod instance manually in a terminal (rather than as a service), you can stop it later by pressing Control+C in that terminal.

 

Courtesy: MongoDB Website