AWS – Extending an EBS Volume Attached to an EC2 Instance

Say you are working on an EC2 instance with an EBS volume provisioned, and you later find that the storage you provisioned is insufficient. You need to increase the volume size.

It’s pretty straightforward on the AWS console: with the click of a button you can tell AWS to increase your EBS volume size.
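
If you’d rather script this than click through the console, here’s a rough sketch using the aws-sdk-ec2 gem – the region, volume ID and target size below are placeholders:

require 'aws-sdk-ec2'

# Hypothetical volume ID and region; size is the new size in GiB.
ec2 = Aws::EC2::Client.new(region: 'us-east-1')
ec2.modify_volume(volume_id: 'vol-0123456789abcdef0', size: 16)

# The modification runs asynchronously; you can poll its progress, e.g.:
resp = ec2.describe_volumes_modifications(volume_ids: ['vol-0123456789abcdef0'])
puts resp.volumes_modifications.first.modification_state  # "modifying", "optimizing" or "completed"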

After this change, AWS takes a few minutes to extend the volume, but the extra space does not automatically show up on the EC2 instance.

A few manual commands need to be run to grow the partition and filesystem from the old size to the newly assigned, bigger size.

Here are the steps that got it working for me, after connecting to the EC2 instance the volume is attached to.



1. df -hT -> confirm the existing storage size and filesystem type

2. lsblk -> display the block devices, their partitions and sizes.

NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1       259:0    0  30G  0 disk /data
nvme0n1       259:1    0  16G  0 disk
├─nvme0n1p1   259:2    0   8G  0 part /
└─nvme0n1p128 259:3    0   1M  0 part

3. sudo growpart /dev/nvme0n1 1 -> grow partition 1 of the disk to use the newly available space

4. sudo resize2fs /dev/nvme0n1p1 -> grow the ext4 filesystem on the root partition to fill it (for an XFS filesystem, use xfs_growfs instead)

Run df -hT again to confirm: the EC2 instance now has access to the whole upgraded EBS volume.

Signing out,
VJ

AWS Elastic IP Pricing: A tricky affair

Elastic IP pricing is tricky 🙂 Contrary to AWS’s usual pay-as-you-go model, Elastic IP charges follow a pay-as-you-don’t-use model.

As per the AWS documentation, an Elastic IP address is NOT charged when all of the following conditions are met:

  • The Elastic IP address is associated with an EC2 instance.
  • The instance associated with the Elastic IP address is running.
  • The instance has only one Elastic IP address attached to it.
  • The Elastic IP address is associated with an attached network interface, such as a Network Load Balancer or NAT gateway.

To summarise, Elastic IPs are charged only when they are idle, i.e. not attached to a running AWS resource. So just ensure that every Elastic IP you provision is being actively used 🙂
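
If you want to hunt down idle Elastic IPs across an account, a rough sketch with the aws-sdk-ec2 gem could look like this (the region is a placeholder; an address with no association_id isn’t attached to anything):

require 'aws-sdk-ec2'

ec2 = Aws::EC2::Client.new(region: 'us-east-1')

# Elastic IPs with no association are the ones likely incurring charges.
idle = ec2.describe_addresses.addresses.reject(&:association_id)
idle.each { |addr| puts "Idle Elastic IP: #{addr.public_ip} (#{addr.allocation_id})" }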

Problem Solved!

Signing Out,
VJ


This article was previously published on Medium.

Synthetic Monitoring: An effective way of monitoring a Microservices Application

While monitoring a Microservices application, most of the time we look at CPU and memory values of various instances to understand whether the system is doing well. Given below is another approach to monitoring a system, one that checks the overall health of the application without looking at low-level machine stats like the CPU and memory usage of individual instances.

“I first did this back in 2005. I was part of a small ThoughtWorks team that was building a system for an investment bank. Throughout the trading day, lots of events came in representing changes in the market. Our job was to react to these changes, and look at the impact on the bank’s portfolio. We were working under some fairly tight deadlines, as the goal was to have done all our calculations in less than 10 seconds after the event arrived. The system itself consisted of around five discrete services, at least one of which was running on a computing grid that, among other things, was scavenging unused CPU cycles on around 250 desktop hosts in the bank’s disaster recovery center.

The number of moving parts in the system meant a lot of noise was being generated from many of the lower-level metrics we were gathering. We didn’t have the benefit of scaling gradually or having the system run for a few months to understand what good looked like for metrics like our CPU rate or even the latencies of some of the individual components. Our approach was to generate fake events to price part of the portfolio that was not booked into the downstream systems. Every minute or so, we had Nagios run a command-line job that inserted a fake event into one of our queues. Our system picked it up and ran all the various calculations just like any other job, except the results appeared in the junk book, which was used only for testing. If a repricing wasn’t seen within a given time, Nagios reported this as an issue.”

Book: Building Microservices by Sam Newman
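
To make the idea concrete, here is a minimal sketch of what such a synthetic check could look like in Ruby. Everything here – the queue, the junk-book results store, the method names – is hypothetical; the point is just the shape of the check: inject a fake event, wait, and flag a failure if the expected result never shows up.

# synthetic_check.rb – hypothetical sketch of a synthetic monitoring probe
require 'securerandom'

TIMEOUT_SECONDS = 10   # the "results within 10 seconds" goal from the story above

def run_synthetic_check(queue, results_store)
  event_id = SecureRandom.uuid

  # Inject a fake pricing event, tagged so its results land in the junk book.
  queue.publish(id: event_id, book: 'junk', payload: { symbol: 'FAKE', qty: 1 })

  deadline = Time.now + TIMEOUT_SECONDS
  until Time.now > deadline
    return :ok if results_store.result_for(event_id)  # repricing observed in time
    sleep 0.5
  end

  :failed  # report this back to Nagios / your alerting system
end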

Solr: Using Analysers & Filters to Analyse Queries

A filter may also do more complex analysis by looking ahead to consider multiple tokens at once, although this is less common. One hypothetical use for such a filter might be to normalize state names that would be tokenized as two words. For example, the single token “california” would be replaced with “CA”, while the token pair “rhode” followed by “island” would become the single token “RI”.

Because filters consume one TokenStream and produce a new TokenStream, they can be chained one after another indefinitely. Each filter in the chain in turn processes the tokens produced by its predecessor. The order in which you specify the filters is therefore significant. Typically, the most general filtering is done first, and later filtering stages are more specialized.

<fieldType name="text" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StandardFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EnglishPorterFilterFactory"/>
  </analyzer>
</fieldType>

This example starts with Solr’s standard tokenizer, which breaks the field’s text into tokens. Those tokens then pass through Solr’s standard filter, which removes dots from acronyms, and performs a few other common operations. All the tokens are then set to lowercase, which will facilitate case-insensitive matching at query time.

 

The last filter in the above example is a stemmer filter that uses the Porter stemming algorithm. A stemmer is basically a set of mapping rules that maps the various forms of a word back to the base, or stem, word from which they derive. For example, in English the words “hugs”, “hugging” and “hugged” are all forms of the stem word “hug”. The stemmer will replace all of these terms with “hug”, which is what will be indexed. This means that a query for “hug” will match the term “hugged”, but not “huge”.

Conversely, applying a stemmer to your query terms will allow queries containing non stem terms, like “hugging”, to match documents with different variations of the same stem word, such as “hugged”. This works because both the indexer and the query will map to the same stem (“hug”).

Word stemming is, obviously, very language specific. Solr includes several language-specific stemmers created by the Snowball generator that are based on the Porter stemming algorithm. The generic Snowball Porter Stemmer Filter can be used to configure any of these language stemmers. Solr also includes a convenience wrapper for the English Snowball stemmer. There are also several purpose-built stemmers for non-English languages. These stemmers are described in Language Analysis.

 

Courtesy: lucene.apache.org

Method: ActiveRecord::Base.import

Defined in:
lib/activerecord-import/import.rb

.import(*args) ⇒ Object

Imports a collection of values to the database.

This is more efficient than using ActiveRecord::Base#create or ActiveRecord::Base#save multiple times. This method works well if you want to create more than one record at a time and do not care about having ActiveRecord objects returned for each record inserted.

This can be used with or without validations. It does not utilize the ActiveRecord::Callbacks during creation/modification while performing the import.

Usage

Model.import array_of_models
Model.import column_names, array_of_values
Model.import column_names, array_of_values, options

Model.import array_of_models

With this form you can call import passing in an array of model objects that you want updated.

Model.import column_names, array_of_values

The first parameter column_names is an array of symbols or strings which specify the columns that you want to update.

The second parameter, array_of_values, is an array of arrays. Each subarray is a single set of values for a new record. The order of values in each subarray should match up to the order of the column_names.

Model.import column_names, array_of_values, options

The first two parameters are the same as the above form. The third parameter, options, is a hash. This is optional. Please see below for what options are available.
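
For example, with a hypothetical Book model that has title and author columns, the three forms look like this:

# Form 1: import an array of model objects
books = [Book.new(title: 'REWORK', author: 'David'),
         Book.new(title: 'Eloquent Ruby', author: 'Russ')]
Book.import books

# Form 2: import column names and raw values (each subarray matches the column order)
columns = [:title, :author]
values  = [['REWORK', 'David'], ['Eloquent Ruby', 'Russ']]
Book.import columns, values

# Form 3: same as above, with options (see the list below)
Book.import columns, values, validate: false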

Options

  • validate – true|false, tells import whether or not to use ActiveRecord validations. Validations are enforced by default.
  • on_duplicate_key_update – an Array or Hash, tells import to use MySQL's ON DUPLICATE KEY UPDATE ability. See On Duplicate Key Update below.
  • synchronize – an array of ActiveRecord instances for the model that you are currently importing data into. This synchronizes existing model instances in memory with updates from the import.
  • timestamps – true|false, tells import to not add timestamps (if false) even if record timestamps is disabled in ActiveRecord::Base.
  • recursive – true|false, tells import to import all autosave associations if the adapter supports setting the primary keys of the newly imported objects.

Arraying your arguments – Ruby

The list of parameters passed to a method is, in fact, available as a list. To do this, we use what is called the splat operator – which is just an asterisk (*).

The splat operator is used to handle methods which have a variable parameter list. Let’s use it to create an add method that can handle any number of parameters.

We use the inject method to iterate over arguments, which is covered in the chapter on Collections. It isn’t directly relevant to this lesson, but do look it up if it piques your interest.

Example Code:

def add(*numbers)
  numbers.inject(0) { |sum, number| sum + number }
end

puts add(1)
puts add(1, 2)
puts add(1, 2, 3)
puts add(1, 2, 3, 4)

The splat operator works both ways – you can use it to convert arrays to parameter lists as easily as we just converted a parameter list to an array.
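
For example, the add method defined above can be fed an existing array by prefixing it with the asterisk:

numbers = [1, 2, 3, 4]
puts add(*numbers)   # same as add(1, 2, 3, 4), prints 10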

Inject in Ruby

The syntax for the inject method is as follows:

inject (value_initial) { |result_memo, object| block }

Let’s solve the above example i.e.

[1, 2, 3, 4].inject(0) { |result, element| result + element }

which gives 10 as the output.

So, before starting, let’s see what values are stored in each variable:

result = 0 – the zero comes from inject(0); it is the initial value.

element = 1 – the first element of the array.

Okay! So, let’s walk through the above example step by step:

Step 1: [1, 2, 3, 4].inject(0) { |0, 1| 0 + 1 }

Step 2: [1, 2, 3, 4].inject(0) { |1, 2| 1 + 2 }

Step 3: [1, 2, 3, 4].inject(0) { |3, 3| 3 + 3 }

Step 4: [1, 2, 3, 4].inject(0) { |6, 4| 6 + 4 }

Step 5: there are no elements left in the array, so inject returns the accumulated result, 10.

In each step, the first value inside the pipes is the result accumulated so far, and the second is the element fetched from the array.

I hope this clarifies how the #inject method works in Ruby.
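
A couple of quick variations that may help: the initial value doesn’t have to be 0, and for simple operations Ruby also accepts a symbol in place of the block.

[1, 2, 3, 4].inject(10) { |sum, n| sum + n }      # => 20, accumulation starts at 10
[1, 2, 3, 4].inject(:+)                           # => 10, shorthand for the block above
[1, 2, 3, 4].inject { |product, n| product * n }  # => 24, with no initial value the first element seeds the memo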

 

Active Record Basics

1 What is Active Record?

Active Record is the M in MVC – the model – which is the layer of the system responsible for representing business data and logic. Active Record facilitates the creation and use of business objects whose data requires persistent storage to a database. It is an implementation of the Active Record pattern which itself is a description of an Object Relational Mapping system.

1.1 The Active Record Pattern

Active Record was described by Martin Fowler in his book Patterns of Enterprise Application Architecture. In Active Record, objects carry both persistent data and behavior which operates on that data. Active Record takes the opinion that ensuring data access logic as part of the object will educate users of that object on how to write to and read from the database.

1.2 Object Relational Mapping

Object Relational Mapping, commonly referred to as its abbreviation ORM, is a technique that connects the rich objects of an application to tables in a relational database management system. Using ORM, the properties and relationships of the objects in an application can be easily stored and retrieved from a database without writing SQL statements directly and with less overall database access code.

1.3 Active Record as an ORM Framework

Active Record gives us several mechanisms, the most important being the ability to:

  • Represent models and their data.
  • Represent associations between these models.
  • Represent inheritance hierarchies through related models.
  • Validate models before they get persisted to the database.
  • Perform database operations in an object-oriented fashion.

2 Convention over Configuration in Active Record

When writing applications using other programming languages or frameworks, it may be necessary to write a lot of configuration code. This is particularly true for ORM frameworks in general. However, if you follow the conventions adopted by Rails, you’ll need to write very little configuration (in some cases no configuration at all) when creating Active Record models. The idea is that if you configure your applications in the very same way most of the time then this should be the default way. Thus, explicit configuration would be needed only in those cases where you can’t follow the standard convention.

2.1 Naming Conventions

By default, Active Record uses some naming conventions to find out how the mapping between models and database tables should be created. Rails will pluralize your class names to find the respective database table. So, for a class Book, you should have a database table called books. The Rails pluralization mechanisms are very powerful, being capable of pluralizing (and singularizing) both regular and irregular words. When using class names composed of two or more words, the model class name should follow the Ruby conventions, using the CamelCase form, while the table name must contain the words separated by underscores. Examples:

  • Database Table – Plural with underscores separating words (e.g., book_clubs).
  • Model Class – Singular with the first letter of each word capitalized (e.g., BookClub).

Model / Class      Table / Schema
Article            articles
LineItem           line_items
Deer               deers
Mouse              mice
Person             people

2.2 Schema Conventions

Active Record uses naming conventions for the columns in database tables, depending on the purpose of these columns.

  • Foreign keys – These fields should be named following the pattern singularized_table_name_id (e.g., item_id, order_id). These are the fields that Active Record will look for when you create associations between your models.
  • Primary keys – By default, Active Record will use an integer column named id as the table’s primary key. When using Active Record Migrations to create your tables, this column will be automatically created.

There are also some optional column names that will add additional features to Active Record instances:

  • created_at – Automatically gets set to the current date and time when the record is first created.
  • updated_at – Automatically gets set to the current date and time whenever the record is updated.
  • lock_version – Adds optimistic locking to a model.
  • type – Specifies that the model uses Single Table Inheritance.
  • (association_name)_type – Stores the type for polymorphic associations.
  • (table_name)_count – Used to cache the number of belonging objects on associations. For example, a comments_count column in an Article class that has many instances of Comment will cache the number of existent comments for each article.

While these column names are optional, they are in fact reserved by Active Record. Steer clear of reserved keywords unless you want the extra functionality. For example, type is a reserved keyword used to designate a table using Single Table Inheritance (STI). If you are not using STI, try an analogous keyword like “context”, that may still accurately describe the data you are modeling.

3 Creating Active Record Models

It is very easy to create Active Record models. All you have to do is to subclass the ApplicationRecord class and you’re good to go:

class Product < ApplicationRecord
end

This will create a Product model, mapped to a products table at the database. By doing this you’ll also have the ability to map the columns of each row in that table with the attributes of the instances of your model. Suppose that the products table was created using an SQL statement like:

CREATE TABLE products (
   id int(11) NOT NULL auto_increment,
   name varchar(255),
   PRIMARY KEY  (id)
);

Following the table schema above, you would be able to write code like the following:

p = Product.new
p.name = "Some Book"
puts p.name # "Some Book"

4 Overriding the Naming Conventions

What if you need to follow a different naming convention or need to use your Rails application with a legacy database? No problem, you can easily override the default conventions.

ApplicationRecord inherits from ActiveRecord::Base, which defines a number of helpful methods. You can use the ActiveRecord::Base.table_name= method to specify the table name that should be used:

class Product < ApplicationRecord
  self.table_name = "my_products"
end

If you do so, you will have to define manually the class name that is hosting the fixtures (my_products.yml) using the set_fixture_class method in your test definition:

class ProductTest < ActiveSupport::TestCase
  set_fixture_class my_products: Product
  fixtures :my_products
  ...
end

It’s also possible to override the column that should be used as the table’s primary key using the ActiveRecord::Base.primary_key= method:

class Product < ApplicationRecord
  self.primary_key = "product_id"
end

5 CRUD: Reading and Writing Data

CRUD is an acronym for the four verbs we use to operate on data: Create, Read, Update and Delete. Active Record automatically creates methods to allow an application to read and manipulate data stored within its tables.

5.1 Create

Active Record objects can be created from a hash, a block or have their attributes manually set after creation. The new method will return a new object while create will return the object and save it to the database.

For example, given a model User with attributes of name and occupation, the create method call will create and save a new record into the database:

user = User.create(name: "David", occupation: "Code Artist")

Using the new method, an object can be instantiated without being saved:

user = User.new
user.name = "David"
user.occupation = "Code Artist"

A call to user.save will commit the record to the database.

Finally, if a block is provided, both create and new will yield the new object to that block for initialization:

user = User.new do |u|
  u.name = "David"
  u.occupation = "Code Artist"
end

5.2 Read

Active Record provides a rich API for accessing data within a database. Below are a few examples of different data access methods provided by Active Record.

# return a collection with all users
users = User.all
# return the first user
user = User.first
# return the first user named David
david = User.find_by(name: 'David')
# find all users named David who are Code Artists and sort by created_at in reverse chronological order
users = User.where(name: 'David', occupation: 'Code Artist').order(created_at: :desc)

You can learn more about querying an Active Record model in the Active Record Query Interface guide.

5.3 Update

Once an Active Record object has been retrieved, its attributes can be modified and it can be saved to the database.

user = User.find_by(name: 'David')
user.name = 'Dave'
user.save

A shorthand for this is to use a hash mapping attribute names to the desired value, like so:

user = User.find_by(name: 'David')
user.update(name: 'Dave')

This is most useful when updating several attributes at once. If, on the other hand, you’d like to update several records in bulk, you may find the update_all class method useful:

User.update_all "max_login_attempts = 3, must_change_password = 'true'"

5.4 Delete

Likewise, once retrieved an Active Record object can be destroyed which removes it from the database.

user = User.find_by(name: 'David')
user.destroy

6 Validations

Active Record allows you to validate the state of a model before it gets written into the database. There are several methods that you can use to check your models and validate that an attribute value is not empty, is unique and not already in the database, follows a specific format and many more.

Validation is a very important issue to consider when persisting to the database, so the methods save and update take it into account when running: they return false when validation fails and they didn’t actually perform any operation on the database. All of these have a bang counterpart (that is, save! and update!), which are stricter in that they raise the exception ActiveRecord::RecordInvalid if validation fails. A quick example to illustrate:

class User < ApplicationRecord
  validates :name, presence: true
end
user = User.new
user.save  # => false
user.save! # => ActiveRecord::RecordInvalid: Validation failed: Name can't be blank

You can learn more about validations in the Active Record Validations guide.

7 Callbacks

Active Record callbacks allow you to attach code to certain events in the life-cycle of your models. This enables you to add behavior to your models by transparently executing code when those events occur, like when you create a new record, update it, destroy it and so on. You can learn more about callbacks in the Active Record Callbacks guide.
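
As a minimal sketch (the model, column and method names here are purely illustrative):

class User < ApplicationRecord
  before_save :downcase_email   # runs every time the record is saved
  after_create :log_signup      # runs once, right after the record is first created

  private

  def downcase_email
    self.email = email.downcase
  end

  def log_signup
    Rails.logger.info "New user ##{id} signed up"
  end
end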

8 Migrations

Rails provides a domain-specific language for managing a database schema called migrations. Migrations are stored in files which are executed against any database that Active Record supports using rake. Here’s a migration that creates a table:

class CreatePublications < ActiveRecord::Migration[5.0]
  def change
    create_table :publications do |t|
      t.string :title
      t.text :description
      t.references :publication_type
      t.integer :publisher_id
      t.string :publisher_type
      t.boolean :single_issue
      t.timestamps
    end
    add_index :publications, :publication_type_id
  end
end

Rails keeps track of which files have been committed to the database and provides rollback features. To actually create the table, you’d run rails db:migrate and to roll it back, rails db:rollback.

Note that the above code is database-agnostic: it will run in MySQL, PostgreSQL, Oracle and others. You can learn more about migrations in the Active Record Migrations guide.

Courtesy: guides.rubyonrails.org

Login loop issue on Ubuntu

Had an issue with Ubuntu 14.04 where logging into the system would cycle through a few screens and end up back at the login page. I’d had the same issue before and resolved it with the help of a friend. This time I thought I’d try to fix it myself, and managed to do so faster than I expected.

Here’s how I resolved it, after going through a few suggested solutions:

So basically, lightdm is the display manager that comes by default with 14.04. When you google for lightdm, here’s what you find…

LightDM is an X display manager that aims to be lightweight, fast, extensible and multi-desktop. It uses various front-ends to draw login interfaces, also called Greeters.

Basically, this package manages the login interface. To me that’s not a show stopper; in fact, all my work starts after login. So I just thought I’d try another display manager. There are different display managers that work with Ubuntu, another one being gdm. I just ran the following command to remove lightdm and install gdm.

CTRL + ALT + F1 launches a terminal (virtual console) even when the user isn’t logged in.

sudo apt-get purge lightdm && sudo apt-get install gdm

This fixed my issue. Now I’m able to log in to my machine without a problem. Case closed!

Comparison Query Operators

For details on a specific operator, including syntax and examples, see that operator’s reference page.

For comparison of different BSON type values, see the specified BSON comparison order.

Name – Description
$eq – Matches values that are equal to a specified value.
$gt – Matches values that are greater than a specified value.
$gte – Matches values that are greater than or equal to a specified value.
$lt – Matches values that are less than a specified value.
$lte – Matches values that are less than or equal to a specified value.
$ne – Matches all values that are not equal to a specified value.
$in – Matches any of the values specified in an array.
$nin – Matches none of the values specified in an array.
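
As a small illustration, here is how a couple of these operators might be used from the Ruby driver (the mongo gem; the database, collection and field names are made up):

require 'mongo'

client = Mongo::Client.new(['127.0.0.1:27017'], database: 'shop')
items  = client[:items]

# Items priced in the range 10 <= price < 20
items.find(price: { '$gte' => 10, '$lt' => 20 }).each { |doc| puts doc }

# Items whose status is neither "archived" nor "deleted"
items.find(status: { '$nin' => ['archived', 'deleted'] }).each { |doc| puts doc }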