Method: ActiveRecord::Base.import

Defined in:
lib/activerecord-import/import.rb

.import(*args) ⇒ Object

Imports a collection of values to the database.

This is more efficient than using ActiveRecord::Base#create or ActiveRecord::Base#save multiple times. This method works well if you want to create more than one record at a time and do not care about having ActiveRecord objects returned for each record inserted.

This can be used with or without validations. It does not invoke ActiveRecord::Callbacks during creation or modification while performing the import.

Usage

Model.import array_of_models
Model.import column_names, array_of_values
Model.import column_names, array_of_values, options

Model.import array_of_models

With this form you can call import passing in an array of model objects that you want updated.

Model.import column_names, array_of_values

The first parameter column_names is an array of symbols or strings which specify the columns that you want to update.

The second parameter, array_of_values, is an array of arrays. Each subarray is a single set of values for a new record. The order of values in each subarray should match up to the order of the column_names.

Model.import column_names, array_of_values, options

The first two parameters are the same as the above form. The third parameter, options, is a hash. This is optional. Please see below for what options are available.

Options

  • validate – true|false, tells import whether or not to use ActiveRecord validations. Validations are enforced by default.
  • on_duplicate_key_update – an Array or Hash, tells import to use MySQL's ON DUPLICATE KEY UPDATE ability. See On Duplicate Key Update below.
  • synchronize – an array of ActiveRecord instances for the model that you are currently importing data into. This synchronizes existing model instances in memory with updates from the import.
  • timestamps – true|false, tells import to not add timestamps (if false) even if record timestamps is disabled in ActiveRecord::Base.
  • recursive – true|false, tells import to import all autosave associations if the adapter supports setting the primary keys of the newly imported objects.
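
For example – a minimal sketch of the three usage forms, assuming a hypothetical Book model with title and author columns:

books = [Book.new(title: "Refactoring", author: "Martin Fowler")]
Book.import books                             # array of models

columns = [:title, :author]
values  = [["Refactoring", "Martin Fowler"], ["POODR", "Sandi Metz"]]
Book.import columns, values                   # column names plus value rows
Book.import columns, values, validate: false  # same, with validations skipped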

Arraying your arguments – Ruby

The list of parameters passed to a method is, in fact, available as a list. To do this, we use what is called the splat operator – which is just an asterisk (*).

The splat operator is used to handle methods which have a variable parameter list. Let’s use it to create an add method that can handle any number of parameters.

We use the inject method to iterate over arguments, which is covered in the chapter on Collections. It isn’t directly relevant to this lesson, but do look it up if it piques your interest.

Example Code:

def add(*numbers)
  # numbers arrives as an array of all the arguments passed in
  numbers.inject(0) { |sum, number| sum + number }
end

puts add(1)          # => 1
puts add(1, 2)       # => 3
puts add(1, 2, 3)    # => 6
puts add(1, 2, 3, 4) # => 10

The splat operator works both ways – you can use it to convert arrays to parameter lists as easily as we just converted a parameter list to an array.
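
For example, reusing the add method defined above, the same asterisk expands an array into separate arguments:

numbers = [1, 2, 3, 4]
puts add(*numbers) # expands to add(1, 2, 3, 4) => 10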

Inject in Ruby

The syntax for the inject method is as follows:

inject (value_initial) { |result_memo, object| block }

Let’s work through the example above, i.e.

[1, 2, 3, 4].inject(0) { |result, element| result + element }

which gives 10 as the output.

So, before starting, let’s see what values are stored in each variable:

result = 0 – the zero comes from inject(0).

element = 1 – the first element of the array.

Okay! So, let’s step through the example:

Step 1: { |0, 1| 0 + 1 } – result is 0, element is 1; the block returns 1.

Step 2: { |1, 2| 1 + 2 } – result is 1, element is 2; the block returns 3.

Step 3: { |3, 3| 3 + 3 } – result is 3, element is 3; the block returns 6.

Step 4: { |6, 4| 6 + 4 } – result is 6, element is 4; the block returns 10.

Step 5: no elements are left in the array, so inject returns the accumulated 10.

In each step, the first value between the block’s pipes is the result accumulated so far and the second is the element fetched from the array.

I hope that you now understand how Ruby’s #inject method works.
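
As a closing aside – when the block does nothing but apply a single operator, Ruby also accepts a symbol shorthand:

[1, 2, 3, 4].inject(:+) # equivalent to the block form above => 10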

 

Mongo::Error::OperationFailure: Cursor not found

Lately, I’ve been running into this error while running my nightly automation scripts. From my experience in resolving nagging errors, this too was one of those annoying, inconsistent errors with no concrete solution on the internet. This post is for the benefit of those fortunate people (unlike me) who will encounter this error in the future.

A little background on the task: I had to query all the data in my MongoDB server one by one and compare it in real time with my API responses. I am using the ‘mongo’ Ruby driver gem to interact with the db. The db held ~3.5 lakh (~350,000) records, but while running the script – at around the ~350 iteration mark – I was getting this error :-

Mongo::Error::OperationFailure:
  Cursor not found, cursor id: 79727049273 (43)
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/operation/result.rb:256:in `validate!'
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/operation/executable.rb:37:in `block in execute'
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/server/connection_pool.rb:107:in `with_connection'
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/server.rb:242:in `with_connection'
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/operation/executable.rb:35:in `execute'
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/cursor.rb:188:in `block in get_more'
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/retryable.rb:51:in `read_with_retry'
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/cursor.rb:187:in `get_more'
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/cursor.rb:113:in `each'
/home/qaserver/.rvm/gems/ruby-2.3.0/gems/mongo-2.4.0/lib/mongo/collection/view/iterable.rb:44:in `each'
./spec/all_usecases_spec/rovi_ott_validation_spec/rovi_ott_links_validation_for_all_programs_spec.rb:66:in `block (2 levels) in '


# @since 2.0.0
def validate!
  !successful? ? raise(Error::OperationFailure.new(parser.message)) : self
end

The crazy part was that this error was not at all consistent, but would happen at times. I would overlook it by re-running my scripts. One day, suddenly out of nowhere, this error became almost 100% consistent! On priority, I had to find a solution for it.

After doing some reading, I realised that this error has to do with the cursor which gets created while querying the db. What happens is that MongoDB returns a cursor when the query happens. In my case, as my query is one which ‘finds all’, I do not fully know if multiple cursors are returned for each sub-query or if a single cursor is returned which loops through the whole db. I need some more clarification on that.

But what I understand is that MongoDB closes all cursors that have been inactive for 10 minutes. It has something called a cursor timeout to do the same. So maybe one such cursor was going inactive after a particular time.

On exploring further, I understood that there is a way to disable this cursor timeout. The hard part was to find the keyword for this cursor timeout in the Ruby driver I was using, in my case ‘mongo’. Going through multiple Stack Overflow answers which gave incorrect solutions, like using ‘:timeout => false’, I had to struggle my way to the answer.

After going thoroughly through the Mongo Ruby Driver documentation (which has a very confusing sequence), I found my answer!

There is an option while querying called ‘no_cursor_timeout’ which must be used to disable this cursor timeout. Here’s how you implement it :-

coll.find({ :date => { '$eq' => Date.today } }).no_cursor_timeout.each do |doc|
  ########## Code goes in here ###########
end

Here’s how Evernote moved 3 petabytes of data to Google’s cloud

Evernote decided last year that it wanted to move away from running its own data centers and start using the public cloud to operate its popular note-taking service. On Wednesday, it announced that the lion’s share of the work is done, save for some last user attachments.

The company signed up to work with Google, and as part of the migration process, the tech titan sent a team of engineers (in one case, bearing doughnuts) over to work with its customer on making sure the process was a success.

Evernote wanted to take advantage of the cloud to help with features based on machine learning that it has been developing. It also wanted to leverage the flexibility that comes from not having to run a data center.

The move is part of a broader trend of companies moving their workloads away from data centers that they own and increasingly using public cloud providers. While the transition required plenty of work and adaptation, Evernote credited Google for pitching in to help with the migration.

Why move to the cloud?

There was definitely plenty of work to do. Evernote’s backend was built on the assumption that its application would be running on the company’s twin California data centers, not in a public cloud. So why go through all the work?

Many of the key drivers behind the move will be familiar to cloud devotees. Evernote employees had to spend time maintaining the company’s data center, doing things like replacing hard drives, moving cables and evaluating new infrastructure options.

While those functions were key to maintaining the overall health and performance of the Evernote service, they weren’t providing additional value to customers, according to Ben McCormack, the company’s vice president of operations.

“We were just very realistic that with a team the size of Evernote’s operations team, we couldn’t compete with the level of maturity that the cloud providers have got…on provisioning, on management systems, et cetera,” McCormack said. “We were always going to be playing catch-up, and it’s just a crazy situation to be in.”

When Evernote employees thought about refreshing a data center, one of the key issues that they encountered is that they didn’t know what they would need from a data center in five years, McCormack said.

Evernote had several public cloud providers it could choose from, including Amazon Web Services and Microsoft Azure, which are both larger players in the public cloud market. But McCormack said the similarities between the company’s current focus and Google’s areas of expertise were important to the choice. Evernote houses a large amount of unstructured data, and the company is looking to do more with machine learning.

“You add those two together, Google is the leader in that space,” McCormack said. “So effectively, I would say, we were making a strategic decision and a strategic bet that the areas that are important to Evernote today, and the areas we think will be important in the future, are the same areas that Google excels in.”

Machine learning was a highlight of Google’s platform for Evernote CTO Anirban Kundu, who said that higher-level services offered by Google help provide the foundation for new and improved features. Evernote has been driving toward a set of new capabilities based on machine learning, and Google services like its Cloud Machine Learning API help with that.

While cost is often touted as a benefit of cloud migrations, McCormack said that it wasn’t a primary driver of Evernote’s migration. While the company will be getting some savings out of the move, he said that cost wasn’t a limitation for the transition.

The decision to go with Google over another provider like AWS or Azure was driven by the technology team at Evernote, according to Greg Chiemingo, the company’s senior director of communications. He said in an email that CEO Chris O’Neill, who was at Google for roughly a decade before joining Evernote, came in to help with negotiations after the decision was made.

How it happened

Once Evernote signed its contract with Google in October, the clock was ticking. McCormack said that the company wanted to get the migration done before the new year, when users looking to get their life on track hammer the service with a flurry of activity.

Before the start of the year, Evernote needed to migrate 5 billion notes and 5 billion attachments. Because of metadata, like thumbnail images, included with those attachments, McCormack said that the company had to migrate 12 billion attachment files. Not only that, but the team couldn’t lose any of the roughly 3 petabytes of data it had. Oh yeah, and the Evernote service needed to stay up the entire time.

McCormack said that one of the Evernote team’s initial considerations was figuring out what core parts of its application could be entirely lifted and shifted into Google’s cloud, and what components would need to be modified in some way as part of the transition.

Part of the transformation involved reworking the way that the Evernote service handled networking. It previously used UDP Multicast to handle part of its image recognition workflow, which worked well in the company’s own data center where it could control the network routers involved.

But that same technology wasn’t available in Google’s cloud. Kundu said Evernote had to rework its application to use a queue-based model leveraging Google’s Cloud Pub/Sub service, instead.

Evernote couldn’t just migrate all of its user data over and then flip a switch directing traffic from its on-premises servers to Google’s cloud in one fell swoop. Instead, the company had to rearchitect its backend application to handle a staged migration with some data stored in different places.

The good news is that the transition didn’t require changes to the client. Kundu said that was key to the success of Evernote’s migration, because not all of the service’s users upgrade their software in a timely manner.

Evernote’s engagement with Google engineers was a pleasant surprise to McCormack. The team was available 24/7 to handle Evernote’s concerns remotely, and Google also sent a team of its engineers over to Evernote’s facilities to help with the migration.

Those Google employees were around to help troubleshoot any technical challenges Evernote was having with the move. That sort of engineer-to-engineer engagement is something Google says is a big part of its approach to service.

For one particularly important part of the migration, Google’s engineers came on a Sunday, bearing doughnuts for all in attendance. More than that, however, McCormack said that he was impressed with the engineers’ collaborative spirit.

“We had times when…we had written code to interface with Google Cloud Storage, we had [Google] engineers who were peer-reviewing that code, giving feedback and it genuinely felt like a partnership, which you very rarely see,” McCormack said. “Google wanted to see us be successful, and were willing to help across the boundaries to help us get there.”

In the end, it took roughly 70 days for the whole migration to take place from the signing of the contract to its final completion. The main part of the migration took place over a course of roughly 10 days in December, according to McCormack.

Lessons learned

If there was one thing Kundu and McCormack were crystal clear about, it’s that even the best-laid plans require a team that’s willing to adapt on the fly to a new environment. Evernote’s migration was a process of taking certain steps, evaluating what happened, and modifying the company’s approach in response to the situation they were presented with, even after doing extensive testing and simulation.

Furthermore, they also pointed out that work on a migration doesn’t stop once all the bytes are loaded into the cloud. Even with extensive testing, the Evernote team encountered new constraints working in Google’s environment once it was being used in production and bombarded with activity from live Evernote users.

For example, Google uses live migration techniques to move virtual machines from one host to another in order to apply patches and work around hardware issues. While that happens incredibly quickly, the Evernote service under full load had some problem with it, which required (and still requires) optimization.

Kundu said that Evernote had tested live migration prior to making the switch over to GCP, but that wasn’t enough.

When an application is put into production, user behavior and load on it might be different from test conditions, Kundu said. “And that’s where you have to be ready to handle those edge cases, and you have to realize that the day the migration happens or completes is not the day that you’re all done with the effort. You might see the problem in a month or whatever.”

Another key lesson, in McCormack’s opinion, is that the cloud is ready to handle any sort of workload. Evernote evaluated a migration roughly once every year, and it was only about 13 months ago that the company felt confident a cloud transition would be successful.

“Cloud has reached a maturity level and a breadth of features that means it’s unlikely that you’ll be unable to run in the cloud,” McCormack said.

That’s not to say it doesn’t require effort. While the cloud does provide benefits to Evernote that the company wasn’t going to get from running its own data center, they still had to cede control of their environment, and be willing to lose some of the telemetry they’re used to getting from a private data center.

Evernote’s engineers also did a lot of work on automating the transition. Moving users’ attachments over from the service’s on-premises infrastructure to Google Cloud Storage is handled by a pair of bespoke automated systems. The company used Puppet and Ansible for migrating the hundreds of shards holding user note data.

The immediate benefits of a migration

One of the key benefits of Evernote’s move to Google’s cloud is the company’s ability to provide reduced latency and improved connection consistency to its international customers. Evernote’s backend isn’t running in a geographically distributed manner right now, but Google’s worldwide networking investments provide an improvement right away.

“We have seen page loading times reducing quite significantly across some parts of our application,” McCormack said. “I wouldn’t say it’s everywhere yet, but we are starting to see that benefit of the Google power and the Google reach in terms of bridging traffic over their global fiber network.”

Right now, the company is still in the process of migrating the last of its users’ attachments to GCP. When that’s done, however, the company will be able to tell its users that all the data they have in the service is encrypted at rest, thanks to the capabilities of Google’s cloud.

From an Evernote standpoint, the company’s engineers have increased freedom to get their work done using cloud services. Rather than having to deal with provisioning physical infrastructure to power new features, developers now have a whole menu of options when it comes to using new services for developing features.

“Essentially, any GCP functionality that exists, they’re allowed to access, play with — within constraints of budget, obviously — and be able to build against.”

In addition, the cloud provides the company with additional flexibility and peace of mind when it comes to backups, outages and failover.

What comes next?

Looking further out, the company is interested in taking advantage of some of Google’s existing and forthcoming services. Evernote is investigating how it can use Google Cloud Functions, which lets developers write snippets of code that then run in response to event triggers.

Evernote is also alpha testing some Google Cloud Platform services that haven’t been released or revealed to the public yet. Kundu wouldn’t provide any details about those services.

In a similar vein, Kundu wouldn’t go into details about future Evernote functionality yet. However, he said that there are “a couple” of new features that have been enabled as a result of the migration.

Courtesy: www.cio.com


Active Record Basics

1 What is Active Record?

Active Record is the M in MVC – the model – which is the layer of the system responsible for representing business data and logic. Active Record facilitates the creation and use of business objects whose data requires persistent storage to a database. It is an implementation of the Active Record pattern which itself is a description of an Object Relational Mapping system.

1.1 The Active Record Pattern

Active Record was described by Martin Fowler in his book Patterns of Enterprise Application Architecture. In Active Record, objects carry both persistent data and behavior which operates on that data. Active Record takes the opinion that ensuring data access logic as part of the object will educate users of that object on how to write to and read from the database.

1.2 Object Relational Mapping

Object Relational Mapping, commonly referred to as its abbreviation ORM, is a technique that connects the rich objects of an application to tables in a relational database management system. Using ORM, the properties and relationships of the objects in an application can be easily stored and retrieved from a database without writing SQL statements directly and with less overall database access code.

1.3 Active Record as an ORM Framework

Active Record gives us several mechanisms, the most important being the ability to:

  • Represent models and their data.
  • Represent associations between these models.
  • Represent inheritance hierarchies through related models.
  • Validate models before they get persisted to the database.
  • Perform database operations in an object-oriented fashion.

2 Convention over Configuration in Active Record

When writing applications using other programming languages or frameworks, it may be necessary to write a lot of configuration code. This is particularly true for ORM frameworks in general. However, if you follow the conventions adopted by Rails, you’ll need to write very little configuration (in some cases no configuration at all) when creating Active Record models. The idea is that if you configure your applications in the very same way most of the time then this should be the default way. Thus, explicit configuration would be needed only in those cases where you can’t follow the standard convention.

2.1 Naming Conventions

By default, Active Record uses some naming conventions to find out how the mapping between models and database tables should be created. Rails will pluralize your class names to find the respective database table. So, for a class Book, you should have a database table called books. The Rails pluralization mechanisms are very powerful, being capable of pluralizing (and singularizing) both regular and irregular words. When using class names composed of two or more words, the model class name should follow the Ruby conventions, using the CamelCase form, while the table name must contain the words separated by underscores. Examples:

  • Database Table – Plural with underscores separating words (e.g., book_clubs).
  • Model Class – Singular with the first letter of each word capitalized (e.g., BookClub).

Model / Class    Table / Schema
Article          articles
LineItem         line_items
Deer             deers
Mouse            mice
Person           people

2.2 Schema Conventions

Active Record uses naming conventions for the columns in database tables, depending on the purpose of these columns.

  • Foreign keys – These fields should be named following the pattern singularized_table_name_id (e.g., item_id, order_id). These are the fields that Active Record will look for when you create associations between your models (see the sketch after this list).
  • Primary keys – By default, Active Record will use an integer column named id as the table’s primary key. When using Active Record Migrations to create your tables, this column will be automatically created.
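
A minimal sketch of the foreign key convention, assuming hypothetical Order and Item models backed by orders and items tables:

class Order < ApplicationRecord
  has_many :items   # Active Record expects an order_id column on items
end

class Item < ApplicationRecord
  belongs_to :order # reads the order_id foreign key by convention
end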

There are also some optional column names that will add additional features to Active Record instances:

  • created_at – Automatically gets set to the current date and time when the record is first created.
  • updated_at – Automatically gets set to the current date and time whenever the record is updated.
  • lock_version – Adds optimistic locking to a model.
  • type – Specifies that the model uses Single Table Inheritance.
  • (association_name)_type – Stores the type for polymorphic associations.
  • (table_name)_count – Used to cache the number of belonging objects on associations. For example, a comments_count column in an Article class that has many instances of Comment will cache the number of existent comments for each article.

While these column names are optional, they are in fact reserved by Active Record. Steer clear of reserved keywords unless you want the extra functionality. For example, type is a reserved keyword used to designate a table using Single Table Inheritance (STI). If you are not using STI, try an analogous keyword like “context”, that may still accurately describe the data you are modeling.
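
For instance, here is a minimal sketch of Single Table Inheritance, assuming a hypothetical vehicles table that includes a type column:

class Vehicle < ApplicationRecord
end

class Car < Vehicle   # rows are stored in vehicles with type = "Car"
end

class Truck < Vehicle # rows are stored in vehicles with type = "Truck"
end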

3 Creating Active Record Models

It is very easy to create Active Record models. All you have to do is to subclass the ApplicationRecord class and you’re good to go:

class Product < ApplicationRecord
end

This will create a Product model, mapped to a products table at the database. By doing this you’ll also have the ability to map the columns of each row in that table with the attributes of the instances of your model. Suppose that the products table was created using an SQL statement like:

CREATE TABLE products (
   id int(11) NOT NULL auto_increment,
   name varchar(255),
   PRIMARY KEY  (id)
);

Following the table schema above, you would be able to write code like the following:

p = Product.new
p.name = "Some Book"
puts p.name # "Some Book"

4 Overriding the Naming Conventions

What if you need to follow a different naming convention or need to use your Rails application with a legacy database? No problem, you can easily override the default conventions.

ApplicationRecord inherits from ActiveRecord::Base, which defines a number of helpful methods. You can use the ActiveRecord::Base.table_name= method to specify the table name that should be used:

class Product < ApplicationRecord
  self.table_name = "my_products"
end

If you do so, you will have to define manually the class name that is hosting the fixtures (my_products.yml) using the set_fixture_class method in your test definition:

class ProductTest < ActiveSupport::TestCase
  set_fixture_class my_products: Product
  fixtures :my_products
  ...
end

It’s also possible to override the column that should be used as the table’s primary key using the ActiveRecord::Base.primary_key= method:

class Product < ApplicationRecord
  self.primary_key = "product_id"
end

5 CRUD: Reading and Writing Data

CRUD is an acronym for the four verbs we use to operate on data: Create, Read, Update and Delete. Active Record automatically creates methods to allow an application to read and manipulate data stored within its tables.

5.1 Create

Active Record objects can be created from a hash, a block or have their attributes manually set after creation. The new method will return a new object while create will return the object and save it to the database.

For example, given a model User with attributes of name and occupation, the create method call will create and save a new record into the database:

user = User.create(name: "David", occupation: "Code Artist")

Using the new method, an object can be instantiated without being saved:

user = User.new
user.name = "David"
user.occupation = "Code Artist"

A call to user.save will commit the record to the database.

Finally, if a block is provided, both create and new will yield the new object to that block for initialization:

user = User.new do |u|
  u.name = "David"
  u.occupation = "Code Artist"
end

5.2 Read

Active Record provides a rich API for accessing data within a database. Below are a few examples of different data access methods provided by Active Record.

# return a collection with all users
users = User.all
# return the first user
user = User.first
# return the first user named David
david = User.find_by(name: 'David')
# find all users named David who are Code Artists and sort by created_at in reverse chronological order
users = User.where(name: 'David', occupation: 'Code Artist').order(created_at: :desc)

You can learn more about querying an Active Record model in the Active Record Query Interface guide.

5.3 Update

Once an Active Record object has been retrieved, its attributes can be modified and it can be saved to the database.

user = User.find_by(name: 'David')
user.name = 'Dave'
user.save

A shorthand for this is to use a hash mapping attribute names to the desired value, like so:

user = User.find_by(name: 'David')
user.update(name: 'Dave')

This is most useful when updating several attributes at once. If, on the other hand, you’d like to update several records in bulk, you may find the update_all class method useful:

User.update_all "max_login_attempts = 3, must_change_password = 'true'"

5.4 Delete

Likewise, once retrieved an Active Record object can be destroyed which removes it from the database.

user = User.find_by(name: 'David')
user.destroy

6 Validations

Active Record allows you to validate the state of a model before it gets written into the database. There are several methods that you can use to check your models and validate that an attribute value is not empty, is unique and not already in the database, follows a specific format and many more.

Validation is a very important issue to consider when persisting to the database, so the methods save and update take it into account when running: they return false when validation fails and they didn’t actually perform any operation on the database. All of these have a bang counterpart (that is, save! and update!), which are stricter in that they raise the exception ActiveRecord::RecordInvalid if validation fails. A quick example to illustrate:

class User < ApplicationRecord
  validates :name, presence: true
end
user = User.new
user.save  # => false
user.save! # => ActiveRecord::RecordInvalid: Validation failed: Name can't be blank

You can learn more about validations in the Active Record Validations guide.

7 Callbacks

Active Record callbacks allow you to attach code to certain events in the life-cycle of your models. This enables you to add behavior to your models by transparently executing code when those events occur, like when you create a new record, update it, destroy it and so on. You can learn more about callbacks in the Active Record Callbacks guide.
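
As a minimal sketch, assuming a hypothetical User model with an email column:

class User < ApplicationRecord
  before_save :downcase_email # runs before every save of the record

  private

  def downcase_email
    self.email = email.downcase
  end
end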

8 Migrations

Rails provides a domain-specific language for managing a database schema called migrations. Migrations are stored in files which are executed against any database that Active Record supports using rake. Here’s a migration that creates a table:

class CreatePublications < ActiveRecord::Migration[5.0]
  def change
    create_table :publications do |t|
      t.string :title
      t.text :description
      t.references :publication_type
      t.integer :publisher_id
      t.string :publisher_type
      t.boolean :single_issue
      t.timestamps
    end
    add_index :publications, :publication_type_id
  end
end

Rails keeps track of which files have been committed to the database and provides rollback features. To actually create the table, you’d run rails db:migrate and to roll it back, rails db:rollback.

Note that the above code is database-agnostic: it will run in MySQL, PostgreSQL, Oracle and others. You can learn more about migrations in the Active Record Migrations guide.

Courtesy: guides.rubyonrails.org

Login loop issue on Ubuntu

Had an issue with Ubuntu 14.04 where logging into the system would cycle through various screens and end up back at the login page. I’d had the same issue previously but was able to resolve it with the help of my friend. This time I thought I’d try to fix it myself, and I managed it faster than I expected.

Here’s how I resolved it after going through a few solutions :-

So basically lightdm is the display manager which comes by default with 14.04. So when you google for lightdm here’s what you find …

LightDM is an X display manager that aims to be lightweight, fast, extensible and multi-desktop. It uses various front-ends to draw login interfaces, also called Greeters.

Basically, this package manages the login interface. To me that’s not a showstopper; in fact, all my work starts after login. So I just thought I’d try another display manager. There are different display managers that work with Ubuntu, another one being gdm. I just ran the following command to remove lightdm and install gdm.

CTRL + ALT + F1 launches a terminal window even when the user isn’t logged in.

sudo apt-get purge lightdm && sudo apt-get install gdm

This fixed my issue. Now I’m able to login to my machine without a prob. Case closed!