Gentle introduction to the concept of software frameworks

In this month's blog post I am going to write about software frameworks in general, and web application frameworks in particular. Frameworks are not a new concept. In fact, they have been around for some 30-35 years now, and they are used constantly in software projects of all kinds and sizes.

Microsoft .Net Framework as an example of a framework (source: wikimedia.org)

Software developers do not only create new software; most of their time is actually spent maintaining software created by other people. It is therefore important to rely on a set of shared principles and some sort of rational structure that is at the same time open and general enough.

Usually this is achieved using software frameworks. Regardless of the stack or the technology employed, frameworks allow developers to concentrate their efforts on the actual functional requirements while abstracting away the tasks that deal with non-functional requirements (usually low-level concerns such as networking or database connections). Thanks to this, software projects can scale their development times down while their maintainability improves. However, software frameworks also have some drawbacks. In particular, they produce a great deal of code bloat, wasting resources and making the code less efficient than an equivalent purpose-built solution. They also have a learning curve that slows development during the early adoption stages. Hence, although it is true that they reduce development times in the long term, the opposite is true in the short term.

Frameworks have two key characteristics:

  • Extensibility: developers can modify the framework to fit the requirements, improve performance or override some functionality.
  • Inversion of control: the flow of execution is determined by the framework, not the developer.

Architecture-wise, and irrespective of the problem they try to solve, frameworks are basically inter-related components made of two types of code:

  • Frozen spots: these are the portions of code that are not part of the business logic of the application and that barely change.
  • Hot spots: these are the portions of code that contain the business logic of the application. These are where the developers inject their code, and hence vary from one application to another.
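To make these two ideas concrete, here is a toy sketch (all names are invented for illustration, not taken from any real framework): the request loop below is a frozen spot the framework owns, while the handlers the developer plugs in are the hot spots. Note the inversion of control: the framework calls the developer's code, not the other way round.

```javascript
// Frozen spot: the framework owns the execution flow.
// It decides when (and whether) each developer handler runs.
function runFramework(handlers, requests) {
  var responses = [];
  for (var i = 0; i < requests.length; i++) {
    var req = requests[i];
    // Hot spot: the developer's business logic, injected as a handler.
    var handler = handlers[req.path] || function () { return '404'; };
    responses.push(handler(req));
  }
  return responses;
}

// The developer only supplies handlers; the framework drives the flow.
var out = runFramework(
  { '/hello': function (req) { return 'Hello, ' + req.user; } },
  [{ path: '/hello', user: 'Ada' }, { path: '/missing' }]
);
// out is ['Hello, Ada', '404']
```
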

A particular type of software framework is the web application framework. These are designed to provide a general solution to the problem of building web applications, whether for banking, e-commerce, customer service or a news service. Usually web application frameworks are implementations of the Model-View-Controller (MVC) architectural pattern, where:

  • Model is the data.
  • View is the graphical interface rendered to the user.
  • Controller is the bridge between the view and the model.

 

The way this pattern works (at a high level) is very simple: the Controller corresponds to the API. The Controller receives input from the users (via the View), makes changes to the Model (the data) and pushes the results back to the View (the web pages rendered in the browser).
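A deliberately minimal, framework-agnostic sketch of that flow (all names here are invented for illustration) might look like this:

```javascript
// Model: the data
var model = { country: 'UK' };

// View: renders the model for the user (here, just an HTML string)
function view(m) {
  return '<p>Country: ' + m.country + '</p>';
}

// Controller: receives user input, updates the Model,
// and pushes the result back to the View
function controller(m, input) {
  m.country = input.country;
  return view(m);
}

var rendered = controller(model, { country: 'Spain' });
// rendered is '<p>Country: Spain</p>'
```
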

As I mentioned earlier, frameworks abstract the connections amongst the different application components. In the particular case of web application frameworks, they provide an interface to some kind of database management system (DBMS) and a server to handle the clients' requests. In addition they may also provide user authentication/security, session management, an AJAX sub-framework, or a templating system. And since all of this comes for free, all developers have to do is define the model(s) (i.e., the data structures required by the system), the computations required to fulfil the requirements (i.e., the controller(s)) and the interface rendered to the users (the view(s)).

Thick Client vs Thin Client architectures (source: wikimedia.org)

From all the above, it might seem that using this type of framework forces the adoption of a thin-client architecture (where most of the computation is done server-side). Traditionally this has been the case (for example with Django, ASP.NET or, more recently and to some extent, Sails). Yet in more recent times frameworks like ReactJS or AngularJS have favoured thick-client architectures (where most of the computation is done client-side). At the end of the day, this architectural choice depends more on design and business decisions than on the frameworks themselves.

And this is the end of this gentle introduction to the concept of software frameworks. At ICAN Future Star we use different frameworks for developing our solutions. For example, AngularJS and ExpressJS have been used from the beginning, and we are currently transitioning to SailsJS. The experience so far has been positive and we hope they keep helping us build robust products for our customers.

ElasticSearch: Features you will love

If you need effective and fast full-text search for your app, why don't you give Elasticsearch a try?
Setting up Elastic is quick and easy, and in a couple of moments you are ready to explore your data. Just define an index, a type and its mapping, and index your data!
At ICAN Future Star, we use ElasticSearch to store and retrieve our data. We maintain thousands of documents containing universities and courses, which is mainly text along with some numeric data. Searching through this amount of information would not be an easy task without the search options of ElasticSearch.
Having used Elastic for the past three months, I am more surprised by its capabilities every day. Here are some of the features I have loved.

Complex bool queries

Although complex bool queries can be complicated the first time you use them, you will love them once you get familiar with them.
Taking an example from our own app, let’s say we want to search for documents in our database satisfying the following query:
('engineering' and 'civil') or ('economics' or 'finance')
Starting from the ('engineering' and 'civil') part, we create a bool query using the must keyword. Putting our two match queries in an array covers the and part of our query. For ('economics' or 'finance'), we again use a bool query, but this time with the should keyword to cover the or condition. The minimum_should_match parameter lets us define how many of the encapsulated conditions must be matched; setting it to one satisfies our case. The two sub-queries we have constructed are then combined in an outer should query to satisfy the top-level or.
This is the result of the query:
{
    "bool": {
        "should": [{
            "bool": {
                "must": [{
                    "match": {
                        "title": "engineering"
                    }
                }, {
                    "match": {
                        "title": "civil"
                    }
                }]
            }
        }, {
            "bool": {
                "should": [{
                    "match": {
                        "title": "economics"
                    }
                }, {
                    "match": {
                        "title": "finance"
                    }
                }],
                "minimum_should_match": 1
            }
        }],
        "minimum_should_match": 1
    }
}

Multi-fields

During the development of our software, we ran into the issue of needing to use a field for two different purposes. For example, we wanted a field to be mapped as analyzed for full-text search, but at the same time as non-analyzed so we could search for the exact value of the field.
Elastic provides the multi-fields functionality, with which you can define multiple mappings for a single field.
Here is the example of our mapping for the category field in one of our documents:
{
    "mappings": {
        "document_type": {
            "properties": {
                "category": {
                    "type": "text",
                    "analyzer": "custom_analyzer",
                    "fields": {
                        "exact": {
                            "type": "keyword"
                        }
                    }
                }
            }
        }
    }
}
Using this functionality, we keep two different versions of the same field: one analyzed with our custom analyzer, and an exact version that is handled as a keyword.
Now, using the first version of the field in the following query:
"query":  {  
    "match":  {   
        "category":   "engineering"   
    } 
}
the query value will be analyzed by the custom_analyzer and it will return the documents matching the resulting tokens.
On the other hand, using the exact version of the category field:
"query":  {  
    "match":  {   
        "category.exact":   "engineering"   
    } 
}
we will get just the documents that explicitly match the requested category.
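A rough plain-JavaScript analogy (this is a toy model, not how Elasticsearch works internally) of the difference between the two versions of the field:

```javascript
// Toy "analyzer": lowercase the text and split it into tokens
function analyze(text) {
  return text.toLowerCase().split(/\s+/);
}

// Analyzed match: succeeds if any query token appears among the field's tokens
function matchAnalyzed(fieldValue, query) {
  var tokens = analyze(fieldValue);
  return analyze(query).some(function (t) { return tokens.indexOf(t) !== -1; });
}

// Keyword (exact) match: the whole value must be identical
function matchExact(fieldValue, query) {
  return fieldValue === query;
}

matchAnalyzed('Civil Engineering', 'engineering'); // true
matchExact('Civil Engineering', 'engineering');    // false
```
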

Reindex API

After defining a specific mapping and loading the data into Elastic, it is a common need to change the type of a field or add a new one. Although adding a new field to an existing mapping is straightforward, modifying an existing field's mapping is not possible.
In this case, the only solution is to define a new mapping and index the data again.
Here the Reindex API comes in: a convenient way to copy the data from an old index to a new one. It is easy to use and simply copies the documents from one index to another, indexing them according to the new mapping. An example of use:
POST _reindex
{
  "source": {
    "index": "old_index"
  },
  "dest": {
    "index": "new_index"
  }
}

Function Score in a Parent-Child relationship

In our database, we have documents that are connected through a Parent-Child relationship. We very often need to sort the child documents according to fields of their parent documents. However, Elastic does not provide an option to do this with the regular sort options.
The solution in this case is to use a function_score in combination with the has_parent query. By using the doc notation, one can access the fields of the parent documents.
{
    "query": {
        "has_parent": {
            "parent_type": "parent",
            "score": true,
            "query": {
                "function_score": {
                    "script_score": {
                        "script": {
                            "lang": "painless",
                            "inline": "double total = 0; for (def item : doc['numbers']) { total += params.multiply_factor * item; } return total;",
                            "params": {
                                "multiply_factor": 7
                            }
                        }
                    }
                }
            }
        }
    }
}
In the above example, we iterate over the numbers field of the parent document, summing the values of the array multiplied by the multiply_factor. In the params field one can define parameters to be used in the script. The score returned by the script is used to sort the child documents, and it is aggregated with any other scores derived from other queries.
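The script itself is doing nothing exotic; the same computation in plain JavaScript would be:

```javascript
// Plain-JavaScript equivalent of the painless script above:
// sum the parent's numbers, each scaled by multiply_factor
function scriptScore(numbers, params) {
  var total = 0;
  for (var i = 0; i < numbers.length; i++) {
    total += params.multiply_factor * numbers[i];
  }
  return total;
}

scriptScore([1, 2, 3], { multiply_factor: 7 }); // 42
```
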
These are just some of the ElasticSearch features we have used during our software implementation. Elastic has plenty more to explore, and it is awesome that every time we use it we keep finding new useful features!
That’s all for now! We will keep updating with new material 🙂

Super Basic Design Principles

Recently I was tasked with re-stylising the HelloUni catalogue for ICAN FUTURE STAR. This is a small document that outlines what we can offer to clients. As part of our regular Knowledge Transfers, I decided to do a talk on the basics of stylising a document named “Super Basic Design Principles According to Ellie”. I’m going to go through this talk with you all now in the form of a blog.

 

I’ve always been the type of person who likes to style documents so they flow well; so they become something that’s pleasant to look at, conveys emotion, and is easy to read. Below I’ve outlined some of the basic techniques I use when stylising documents. Most of the points will be common sense but I hope you’ll find something of value in them nonetheless.

Know your audience

Firstly, before you even start writing the document you should know a few things about its purpose and target audience. It’s a good idea to imagine somebody from your targeted demographic reading your document. I’m going to use the example of a CV here because I think it’s something that we’ve all experienced or will experience at least once in our lives.

Who are they?

Imagine you’re writing your CV. Who’s going to read it? Well, depending on the company it could go through a few people: secretaries, hiring managers, human resources, department managers, experts/specialists, or algorithms that sort CVs based on keywords. This can be a little daunting, but what’s important here is that you shouldn’t assume prior knowledge. Applying for a technical job doesn’t necessarily mean that the people, or robots, reading your CV are technical people. Using abbreviations or acronyms can be detrimental in these cases. So, think about your audience, then write and style your document accordingly. For the CV example, I’ve added a list of skills at the top of the page that can be picked up by machines to get through the first stage of the job process, and next is a personal statement to connect on a more human level.

 

How will they read the document?

The way the majority of people consume media has changed. Websites are visited more often on mobile devices than on desktop/laptop browsers (source). If you’re styling a document and think it will most likely be read on a smaller screen, like a phone or a tablet, it’s best to take that into consideration. Don’t add a lot of images in lieu of important information; keep it balanced. You want to avoid making the reader zoom in and out of sections of text.

 

If you’re handing the reader a physical copy of the document and it’s likely they won’t read it there and then in front of you, try adding a front cover; something appealing that can catch somebody’s attention from across the room. This way it’s more likely that they’ll read the document when they’re ready to.

 

In my example, the CV is designed for a technical position; it’s likely this document would be passed around a company via email or printed out so people could write notes on it. For this reason, the CV is mainly white with plenty of space around the borders and sections. It has a simple orange and black colour design and does not contain images. This keeps the file size small and will not waste a lot of ink if printed. The text is also a standard size that can be read from a tablet screen, and all the important information is presented on a single page so readers don’t need to scroll down or turn the page to find potentially hidden information.

 

White Space

Next up is the use of white space. White space is simply an absence of information: a very useful principle with a lot of benefits.

 

It gives any document logical breaks and allows elements to be grouped into sections, which makes information much easier to read and generally looks more professional. Your white space also doesn’t necessarily need to be white; it just needs to be an absence of information.

 

In the CV example, I used white space to break up the sections. I’ve also given the document a much thicker border between the edge of the page and the information, so that, if printed, people can write notes directly onto the CV as it’s passed around the company.

 

Emphasis and Consistency

Simply put, emphasis and consistency are about how you use whitespace and colours. There are a lot of articles online that discuss colour theory far more articulately than I can, so I’ll leave that to them. Once you’ve chosen your colours and the emotions you want to convey with the document, you should stick to them. Simple things like keeping header and body fonts, colours, and sizes consistent make a huge difference to the credibility of a document. These improvements should make the document look deliberate and thought out to the reader, even if they don’t notice it themselves.

 

If you have a lot of information it’s good to emphasise the key points. Some people are skimmers and scanners; they’ll read the text quickly, trying to pick out information and skipping filler words such as and, the, and they.

 

Whether to highlight words inside paragraphs or just stick to headers depends entirely on the purpose of the document and the target reader. In the CV example, I’ve only highlighted the hyperlinks, as the CV uses an analogous orange style. Having too much orange would end up de-emphasising the links; this is something we’d want to avoid, as the links either point towards contacting me or to previous work.

Flow

Lastly, I’d like to briefly mention the importance of flow. Flow is an interesting concept to me as it’s easy to explain but can be difficult to put into action efficiently. It’s not necessarily about ordering information by importance: information can be presented in chronological order, or you can present it as if you were telling a story. The main point is to get the information to flow logically. When I was doing my presentation I said something similar to “You wouldn’t read the even pages of a book then go back and read the odd”.

 

The flow of the CV is as follows:

The reader is automatically drawn to the name: it’s large, orange and at the top of the page. I did this so my name stays in the reader’s head while they continue reading; I want to be memorable. Next, they’ll register the links under the name. It should be obvious that these are just contact links, so the reader intentionally or unintentionally skips them to get to the important information.

 

Skills are next. If the reader is a robot it will pick out the keywords quickly. If the reader is a technically savvy person they might spend some time looking at the skills to see if I have what it would take to do the job. If the reader is non-technical they will skip this section.

 

Next is the personal statement, the human component. I want to try to establish a connection with the reader. I talk about what motivates me and my hobbies. By this time, hopefully, I’ve got the attention of any reader: robot, tech-savvy and non-technical.

 

Experience and Academia are obviously important sections to add to a CV. As previously mentioned, I’ve written these as rows and columns. This way the reader can just look down the first row to find what I was doing and when; they can then read a little more about that time. In each description, I’ve spoken about the period from a technical point of view and, where possible, added links to the projects I worked on. This should please the robot and the tech-savvy reader, and it should showcase to the non-technical reader what I’m capable of doing with the skills I mentioned.

 

I finish the CV with additional links in a less formal style of writing and a small message to once again connect on a human level.

 

Tip of the Iceberg

There’s a whole world out there of design patterns, tips, best practices etc. This article is just meant to be the super basic design principles that I’ve found help me personally and professionally. I believe the design of a document is just as important as the content and I hope you can see why.

 

Thanks for reading

Lunch time: Beyond nourishing

As a Spaniard, lunch time in the UK is one of the toughest culture shocks I face. In Mediterranean cultures, food is a big thing. It is not merely an act of getting the nutrients your body needs to, basically, not die. It is a truly social act where you gather together around a table, and the food is just an excuse. Back in the early days when I arrived in Scotland, I worked as an industrial cleaner at the headquarters of an important British company. I remember cleaning loads of food leftovers from the desks. Sometimes I actually saw people eating their lunches in front of their computers while they kept on working. And I could not help thinking: “this is wrong”. Moreover, I used to live near the financial district in Edinburgh (if such a thing even exists) and I used to see workers from the nearby bank offices eating a sandwich in their cars at lunch time. A genuinely anti-social act.

The developers at ICAN take a Mediterranean approach. Well… this is not hard with a Greek and a Spaniard on board. Thus, every day we all go for lunch together to one of the many different options available around the office. Sometimes agreeing on the place may be a bit hard, but in general agreement is reached quite fast. During this time we get away from our desks and we speak about everything but work: from our individual daily problems to our own cultures, languages, history, current affairs, food, cooking… basically, anything but work. Definitely, the complete opposite of a British lunch time.

The benefits of this approach are obvious. Not only is it the time when we recover the energy to finish the working day, it is also a time when we get to know each other better. This does not just create bonds; it also gives us better communication patterns. Similarly, it lets us gauge the general emotional state of the squad, and makes us understand better the idiosyncrasies of each member of the team. It also helps overcome the obvious cultural gaps in a multi-national crew like ours, which improves inter-personal relationships. In simple terms: something as mundane as eating becomes unquestionably a team-building activity.

Yet these are not the only benefits. This lunch time acts as a real break. Leaving your desk, leaving the office and stopping talking about work creates a new context with a different frame of mind. For people engaged in a highly demanding intellectual activity like programming, this is essential. Lunch is therefore also the time when the brain rests, and that helps productivity too.

Having said all this, I must also admit I am sick of eating pasta, pizza and burgers during my working week. Because if lunch is everything I said before, for a Spaniard it is also about eating a proper heart-warming meal. And at this point the cultural gap is still too large, I am afraid.

Multi-language support in an AngularJS app using angular-translate that always sends and fetches data in English

Recently we at ICAN came across an interesting issue that needed solving. We’re developing an app with localisation; users can currently choose to use the app in either English or Chinese. As we’re using AngularJS, we decided to go with angular-translate to achieve this. The issue we faced was simple: all data sent to our backend needed to be in English for future analytical processing.

I’ll elaborate. With angular-translate you can convert strings of text into any language, but this means that if you’re using the app in Chinese and you update your country to China, the app will save that data as 中国. We needed this information to be sent to our server as ‘China’ rather than ‘中国’. Similarly, if the data stored on the server is the literal string ‘China’ and a Chinese user fetches it, they need to see 中国 instead.

In this blog, I want to show you how we solved this issue from start to finish.

First off, I’ve mentioned a little about angular-translate, the AngularJS module used to easily localise your app for multiple languages. It’s very quick and easy to set up; simply type the following into your terminal.

npm install --save-dev angular-translate

Once installed add the module to your existing Angular app just like any other module.

You can see a quick example of how the module looks and works here.

var app = angular.module('at', ['pascalprecht.translate']);
app.config(function ($translateProvider) {
    $translateProvider.translations('en', {
     'LOGIN': 'Login',
     'LOGOUT': 'Logout',
     'REGISTER': 'Register',
     'CREATE': 'Create an account',
     'OK': 'Ok',
     'EDIT': 'Edit',
     'SAVE': 'Save',
     'CONTINUE': 'Continue',
     'CANCEL': 'Cancel',
     'BACK': 'Back'
    });
    $translateProvider.translations('zh', {
     'LOGIN': '登录',
     'LOGOUT': '登出',
     'REGISTER': '注册',
     'CREATE': '创建新帐号',
     'OK': '成功',
     'EDIT': '编辑',
     'SAVE': '保存',
     'CONTINUE': '继续',
     'CANCEL': '取消',
     'BACK': '返回'
    });
    $translateProvider.preferredLanguage('en');
});

Now whenever you want to translate a string you can use

{{ 'LOGIN' | translate }}

If the preferred language is ‘en’ then LOGIN will be replaced with the string ‘Login’. If the preferred language is ‘zh’ then LOGIN will be replaced with ‘登录’. Simple.
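Conceptually, the translate filter is just a table lookup keyed by the active language. A simplified plain-JavaScript model (not angular-translate’s actual implementation) looks like this:

```javascript
// Translation tables keyed by language, mirroring the config above
var tables = {
  en: { LOGIN: 'Login', LOGOUT: 'Logout' },
  zh: { LOGIN: '登录', LOGOUT: '登出' }
};

function translateKey(key, language) {
  var table = tables[language] || {};
  // angular-translate falls back to the key itself when no entry exists
  return table.hasOwnProperty(key) ? table[key] : key;
}

translateKey('LOGIN', 'zh');   // '登录'
translateKey('UNKNOWN', 'en'); // 'UNKNOWN'
```
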

This is all fine and well until things get a little more complicated. We also use the preferred language to load language specific files. For example, we have two files we use when selecting a country.

country_en.json is a long file, so I’ll just show a snippet of what it looks like here:

{
   "Americas": [{
     "code": "AW",
     "name": "Aruba"
   },{
     "code": "AI",
     "name": "Anguilla"
   },{
     "code": "AR",
     "name": "Argentina"
   }...]
}...

We also have country_zh.json, which is loaded when the preferred language is ‘zh’ (Chinese). It looks like this:

{
   "美洲": [{
     "code": "AW",
     "name": "阿鲁巴"
   },{
     "code": "AI",
     "name": "安圭拉"
   },{
     "code": "AR",
     "name": "阿根廷"
   }...]
}...

Our app uses the preferred-language variable to determine which file is loaded and rendered to the screen; the user’s selection is then saved to be sent to our server for storage. Before the data is sent, we have an intermediary stage that converts Chinese text to its English equivalent. This is how we’ve set up our factories.

app.factory('FILES', function ($http, lang) {
   return {
     en_zh: function () {
       return $http.get('json/converters/en-zh.json');
     },

     zh_en: function () {
       return $http.get('json/converters/zh-en.json');
     },

     countryList: function () {
       return $http.get('json/country_' + lang + '.json'); 
     }
   };
});

and

app.factory('TRANSLATE', function (FILES) {

   return {
       toEnglish: function (key, value) {

         return FILES.zh_en().then(function (data) {
           var data = data.data[0];
           var result = {};
 
           for(var i in data){
             if(i == value){
               result[key] = data[i];
               return result;
             }
           }
           result[key] = value;
           return result;
         });
       },
       toChinese: function (key, value) {

         return FILES.en_zh().then(function (data) {
           var data = data.data[0];
           var result = {};

           for(var i in data){
             if(i == value){
               result[key] = data[i];
               return result;
             }
           }
           result[key] = value;
           return result;
       });
     }
   }
});

To convert either to English or to Chinese, we can simply inject the TRANSLATE factory into a function like so.

function editProfile(data, callback, TRANSLATE, $q, $http) {

   var promises = [];
   var url = 'http://www.example.com/';

   for (var i in data){
     var promise = TRANSLATE.toEnglish(i, data[i]);
     promises.push(promise);
   }

   $q.all(promises).then(function(results){
     var data = {};

     for(var i in results){
       Object.assign(data, results[i]);
     }

     var req = {
       method: 'POST',
       url: url,
       headers: {
         "Content-Type": 'application/json'
       },
       withCredentials: true,
       data: data
     };

     $http(req).success(function (result) {
       callback(result);
     });
   })
}

The example above is for when a user updates their profile: their data is sent to this function as form data, and each field is passed through toEnglish. This iterates through the file zh-en.json and, if a match is found, the value is converted to English.

// zh-en.json
{
  "阿鲁巴": "Aruba",
  "安圭拉": "Anguilla",
  "阿根廷": "Argentina"
}
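Stripped of the $http plumbing, the lookup inside toEnglish/toChinese is just a dictionary scan with a pass-through fallback. Here is the same idea as a plain synchronous function (the names here are invented for this sketch):

```javascript
// A small excerpt of the zh-to-en dictionary
var zhEn = { '阿鲁巴': 'Aruba', '安圭拉': 'Anguilla', '阿根廷': 'Argentina' };

function convert(dictionary, key, value) {
  var result = {};
  // Use the translation when one exists; otherwise keep the value unchanged
  result[key] = dictionary.hasOwnProperty(value) ? dictionary[value] : value;
  return result;
}

convert(zhEn, 'country', '阿鲁巴'); // { country: 'Aruba' }
convert(zhEn, 'city', 'London');   // { city: 'London' }
```
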

Once all the data has been through this process, the results are sent to the server using Angular’s $http service.

As you can see from the code above, the same process can be used to fetch English strings and convert them to Chinese when the user is using the app with the Chinese language setting.

// en-zh.json
{
  "Aruba": "阿鲁巴",
  "Anguilla": "安圭拉",
  "Argentina": "阿根廷"
}