Designing & Testing of the tools


The tools are functional, but before testing the actual usage we first need to make sure they look good. So I am going to design how the tool should look. I have come up with the following bit of CSS.

.rd-toolbx-btn {
background: none;
border: 5px solid #efefef;
display: block;
width: 60px;
height: 60px;
box-shadow: 0 0 2px 0 #efefef;
background-color: rgba(0,0,0,0.8);
color: #efefef;
cursor: pointer;
border-radius: 50%;
font-family: 'Pacifico', cursive;
letter-spacing: 1.5px;
transition: all .12s linear;
}
.rd-toolbx-btn:hover {
display: block;
width: 85px;
height: 85px;
}

.rd-toolbx-btn:hover > .rd-tool-text {
display: inline-block;
}

.rd-toolbx-btn:hover > .rd-tool-icon {
left: 15px;
}

.rd-tool-text {
display: none;
}

.rd-tool-icon {
display: block;
width: 32px;
height: 32px;
position: relative;
left: 3px;
}

.rd-comment-tool-icon {
background-image : url(‘data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAQAAADZc7J/AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAAmJLR0QAAKqNIzIAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfhBQ4UAxFjQeJZAAACAElEQVRIx62Vv2taURTHP1defFIU6pCQJzh06OCQrVvJ4hSw4JKtf0XXUDoU945JIIQs/QMEC5k6R3AvdHEQfKBwzyMmRVPwdHipen1PY2u+b3r3nh/f+z33nmNIgfpSpW4qBARASKg/aBa/m0nS1iwvDIOdT/qeQkrckfn6+/NuuCZAN1f8qB94AcCEjulpH0xJy7zBB+CX+SKNV+M03gz27Y2oqKi0ouNB3tnLR8fSinftzWA/xd0eSE9UVNr2kBWwh9IWFZWePUhkj93tuXqsgXr2PA7hsOjmYvL2hA1gT+KDdHOzpagRZ9/EHSBmETVmhZN7UWmvJ+8eRNqicj8M4vynoqJz6eRCpo/VcL+pXCzIqaLRKRj1oyEFvhXfzQJMk9frb+piZmbVosbo5a4nVVMAc7VgZqCYEkJ0MbC50hoFqXrUgcnDdcL4CTxc70zwqXumAnT27tztFQwWsHcnHd6aikcAprfeOB2mpxBkCED7mxbQUbQPBE/UfvEo6aw8Ql6bkht6rvaSk/NnSgphhhC07Jhdkq6Acun8loEQeyYqY/f1g6iMHi/qCgzyMha1ZxmagJ89SmjcWG5eLrJH+EAT9eVWVFpLDH5qdr280hKVW/VJPiYAW1vvPn9MPMNz3rqhPENL27qpwtZtPWbxP4Nl69H2vMN1JtU/jPc/EI2kPED2/FoAAAAldEVYdGRhdGU6Y3JlYXRlADIwMTctMDUtMTRUMjA6MDM6MTcrMDI6MDDb4OLEAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA1LTE0VDIwOjAzOjE3KzAyOjAwqr1aeAAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAAAASUVORK5CYII=’)
}

.rd-circle-tool {
background-image: url(‘data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAQAAADZc7J/AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAAmJLR0QAAKqNIzIAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfhBQ4UCSYhE6/cAAADNUlEQVRIx41VPWxTVxT+rn+ShgCK+UliEBIWARQBykIUgWBBIMFQYKoQYojEQBiCFKAbEc1SRAcGhqidIlUZGDq0CxKCdKBupLCABFZUJANypNgE+37X77kFgp9Ph2fs5/deDHd679zvfOf3nqMQcqSTx3BGDSKJJIA88rKIPxJ/qo9BrPILKn3Vm3IB3ZjHgirUCkCkX/oxgsP4V83Gpta/xdpHOjhJm1k9VukLEusxZmlzUjrWUK/0Mc0ixyW+poE4x1lkOkgPQB9gjhmTwheOSTHDnD4QtJ7jXGmjH25tsbb4ZaWNnGOuxQvpYJqZoLo+R4eOPhdCkWHakwtOshjmPF/zLu/ydWggRU423bc5HgSVhynl3dZeSnk4hHycdj0MM81sWObNcf4GAHyufwqtSNZMA5BOWnqsXd55MywIQI/Rkk7oU3RC69oE7qeUDgfllT46+pTSP6t9iaMAsLzumztqBFEoRJSqzW76sRHMfTmqlkRQg8CRhQ9Xt/0HAPxLMjE1iAUX1vU9zuIXVCFSE4k8aNpa/S52VnZAqQgUYupSVx5TAIAFNRxDUhVcmIpLVf3akw0621vBbMObXXJR1VOuCpKMIFmrE3y6rXLyVJ9vm4/z8lTlPt12/2oFJEFbX2iUJmZu0eFMoTtMudDNGTrmlsQa9RmlAV+a663VN3mzaG32q1ubzaLJm+MtBf6BzyLIS79X2PMoOiTx2jU/Qe2axKNDPY9ahCm8icgiRlqhG1ZgieMnEAfWhhWfcABvQhqJOymlQ36C0iEKd3olxe2s6m9DWpkTXBblJpWnedpNmiguc6IFd4NLEkXwMfGxmQaA8gCfcJWrfFIegIt77AlJ8RXddmp9zvZWOuYEwFHaZt6kTMrM0+YoYE7Qsbc2OuIyP5Z2fLbpGSgcopRO6nusckqiACBRTrGq75VOUszBuvp+855Xm+54RprEmKbwlT7ijVcfYZbCv90xttTFF/qBeHeKd6iKKg5KJFBGZXa5XybBOb5d6fcBvnasW3v5kv9Ye0KuvrxYgHfbSD40iTWu2622eo16zcXmYwK+YrlWf++ttAtJhQlb1vuVxEw7gv8BJKD1k1XMK7IAAAAldEVYdGRhdGU6Y3JlYXRlADIwMTctMDUtMTRUMjA6MDk6MzgrMDI6MDB4rwWZAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA1LTE0VDIwOjA5OjM4KzAyOjAwCfK9JQAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAAAASUVORK5CYII=’)
}

.rd-predict-image {
background-image: url(‘data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAQAAADZc7J/AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAAmJLR0QAAKqNIzIAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfhBQ4UCy2E9xTWAAACU0lEQVRIx+2Uu2sUURTGv7sbNtkNG/PGnexGDYKFgk0UFIwWQlYStUljo4JgbROwEyxi/oCUFiIKQSImTcRHqcFGLCw0aSJumInJznx3dtlE3cexyMPZTIbdBUu/6s655/7OueecuQpV0hNQxYnugqqgToX2fB+Qu005vZntqxdQpdVWLlMovF//maoMmsfkF9aAykLDsc2YfXxrtdziTOrBBo6ux9cTHHK+cmrvDu/oi9JUE8DXFAqFD6vtEmGB3yRcuwa3kQcAjOa6vBvZLjzAjCrXcQk+387hSZCHhN1hOxkI0E+56A67R7cysNtE+UJM8TOpxwIA7qlMdDeW4py+5QO81ePOOepAhMf1GYUV3txjPU+XV3iBhZoIM8Z3vOe322m6fExhsSbC6Zfm/ex2mj8pdSGC1BAi1+30+6dwP8Ruy3hSzoRSkkISKZWUEjbQhR9YQQYryMhSxyv1G7DToVk0Ayipa+0z3ip/pFD0Fz1up50TbAcAiegjHOJ1zrJMYVZPBGQhIX6nUJhfjwc099HWrDqjAQg9RptC0fPOZR6WyM5BUfle97QzyQKFZWd6pyp/Ec7Idg0koi/hKg6phBhowxoy2EASfVCwlCkmFkrTPaa3nNu1+OSZ+9yxYjhstdOMtRoVIxQtW01m3EbENSpG53t/R8IvpEW9VN6b4gaATVgwxVSbklAGDHQCQIfvBwPyvcXBjjf+FyeKAQwob4cDFF/DvP9Zb1j/Af8WIIuBXkvBAE+zRfFsaET6cRAJJFRUVpUlFiz5UJ7ryQcB/gD1Qlxq4LAVuQAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxNy0wNS0xNFQyMDoxMTo0NSswMjowMMHk8nQAAAAldEVYdGRhdGU6bW9kaWZ5ADIwMTctMDUtMTRUMjA6MTE6NDUrMDI6MDCwuUrIAAAAGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAABJRU5ErkJggg==’)
}

I will try to explain what each part does, one by one.

.rd-toolbx-btn, .rd-toolbx-btn:hover: These are the actual styles of the toolbox buttons, and the :hover rules add the grow-on-hover effect when we mouse over a tool button.

.rd-tool-text, .rd-tool-icon: These classes style the text inside the tool buttons and the icons. Later on, the background image is added to each button using a data URI.

.rd-predict-image: This class holds the background URL for the predict-image tool.

Here is a visual picture of the tool as rendered on one of the sites we have scraped:

Selection_208

The three tools are up and ready; now it is time to see them in action. As a first step we scrape a random site, which becomes our companion website where most of our tools are built and tested in the initial stages.

Here are the results of testing the tools. The results are very good so far.

Selection_209.png

The next things are saving the data, loading it, and deleting it (a complete CRUD cycle), and after that re-rendering the data to make it responsive.


Playing around with Redux


What is Redux?

Redux is a framework that manages state in a JavaScript app. According to the official site:
Redux is a predictable state container for JavaScript apps.

There are many states in an app that will change depending on time, user behavior, or a plethora of different reasons. Thus, if we consider an app as the process to change its own state, the view a user sees is how the states are presented.

For example, in a TODO list app, when we create a new todo item, we actually change the state of the app from one without that TODO item to one with it. In addition, because the app's state has changed, and the view is how the state is presented, we will see the new TODO item in our view.

If an app’s state unfortunately does not behave as expected, then it’s time to debug the app. E.g. If we add a new TODO item but our state appears to have added two items instead of just one, we will have to spend time on figuring out which state change went wrong. Sounds pretty simple on paper, right? If you think this process will be easy, you’re too naïve…. Real-world apps are often much more complex, and there are a lot of
factors that will make this debugging process a nightmare, such as having a bad coding habit, flaws in the framework design, not investing time in writing unit tests, etc.

Redux’s framework is designed to ease the debugging process. To be more specific, Redux is a framework that extends the ideas of Flux and simplifies redundant things. As it puts it on its official site:
Redux evolves the ideas of Flux, but avoids its complexity by taking cues from Elm.

What was wrong with the original MVC frameworks?

Originally, there were many MVC frameworks out there of different sizes and shapes, but they can generally be categorized as either MVC or an extension of MVC.

Flux/Redux attempts to resolve the problem where bugs are difficult to trace when a model has been continuously updated as an app is being used.

Just as illustrated in the following chart:
mvc
As seen in the image, any given view could affect any model, and vice versa. The problem is that when there are many views and models and one model's state changes into something we didn't expect, we cannot efficiently trace which view or which model caused the problem, since there are way too many possibilities.

Flux/Redux makes debugging apps easier by removing the model (aka store) setter, and then letting the store update its own state via an action (this is the "unidirectional data flow" mentioned on React's official site). In addition, the dispatch of an action cannot dispatch another action. This way, a store is actively updated as opposed to passively updated, and whenever there is a bug, we can refer to the problematic store to see what events happened before it. This makes bug hunting easier, since we've essentially narrowed down the possibilities that caused the bug.


How does Redux Work?

Redux can be broken down into the following parts:

  • store: manages the state. There is mainly a dispatch method to dispatch an action. In a Redux app, you can obtain the current state via store.getState().
  • action: a simple, plain JavaScript object. An action can also be considered a command to change the state.
  • reducer: decides how to change the state after receiving an action, and thus can be considered the entrance of a state change. A reducer is a function that takes the current state and an action as arguments and returns a new state.
  • middleware: the middleman between store.dispatch() and a reducer. Its purpose is to intercept an action that has been dispatched, and modify or even cancel the action before it reaches the reducer.
redux
As shown above, if we add a new TODO item in a Redux app, we first create an action with type ADD_TODO, and then dispatch the action through store.dispatch().

// actionCreator

export function addTodo(text) {
  return { type: types.ADD_TODO, text }; // types.ADD_TODO is the 'ADD_TODO' action type constant
}

store.dispatch(addTodo('Clean bedroom'));

Afterwards, this action will pass through the middleware and finally reach the reducer. Inside the reducer, the state will change according to the action's type.

// reducer

function todos(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      // handle the action and return a new state here
      return [...state, { text: action.text, completed: false }];
    default:
      return state;
  }
}

And so, we’ve completed the most basic behavior of updating a state.

In a more complex app, we will need to split up a store’s state into different pieces like we’d do when namespacing. You can do this in Redux by creating different reducers to manage different areas of a state, and then merge them together through combineReducers.

The process should look like this:
redux
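
As a minimal sketch of that merging step (the visibilityFilter reducer here is just a hypothetical second slice of state, alongside the todos reducer from above):

import { createStore, combineReducers } from 'redux';

// hypothetical second reducer managing another slice of the state
function visibilityFilter(state = 'SHOW_ALL', action) {
  switch (action.type) {
    case 'SET_VISIBILITY_FILTER':
      return action.filter;
    default:
      return state;
  }
}

// each reducer owns one key of the overall state object
const rootReducer = combineReducers({ todos, visibilityFilter });
const store = createStore(rootReducer);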

Async & Middleware

We’ve already introduced state updating, but a front-end app is never that simple~

Here we’ll go over how to handle asynchronous behavior in a Redux app. In this section, we’ll examine the process of sending an HTTP request to understand how async works in Redux.

The Sample Situation

Let's say we have an app that has a list called questions. At some point (e.g. a user clicks a button), we need to send a request to our server to obtain the data in questions. While sending this request, we need to reflect the sending state in our store, and if the request succeeds, we need to put the questions data in our store. If the HTTP request fails, we need to reflect the failure info in our store.

The Straightforward Solution

One naive way to approach this situation is to dispatch different actions at different times. In the code below, I used superagent to make my requests.

import request from 'superagent';

const SENDING_QUESTIONS = 'SENDING_QUESTIONS';
const LOAD_QUESTIONS_SUCCESS = 'LOAD_QUESTIONS_SUCCESS';
const LOAD_QUESTIONS_FAILED = 'LOAD_QUESTIONS_FAILED';

store.dispatch({ type: SENDING_QUESTIONS });
request.get('/questions')
  .end((err, res) => {
    if (err) {
      store.dispatch({
        type: LOAD_QUESTIONS_FAILED,
        error: err
      });
    } else {
      store.dispatch({
        type: LOAD_QUESTIONS_SUCCESS,
        questions: res.body
      });
    }
  });

This way, we can achieve async behavior in a Redux app. However, this approach is not suitable if we have to send many HTTP requests, since we need to add async behavior to every request and this makes it difficult to maintain our code. Furthermore, this approach is also not easy to test, especially since asynchronous code is more difficult to understand and test in general. Finally, if we use this approach in a React app, we will be forced to code this logic into a React Component.

Here is how I am using Redux. The store is the place where I combine all my reducers.

Selection_207.png
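
For reference, a store wired up this way usually looks roughly like the following sketch; the reducer names and file paths are illustrative, not my exact code:

import { createStore, applyMiddleware, combineReducers } from 'redux';
import thunk from 'redux-thunk';

// illustrative reducer imports
import questions from './reducers/questions';
import todos from './reducers/todos';

const rootReducer = combineReducers({ questions, todos });

// create the store from the combined reducers and apply the middleware chain
const store = createStore(rootReducer, applyMiddleware(thunk));

export default store;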

Why applyMiddleware? What is middleware?

As stated before, in Redux a middleware is like the negotiator between store.dispatch and the reducer. To be more specific, the store.dispatch() we call is actually wrapped in layers of middleware, and the reducer sits in the innermost layer.

We can visualize this via the following image:
redux

Through middleware, we can extract the above-mentioned asynchronous API requests and place them in one place. I am using the Redux-Thunk middleware.

Redux Thunk middleware allows you to write action creators that return a function instead of an action. The thunk can be used to delay the dispatch of an action, or to dispatch only if a certain condition is met. The inner function receives the store methods dispatch and getState as parameters.
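
As an illustration, the questions-loading flow from earlier can be rewritten as a thunk action creator along these lines (the loadQuestions name is mine; the action types and the /questions endpoint are carried over from the example above):

import request from 'superagent';

export function loadQuestions() {
  // returning a function instead of a plain object is exactly what redux-thunk allows
  return (dispatch, getState) => {
    dispatch({ type: 'SENDING_QUESTIONS' });
    request.get('/questions').end((err, res) => {
      if (err) {
        dispatch({ type: 'LOAD_QUESTIONS_FAILED', error: err });
      } else {
        dispatch({ type: 'LOAD_QUESTIONS_SUCCESS', questions: res.body });
      }
    });
  };
}

// somewhere in the app:
store.dispatch(loadQuestions());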

Flickr API To Get Images


Image recognition is done using the Clarifai API. Now the next major task is to get images out of the tags. For example, for the following image:

Clarifai will give us the tags mountains, snow, road, and grass. For these tags Clarifai also offers image data, but its image gallery can't compete with the world's most popular image hosting and sharing resource, Flickr.

Flickr (pronounced “flicker”) is an image hosting and video hosting website and web services suite that was created by Ludicorp in 2004 and acquired by Yahoo on March 20, 2005.[4] In addition to being a popular website for users to share and embed personal photographs, and effectively an online community, the service is widely used by photo researchers and by bloggers to host images that they embed in blogs and social media.[5]

The Verge reported in March 2013 that Flickr had a total of 87 million registered members and more than 3.5 million new images uploaded daily.[6] In August 2011 the site reported that it was hosting more than 6 billion images and this number continues to grow steadily according to reporting sources.[7] Photos and videos can be accessed from Flickr without the need to register an account but an account must be made in order to upload content onto the website.

So the next step is to get a Flickr API key and use it to fetch images for the tags.

The next step is obtaining an application key. Flickr uses this app key to keep tabs on our usage and other statistics. Head over to Flickr's developer site and apply for your own API key.

Since our usage of this particular API key is purely educational, we chose to obtain a non-commercial key.

Fill in all the details the form requires, paying special attention to the description of the project. The devs at Flickr actually read this description, especially if your app misbehaves in some way, to make sure it is legit. So spend that extra minute describing your masterpiece.

A successful registration yields a page showing your credentials. Note down the API key and the shared secret for later use.

The Flickr API provides a number of methods which may or may not require authentication. Each method takes a number of arguments which modify its behavior and payload. Responses can be received in a number of formats including JSON, XML, SOAP and REST. All these requests can be made to endpoints corresponding to the format you've chosen to make the request in. For example, we'll be using REST for the rest of this article, so our URL endpoint is http://api.flickr.com/services/rest/.

There are a number of methods which pull in public data and thus require no authentication of any sort. We just need the API key we obtained earlier, along with any required arguments of the method in question. Let's take a look at an example.

The getPublicGroups method is an example of a method which doesn't require authentication and which pulls in public data. We pass in the user's ID and our API key, and the API responds in the requested format with a list of groups the user is part of.

We’d send in a request to this URL.
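
With the REST endpoint above, the request URL takes roughly this shape:

http://api.flickr.com/services/rest/?method=flickr.people.getPublicGroups&api_key=your_api_key&user_id=user_id_x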

Replace your_api_key with the key we obtained earlier and user_id_x with a valid NSID. Since I like my responses to be in JSON, I can add another parameter asking the API to respond with a JSON payload.
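
For example:

http://api.flickr.com/services/rest/?method=flickr.people.getPublicGroups&api_key=your_api_key&user_id=user_id_x&format=json&nojsoncallback=1

(The extra nojsoncallback=1 parameter asks Flickr for plain JSON rather than a JSONP-wrapped response.)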

The API will send a response like so:
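
The exact payload depends on the user, but the JSON response has roughly this shape (the values here are purely illustrative):

{
  "groups": {
    "group": [
      { "nsid": "12345678@N01", "name": "Some Group" }
    ]
  },
  "stat": "ok"
}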

So here is how I am using Flickr in my project:

Selection_206.png

I also have custom code to remove the existing child nodes and create new nodes containing the fetched images.
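
A rough sketch of that flow, assuming Flickr's flickr.photos.search method and its standard photo URL format; the fetchImagesForTag name and the container handling are illustrative, not my exact code:

import request from 'superagent';

const FLICKR_ENDPOINT = 'https://api.flickr.com/services/rest/';

// search Flickr for photos matching a tag and swap them into a container element
function fetchImagesForTag(tag, container, apiKey) {
  request
    .get(FLICKR_ENDPOINT)
    .query({
      method: 'flickr.photos.search',
      api_key: apiKey,
      tags: tag,
      per_page: 5,
      format: 'json',
      nojsoncallback: 1
    })
    .end((err, res) => {
      if (err) return console.error(err);
      // remove the old child nodes before adding the new images
      while (container.firstChild) container.removeChild(container.firstChild);
      res.body.photos.photo.forEach((p) => {
        const img = document.createElement('img');
        // standard Flickr photo URL format
        img.src = `https://farm${p.farm}.staticflickr.com/${p.server}/${p.id}_${p.secret}.jpg`;
        container.appendChild(img);
      });
    });
}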

Image Recognition API


The next step of the application, after fitting the toolbox inside the HTML files of the local copies of the website, is to decide which API to use for the image recognition tool. I found two major competitors.

CLARIFAI

Clarifai is a privately held American artificial intelligence company headquartered in New York City.[2] It utilizes deep learning technology, a form of machine learning, to develop APIs for image and video recognition use by developers and businesses.

History

The company was founded in 2013 by New York University computer science graduate student Matthew Zeiler[3] after his deep learning research attracted attention at ImageNet (also known as ILSVRC, or the ImageNet Large-Scale Visual Recognition Challenge), a competition that tests the latest advances in algorithmic models for object detection and image classification in digital images[8]. Zeiler's work placed in the top 5 at the 2013 ImageNet competition, alongside Google, Microsoft and Baidu Inc. Since then it has received over $10 million in funding from investors such as Google Ventures, Qualcomm Ventures and Union Square Ventures.[4] The company works closely with the artificial intelligence arm at Google X.

Technology

Clarifai's deep learning technology today is derived from Zeiler's deep learning research as a PhD student at NYU, where he studied under the pioneers of neural networks.[5] Zeiler's work helped popularize the use of neural networks in machine learning and contributed to the rise of deep learning. Building on previous work by artificial intelligence researchers such as Yann LeCun, his research centered on uniquely designed convolutional neural network models for image classification and object detection, including a hierarchical convolutional deep learning model[6] and a visualization technique that makes use of supervised pre-training to expose the inputs that stimulate individual feature maps at any layer in the convolutional model.

GOOGLE CLOUD VISION

Google Cloud Vision API enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy to use REST API. It quickly classifies images into thousands of categories (e.g., “sailboat”, “lion”, “Eiffel Tower”), detects individual objects and faces within images, and finds and reads printed words contained within images. You can build metadata on your image catalog, moderate offensive content, or enable new marketing scenarios through image sentiment analysis. Analyze images uploaded in the request or integrate with your image storage on Google Cloud Storage.

Detect Inappropriate Content

Powered by Google SafeSearch, easily moderate content from your crowd sourced images. Vision API enables you to detect different types of inappropriate content from adult to violent content.


Power of the Web

Vision API uses the power of Google Image Search to find topical entities like celebrities, logos, or news events. Combine this with Visually Similar Search to find similar images on the web.


Extract Text

Optical Character Recognition (OCR) enables you to detect text within your images, along with automatic language identification. Vision API supports a broad set of languages.


Since the Google Cloud Vision API has a price tag attached to it, I had no choice but to go with Clarifai, which is also an extremely nice and handy API.

But there is a big hurdle in using either the Clarifai or Cloud Vision API: they both work with either base64-encoded images or with URLs. So on any website that exposes image URLs this is easy; otherwise we first have to convert the image to base64.

I used the following code to do that conversion:

var img = new Image();
img.crossOrigin = 'Anonymous';
img.onload = function() {
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  canvas.height = this.height;
  canvas.width = this.width;
  ctx.drawImage(this, 0, 0);
  var dataURL = canvas.toDataURL('image/png');
  // strip the data-URI prefix so only the raw base64 payload is left
  var base64Image = dataURL.replace('data:image/png;base64,', '');
  console.log(base64Image, 'ready for Clarifai');
};
img.src = imageUrl; // the URL of the image we want to convert (set elsewhere)

Here is the complete function to predict the images.

Selection_205.png
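
For context, with the Clarifai JavaScript client (the v2-style API that was current at the time) such a predict call looks roughly like this; the apiKey and base64Image names are placeholders:

const Clarifai = require('clarifai');

const app = new Clarifai.App({ apiKey: 'YOUR_API_KEY' }); // placeholder key

// predict concepts (tags) for a base64-encoded image
app.models
  .predict(Clarifai.GENERAL_MODEL, { base64: base64Image })
  .then((response) => {
    // each concept carries a name and a confidence value
    const tags = response.outputs[0].data.concepts.map((c) => c.name);
    console.log(tags);
  })
  .catch((err) => console.error(err));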

Cheerio vs JSDOM


I need something I can use to put my tool inside the HTML document. There would be a lot of workarounds if I went with the traditional file system (fs) node module.
With fs, I would first need to read the files, then find the closing body tag inside them, and then insert my own code before it.
The following is the pseudo code for the approach I was considering earlier:

  • First, read the file with fs.readFileSync('file.name').
  • Second, find the closing </body> tag inside the read file.
  • Create a space before the closing body tag.
  • Note the position of that space.
  • Insert the tool into the file at that position.

But this traditional approach is quite time consuming, so after searching the internet we found much better ways to do it.

1. JSDOM

jsdom is a pure-JavaScript implementation of many web standards, notably the WHATWG DOM and HTML Standards, for use with Node.js. In general, the goal of the project is to emulate enough of a subset of a web browser to be useful for testing and scraping real-world web applications.

The latest versions of jsdom require Node.js v6 or newer. (Versions of jsdom below v10 still work with Node.js v4, but are unsupported.)

As of v10, jsdom has a new API (documented below). The old API is still supported for now; see its documentation for details.

Basic usage

const jsdom = require("jsdom");
const { JSDOM } = jsdom;

To use jsdom, you will primarily use the JSDOM constructor, which is a named export of the jsdom main module. Pass the constructor a string. You will get back a JSDOM object, which has a number of useful properties, notably window:

const dom = new JSDOM(`<!DOCTYPE html><p>Hello world</p>`);
console.log(dom.window.document.querySelector("p").textContent);

2. CHEERIO

Fast, flexible, and lean implementation of core jQuery designed specifically for the server.

Introduction

Teach your server HTML.

var cheerio = require('cheerio'),
    $ = cheerio.load('<h2 class="title">Hello world</h2>');

$('h2.title').text('Hello there!');
$('h2').addClass('welcome');

$.html();
//=> <h2 class="title welcome">Hello there!</h2>


Installation

npm install cheerio

Features

❤ Familiar syntax: Cheerio implements a subset of core jQuery. Cheerio removes all the DOM inconsistencies and browser cruft from the jQuery library, revealing its truly gorgeous API.

ϟ Blazingly fast: Cheerio works with a very simple, consistent DOM model. As a result parsing, manipulating, and rendering are incredibly efficient. Preliminary end-to-end benchmarks suggest that cheerio is about 8x faster than JSDOM.

❁ Insanely flexible: Cheerio wraps around @FB55's forgiving htmlparser2. Cheerio can parse nearly any HTML or XML document.

JSDOM vs Cheerio

• JSDOM’s built-in parser is too strict: JSDOM’s bundled HTML parser cannot handle many popular sites out there today.

• JSDOM is too slow: Parsing big websites with JSDOM has a noticeable delay.

• JSDOM feels too heavy: The goal of JSDOM is to provide an identical DOM environment as what we see in the browser. I never really needed all this, I just wanted a simple, familiar way to do HTML manipulation.

So I finally chose Cheerio. I only needed to write about 5 to 10 lines of code to get it working and fit my tool inside the page:


const fs = require('fs');
const cheerio = require('cheerio');

let htmlSource = fs.readFileSync(filePath, 'utf8');
let $ = cheerio.load(htmlSource);

// create the toolbox, and then append it to the page body
let toolbox = createToolBox(url);
$('body').append(toolbox);
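
createToolBox is my own helper; a rough sketch of what it might return, reusing the button classes from the CSS at the top of this post (the exact markup and data attributes here are illustrative):

function createToolBox(url) {
  // each button carries an icon span and a text span that stays hidden until hover
  return `
    <div class="rd-toolbox" data-site="${url}">
      <button class="rd-toolbx-btn">
        <span class="rd-tool-icon rd-comment-tool-icon"></span>
        <span class="rd-tool-text">Comment</span>
      </button>
      <button class="rd-toolbx-btn">
        <span class="rd-tool-icon rd-circle-tool"></span>
        <span class="rd-tool-text">Circle</span>
      </button>
      <button class="rd-toolbx-btn">
        <span class="rd-tool-icon rd-predict-image"></span>
        <span class="rd-tool-text">Predict</span>
      </button>
    </div>`;
}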

FrontEnd Design Implementation


The frontend will be created using React.js. React gives us a Virtual DOM, which lets us do all the manipulations in a very performant manner. Following is the Home component.

The Home component is a simple component which displays the data it receives:

Selection_175
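
As a rough sketch (the props and markup are illustrative, not the exact component in the screenshot), such a Home component can look like this, reusing the tagline class names styled below:

import React from 'react';

// simple presentational component: renders the tagline data it receives as props
const Home = ({ taglineBegin, taglineEnd, taglineMinor }) => (
  <main>
    <div className="tagline-wrapper">
      <span className="tagline-begin">{taglineBegin}</span>
      <span className="tagline-end">{taglineEnd}</span>
      <span className="tagline-icon" />
      <p className="tagline-minor">{taglineMinor}</p>
    </div>
  </main>
);

export default Home;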

Next come the styles for this component. To make it look the way we want, styles play a very important role:

main {
  @include add-background("./background.jpg");
  @include add-flex(column, center, center);
  width: 100%;
  height: 95%;
  color: $theme-color;
  position: relative;
}

// TAGLINE
.tagline {
  &-wrapper {
    @include add-flex(column, center, center);
    font-size: 16px;
  }
  &-icon {
    display: inline-block;
    margin-left: 5px;
    position: relative;
    top: 5px;
    width: 35px;
    height: 35px;
    @include add-background("./smiley.svg");
  }
  &-begin {
    padding-left: 10px;
    font: {
      family: Sofia Pro, serif;
      size: 2.5em;
      weight: 300;
    }
  }
  &-end {
    font-family: Magnolia Script, serif;
    font-size: 5.5em;
    position: relative;
    top: -10px;
    left: -10px;
    z-index: 10;
  }
  &-minor {
    font: {
      family: Sofia Pro, serif;
      size: 1em;
      weight: 300;
    }
    margin: 5px;
    padding: 10px;
    border-top: 1px solid $border-color;
    border-bottom: 1px solid $border-color;
  }
  &-and {
    font-family: Avenir-LT;
    font-size: 1.2em;
  }
}

Styles are the key to great UI/UX; we will be using the SCSS preprocessor. Sass is a scripting language that is interpreted into Cascading Style Sheets (CSS). SassScript is the scripting language itself. Sass consists of two syntaxes. The original syntax, called "the indented syntax", uses a syntax similar to Haml.[4] It uses indentation to separate code blocks and newline characters to separate rules. The newer syntax, "SCSS", uses block formatting like that of CSS. It uses braces to denote code blocks and semicolons to separate lines within a block. The indented syntax and SCSS files are traditionally given the extensions .sass and .scss, respectively.

So from next week we are going to start putting the tool into the website.

Front End Design


Today I was about to start with the server-side code, but remembered that the basic skeleton of the frontend application is still not ready. So today Jaspreet and I are going to read about the design principles used in web design.

Basic elements of a good web design

In order to come up with a good web design and an effective visual and technical appeal for a website, there are some elements that must be incorporated. To know more about these elements, go through the following points:

  • Shape – On most websites and webpages, the shapes used are squares or rectangular but they don’t necessarily have to be. Shapes are responsible for the creation of an enclosed boundary in the overall design, and you can experiment with any shape you seem suitable. It can either be a geometric shape or any other abstract shape that fits in the design.
  • Texture – Texture is one element that can help provide your website with a feeling of a surface beneath. There are several types of textures that you can incorporate, and some of them include natural textures and artificial textures. This element must be used in such a way that it brings out the content given on the website and makes it look more appealing.
  • Direction – Direction is the element of a web design which is responsible for lending it movement or motion. A good web design is one which automatically makes our eyes move from one corner or content of the website to another, according to relevance or hierarchy.
  • Color – Another basic element of a good website is the use of color. A black and white website may work for certain niches like photography websites, but it is always better to raise the appeal of a website using colors in a creative way. The colors are added in the later stage and not during the designing.

WEB DESIGN PRINCIPLES

Web design is not only about how the website looks and feels but is also a lot about how it works and responds. When web designers work on a website, they incorporate not just those elements that add a visual appeal to it but also try to make it highly responsive, functional, quick and useful. In order to create a highly usable and effective website, designers follow certain principles that act as thumb rules or standard points to keep in mind. The following are the various principles of an effective web design:

Web design principle #1.   Highly intuitive structure

The first law or principle of usability of a website says that a web page must have a highly intuitive structure and should be simple to understand so that users would not have to think which way to go. It must be self-explanatory in an obvious kind of way. Don’t let any question marks or queries come up and make the navigation intuitive and simple. This helps to increase the usability of the website and also makes it much more engaging. The structure must be free from lots of cognitive load so that visitors don’t have to wonder how to move from point A to point B.

Web design principle #2.   Visual hierarchy

The next principle that contributes to creating a successful and effective website is visual hierarchy. Visual hierarchy is the order or sequence in which our eye moves and perceives the things it sees. When it comes to a web page, visual hierarchy refers to the sequence in which our eye moves from one topic/content/block to another. When designing a web page, a designer first needs to identify the order of importance of the various topics and then place them in such a way that visitors first view what is most important and then move on to the others in a hierarchical manner.

There are two ways to create a visual hierarchy, and they are given as follows:

  • Size hierarchy – As the name suggests, size hierarchy is the kind of hierarchy in which the most important content or image is of the largest size on a webpage, followed by the second most important content or image in the second largest size and so on. The distinction in sizes should be such that a visitor would view the items in the order of importance, and the pecking order of things is obvious.
  • Content hierarchy – Besides the hierarchy of size, which is one of the best ways to create the order of importance, another way you can incorporate this principle is by creating a hierarchy of content. You can place content in such a way that the human eye first travels to the content that is most important, for example, the business’s objective or purpose and then moves to the less important content blocks in a hierarchical order.

Web design principle #3.   Accessibility

Another highly important principle that must not be ignored when designing a web page or website is the accessibility of it. When a visitor enters the website, he/she must be able to access each bit of information in the easiest manner. This means that the text must be legible, the colors must not be harsh on the eyes and the background must not overpower the content, etc. To make the website accessible to everyone, you can follow some of the following points:

  • Typefaces – Make sure you select a font type and font size which is readable to all and is not too fancy for some to access or understand. For example, Fonts like Verdana, Times New Roman, Arial, etc. are simple fonts that almost everyone can easily read online. Similarly, the font size that works the best is 16 px but you can be a little flexible with it.
  • Colors – As far as the user experience is concerned, your color scheme and contrast must be well thought of and should be able to create visual harmony and balance. It is always better to choose contrasting colors for the background and written content so that it can be easily read. Choose a darker text color and a lighter background shade so that the result is easy to the eyes. Extra bright colors must be used sparingly.
  • Images – Do you know that the human mind perceives and processes images a lot faster than text? Well, it is thus a good idea to choose and place the right images on your web pages to communicate with the audiences in a better way. Make sure they are high-quality images and are suitable for your purpose.

Web design principle #4.   Hick’s law

Hick's law states that 'every additional choice increases the time required to make a decision.' This law holds true not only for web design but also in a number of other situations and settings. For example, if you visit a restaurant and are given too many food items to pick from, you will take longer to make a decision. As far as web design is concerned, the more options you offer your visitors, the more difficult the website becomes to use and browse through. This means we need to reduce the number of choices in order to provide a better user experience. Distracting options have to be eliminated to aid increased sales and better overall profit.

Hick's law can also be translated to 'more options mean fewer sales'. In order to incorporate this law without sacrificing any of the product or service options you have, you can organize the products in a hierarchy, with the main categories shown in the sidebar and all the products of a category in a separate list.

Web design principle #5.   Fitts's law

Another law that acts as a major principle in web design is Fitts's law. According to this law, the time needed to move to a target depends on the size of the target as well as the distance to it. This means that the larger the target and the closer it is, the easier it is to reach. This law can effectively be incorporated in web design and can enhance your design by leaps and bounds. However, this does not simply mean "the bigger, the better"; it means that the usability of a target runs as a curve and not as a straight line. When you apply this law to your web design, users may be more motivated and encouraged to press the button you want them to press.

If you want your website visitors to take actions like order a product, read about a service or click on something, then you must make sure that they can reach the ‘click here’ more easily and quickly. Thus, it is a good idea to consider this law and use it well.

After reading all this, rejecting many designs, and discussing with our mentor Inderpreet Singh, we came up with the following design for our homepage:

Selection_174