Configuring IntelliJ IDEA for Electron


In the last blog post on Setting Up Electron Framework for Desktop Apps, we talked about the Electron framework and its installation, and we also tried a “Hello World” example in Electron. This post will be a short one and will help you configure Electron in an IDE – IntelliJ IDEA 2016.

Let’s get started with our configuration. The first thing is to make sure you are using JavaScript ECMAScript 6: go to File > Settings > Languages and Frameworks > JavaScript and select ECMAScript 6. Now we need to install the JavaScript library for Electron called github-electron-DefinitelyTyped. This can be done by heading over to:

File > Settings > Languages and Frameworks > JavaScript > Libraries > (Select) Download.

After the list of libraries has loaded, search for github-electron-DefinitelyTyped, then download and install it. Once installed, check (select) that library.


To enable “Coding Assistance”, go to File > Settings > Languages and Frameworks > Node.js and NPM and enable the Coding Assistance for the Node.js Core Library.

Run/Debug Configuration:

Select a Node.js run configuration. In the Node interpreter field, give the path to the Electron executable. In the JavaScript file field, give the path to your project’s main JavaScript file, then save this configuration and run it.


Here’s a great Material Design desktop app I made with the Electron framework. In case you are wondering, I used the Material Design Lite library for the Material Design components.


Fork this app on GitHub:


Read my other post on Setting Up Electron Framework for Desktop Apps. Download the best IDE in town here at JetBrains IntelliJ IDEA.


Naive Bayes Classifier in Python


The Naive Bayes classifier is probably the most widely used text classifier; it is a supervised learning algorithm. It can be used to classify blog posts or news articles into categories like sports, entertainment and so forth, and to detect spam emails. Most importantly, it is widely used in sentiment analysis. So first of all, what is supervised learning? It means we are given a labeled training dataset: inputs along with their respective outputs. From this training dataset, our algorithm learns to predict the output for a new input.

The basics:

Conditional probability: it is simply the probability that something will happen, given that something else has happened. It’s a way to handle dependent events. You can check out some examples of conditional probability here.

So from the multiplication rule (here A and B are dependent events):

P(A ∩ B) = P(A) · P(B|A)

Rearranging the above equation, we get the formula for conditional probability:

P(B|A) = P(A ∩ B) / P(A)
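
As a quick worked example in Python (a sketch with a standard 52-card deck): let A be “the card is red” and B be “the card is a king”; then P(B|A) follows directly from the counts.

# Conditional probability from simple counts (52-card deck).
# A = "card is red" (26 cards), A ∩ B = "card is a red king" (2 cards).
total = 52
p_a = 26 / total            # P(A)
p_a_and_b = 2 / total       # P(A ∩ B)
p_b_given_a = p_a_and_b / p_a
print(p_b_given_a)          # 2/26 ≈ 0.077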

Bayes’ theorem: it describes the probability of an event based on the conditions or attributes that might be related to the event:

P(A|B) = [P(B|A) · P(A)] / P(B)
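
As another small worked example with made-up numbers: suppose 30% of emails are spam, the word “free” appears in 40% of spam emails, and “free” appears in 15% of all emails. Then Bayes’ theorem gives the probability that an email containing “free” is spam:

# Bayes' theorem with illustrative (made-up) numbers:
# P(spam | "free") = P("free" | spam) * P(spam) / P("free")
p_spam = 0.30
p_free_given_spam = 0.40
p_free = 0.15
p_spam_given_free = p_free_given_spam * p_spam / p_free
print(p_spam_given_free)    # 0.8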

So, our classifier can be written as:

Assume a problem instance to be classified, represented by a vector x = (x1, x2, …, xn) of n attributes. Here y is our class variable.

y = argmax over y of P(y) · P(x1|y) · P(x2|y) · … · P(xn|y)

Here we have eliminated the denominator P(x1, x2, …, xn) because it is the same for every class and therefore doesn’t affect which y maximizes the expression.
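
A minimal sketch of this decision rule in Python might look like the following (the names priors and conditionals are illustrative; they are assumed to hold probabilities already estimated from the training data):

# Naive Bayes decision rule: pick the class y that maximizes
# P(y) * P(x1|y) * P(x2|y) * ... * P(xn|y).
def classify(x, priors, conditionals):
    # priors: {class: P(y)}, conditionals: {class: {attribute value: P(xi|y)}}
    best_class, best_score = None, 0.0
    for y, prior in priors.items():
        score = prior
        for xi in x:
            score *= conditionals[y].get(xi, 0.0)
        if best_class is None or score > best_score:
            best_class, best_score = y, score
    return best_class

Note that conditionals[y].get(xi, 0.0) returns zero for attribute values never seen with class y, which is exactly the zero-frequency problem discussed below.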

Now, to make sure our algorithm holds up well against real datasets, we need to take the following conditions into account.

The zero-frequency problem: consider the case where a given attribute value and class never occur together in the training data, so the frequency-based probability estimate is zero. This single zero will wipe out all the information in the other probabilities when they are multiplied together (multiplied by zero…duh…!). The simple solution is Laplace smoothing: we assume a uniform distribution over all attribute values, i.e. we simply add a pseudocount to every probability estimate so that no probability is ever exactly zero.
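
For example, a smoothed estimate of P(word | class) from raw counts might be computed like this (a sketch; the variable names are illustrative):

# Laplace (add-one) smoothing: add a pseudocount of 1 to every word count
# so unseen words get a small non-zero probability instead of zero.
def smoothed_prob(word, word_counts, total_words, vocab_size):
    # word_counts: {word: count} within one class
    return (word_counts.get(word, 0) + 1) / (total_words + vocab_size)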

Floating-point underflow: multiplying many small probabilities can underflow the floating-point range, so to avoid this we take logarithms of the probabilities. The product of probabilities then becomes a sum of log-probabilities, and we pick the class with the highest total.
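
In code, the scoring step from the sketch above would then change from a product to a sum of logs, something like:

import math

# Score a class in log-space: log P(y) + sum of log P(xi|y).
# Summing logs avoids the underflow a long product of small numbers would cause.
def log_score(x, prior, conditional_probs):
    score = math.log(prior)
    for xi in x:
        score += math.log(conditional_probs[xi])  # assumes smoothed, non-zero probabilities
    return score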

I have implemented a Naive Bayes classifier in Python and you can find it on GitHub. If you have any improvements to add or any suggestions, let me know in the comments section below.



Insertion Sort


Insertion sort simply means that we insert one number at a time into its correct position by comparing it with the numbers to its left. At each iteration, the key value is compared with the values to its left, and those values are shifted right until the key’s correct position is found.

Insertion_Sort (Ascending order):
1.  for j = 2 to A.length
2.     key = A[j]
3.     i = j - 1
4.     while i>0 and A[i]>key
5.         A[i+1] = A[i]
6.         i = i - 1
7.     A[i+1] = key

Here j is the index of our key and A is the list (array) containing the unsorted numbers; i indexes the values to the left of the key.

We start the for loop from the second position so as to skip the first element, which serves as our initial key. On line (2) we store the current key, and on line (3) we point i at the value to the left of the key. On line (4) we compare the value to the left of the key with the key itself. While the key is smaller than the value to its left, line (5) shifts that value one position to the right and line (6) decrements i. The while loop keeps executing until there is no value greater than the key to its left. On line (7) we place the key in its correct position. This repeats for each value in the array.

There is a Python implementation of insertion sort at the following Gist (in both ascending and descending order). For descending order, the only change is the comparison in the while loop on line (4).
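
For reference, a minimal Python sketch of the pseudocode above (ascending order; not the Gist itself) would be:

def insertion_sort(A):
    # Sorts the list A in ascending order, in place.
    for j in range(1, len(A)):         # pseudocode is 1-indexed; Python is 0-indexed
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:   # for descending order, compare A[i] < key here
            A[i + 1] = A[i]            # shift the larger value one position right
            i -= 1
        A[i + 1] = key
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]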


Insertion sort is best suited to small datasets and is inefficient for sorting large ones. The best case is when the input is already sorted, giving a linear running time of O(n). The worst case is when the input is sorted in reverse order, giving O(n²).