Tuesday, August 16, 2016

Lovely, Smooth Page Transitions With the History Web API_part 2 (end)

Applying the History Web API

Before we begin writing any code, we need to create a new file to hold our JavaScript; we’ll name it script.js and load the file in the document before the body closing tag.

Let’s add our first piece of code to change the document title and the URL upon clicking the menu navigation:
    // 1.
    var $wrap = $( "#wrap" );

    // 2.
    $wrap.on( "click", ".page-link", function( event ) {

        // 3.
        event.preventDefault();

        // 4.
        if ( window.location.href === this.href ) {
            return;
        }

        // 5.
        var pageTitle = ( this.title ) ? this.title : this.textContent;
            pageTitle = ( this.getAttribute( "rel" ) === "home" ) ? pageTitle : pageTitle + " — Acme";

        // 6.
        History.pushState( null, pageTitle, this.href );
    } );
I’ve split the code into several numbered sections, which will make it easier to pinpoint each part against the following reference:
  • On the first line, we select the element, <div id="wrap"></div>, that wraps all of our website content.
  • We attach the click event. But, as you can see above, we attach it to the #wrap element instead of attaching the event directly to each navigation link. This practice is known as event delegation. In other words, our #wrap element is responsible for listening to click events on behalf of .page-link.
  • We’ve also added event.preventDefault() so that users will not be directed to the linked page.
  • If the clicked menu’s URL is the same as the current window’s URL, we do not need to proceed with the next operation.
  • The pageTitle variable contains the formatted title, derived from the link’s title attribute or its text. Each page title follows the {Page Title} — Acme convention, except for the home page. “Acme” is our fictitious company name.
  • Lastly, we pass the pageTitle and the page URL to the History.js pushState() method.
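The title logic in point 5 can be expressed as a standalone function. This is only a sketch; formatTitle is a hypothetical helper, not part of the tutorial's code, and "Acme" is the tutorial's fictitious company name:

```javascript
// A standalone sketch of the title convention described above.
function formatTitle(linkTitle, linkText, isHome) {
    // Prefer the link's title attribute, fall back to its text content.
    var pageTitle = linkTitle || linkText;
    // The home page keeps its bare title; every other page gets the suffix.
    return isHome ? pageTitle : pageTitle + ' \u2014 Acme';
}
```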
At this point, when we click on the menu navigation, the title as well as the URL should change accordingly as shown below:
The page title and the URL are changed
Yet the page content remains the same! It is not updated to match the new title and the new URL.

Content

We need to add the following lines of code to replace the actual page content.
    // 1.
    History.Adapter.bind( window, "statechange", function() {

        // 2.
        var state = History.getState();

        // 3.
        $.get( state.url, function( res ) {

            // 4.
            $.each( $( res ), function( index, elem ) {
                if ( $wrap.selector !== "#" + elem.id ) {
                    return;
                }
                $wrap.html( $( elem ).html() );
            } );

        } );
    } );
Again, the code here is split into several numbered sections.
  • The first line listens for history changes performed via the History.js pushState() method and runs the attached function.
  • We retrieve the state changes, which contain various data such as the URL, the title, and an id.
  • Through the jQuery .get() method we retrieve the content from the given URL.
  • Lastly, we pick out the element with the id wrap from the retrieved content, and replace the current page content with it.
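The filtering in section 4 boils down to matching element ids against the wrapper's selector. As plain logic it might look like the following sketch (findWrap is a hypothetical helper, not part of the tutorial's code):

```javascript
// Given the top-level elements parsed from the fetched page, return the
// one whose id matches the wrapper selector (e.g. "#wrap"), or null.
function findWrap(elements, selector) {
    for (var i = 0; i < elements.length; i++) {
        if (selector === '#' + elements[i].id) {
            return elements[i];
        }
    }
    return null;
}
```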
Once it’s added, the content should now be updated when we click on the menu navigation. As mentioned, we are also able to move back and forth through visited pages using the browser’s Back and Forward buttons.


Our website is presentable at this point. However, we’d like to go a step further and add a little animation to bring the page to life, making the website feel more compelling.

Adding Animation and Transitions

Animation in this situation need only be simple, so we’ll write everything from scratch instead of loading animations through a library like Animate.css, ZURB’s Motion UI, or Effeckt.css. We’ll name the animation slideInUp, as follows:
    @keyframes slideInUp {
        from {
            transform: translate3d(0, 10px, 0);
            opacity: 0;
        }
        to {
            transform: translate3d(0, 0, 0);
            opacity: 1;
        }
    }
As the name implies, the animation slides the page content from bottom to top while fading in the element’s opacity. Apply the animation to the element that wraps the page’s main content, as follows.
    .section {
        animation-duration: .38s;
        animation-fill-mode: both;
        animation-name: slideInUp;
    }
The transition from one page to another should now feel smoother once the animation is applied. Here you may stop and call it a day! Our website is done and we are ready to deploy it for the world to see.

However, there is one more thing that you may need to consider adding, especially for those who want to monitor the number of visits and the visitors’ behavior on your website.

We need to add Google Analytics to track each page view.

Google Analytics

Since our pages are loaded asynchronously (except for the initial page load), tracking page views should also be done asynchronously.

To begin with, make sure you have the standard Google Analytics snippet added within the document head. The code usually looks something like the following:
    <script>
        (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
        (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
        m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
        })(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

        ga('create', 'UA-XXXXXX-XX', 'auto');
        ga('send', 'pageview');
    </script>
Then we need to adjust our JavaScript code to include the Google Analytics tracking code so that every page loaded asynchronously will also be measured as a page view.

We have several options: we can count a page view when the user clicks a navigation link, when the page title and URL change, or when the page content has been fully loaded.

We’ll opt for the last of these, which is arguably the most accurate, and in doing so we leverage the jQuery promise() method after we change the page content, as follows:
    $wrap.html( $( elem ).html() )
        .promise()
        .done( function( res ) {

            // Make sure the new content is added, and the 'ga()' method is available.
            if ( typeof ga === "function" && res.length !== 0 ) {
                ga('set', {
                    page: window.location.pathname,
                    title: state.title
                });
                ga('send', 'pageview');
            }
        });
That’s all there is to it; page views will now be recorded in Google Analytics.

Wrapping Up

In this tutorial we have improved a simple static website with the History Web API, making page transitions smoother, loads faster, and the overall experience better for our users. At the end of this tutorial, we also implemented Google Analytics to record page views asynchronously. Additionally, our website is perfectly crawlable by search engine bots since it is, as mentioned, just a simple HTML website.
Written by Thoriq Firdaus

If you found this post interesting, follow and support us.
Suggested for you:

Ultimate JavaScript Strings

Ultimate HTML5,CSS3 & JAVASCRIPT To Create Your Own Interractive Websites

Vue.JS Tutorials: Zero to Hero with VueJS JavaScript Framework

Learning ECMAScript 6: Moving to the New JavaScript

Closure Library: Build Complex JavaScript Applications

Sunday, August 14, 2016

Lovely, Smooth Page Transitions With the History Web API_part 1

In this tutorial we’re going to build a website with beautifully smooth transitioning pages, without the usual aggressive page refresh. Navigate through the pages in the demo to see what I mean.

To achieve this effect we’ll use the History Web API. In a nutshell, this API is used to alter the browser history. It allows us to load a new URL, change the page title, then at the same time record it as a new visit in the browser without having to actually load the page.

This sounds confusing, but it opens up a number of possibilities, such as being able to serve smoother page transitions and give a sense of speediness, which improves the user experience. You have probably already witnessed the History Web API in action on a number of websites and web applications, such as Trello, Quartz, and Privacy.

The rather abstract (and rather nice) Quartz website
Before we go any further, let’s first look into the particular API that we are going to deploy on the website.

The History Web API, in Brief

To access the Web History API, we first write window.history then follow this with one of the APIs; a method or a property. In this tutorial we’ll be focusing on the pushState() method, so:
    window.history.pushState( state, title, url );
As you can see from the above snippet, the pushState() method takes three parameters.
  1. The first parameter, state, should be an object containing arbitrary data. This data will then be accessible through window.history.state. In a real world application, we would pass data like a page ID, a URL, or serialized inputs derived from a form. 
  2. The last two parameters are title and url. These two change the URL and the document title in the browser, as well as record them as a new entry in the browser history. 
Let’s dissect the following example to better understand how the pushState() method works.
    (function( $ ){

        $( "a" ).on( "click", function( event ) {

            event.preventDefault();

            window.history.pushState( { ID: 9 }, "About - Acme", "about/" );
        } );

    })( jQuery );
In the above code, a link is attached with a click event which then deploys the pushState() method. As we click on the link, we expect the code to change the document title and the URL:

From top to bottom: Chrome, Firefox, Opera.
And it does; the screenshot shows the URL is changed to “about/” as defined in the pushState() method. And since the pushState() method creates a new record in the browser history, we are able to go back to the previous page through the browser’s Back button.

However, all the browsers in this example currently ignore the title parameter. You can see from the screenshot that the document title does not change to About - Acme as specified. Furthermore, calling the pushState() method also won’t trigger the popstate event, an event which is dispatched every time the history changes (and something we need!). There are a few discrepancies in how browsers handle this event, as stated on MDN:
“Browsers tend to handle the popstate event differently on page load. Chrome (prior to v34) and Safari always emit a popstate event on page load, but Firefox doesn’t.”
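When popstate does fire, the native handler receives the state object we stored earlier. A minimal sketch of such a listener might look like this (the onPopState name and its messages are illustrative, not part of the tutorial's code):

```javascript
// The 'popstate' handler receives the state object that was passed to
// pushState() for the entry being navigated to (or null for entries
// created without one, such as the initial page load).
function onPopState(event) {
    var state = event.state;
    return state ? 'Restoring page ' + state.ID : 'Initial page';
}

// Register it in the browser; guarded so the sketch also runs elsewhere.
if (typeof window !== 'undefined') {
    window.addEventListener('popstate', onPopState);
}
```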
We will need a library as a fallback to make the History Web API work consistently across browsers without any hurdles.

Meet History.js
Since the pushState() method does not work to its full potential, in this tutorial we are going to leverage History.js. As the name implies, this JavaScript library is a polyfill, replicating the native History API across different browsers. It also exposes a set of methods similar to the native APIs, albeit with a few differences.

As mentioned earlier, the browser native API is called through the history window object with the lowercase “h”, while the History.js API is accessed through History with the uppercase “H”. Given the previous example and assuming we have the history.js file loaded, we can revise the code, as follows (again, notice the uppercase “H”).
    window.History.pushState( {}, title, url );
I hope this brief explanation is easy to understand. Otherwise, here are some further references if you want to learn more about the History Web API.
  • History API
  • Manipulating the Browser History
  • An Introduction to the HTML5 History
Building Our Static Website

In this section we won’t discuss each step needed to build a static website in detail. Our website is plain and simple, as shown in the following screenshot:

The Website Homepage
You don’t have to create a website that looks exactly the same; you are free to add any content and create as many pages as you need. However, there are some particular points you need to consider regarding the HTML structure and the use of id and class attributes for some elements.
  1. Load jQuery and History.js within the document head. You may load them as project dependencies through Bower, or through a CDN like CDNJS or jsDelivr.
  2. Wrap the header, the content, and the footer in a div with the ID wrap: <div id="wrap"></div>
  3. There are a few navigation items on the website header and the footer. Each menu should be pointing to a page. Make sure the pages exist and have content.
  4. Each menu link is given a page-link class, which we will use to select these menus.
  5. Lastly, we give each link a title attribute which we’ll pass to pushState() to determine the document title.
Taking all this into account, our HTML markup will roughly look as follows:
    <head>
        <script src="jquery.js"></script>
        <script src="history.js"></script>
    </head>
    <body>
        <div id="wrap">
            <header>
                <nav>
                    <ul>
                        <li><a class="page-link" href="./" title="Acme">Home</a></li>
                        <li><a class="page-link" href="./about.html" title="About Us">About</a></li>
                        <!-- more menu -->
                    </ul>
                </nav>
            </header>
            <div>
                <!-- content is here -->
            </div>
            <footer>
                <nav>
                    <ul>
                        <li><a href="tos.html" class="page-link" title="Terms of Service">Terms</a></li>
                        <!-- more menu -->
                    </ul>
                </nav>
                <!-- this is the footer -->
            </footer>
        </div>
    </body>
When you are done building your static website, we can move on to the main section of this tutorial.
(continued in part 2)

If you found this post interesting, follow and support us.
Suggested for you:

Vue.JS Tutorials: Zero to Hero with VueJS JavaScript Framework

Learning ECMAScript 6: Moving to the New JavaScript

Closure Library: Build Complex JavaScript Applications

JavaScript Promises: Applications in ES6 and AngularJS

JavaScript For Absolute Beginners - Build Simple Projects

Friday, August 12, 2016

Introduction to Webpack: Part 2


In the previous tutorial we learned how to set up a Webpack project and how to use loaders to process our JavaScript. Where Webpack really shines, though, is in its ability to bundle other types of static assets such as CSS and images, and include them in our project only when they're required. Let's start by adding some styles to our page.

Style Loaders

First, create a normal CSS file in a styles directory. Call it main.css and add a style rule for the heading element.
    h2 {
        background: blue;
        color: yellow;
    }
So how do we get this stylesheet into our page? Well, like most things with Webpack, we'll need another loader. Two in fact: css-loader and style-loader. The first reads all the styles from our CSS files, whilst the other injects said styles into our HTML page. Install them like so:
    npm install style-loader css-loader
Next, we tell Webpack how to use them. In webpack.config.js, we need to add another object to the loaders array. In it we want to add a test to match only CSS files as well as specify which loaders to use.
    {
        test: /\.css$/,
        exclude: /node_modules/,
        loader: 'style!css'
    }
The interesting part of this code snippet is the 'style!css' line. Loaders are read from right to left, so this tells Webpack to first read the styles of any file ending in .css, and then inject those styles into our page.
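The right-to-left order can be pictured as ordinary function composition: 'style!css' applied to a source behaves like style(css(source)). The snippet below is only a conceptual sketch with stand-in functions, not Webpack's actual internals:

```javascript
// Stand-ins for the two loaders: css-loader reads the styles into a
// module object, style-loader wraps them for injection into the page.
function cssLoader(source) {
    return { styles: source };
}
function styleLoader(mod) {
    return '<style>' + mod.styles + '</style>';
}

// 'style!css' means: apply the css loader first, then the style loader.
var injected = styleLoader(cssLoader('h2 { color: yellow; }'));
```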

Because we've updated our configuration file, we'll need to restart the development server for our changes to be picked up. Use ctrl+c to stop the server and webpack-dev-server to start it again.

All we need to do now is require our stylesheet from within our main.js file. We do this in the same way as we would any other JavaScript module:
    const sayHello = require('./say-hello');

    require('./styles/main.css');

    sayHello('Guybrush', document.querySelector('h2'));
Note how we haven't even touched index.html. Open up your browser to see the page with a styled h2. Change the colour of the heading in your stylesheet to see it instantly update without a refresh. Lovely.

You've Got to Sass It

"But nobody uses CSS these days, Grandad! It's all about Sass". Of course it is. Luckily, Webpack has a loader to do just that. Install it along with the node version of Sass using:
    npm install sass-loader node-sass
Then update webpack.config.js:
    {
        test: /\.scss$/,
        exclude: /node_modules/,
        loader: 'style!css!sass'
    }
This is now saying that for any file ending with .scss, convert the Sass to plain CSS, read the styles from the CSS, and then insert the styles into the page. Remember to rename main.css to main.scss, and require the newly named file instead. First some Sass:
    $background: blue;

    h2 {
        background: $background;
        color: yellow;
    }
Then main.js:
    require('./styles/main.scss');
Super. It's as easy as that. No converting and saving files, or even watching folders. We just require in our Sass styles directly.

Images

"So images, loaders too I bet?" Of course! With images, we want to use the url-loader. This loader takes the relative URL of your image and updates the path so that it's correctly included in your file bundle. As per usual:
    npm install url-loader
Now, let's try something different in our webpack.config.js. Add another entry to the loaders array in the usual manner, but this time we want the regular expression to match images with different file extensions:
    {
        test: /\.(jpg|png|gif)$/,
        include: /images/,
        loader: 'url'
    }
Note the other difference here. We're not using the exclude key. Instead we're using include. This is more efficient, as it tells Webpack to ignore everything that doesn't match a folder called "images".

Usually you'll be using some sort of templating system to create your HTML views, but we're going to keep it basic and create an image tag in JavaScript the old-fashioned way. First create an image element, set the required image to the src attribute, and then add the element to the page.
    var imgElement = document.createElement('img');

    imgElement.src = require('./images/my-image.jpg');

    document.body.appendChild(imgElement);
Head back to your browser to see your image appear before your very eyes!

Preloaders

Another task commonly carried out during development is linting. Linting is a way of looking out for potential errors in your code along with checking that you've followed certain coding conventions. Things you may want to look for are "Have I used a variable without declaring it first?" or "Have I forgotten a semicolon at the end of a line?" By enforcing these rules, we can weed out silly bugs early on.

A popular tool for linting is JSHint. This looks at our code and highlights potential errors we've made. JSHint can be run manually at the command line, but that quickly becomes a chore during development. Ideally we'd like it to run automatically every time we save a file. Our Webpack server is already watching out for changes, so yes, you guessed it—another loader.

Install the jshint-loader in the usual way:
    npm install jshint-loader
Again we have to tell Webpack to use it by adding it to our webpack.config.js. However, this loader is slightly different. It's not actually transforming any code; it's just having a look. We also don't want all our heavier code-modifying loaders to run and fail just because we've forgotten a semicolon. This is where preloaders come in. A preloader is any loader we specify to run before our main tasks. They're added to our webpack.config.js in a similar way to loaders.
    module: {
        preLoaders: [
            {
                test: /\.js$/,
                exclude: /node_modules/,
                loader: 'jshint'
            }
        ],
        loaders: [
            ...
        ]
    }
Now our linting process runs and fails immediately if there's a problem detected. Before we restart our web server, we need to tell JSHint that we're using ES6, otherwise it will fail when it sees the const keyword we're using.

After the module key in our config, add another entry called "jshint" and a line to specify the version of JavaScript.
    module: {
        preLoaders: [
            ...
        ],
        loaders: [
            ...
        ]
    },
    jshint: {
        esversion: 6
    }
Save the file and restart webpack-dev-server. Running ok? Great. This means your code contains no errors. Let's introduce one by removing a semicolon from this line:
    var imgElement = document.createElement('img')
Again, save the file and look at the terminal. Now we get this:
    WARNING in ./main.js
    jshint results in errors
      Missing semicolon. @ line 7 char 47
Getting Ready for Production

Now that we're happy our code is in shape and it does everything we want it to, we need to get it ready for the real world. One of the most common things to do when putting your code live is to minify it, concatenating all your files into one and then compressing that into the smallest file possible. Before we continue, take a look at your current bundle.js. It's readable, has lots of whitespace, and is 32kb in size.

"Wait! Don't tell me. Another loader, right?" Nope! On this rare occasion, we don't need a loader. Webpack has minification built right in. Once you're happy with your code, simply run this command:
    webpack -p
The -p flag tells Webpack to get our code ready for production. As it generates the bundle, it optimises as much as it can. After running this command, open bundle.js and you'll see it's all been squashed together, and that even with such a small amount of code we've saved 10kb.

Summary

I hope that this two-part tutorial has given you enough confidence to use Webpack in your own projects. Remember, if there's something you want to do in your build process then it's very likely Webpack has a loader for it. All loaders are installed via npm, so have a look there to see if someone's already made what you need.
Written by Stuart Memo

If you found this post interesting, follow and support us.
Suggested for you:

JavaScript Tutorials: Understanding the Weird Parts

ES6 Javascript: The Complete Developer's Guide

Upgrade your JavaScript to ES6

JavaScript Promises: Applications in ES6 and AngularJS

JavaScript For Absolute Beginners - Build Simple Projects


Introduction to Webpack: Part 1

It's fairly standard practice these days when building a website to have some sort of build process in place to help carry out development tasks and prepare your files for a live environment.

You may use Grunt or Gulp for this, constructing a chain of transformations that allow you to throw your code in one end and get some minified CSS and JavaScript out at the other.

These tools are extremely popular and very useful. There is, however, another way of doing things, and that's to use Webpack.

What Is Webpack?

Webpack is what is known as a "module bundler". It takes JavaScript modules, understands their dependencies, and then concatenates them together in the most efficient way possible, spitting out a single JS file at the end. Nothing special, right? Things like RequireJS have been doing this for years.

Well, here's the twist. With Webpack, modules aren't restricted to JavaScript files. By using Loaders, Webpack understands that a JavaScript module may require a CSS file, and that CSS file may require an image. The outputted assets will only contain exactly what is needed with minimum fuss. Let's get set up so we can see this in action.

Installation

As with most development tools, you'll need Node.js installed before you can continue. Assuming you have this correctly set up, all you need to do to install Webpack is simply type the following at the command line.
    npm install webpack -g
This will install Webpack and allow you to run it from anywhere on your system. Next, make a new directory and inside create a basic HTML file like so:
    <!doctype html>
    <html>
        <head>
            <meta charset="utf-8">
            <title>Webpack fun</title>
        </head>
        <body>
            <h2></h2>
            <script src="bundle.js"></script>
        </body>
    </html>
The important part here is the reference to bundle.js, which is what Webpack will be making for us. Also note the H2 element—we'll be using that later.

Next, create two files in the same directory as your HTML file. Name the first main.js, which as you can imagine is the main entry point for our script. Then name the second say-hello.js. This is going to be a simple module that takes a person's name and a DOM element, and inserts a welcome message into said element.
    // say-hello.js

    module.exports = function (name, element) {
        element.textContent = 'Hello ' + name + '!';
    };
Now that we have a simple module, we can require this in and call it from main.js. This is as easy as doing:
    var sayHello = require('./say-hello');
    sayHello('Guybrush', document.querySelector('h2'));
Now if we were to open our HTML file then this message would obviously not be shown as we've not included main.js nor compiled the dependencies for the browser. What we need to do is get Webpack to look at main.js and see if it has any dependencies. If it does, it should compile them together and create a bundle.js file we can use in the browser.

Head back to the terminal to run Webpack. Simply type:
    webpack main.js bundle.js
The first file specified is the entry file we want Webpack to start looking for dependencies in. It will work out if any required files require any other files and will keep doing this until it's worked out all the necessary dependencies. Once done, it outputs the dependencies as a single concatenated file to bundle.js. If you press return, you should see something like this:
    Hash: 3d7d7339a68244b03c68
    Version: webpack 1.12.12
    Time: 55ms
        Asset     Size  Chunks             Chunk Names
    bundle.js  1.65 kB       0  [emitted]  main
       [0] ./main.js 90 bytes {0} [built]
       [1] ./say-hello.js 94 bytes {0} [built]
Now open index.html in your browser to see your page saying hello.

Configuration

It isn't much fun specifying our input and output files each time we run Webpack. Thankfully, Webpack allows us to use a config file to save us the trouble. Create a file called webpack.config.js in the root of your project with the following contents:
    module.exports = {
        entry: './main.js',
        output: {
            filename: 'bundle.js'
        }
    };
 Now we can just type webpack and nothing else to achieve the same results.

Development Server

What's that? You can't even be bothered to type webpack every time you change a file? I don't know, today's generation etc, etc. Ok, let's set up a little development server to make things even more efficient. At the terminal, write:
    npm install webpack-dev-server -g
Then run the command webpack-dev-server. This will start a simple web server running, using the current directory as the place to serve files from. Open a new browser window and visit http://localhost:8080/webpack-dev-server/. If all is well, you'll see something along the lines of this:


Now, not only do we have a nice little web server here, we have one that watches our code for changes. If it sees we've altered a file, it will automatically run the webpack command to bundle our code and refresh the page without us doing a single thing. All with zero configuration.

Try it out, change the name passed to the sayHello function, and switch back to the browser to see your change applied instantly.

Loaders

One of the most important features of Webpack is Loaders. Loaders are analogous to "tasks" in Grunt and Gulp. They essentially take files and transform them in some way before they are included in our bundled code.

Say we wanted to use some of the niceties of ES2015 in our code. ES2015 is a new version of JavaScript that isn't supported in all browsers, so we need to use a loader to transform our ES2015 code into plain old ES5 code that is supported. To do this, we use the popular Babel Loader which, according to the instructions, we install like this:
    npm install babel-loader babel-core babel-preset-es2015 --save-dev
This installs not only the loader itself but its dependencies and an ES2015 preset as Babel needs to know what type of JavaScript it is converting.

Now that the loader is installed, we just need to tell Webpack to use it. Update webpack.config.js so it looks like this:
    module.exports = {
        entry: './main.js',
        output: {
            filename: 'bundle.js'
        },
        module: {
            loaders: [
                {
                    test: /\.js$/,
                    exclude: /node_modules/,
                    loader: 'babel',
                    query: {
                        presets: ['es2015']
                    }
                }
            ]
        }
    };
There are a few important things to note here. The first is the line test: /\.js$/, a regular expression which tells Webpack to apply this loader to all files with a .js extension. Similarly, exclude: /node_modules/ tells Webpack to ignore the node_modules directory. loader and query are fairly self-explanatory: use the Babel loader with the ES2015 preset.
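A quick way to convince yourself what the test pattern matches (the file paths below are made up for illustration):

```javascript
var jsPattern = /\.js$/;

var hitsJs  = jsPattern.test('src/main.js');     // matches: the path ends in .js
var hitsCss = jsPattern.test('styles/main.css'); // no match: it ends in .css
```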

Restart your web server by pressing ctrl+c and running webpack-dev-server again. All we need to do now is use some ES6 code in order to test the transform. How about we change our sayHello variable to be a constant?
    const sayHello = require('./say-hello')
After saving, Webpack should have automatically recompiled your code and refreshed your browser. Hopefully you'll see no change whatsoever. Take a peek in bundle.js and see if you can find the const keyword. If Webpack and Babel have done their jobs, you won't see it anywhere—just plain old JavaScript.

On to Part 2

In Part 2 of this tutorial, we'll look at using Webpack to add CSS and images to your page, as well as getting your site ready for deployment.
Written by Stuart Memo

If you found this post interesting, follow and support us.
Suggested for you:

JavaScript for Beginners

JavaScript Bootcamp - 2016

JavaScript Tutorials: Understanding the Weird Parts

Wednesday, August 10, 2016

Completing Our Draggable Off-Canvas Menu with GreenSock_part2 (end)

What You'll Be Creating
The JavaScript

JavaScript is the last stop of this draggable menu journey, but before we write one line of JS we’ll need to write a module pattern setup.
    var dragaebelMenu = (function() {
      function doSomething() {…}
      return {
        init: function() {…}
      }
    })();
    dragaebelMenu.init(); // start it!
Variables

For the configuration setup we’ll define some variables for future reference.
var dragaebelMenu = (function() {
  var container   = document.querySelectorAll('.js-dragsurface')[0],
      nav         = document.querySelectorAll('.js-dragnav')[0],
      nav_trigger = document.querySelectorAll('.js-dragtoggle')[0],
      logo        = document.querySelectorAll('.js-draglogo')[0],
      gs_targets  = [ container, nav, logo, nav_trigger ],
      closed_nav  = nav.offsetWidth + getScrollBarWidth();
})();
Most of these variables simply grab DOM elements, with the exception of the last two, which define our GreenSock targets and the width of the navigation menu. The utility function getScrollBarWidth() (outside the scope of this discussion) retrieves the width of the scroll bar so we can position the nav just beyond it, keeping it visible when the menu opens. The targets are the elements we move when the menu opens, allowing adjacent content to be pushed.

Methods

To keep things short, I'll only discuss the methods that are essential to the menu's behavior. Everything else you'll see in the demo that isn't discussed here is the "sugar on top" that makes the menu even more powerful.
function menu(duration) {
  container._gsTransform.x === -closed_nav ?
    TweenMax.to(gs_targets, duration, { x: 0, ease: Linear.easeIn }) :
    TweenMax.to(gs_targets, duration, { x: -closed_nav, ease: Linear.easeOut });
}
The menu function detects whether the container's x coordinate equals the closed_nav offset. If so, it tweens the targets back to their starting position; otherwise it tweens them to their open position.
function isOpen() {
  return container._gsTransform.x < 0;
}
This is a utility function to check the menu's state. It returns true when the container's x coordinate is negative (the menu is open), and false when it's back at zero (the menu is closed).
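Since _gsTransform only exists at runtime once GreenSock has animated an element, here's the same check exercised against a plain stand-in object (the -197 width value is hypothetical):

```javascript
// Stand-in for the container element; in the real demo GreenSock
// attaches and maintains the _gsTransform object itself.
var container = { _gsTransform: { x: 0 } };

function isOpen() {
  return container._gsTransform.x < 0;
}

console.log(isOpen()); // false (x is 0, menu closed)
container._gsTransform.x = -197; // dragged open (hypothetical width)
console.log(isOpen()); // true
```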
function updateNav(event) {
  TweenMax.set([nav, logo, nav_trigger], { x: container._gsTransform.x });
}
This is another utility function; it sets the x coordinate of each target in the array passed to the .set() method to the container's x position every time the onDrag or onThrowUpdate event fires. Both callbacks are part of the Draggable object instance.
function enableSelect() {
  container.onselectstart = null; // Fires when the object is being selected.
  TweenMax.set(container, { userSelect: 'text' });
}
function disableSelect() {
  TweenMax.set(container, { userSelect: 'none' });
}
function isSelecting() {
  // window.getSelection: Returns a Selection object representing
  // the range of text selected by the user or the current position
  // of the caret.
  return !!window.getSelection().toString().length;
}
These functions help determine whether someone is really selecting text, so we can enable or disable selection capabilities when someone drags across the screen. This is not the most ideal behavior for mouse events but, as we already mentioned, you can't detect a touch screen.
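The double-bang in isSelecting() deserves a quick note: it coerces the selection string's length into a proper boolean:

```javascript
// Empty selection: length 0 is falsy, so !! yields an explicit false.
console.log(!!''.length);     // false
// Non-empty selection: any positive length becomes true.
console.log(!!'text'.length); // true
```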

Draggable Instance

Draggable.create([targets], {options})
As we discussed in the previous tutorial about Draggable, this creates an instance of the Draggable object, targeting the DOM elements of our choice, which can be passed as an array.
Draggable.create([container], {
  type: 'x',
  dragClickables: false,
  throwProps: true,
  dragResistance: 0.025,
  edgeResistance: 0.99999999,
  maxDuration: 0.25,
  throwResistance: 2000,
  cursor: 'resize',
  allowEventDefault: true,
  bounds: {…},
  onDrag: updateNav,
  onDragEnd: function(event) {…},
  liveSnap: function(value) {…},
  onPress: function(event) {…},
  onClick: function(event) {…},
  onThrowUpdate: function() {…}
});
This is our entire Draggable instance and the properties used. The actual demo code contains comments I've left in so you can gain a better perspective on what each property is responsible for. I encourage you to look into the demo code, and I even challenge you to deconstruct the why and the how.
Written  by Dennis Gaebel



Monday, August 8, 2016

Completing Our Draggable Off-Canvas Menu with GreenSock_part1

What You'll Be Creating
In the first part of this Draggable journey, we discussed how to include the scripts and investigated the ThrowPropsPlugin, covering the requirements to jump-start our project in hopes of taking it to eleven! Now get ready to make an off-canvas menu system that reacts to keyboard and touch.

The Demo
The full demo that we’ll be building and discussing for the remainder of this tutorial is also available on CodePen.

I encourage you to test this for yourself across as many devices as possible, especially with keyboard navigation. Each interaction, whether touch, keyboard, or mouse, has been accounted for; but as you'll find in our current landscape, you can't detect a touchscreen, and at times trying to do so even results in false positives.

The Setup

Using the markup from part one, we'll begin by adding a container div for structural purposes, along with correlating classes for CSS and JavaScript hooks.
<div class="app">
  <header class="dragaebel-lcontainer" role="banner">
    <a href="/" class="js-draglogo">…</a>
    <a href="#menu" class="dragaebel-toggle js-dragtoggle" id="menu-button">…</a>
    <nav class="dragaebel-nav js-dragnav" id="menu" role="navigation">…</nav>
  </header>
  <main role="main">
    <div class="dragaebel-lcontainer js-dragsurface"></div>
  </main>
</div>
Classes that begin with the "js" prefix signify that these classes are used only by JavaScript; removing them would break functionality. They're never used in CSS, which helps to separate concerns. The surrounding container will help control scrolling behavior, discussed in the upcoming CSS section.

Accessibility

With the foundation in place, it's time to add a layer of ARIA on top to lend semantic meaning for screen readers and keyboard users.
<nav aria-hidden="true">…</nav>
Since the menu is hidden by default, the aria-hidden attribute starts out as true and will be updated according to the menu's state: false for open, true for closed. Here's an explanation of the aria-hidden attribute per the W3C specification:
Indicates that the element and all of its descendants are not visible or perceivable to any user as implemented by the author. […] Authors MUST set aria-hidden="true" on content that is not displayed, regardless of the mechanism used to hide it. This allows assistive technologies or user agents to properly skip hidden elements in the document. ~W3C WAI-ARIA Spec
Authors should be careful about what content they hide, which makes this attribute a separate discussion outside the scope of this article. For those curious, the specification defines the attribute at further length and is somewhat grokkable; something I don't often say about specification jargon.
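As a quick sketch of how a script might keep aria-hidden in sync with the menu's state (using a plain stand-in object here, since in the browser you'd call setAttribute on the real nav node):

```javascript
// Stand-in for the <nav> element; in the browser, navEl would be
// document.querySelector('.js-dragnav') and attrs its real attributes.
var navEl = {
  attrs: { 'aria-hidden': 'true' },
  setAttribute: function (name, value) { this.attrs[name] = value; }
};

// aria-hidden is the inverse of the open state: false when open.
function syncAria(el, open) {
  el.setAttribute('aria-hidden', String(!open));
}

syncAria(navEl, true);
console.log(navEl.attrs['aria-hidden']); // 'false'
syncAria(navEl, false);
console.log(navEl.attrs['aria-hidden']); // 'true'
```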

The CSS

Our CSS is where the magic really begins. Let's take the important parts of the demo and break them down.
body {
  /* scroll fix */
  height: 100%;
  overflow: hidden;
  /* end scroll fix */
}
.app {
  /* scroll fix */
  overflow-y: scroll;
  height: 100vh;
  /* end scroll fix */
}
.dragaebel-nav {
  height: 100vh;
  overflow-y: auto;
  position: fixed;
  top: 0;
  right: 0;
}
Setting the body height to 100% allows the container to stretch the entire viewport, but it also plays a more important part: it allows us to hide the body's overflow.

The overflow scroll fix helps control how the primary container and navigation behave when either one contains overflowing content. For example, if the container (or the menu) is scrolled, the other will not scroll when the user reaches the end of the initially scrolled element. It's a weird behavior, not typically discussed, but it makes for a better user experience.

Viewport Units

Viewport units are really powerful and play a vital role in how the primary container holds overflowing content. They have wonderful support across browsers these days, and I highly suggest you start using them. I've used vh units on the nav, but I could have used a percentage instead. During development it was discovered that div.app must use vh units, since a percentage won't allow the overflowing content to maintain typical scrolling behavior; the content ends up being clipped. Overflow is set to scroll in case the menu items exceed the height of the menu, or the viewport becomes too short.
/* Allow nav to open when JS fails */
.no-js .dragaebel-nav:target {
  margin-right: 0;
}
.dragaebel-nav {
  margin-right: -180px;
  width: 180px;
}
The .no-js .dragaebel-nav:target rule provides access to our menu regardless of whether JavaScript fails or is turned off, hence the reason we added the ID value to the href attribute of the menu trigger.

The primary navigation is moved off-canvas to the right via a negative margin equal to the nav's width. For the sake of brevity I'm writing vanilla CSS, but I'm sure you could write something fancier in the pre-processor of your choice.
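One common companion pattern (an assumption here, not shown in the tutorial) is swapping the no-js class for js as early as possible during page load, so the :target fallback only applies when JavaScript is absent:

```javascript
// Stand-in for document.documentElement; in the browser you would
// mutate the real <html> element's className instead.
var docEl = { className: 'no-js' };

// Runs only when JS executes, so the .no-js fallback styles drop away.
docEl.className = docEl.className.replace('no-js', 'js');
console.log(docEl.className); // 'js'
```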
Written by Dennis Gaebel


How to Use Jscrambler 4 to Protect Your Application's Integrity_part 2 (end)

2. Make Your Application Protect Itself

So far, we have talked about ways to make sure your application works as expected by preventing both the user and outsiders from modifying your source code. But that's not the only threat to your application you need to worry about.

As JavaScript has become popular in standalone and mobile applications, piracy has become a real issue in this realm as well.

I'm sure we've all clicked "View Source" to learn how to do a cool new effect or trick, and adapted what we've learned to our websites. That's not what I'm talking about here.

Instead, let's assume you created a popular mobile game in HTML5 and JavaScript. Thanks to JavaScript's portability, you can use the same code on Android, iOS, and other platforms, and reach a larger audience without the extra work of writing native versions for every environment. But there's a problem: anyone can copy your JavaScript code, make a few modifications, and sell it as their own!

The transformations in the previous step help prevent this by making the code hard to read and reverse-engineer. In addition to them, Jscrambler adds code traps: functionality that makes the code protect itself.

Let's take a look at some of them.

In Jscrambler, in the list of transformations, you'll find a section titled Code Locks.


The transformations in this section add yet another layer of security to your JavaScript application.

Using them, you can limit the execution of your code to a given set of browsers, a time frame (useful for demos that shouldn't be runnable after the preview period is over), on a given domain (usually yours), and a particular operating system.

One more powerful feature for protecting your code is Client-side RASP (Runtime Application Self-Protection). It modifies your JavaScript code, making it defend itself from runtime tampering. For example, the application will stop working if anyone tries to open the debugger.

Finally, in the Optimization section, select Minification to minify the code and make the file smaller.


Then, click on Protect App, followed by Download App to download the source code and use it in your application.


Jscrambler keeps track of your transformation history, so you can always go back to an earlier version.

3. Use Templates to Make Your Work Easier

As we have seen, configuring your protection level by selecting checkboxes one by one is straightforward. But you can still speed up your workflow by grouping your common combinations of transformations into templates.

Once you have found a set of transformations that you'd like to store for future use, click Create Template at the bottom of the Application Settings section.

Jscrambler will prompt you to give your template a name and description.


Type in something descriptive for the future, and click on Save Template.

You can also use one of the templates already available on the Your Templates tab:


Move your mouse pointer over the template names to read more about them and understand when they make sense. Then click on them to see which transformations they apply to your code.

4. Make Jscrambler a Part of Your Development Workflow

So far, we have covered some ways in which Jscrambler can help you protect your application using the web UI. While the interface is intuitive, as your application grows you'll want something more streamlined.

Also, as I mentioned earlier, Jscrambler is polymorphic, generating a different output every time. So, it's useful to run the tool again now and then, even if there are no changes to the specific files.

To do this, let's look at Jscrambler's command-line tool.

First, download and install the command-line tool using npm. On your command line, type:
sudo npm install -g jscrambler
Once the installation completes, go back to your applications in the Jscrambler admin.

Notice that to use the command-line tool, you'll need to use an actual application instead of the playground script. So, if you don't have an application yet, create one now.

After selecting the transformations you want to apply, click on the download button at the top right corner of the page. This will download your settings to your computer as a JSON file.

Copy the JSON file to your project. Don't commit it to version control as the file contains your API key and secret for using the Jscrambler API.

Then, run the jscrambler command to execute the transformations on your JavaScript files.

For example, if you only have one file, test.js, you can run the following command:
$ jscrambler -c jscrambler.json -o test.protected.js test.js
In the command, you pass in the JSON file containing your application settings using the -c parameter, followed by the output file (-o test.protected.js), and finally the JavaScript file to protect.

To run the protection for all JavaScript files in the project directory, you can use something like this:
$ jscrambler -c jscrambler.json -o protected/ **/*.js
In this example, instead of an output file, you define a directory (protected) where Jscrambler will place the results of the protection.

Now, you don't have to go back to Jscrambler's web UI every time you make changes to your JavaScript files. This will make it more likely that you'll remember the step, and thus keep your application secure.

As a further improvement, you could also set up the Jscrambler task to run whenever there are changes in your scripts, for example using Grunt. Or it could even be a task on your continuous integration server, running whenever a new version of the code is committed.
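As one hypothetical wiring (the script names and the bundle step are assumptions, not from the article), an npm script in package.json could run the protection as part of every build:

```json
{
  "scripts": {
    "protect": "jscrambler -c jscrambler.json -o protected/ **/*.js",
    "build": "npm run protect && npm run bundle"
  }
}
```

With this in place, `npm run build` re-protects the sources automatically, which also takes advantage of Jscrambler's polymorphic output on every run.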

Conclusion

Jscrambler provides cutting-edge tools to make it hard for crackers, cheaters, and malware to inject unwanted functionality to your application or change your application's variables. It also makes it hard for others to copy your code and redistribute it as their own.

Security is a complex field with multiple different threats to consider. So, when your JavaScript code talks to a server, the best solution is to use a combination of server development best practices, robust parameter validation, and a JavaScript security platform such as Jscrambler to make the client tamper-proof. This way, you can prevent most of the attacks against your application's integrity already in the client and make sure your customers get the experience you designed for them to have.
Written by Jarkko Laine
