Holt Code

Saturday, October 29, 2016
Custom color theme when using Angular CLI and Materialize CSS

/* This must be included to use the included variables and the color function */
@import "../node_modules/materialize-css/sass/components/color";

$primary-color: color("indigo", "base");
$secondary-color: color("blue", "lighten-1");

/* This is needed so the materialize.scss file can load the font */
$roboto-font-path: "../node_modules/materialize-css/fonts/roboto/";

@import "../node_modules/materialize-css/sass/materialize";

By convention, these colors should be moved into a _variables.scss file which is included in styles.scss, but just do whatever feels right, man. I've sent a pull request to the git repo currently linked from the angular2-materialize setup instructions for the latest WebPack version of Angular CLI, so fingers crossed that this little gotcha will not catch many more people.

Thursday, March 10, 2016
Static website generation with Hugo & Gulp
It has again been a while since I posted. I've been working on some projects, but nothing I really felt like blogging about until now. One of the projects I was working on introduced me to Hugo, a static website generation engine that takes metadata and templates and generates a set of HTML pages from them, ready to be uploaded to a simple web server such as S3. On another project, I used Gulp for the first time to automate some repetitive build tasks such as minification, and also to run BrowserSync so that I could make changes and have them appear automatically in-browser after I saved them. For my latest project, I wanted to combine the two, so that I can generate a Hugo site and then post-process it, and have that processed site appear in-browser whenever I make any changes. I'm happy with how this has turned out, and since I didn't see any blog posts about how to do this, I figured it was a good time for a post.
The Goal
By the end of this post, what we will have is:
- Hugo generates index and content pages based on a simple template and some simple content, and regenerates them whenever the content or template changes.
- ES6 code is automatically transpiled to JS.
- JS, HTML, and CSS are automatically minified when building for production.
- A browser can be run that automatically refreshes whenever any of the HTML, JS, or CSS changes (and is automatically processed).
The project is up on GitHub if you want to see the finished product, but I'll spend the rest of the blog post going through the process step-by-step.
Hugo
The first step is to install and set up Hugo. They have a great quickstart guide on their site, so it's best to just go through that if you haven't already. What you should end up with is a basic site with an about page and a post, and you should be able to spin up a simple server that watches for changes and automatically updates the site. However, we want to go a step further and process the output with Gulp, so we won't be using the Hugo server. We do still want Hugo to watch for changes and generate the updated site, and for that we use the command:
hugo -w -s .\hugo-site -d ..\hugo-generated --disableRSS
This will (-w)atch the (-s)ource folder, which I put into a hugo-site subfolder, and generate it into the (-d)estination of another top-level subfolder hugo-generated. I've also disabled the generation of RSS feeds and haven't used a theme since I used a simple layout instead, but you can add or change these options as needed.
It's a bit annoying to have to remember this command, and we're going to be using npm anyway, so we might as well make a package.json file and add the command as a script; then we just need to remember "npm run hugo".
{ "private": true, "engines": { "node": ">=0.12.0" }, "scripts": { "hugo": "hugo -w -s .\\hugo-site -d ..\\hugo-generated --disableRSS" } }
Gulp First Steps
Now that we have a Hugo generated folder, what we want to do is use Gulp to pick up any files from there, transform them as needed, and output them to a final distribution folder I'll name gulp-dist. It might be possible to instead make a Gulp task to run Hugo and do everything in one go, but this way is simpler and it lets me see exactly what both Hugo and Gulp are doing in case of any problems.
The first thing we'll get Gulp to do is to simply copy the hugo files to the new gulp-dist folder, and make some helper scripts to clean up the gulp-dist and hugo-generated folders. We'll clean the gulp-dist folder before each build, but leave the cleaning of hugo-generated to be on-demand since we would need to manually trigger a Hugo generation after we do that. So what we need to do is install Gulp and the del library using npm.
npm install --save-dev gulp del
And then make a small gulpfile.js with the build and clean tasks.
var gulp = require('gulp');
var del = require('del');

var hugoBase = './hugo-generated';
var distBase = './gulp-dist';

gulp.task('clean', function() {
  del([distBase + '/**/*']);
});

gulp.task('clean-hugo', function() {
  del([hugoBase + '/**/*']);
});

gulp.task('build', function() {
  return gulp.src(hugoBase + '/**/*')
    .pipe(gulp.dest(distBase));
});

gulp.task('default', ['clean'], function() {
  gulp.start('build');
});
I won't spend too much time explaining how the Gulp tasks work - hopefully they're clear enough by themselves, but if not you can dig into the Gulp docs. Basically though, the clean tasks just delete files, the build task just copies files, and the default task first runs clean and then build. This means we can just run "gulp" instead of "gulp clean" followed by "gulp build".
BrowserSync
Now let's get Gulp to do something useful - let's make a task that will open a browser and automatically copy the source files and then refresh the browser whenever any changes are made. To do this, we need to use BrowserSync and file watchers.
npm install --save-dev browser-sync
// ...
var browserSync = require('browser-sync').create();

gulp.task('serve', ['build'], function() {
  browserSync.init({
    notify: false,
    server: {
      baseDir: distBase
    },
    reloadDelay: 1000,
    reloadDebounce: 1000
  });

  gulp.watch(hugoBase + "/**/*", ['build']);
  gulp.watch(distBase + "/**/*").on('change', browserSync.reload);
});
// ...
When we run "gulp serve", a browser should be automatically opened, and if we still have the Hugo generator running in the background, we can make changes to the Hugo content, see Hugo generate that into hugo-generated, then see Gulp copy that to gulp-dist and refresh the browser. Magical! The BrowserSync options we are using are notify, which removes an annoying popup on the browser, server, which says where to serve from, and reloadDelay/reloadDebounce, which helps to avoid multiple refreshes when Hugo regenerates all the files. The watcher on hugo-generated tells Gulp to re-build whenever that changes, and the watcher on gulp-dist tells BrowserSync to refresh the browser whenever that changes.
ES6 Transpiling
The next thing I wanted to do was to have Gulp convert ES6 code into plain JavaScript, so that I can use the new features but the code continues to work on all browsers. To do this we'll use Babel, and before writing any new code, we can change the gulpfile itself to ES6. So install babel and the ES6 (a.k.a. ES2015) preset:
npm install --save-dev babel-core babel-preset-es2015
Make a simple .babelrc file to tell Babel we want to transpile ES6.
{ "presets": ["es2015"] }
And then rename gulpfile.js to gulpfile.babel.js and convert it to ES6.
'use strict';

import gulp from 'gulp';
import del from 'del';
import bs from 'browser-sync';

let browserSync = bs.create();

let hugoBase = './hugo-generated';
let distBase = './gulp-dist';

gulp.task('clean', () => {
  del([distBase + '/**/*']);
});

gulp.task('clean-hugo', () => {
  del([hugoBase + '/**/*']);
});

gulp.task('build', () => {
  return gulp.src(hugoBase + '/**/*')
    .pipe(gulp.dest(distBase));
});

gulp.task('serve', ['build'], () => {
  browserSync.init({
    notify: false,
    server: {
      baseDir: distBase
    },
    reloadDelay: 1000,
    reloadDebounce: 1000
  });

  gulp.watch(hugoBase + "/**/*", ['build']);
  gulp.watch(distBase + "/**/*").on('change', browserSync.reload);
});

gulp.task('default', ['clean'], () => {
  gulp.start('build');
});
Hopefully everything is working exactly as it was before, but now we're in a better position to write some ES6 and both transpile that to JS and generate sourcemaps at the same time. We'll make a simple JS file under hugo-generated/scripts and add that to the layouts so it's automatically included in all generated HTML.
let test = "testing"; setTimeout(() => console.log(test), 1000);
Now we need to install the Gulp plugins that will allow us to transpile and generate sourcemaps.
npm install --save-dev gulp-sourcemaps gulp-babel
And update the gulpfile to do the transpiling and sourcemap generation. At the same time we'll make the file selectors more specific so we can tell Gulp to process each type of file differently.
// ...
import sourcemaps from 'gulp-sourcemaps';
import babel from 'gulp-babel';

gulp.task('html', () => {
  return gulp.src(hugoBase + '/**/*.html')
    .pipe(gulp.dest(distBase));
});

gulp.task('extras', () => {
  return gulp.src(hugoBase + '/sitemap.xml')
    .pipe(gulp.dest(distBase));
});

gulp.task('styles', () => {
  return gulp.src(hugoBase + '/styles/**/*.css')
    .pipe(gulp.dest(distBase + '/styles'));
});

gulp.task('scripts', () => {
  return gulp.src(hugoBase + '/scripts/**/*.js')
    .pipe(sourcemaps.init())
    .pipe(babel())
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest(distBase + '/scripts'));
});

gulp.task('build', ['html', 'scripts', 'styles', 'extras']);

gulp.task('serve', ['build'], () => {
  browserSync.init({ ... });

  gulp.watch(hugoBase + "/**/*.html", ['html']);
  gulp.watch(hugoBase + "/scripts/**/*.js", ['scripts']);
  gulp.watch(hugoBase + "/styles/**/*.css", ['styles']);
  gulp.watch([distBase + "/**/*.html", distBase + "/scripts/**/*.js", distBase + "/styles/**/*.css"]).on('change', browserSync.reload);
});
// ...
And that's all we need to do! You should now be able to see Gulp taking the ES6 code from hugo-generated and using it to create JS and mapping files in the gulp-dist folder.
Linting
Another thing that's useful to have in the world of JavaScript is linting, which gives you a bit more confidence that the code you're writing is good quality and free of any obvious bugs. It's also very easy to set up with Gulp. We'll be using ESLint, and all we need to do is install it and create a simple Gulp task.
npm install --save-dev gulp-eslint
// ...
import eslint from 'gulp-eslint';

let lintOptions = {
  extends: 'eslint:recommended',
  rules: {
    quotes: [2, "single"],
    "no-console": 0
  },
  env: {
    "es6": true,
    "browser": true
  }
};

gulp.task('lint', () => {
  return gulp.src([hugoBase + '/scripts/**/*.js'])
    .pipe(eslint(lintOptions))
    .pipe(eslint.format())
    .pipe(eslint.failAfterError());
});
You can leave this task as something completely separate to run whenever you want, make it a prerequisite for the build task, or even have separate tasks for development and production lint options. I just have it alongside the clean task and run it as part of the default Gulp task, before doing any other processing.
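That wiring is a one-line change to the default task from earlier (a minimal sketch; in Gulp 3.x, dependency tasks like clean and lint run in parallel before the task body):

gulp.task('default', ['clean', 'lint'], () => {
  gulp.start('build');
});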
Minification
The last thing I wanted to do was to have an option to build everything minified, so that it was ready to be uploaded to my webserver. The way I decided to do this was just to have an option I could pass to the "gulp" or "gulp build" commands which specified that I was building for production, while leaving everything else as-is. This means that when I'm developing, everything will stay un-minified, and I'll only do the minification when I'm ready to deploy. To do this, we'll need a few different Gulp plugins:
- gulp-util allows us to access arguments we pass to the gulp commands (we'll be using --production only)
- gulp-if allows us to perform processing only if that argument is passed
- gulp-uglify minifies JavaScript
- gulp-cssnano minifies CSS
- gulp-htmlmin minifies HTML
As with most Gulp code in this post, doing this is fairly simple and self-explanatory.
npm install --save-dev gulp-util gulp-if gulp-uglify gulp-cssnano gulp-htmlmin
// ...
import util from 'gulp-util';
import gulpif from 'gulp-if';
import uglify from 'gulp-uglify';
import cssnano from 'gulp-cssnano';
import htmlmin from 'gulp-htmlmin';

gulp.task('html', () => {
  return gulp.src(hugoBase + '/**/*.html')
    .pipe(gulpif(util.env.production, htmlmin({collapseWhitespace: true})))
    .pipe(gulp.dest(distBase));
});

gulp.task('scripts', () => {
  return gulp.src(hugoBase + '/scripts/**/*.js')
    .pipe(sourcemaps.init())
    .pipe(babel())
    .pipe(gulpif(!util.env.production, sourcemaps.write('.')))
    .pipe(gulpif(util.env.production, uglify()))
    .pipe(gulp.dest(distBase + '/scripts'));
});

gulp.task('styles', () => {
  return gulp.src(hugoBase + '/styles/**/*.css')
    .pipe(gulpif(util.env.production, cssnano()))
    .pipe(gulp.dest(distBase + '/styles'));
});
// ...
Now all we need to do is run "gulp --production" and our dist folder will be cleaned and then populated with fully minified JS/HTML/CSS!
Conclusion
And that's it! We have done everything we set out to do, and it was pretty easy. Gulp is pretty great like that. Hopefully this post has helped someone out there, and remember that the full code is available on GitHub in case anyone wants to use it as a starting point.
Sunday, September 21, 2014
Long Time No Post
Firstly, the thing that's been taking up most of my spare programming time has been a rewrite of Scrobble Along. I decided to rewrite it mainly because the old code was written in an old version of TypeScript in Visual Studio, and after I installed the new TypeScript compiler my Visual Studio code stopped compiling. Rather than try to port over the TypeScript, I started making hotfixes to the generated JavaScript, and everything got really messy really quickly. I also really didn't like that the scraping part, which gets all the tracks that are playing on the various radio stations, was mixed in with the frontend website code. And of course all programmers love starting a project from scratch, and I was pretty keen to work on another project that could use AngularJS.

Now I have two separate codebases. The scraper runs on a Linode server I have; it just loads all the stations about every 30 seconds, grabs the tracks from their various HTML and JSON sources, then scrobbles the tracks to the station's profile and to anyone who is scrobbling along. The frontend website runs on Heroku, and all it does is load information about the stations and add records to my database when someone starts or stops scrobbling along. It's much saner code now, and I still really love AngularJS, so I'm much happier about the project.

I did have to make one other major change: the website now pulls the latest track details from my database rather than trying to load them from the last.fm API. For some reason, when I rewrote the code it started taking a whole lot longer to load all the station details from last.fm, so it was taking a good 30 seconds for everything to show up. Right now it's loading all profile images, recent tracks, etc. from my database, so it's a whole lot faster than it used to be.

I've got the code on two separate github repos, but I made a simple repo that has both of them as submodules here, so feel free to check out the code for more details.
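As a rough sketch of the scraper's main loop (all names here are hypothetical - the real code is in those repos):

// Every ~30 seconds, poll each station, extract the now-playing track,
// and scrobble it to the station's profile.
interface Station {
  name: string;
  // Each station knows how to parse its own HTML or JSON source.
  fetchNowPlaying(callback: (err: Error, track?: string) => void): void;
}

declare var stations: Station[];
declare function scrobble(stationName: string, track: string): void;

setInterval(function () {
  stations.forEach(function (station) {
    station.fetchNowPlaying(function (err, track) {
      if (err || !track) return;
      scrobble(station.name, track); // station profile
      // ...and to anyone who is scrobbling along (omitted here)
    });
  });
}, 30 * 1000); // about every 30 seconds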
The second thing I've been spending some time on has been updates to Trashbot. I've been experimenting with ways to combat bots that just take everything as soon as it is dumped in. My plan was to limit each account to taking something like 10 items per day, but that would be impossible until I was able to quickly look up the trade details. As a first step I wanted to record all the available details in MongoDB, across three collections. The first would be a summary of each user, containing the total number of trade requests, items taken and donated, friend requests, etc. The second would be granular details for each trade, including the item, time, user, etc. The third would be a record of the number of trades each user had made on each day and how many items they took and gave. Once that was set up, I was going to look up the daily trades before accepting anything, and if a user had already taken too much, the bot would refuse the trade.

I've got the details being saved to MongoDB now, but I'm 90% sure I'm not actually going to refuse trades; when I look at the details, there really are not many individuals who are taking the majority of the items. I think just occasionally looking through the details and banning a few people is the best option - any systematic method I come up with will just end up as an arms race against any "takerbots" that are out there.

It was an interesting process to get all the details recorded, though. I ended up writing a simple REST server that updates MongoDB when various POST requests are sent to it, e.g. something like /trade/userid/tradeid/itemid/taken, which updates all the collections with one more trade item taken by a particular user on a particular day. Doing it this way meant that I was able to record the trade from both the bot which is accepting trade requests and the CasperJS script that is accepting the trade offers. Again, the code is up here on GitHub if anyone is interested.
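An endpoint along those lines might look something like this (a sketch with made-up names, using Express and the Node MongoDB driver - see the GitHub repo for the real thing):

var express = require('express');
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost/trashbot', function (err, db) {
  if (err) throw err;
  var app = express();

  // e.g. POST /trade/someuser/1234/5678/taken
  app.post('/trade/:userId/:tradeId/:itemId/:action', function (req, res) {
    var day = new Date().toISOString().slice(0, 10); // e.g. "2014-09-21"
    var taken = req.params.action === 'taken' ? 1 : 0;

    // Per-user summary totals (the granular per-trade record is omitted here).
    db.collection('users').update(
      { _id: req.params.userId },
      { $inc: { itemsTaken: taken, itemsDonated: 1 - taken } },
      { upsert: true },
      function () {
        // Per-user, per-day counts - what a daily limit check would read.
        db.collection('dailyTrades').update(
          { userId: req.params.userId, day: day },
          { $inc: { taken: taken, given: 1 - taken } },
          { upsert: true },
          function () { res.end(); }
        );
      }
    );
  });

  app.listen(3000);
});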
The final thing I'll mention was a relatively small weekend project I did for a battle created on the /r/webdevbattles subreddit. The challenge was to build an elevator simulation, and it piqued my interest because it was similar to a job interview question that I did not do very well on. Most of the people competing were focusing on the "frontend" part of the problem, e.g. making CSS to show elevators moving up and down, but I wanted to focus more on the "engine" part, as that was what the interview question was about.

I ended up writing a simulation that represented passengers and elevators as individual state machines operating independently. The passengers waited on a floor, requested their destination, then waited around constantly checking for an open elevator going in the right direction, at which point they would get on and wait for the doors to open on their destination floor before getting out. The elevators had a set of target floors; they just moved to those floors and opened their doors, waited until no-one had entered for a while, then headed to their next floor and opened their doors, and so on. A central "brain" was in charge of reacting to floor requests from passengers and assigning those floors to one of the several elevators.

I think it was a good idea, but I was limiting myself to a weekend's worth of work, so it's not really working 100% right now, and elevators have a habit of bouncing between floors indefinitely and never reaching their target floors. I'm pretty sure the elevators need some sort of floor queue rather than just a set of floors that they need to end up on at some point (sketched below), but I'm not really planning to test that theory any time soon. Once again, I used AngularJS to visualize the engine data, and was very impressed with what I was able to get working in a pretty short amount of time. The code is up here.
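For what it's worth, here is the untested floor-queue idea in miniature (hypothetical names, not the battle code):

class Elevator {
  // An ordered queue instead of an unordered set: requests are serviced in
  // arrival order, so every requested floor is eventually reached and the
  // elevator can't flip-flop between whichever floors it notices first.
  private floorQueue: number[] = [];

  requestFloor(floor: number) {
    if (this.floorQueue.indexOf(floor) === -1) {
      this.floorQueue.push(floor);
    }
  }

  nextTarget(): number {
    return this.floorQueue.shift();
  }
}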
And that's it! I'm not quite sure what I'm going to work on next, it's been a while since I've been in a situation where there isn't a project I know I "should" be working on, so I think I'll have to think up something new.
Saturday, January 25, 2014
Automatically Accepting last.fm Friends
My first thought was to use PhantomJS, a headless browser that lets you write JavaScript code to visit a website and perform various DOM manipulations on it, and which I've used somewhat successfully to accept trade offers for my Steam Trash Bot. After a bit of experimentation I realized that it was very fiddly and hard to do sequences of actions with, and some web searching revealed that CasperJS was better for what I wanted to do. CasperJS is a wrapper around PhantomJS that allows you to easily write a sequence of navigation and manipulation steps - exactly what I wanted!
The sequence of steps I wanted to go through was, for each account I have: log in, go to the friend requests page, accept all the friends, then log out. Logging in and out was fairly easy - I just needed to tell CasperJS to go to the login page, fill out and submit the form, then submit the logout form; the only trick was that I had to tell it to wait for the redirect after the form submission. Accepting friends was another story, since there is no easy way of getting CasperJS to do something for each result of a selector, but as usual, StackOverflow had an answer that pointed me in the right direction. The trick is to come up with a CSS selector that will find the first unaccepted request, then click the accept button and wait until it is accepted, then try again until no more unaccepted requests are found. On last.fm, when you click the accept button, the div it is in gets hidden but the HTML is still there, so most selectors will not be able to tell whether it's been accepted or not. Thankfully, and slightly confusingly, one thing does change with the request: the action of the form changes from nothing to "/ajax/inbox/friendrequest", so the selector "form:not([action='/ajax/inbox/friendrequest']) input[name='accept']" can be used to find unaccepted friend requests.
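The core of the loop ends up looking something like this (a sketch of the approach with the login steps omitted, not my exact script):

var casper = require('casper').create();
var pending = "form:not([action='/ajax/inbox/friendrequest']) input[name='accept']";

function countPending() {
  return this.evaluate(function (sel) {
    return document.querySelectorAll(sel).length;
  }, pending);
}

function acceptNext() {
  var count = countPending.call(this);
  if (count === 0) {
    return; // no unaccepted requests left
  }
  this.click(pending); // CasperJS clicks the first matching accept button
  // Wait until that form's action changes (one fewer match), then recurse.
  var self = this;
  this.waitFor(function () {
    return countPending.call(self) < count;
  }, acceptNext);
}

casper.start('http://www.last.fm/inbox/friendrequests', acceptNext);
casper.run();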
Putting all this together I've written a nice little script that will save me literally minutes every week. Just think of what I can do with all those minutes!
Sunday, December 8, 2013
Running Karma Tests for TypeScript Node.js Code in WebStorm 7
My searches started out well: I found this blog post, which goes through testing JavaScript using Karma and Jasmine and shows off some neat WebStorm integration, but it was written with "vanilla" JavaScript in mind, so even though it works with transpiled TypeScript, it doesn't work with Node.js modules. The reason is that the tests run in a browser, so CommonJS-compiled TypeScript code can't pull in Node modules. I briefly experimented with using AMD as the module system and getting the browser to use RequireJS, but that would have required me to have two different transpilations of my TypeScript code, and I couldn't get it to work anyway.
After a bit of poking around, I'm almost certain that "the way" to run jasmine Node.js tests is to use jasmine-node in place of the built-in Karma support in WebStorm. I figured I would just run jasmine-node from the terminal and deal with an un-integrated test runner, but I really wanted to be able to debug through my code while the tests were running. The way I've done this before was to use node-inspector, so I started searching around for a way to get that working with jasmine-node, and I found this StackOverflow question. The answer points out that jasmine-node is just an alias for a normal Node.js app, so I figured I would just add that as a Node.js run/debug configuration, and surprise surprise everything worked perfectly.
For posterity, here are all the steps I took:
- Create a new empty WebStorm project.
- Make a simple TypeScript file that we can test (Person.ts, shown below).
- Add the TypeScript file watcher, fix it by adding "--module commonjs" to the arguments.
- Add karma-jasmine as an external library as shown in the JetBrains blog post video so that autocomplete and code generation work for the test spec files.
- Get the jasmine TypeScript definitions from DefinitelyTyped.
- Make a simple test, referencing the TypeScript code and jasmine definitions (test/PersonSpec.ts, shown below).
- Make a new Node.js run configuration to run the tests, use the node_modules\jasmine-node\lib\jasmine-node\cli.js file as the JavaScript file (this can be either local or globally installed via npm), and the test folder as the application parameters.
- I also suggest making another run configuration that is identical except for having an additional "--autotest" application parameter before the test folder. This will run in the background and continually re-run tests whenever a change is detected in the source code.
// File: Person.ts
export class Person {
  constructor(public name:string) {}
}
// File: test/PersonSpec.ts
/// <reference path="../typings/jasmine/jasmine.d.ts" />
/// <reference path="../Person.ts" />
import p = require("../Person");

describe("suite", () => {
  it("should be true", () => {
    expect(true).toBe(true);
    var person = new p.Person("Joe");
    expect(person.name).toBe("Joe");
  });
});
Thursday, December 5, 2013
Following Along with the WebStorm 7 Demonstration Video
It looks like a very slick IDE and I was looking forward to trying it out, but I ran into a whole lot of little problems when I tried to follow along. I'm pretty sure I've figured most of them out now, so I thought I'd try to go through the entire video and explain how to follow along with it.
First things first - making Roster.ts (0:00 - 0:30). This demonstrates the automatic compilation of TypeScript, so you'll first need to make sure TypeScript is properly set up. The file is created using the alt-insert shortcut, which lets you type in the type of file to create followed by the name of the file. When you first create a TypeScript file, you'll get a prompt about adding a File Watcher, which is what will automatically compile your TypeScript into JavaScript (along with a map file so you can debug through the TypeScript code).

The first problem I encountered was that I had installed TypeScript 0.8 via an executable a while ago, so even though I installed the latest version (0.9.1-1) using npm, the file watcher was using the 0.8 version that was stored in a different location. Make sure you check that the watcher is pointing to the right file by running the compiler with the -v flag, which will print out the version number.

The second problem I encountered was that I kept getting the error "error TS5037: Cannot compile external modules unless the '--module' flag is provided." After a bit of module confusion, as I mentioned in my last post, I realized that since we are writing Node.js code, we should specify that tsc should use the CommonJS module system, which you can do by editing the file watcher and adding --module "commonjs" to the arguments. With that confusing diversion dealt with, I was able to write up the Roster.ts file with the autocomplete working as it shows in the video. One small comment is that in the video they use a capital-cased String rather than a lower-case string, but that is easily changed.
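For reference, Roster.ts ends up along these lines (a plausible sketch from following along, not the video's exact code):

// File: Roster.ts
export class Person {
  constructor(public name: string) {}
}

export class Roster {
  private people: Person[] = [];

  add(person: Person) {
    this.people.push(person);
  }

  getNames(): string[] {
    return this.people.map(p => p.name);
  }
}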
Next is server.ts (0:30 - 1:43). This file uses some Node.js packages, so there are two "hidden" things to set up here. Firstly, you'll need to make a package.json file that sets up dependencies for the app. This example only uses express and cors, and we don't need to be strict about the version, so you can use the following file:
{ "name": "application-name", "version": "0.0.1", "dependencies": { "express": "*", "cors": "*" } }
Now you just need to run "npm install" from your project directory to pull in the required libraries. There are a few other ways to do this - you could just npm install the packages individually or use the Node.js and NPM settings - but keeping it in the package.json file means that we will be able to install the project in other places easily.
The second piece of setup is DefinitelyTyped, which is a set of files containing the type definitions for many different npm packages, so that we can compile code that uses those packages and get code completion for them. This can be downloaded or added as a git submodule using their github repo, or we can pull in individual definitions by going to the JavaScript -> Library settings and downloading from the TypeScript community stubs section. I prefer to use a git submodule so that it's easy to update the definitions. I do like the idea of the community stubs, but as far as I can tell you need to copy and paste the file (renaming it in the process) into your project's main directory structure before you can use them properly, and that's too much work for me.
With this setup done, the creation of the server.ts file goes about as well as it shows in the video but there are some hidden tricks. I'm not entirely sure, but I think the magic that translates "ref" into "/// <reference path=""/>" is a custom Live Template, which I was able to add by making "ref" an abbreviation for "/// <reference path="$END$"/>". The extra "find relative references" magic can be conjured by using ctrl-space (possibly twice), which will bring up the list of relevant files. The next trick which automatically inserts a variable for "new jb.Roster()" is called Extract Variable, and can be accessed using ctrl-alt-v. One thing I couldn't get working was the explicit typing for ExpressApp/ExpressServerRequest etc., which I guess has had a change in name since the video was made. The IDE is smart enough to infer the type from the context so it's not a big deal. And for server.ts's final trick ... I guess it's some more Live Templates that are converting "get!" and "post!" to some pre-built function skeletons.
"Debug Node with ease" (1:43 - 1:58) really is as easy as they show (as long as you have the right npm packages installed), but "Test REST services" (1:58 - 2:30) is a bit trickier. The first thing to note is the "Enter action or option name" dialog which you can summon using ctrl-shift-a and use to find just about anything - in this case "test rest" opens up the REST client. Before it works as shown in the video you'll probably need to change the port and path and ensure that you've added the "content-type: application/json" header. Once that is sorted out though, take some time to play around with the pain-free TypeScript debugging, it really is pretty amazing when you think about it.
Now we get to the client side, the file app.ts starts out as a pretty standard AngularJS app, written with the assistance of the auto-completion and templates we've seen before (2:30 - 3:25). AngularJS is an awesome framework for client-side code but I'm not going to go into it here, if you want more information check out their website. The only note I have for this section is that my version of app.ts had a compilation error "The property 'message' does not exist on value of type 'AppCtrl'", which was easily fixed by adding message as a public variable on the class (public message:string;).
The next file that is created is an HTML file, but one created using "Emmet live templates" (3:25 - 3:40). This is something I had never seen before, but it looks like a bunch of shortcuts that expand into full HTML - nifty! It takes the best part of Jade - not needing to write HTML by hand - with the added benefit of producing actual HTML rather than the sometimes confusing Jade markup. These are configured as Live Templates, so the usage and configuration are the same as for the "ref" template we saw earlier. Find relative references also works inside script tags, so that's a few more keystrokes you can save here.

But wait, what is this bower_components folder and where does it come from? This is another technology I had never seen before, but my minimal research leads me to believe that it is basically npm for client-side libraries. You create a bower.json file, specify the libraries you want to install, then run "bower install" to pull the files into your project (after installing bower itself using "npm install -g bower"). This example only uses angular and angular-resource, so we can use the following (strangely familiar) file:
{ "name": "application-name-client", "version": "0.0.1", "dependencies": { "angular": "*", "angular-resource": "*" } }
Now the files can be found, and we can start putting some AngularJS into the HTML (3:40 - 3:55). As hinted at in the video, there is auto-complete for Angular directives, but it requires a plugin. To install it, go to the Plugins section of the settings, browse the repositories, and search for and install the AngularJS plugin. After restarting, the auto-completion should work as shown in the video. I'm not sure what is converting "bi" to "{{}}", but I'm just going to assume it's another custom template and move on.
The next part demonstrates how to start a debugger on the client as well as the server (3:55 - 4:25), which is pretty easy to follow along with, but you might need to install the JetBrains IDE Support Chrome extension. After a small change, we realize that the app dependencies aren't loaded correctly, so there is a bit of a detour to go and install that package, which is again easy to follow (4:25 - 4:45).
Next are some demonstrations of how to use browserify, which is yet again something I had to look up. The high-level outline is that it lets us write CommonJS/Node.js style require/export code in client-side code. Browserify can parse this code and use it to generate a bundled JavaScript file that can be included in the HTML. I'm not sure why this would be preferred over compiling TypeScript with the AMD module option and using RequireJS to load the dependencies, but it does look pretty easy. The video shows a few different ways to call browserify (4:45 - 5:40): you can do it by running an interactive terminal (alt-minus), or apparently through a "simpler command window", but I couldn't find that. You can also set up a file watcher that will detect when a change is made to app.js and automatically run browserify on it, similar to how JavaScript is created whenever you modify a TypeScript file. This is also easy to follow as long as you pause the video quite a bit so you can actually see what's being clicked on. With the bundle generated, we can now run and inspect our code.
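To make the idea concrete, here's a tiny illustration (with assumed file names): write CommonJS-style code for the browser, bundle it with "browserify app.js -o bundle.js", and include bundle.js in the HTML with a script tag.

// File: greeter.js - ordinary CommonJS, just like Node.js code
module.exports = function (name) {
  return "Hello, " + name + "!";
};

// File: app.js - the entry point that browserify bundles for the browser
var greet = require('./greeter');
document.body.textContent = greet("browserify");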
The rest of the video is some code that you'll need to know a bit about AngularJS to understand, but it all works as shown (5:40 - 6:50), and that pretty much wraps up the video! Two things I still don't understand are how to get the camel-case expansion to work (e.g. RR => RosterResource), and what that "jump-to" shortcut is or what it's doing, but I'm sure I'll figure that out eventually.
Now to work on a real project in WebStorm!
Thursday, November 28, 2013
Making Sense of Modules in TypeScript, Node.js/CommonJS, and AMD/RequireJS
Instead of going straight into the TypeScript implementation, it's best to start with JavaScript, which has two main module management systems: the AMD (Asynchronous Module Definition) API, which is implemented by RequireJS and jQuery (among others), and CommonJS's module specification, which is implemented by Node.js. Both of these systems attempt to solve the problem of compartmentalizing sections of code without polluting the global scope.
In AMD, modules are defined using the define keyword, which optionally takes a name (in general you should accept the default file-based module name), a list of dependencies specified in an array argument, and a function within which the dependencies are scoped to variables. The object returned from the function is the module's public interface, so variables and functions can be hidden within the module definition. Modules can be referenced using the require function, which handles dependencies the same way as the define function but does not allow a module definition to be returned. The AMD implementation can see a module's dependencies and load them asynchronously on-demand, so this form of dependency management is usually used on the client side, where bandwidth and speed are important. An example of some JavaScript code that uses the AMD API is shown below:
// File: subdir/dependency1.js
define(function () {
  return function () {
    console.log("Hi from dep1!");
  };
});

// File: dependency2.js
define('dependency2', ['subdir/dependency1'], function (dep1) {
  var private = "Private";
  return {
    public: "Public",
    func: function () {
      console.log("Hi from dep2! " + private);
      dep1();
    }
  };
});

// File: main.js
require(["subdir/dependency1", "dependency2"], function(dep1, dep2) {
  dep1();
  dep2.func();
  console.log(dep2.public);
  console.log(dep2.private);
});

// Output
Hi from dep1!
Hi from dep2! Private
Hi from dep1!
Public
undefined
CommonJS is similar but does not use the same scoping system: each file is a module whose public interface is defined by any property added to an "exports" variable (or anything assigned to the module.exports variable), and you pull in a module and assign it to a variable using a require call. Besides syntax, the main difference between CommonJS and AMD is that in CommonJS all modules are loaded on startup, so it is mostly used in server-side code. CommonJS code that produces identical behavior to the code above is shown below:
// File: subdir/dependency1.js
module.exports = function () {
  console.log("Hi from dep1!");
};

// File: dependency2.js
var dep1 = require('./subdir/dependency1');
var private = "Private";
exports.public = "Public";
exports.func = function () {
  console.log("Hi from dep2! " + private);
  dep1();
};

// File: main.js
var dep1 = require('./subdir/dependency1');
var dep2 = require('./dependency2');
dep1();
dep2.func();
console.log(dep2.public);
console.log(dep2.private);

// Output
Hi from dep1!
Hi from dep2! Private
Hi from dep1!
Public
undefined
And now we get to TypeScript, which has its own module syntax. Modules are defined using the "module" keyword, and their public interfaces are defined by exporting things from inside the module. Functions and variables can also be exported in the file, without them needing to be wrapped in a module. Modules and exported variables are imported using syntax similar to CommonJS, but with the import keyword in place of var. An example is shown below:
// File: externalModule.ts
export module ExternalModule {
  export function public () {
    console.log("ExternalModule.public");
  };
  function private () {
    console.log("ExternalModule.private");
  }
}

export function ExportedFunction() {
  console.log("ExportedFunction");
}

// File: main.ts
module InternalModule {
  export function public () {
    console.log("InternalModule.public");
  };
  function private () {
    console.log("InternalModule.private");
  }
}

InternalModule.public();
//InternalModule.private(); // Does not compile

import externalModule = require("externalModule");
externalModule.ExportedFunction();
externalModule.ExternalModule.public();
//externalModule.ExternalModule.private(); // Does not compile

// Output:
InternalModule.public
ExportedFunction
ExternalModule.public
In my opinion, since this syntax is more explicit, it is cleaner and easier to understand than AMD or CommonJS modules, but here's where things get confusing. Since TypeScript compiles into JavaScript, you can actually compile this code into either AMD or CommonJS by using the --module flag on the tsc command.
For example, when you run tsc externalModule.ts --module "amd", you get:
define(["require", "exports"], function(require, exports) { (function (ExternalModule) { function public() { console.log("ExternalModule.public"); } ExternalModule.public = public; ; function private() { console.log("ExternalModule.private"); } })(exports.ExternalModule || (exports.ExternalModule = {})); var ExternalModule = exports.ExternalModule; function ExportedFunction() { console.log("ExportedFunction"); } exports.ExportedFunction = ExportedFunction; });
When you run tsc externalModule.ts --module "commonjs" you get:
(function (ExternalModule) {
    function public() {
        console.log("ExternalModule.public");
    }
    ExternalModule.public = public;
    ;
    function private() {
        console.log("ExternalModule.private");
    }
})(exports.ExternalModule || (exports.ExternalModule = {}));
var ExternalModule = exports.ExternalModule;
function ExportedFunction() {
    console.log("ExportedFunction");
}
exports.ExportedFunction = ExportedFunction;
To further add to the confusion, since all valid JavaScript is also valid TypeScript, there is nothing stopping you from mixing and matching TypeScript modules, AMD modules, and CommonJS modules (as long as you have an implementation of the module loader e.g. RequireJS for AMD and Node.js for CommonJS). Given the recent confusion I experienced I would recommend you just stick with the TypeScript syntax and have that compile into CommonJS for Node.js server-side code and AMD for client-side code.
One last thing I want to mention is the reference path syntax in TypeScript, because that can add to the module confusion a bit. As an example, let's make a simple class:
// File: MyClass.ts
export module MyModule {
  export class MyClass {
    constructor(public str:String) {}

    public func() {
      console.log(this.str);
    }
  }
}
We can use this class in another file by referencing it:
// File: main.ts
/// <reference path="MyClass.ts" />
var myclass = new MyModule.MyClass("test");
myclass.func();
So why does this work without us needing to import any modules? Because MyModule.MyClass is compiled into a variable, and TypeScript doesn't know how you're going to load your JavaScript files. You could easily have an HTML file that includes both of these files in script tags and it would work fine. What the reference tag does is tell the compiler where to find definitions: it can tell that the MyClass class is within the MyModule module and has a func function, so the code in main.ts is valid. When you're writing Node.js, however, modules need to be loaded using CommonJS, so you need to use import/require commands in addition to referencing the TypeScript files or definitions (which are still required for syntax checks), as in the sketch below.
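For example, a Node.js flavored main.ts would look something like this sketch (compiled with --module "commonjs"; the export keywords in MyClass.ts stay as they are):

// File: main.ts (Node.js/CommonJS version)
/// <reference path="MyClass.ts" />
import m = require("./MyClass");

var myclass = new m.MyModule.MyClass("test");
myclass.func();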
Hope this helps someone! Here is some recommended reading/watching: