Sunday, September 21, 2014

Long Time No Post

Wow, it's been a while since I've posted anything. I have been keeping busy, though. I haven't started on any major new projects, but I've made some pretty big changes to some of my existing projects, so I might as well write something about them.

Firstly, the thing that's been taking up most of my spare programming time has been a rewrite of Scrobble Along. I decided to rewrite it mainly because the old code was written in an old version of TypeScript in Visual Studio, and after I installed the new TypeScript compiler the code stopped compiling. Rather than try to port over the TypeScript, I started making hotfixes to the generated JavaScript, and everything got really messy really quickly. I also really didn't like that the scraping part, which gets all the tracks that are playing on the various radio stations, was mixed in with the frontend website code. And of course all programmers love starting a project from scratch, and I was pretty keen to work on another project that could use AngularJS.

Now I have two separate codebases. The scraper runs on a Linode server I have: it loads all the stations about every 30 seconds, grabs the tracks from their various HTML and JSON sources, then scrobbles them to the station's profile and to anyone who is scrobbling along. The frontend website runs on Heroku, and all it does is load information about the stations and add a record to my database when someone starts or stops scrobbling along. It's much saner code, and I still really love AngularJS, so I'm much happier with the project now.

I did have to make one other major change, and that was to make the website pull the latest track details from my database rather than trying to load them from the last.fm API. For some reason, after the rewrite it started taking a whole lot longer to load all the station details from last.fm, so it was taking a good 30 seconds for everything to show up. Right now it's loading all profile images, recent tracks, etc. from my database, so it's a whole lot faster than it used to be. I've got the code in two separate GitHub repos, but I made a simple repo that has both of them as submodules here, so feel free to check out the code for more details.
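If you're curious what the scraper loop looks like in broad strokes, here's a minimal sketch. The station entry, parser, and scrobble helpers below are placeholders I've made up for illustration, not the actual code (that's in the repos linked above):

```javascript
// Rough sketch of the scraper's polling loop (Node.js). Everything named here
// is a stand-in for illustration; the real implementations are in the repos.
var request = require("request");

var stations = [
  { name: "ExampleFM", url: "http://example.com/nowplaying.json", parse: parseJson }
];

// Each station has its own parser for its HTML or JSON "now playing" source.
function parseJson(body) {
  try {
    var data = JSON.parse(body);
    return data.artist && data.track ? { artist: data.artist, track: data.track } : null;
  } catch (e) {
    return null;
  }
}

// Stand-ins for the real last.fm scrobbling calls and database lookups.
function scrobbleAsStation(station, track) {
  console.log(station.name + " played " + track.artist + " - " + track.track);
}
function getScrobblers(station, callback) { callback([]); }
function scrobbleAsUser(user, track) { /* scrobble to the user's account */ }

function pollStation(station) {
  request(station.url, function (err, res, body) {
    if (err) { return console.error("Failed to load " + station.name, err); }
    var track = station.parse(body);
    if (!track) { return; }
    // Scrobble to the station's own profile, then to everyone scrobbling along.
    scrobbleAsStation(station, track);
    getScrobblers(station, function (users) {
      users.forEach(function (user) { scrobbleAsUser(user, track); });
    });
  });
}

// Hit every station roughly every 30 seconds.
setInterval(function () { stations.forEach(pollStation); }, 30 * 1000);
```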

The second thing I've been spending some time on has been updates to Trashbot. I've been experimenting with ways to combat bots that are taking everything as soon as it is dumped in. My plan was to limit each account to taking something like 10 items per day, but that would be impossible until I was able to quickly look up the trade details. As a first step I wanted to make a record of all the details available in MongoDB, split across three collections. The first one would be a summary of each user, containing the total number of trade requests, items taken and donated, friend requests, etc. The second one would be granular details for each trade, including the item, time, user, etc. The third one would be a record of the number of trades each user had made on each day and how many items they took and gave. Once that was set up I was going to look up the daily trades before accepting anything, and if a user had already taken too much I was going to make the bot refuse the trade.

I've got the details being saved to MongoDB now, but I'm 90% sure I'm not actually going to refuse trades, as when I look at the details there really aren't many individuals taking the majority of the items. I think just occasionally looking through the details and banning a few people is the best option; any systematic method I come up with will just end up as an arms race against any "takerbots" that are out there. It was an interesting process to get all the details recorded though. I ended up writing a simple REST server that updates MongoDB when various POST requests are sent to it, e.g. I have something like /trade/userid/tradeid/itemid/taken, which will update all the collections with one more trade item taken by a particular user on a particular day. Doing it this way meant I was able to record the trade from both the bot, which accepts the trade requests, and the CasperJS script that accepts the trade offers. Again, the code is up here on GitHub if anyone is interested.
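To give a feel for what that REST server does, here's a minimal sketch of the /trade endpoint using Express and the MongoDB Node driver as they were around that time. The collection and field names are my own stand-ins for illustration; the real code is in the GitHub repo linked above.

```javascript
// Sketch of the trade-recording endpoint. Collection and field names are
// guesses for illustration, not the real schema.
var express = require("express");
var MongoClient = require("mongodb").MongoClient;

MongoClient.connect("mongodb://localhost:27017/trashbot", function (err, db) {
  if (err) { throw err; }
  var app = express();

  // e.g. POST /trade/76561198000000000/1234/5678/taken
  app.post("/trade/:userId/:tradeId/:itemId/:action", function (req, res) {
    var p = req.params;
    var day = new Date().toISOString().slice(0, 10); // "YYYY-MM-DD"
    var takenInc = p.action === "taken" ? 1 : 0;
    var givenInc = p.action === "given" ? 1 : 0;

    // 1. Per-user summary totals.
    db.collection("users").update(
      { _id: p.userId },
      { $inc: { itemsTaken: takenInc, itemsGiven: givenInc, trades: 1 } },
      { upsert: true });

    // 2. Granular record of the individual trade item.
    db.collection("trades").insert(
      { userId: p.userId, tradeId: p.tradeId, itemId: p.itemId,
        action: p.action, time: new Date() });

    // 3. Per-user, per-day totals, so a daily limit could be checked cheaply.
    db.collection("dailyTrades").update(
      { userId: p.userId, day: day },
      { $inc: { itemsTaken: takenInc, itemsGiven: givenInc } },
      { upsert: true });

    res.end();
  });

  app.listen(3000);
});
```

Because both the trade-accepting bot and the CasperJS script just have to fire a POST request, neither of them needs to know anything about the database.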

The final thing I'll mention was a relatively small weekend project I did for a battle created on the /r/webdevbattles subreddit. The challenge was to build an elevator simulation, and it piqued my interest because it was similar to a job interview question I got which I did not do very well on. Most of the people competing were focusing on the "frontend" part of the problem, e.g. making CSS to show elevators moving up and down, but I wanted to focus more on the "engine" part of the problem, as that was what the interview question was about. I ended up writing a simulation that represented passengers and elevators as individual state machines which operate independently. The passengers waited on a floor, requested their destination, then waited around constantly checking for an open elevator that was going in the right direction; at that point they would get on and wait for the doors to open on their destination floor before getting out. The elevators had a set of target floors, and they just moved to those floors and opened their doors, waited until no-one had entered for a while, then headed to their next floor and opened their doors, etc. A central "brain" was in charge of reacting to floor requests from passengers and assigning those floors to one of the several elevators.

I think it was a good idea, but I was limiting myself to a weekend's worth of work, so it's not really working 100% right now, and elevators have a habit of bouncing between floors indefinitely and never reaching their target floors. I'm pretty sure the elevators need some sort of floor queue rather than just a set of floors that they need to end up on at some point, but I'm not really planning to test that theory any time soon. Once again, I used AngularJS to visualize the engine data, and was very impressed with what I was able to get working in a pretty short amount of time. The code is up here.
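To illustrate the general shape of that design (and the floor-queue idea), here's a heavily simplified sketch. It's not the actual code from the repo, and it leaves out the passengers and doors entirely; it just shows elevators as little state machines plus a brain that assigns requests, using a queue of target floors instead of an unordered set:

```javascript
// Stripped-down sketch of the state-machine idea. The real simulation has
// passengers, door timers and an AngularJS view; none of that is shown here.
function Elevator(id) {
  this.id = id;
  this.floor = 0;
  this.targets = []; // a queue of floors, rather than an unordered set
}

// One tick of the elevator's state machine: move one floor towards the
// current target, or open the doors if it has arrived.
Elevator.prototype.tick = function () {
  if (this.targets.length === 0) { return; }
  var target = this.targets[0];
  if (this.floor < target) { this.floor++; }
  else if (this.floor > target) { this.floor--; }
  else {
    console.log("Elevator " + this.id + " opening doors at floor " + this.floor);
    this.targets.shift();
  }
};

function Brain(elevators) {
  this.elevators = elevators;
}

// React to a floor request by assigning it to the elevator with the shortest
// queue (the real assignment logic could be much smarter than this).
Brain.prototype.requestFloor = function (floor) {
  var best = this.elevators.reduce(function (a, b) {
    return a.targets.length <= b.targets.length ? a : b;
  });
  best.targets.push(floor);
};

// Tiny demo: two elevators, a few requests, run the simulation for a while.
var elevators = [new Elevator(1), new Elevator(2)];
var brain = new Brain(elevators);
[3, 5, 1].forEach(function (floor) { brain.requestFloor(floor); });
for (var t = 0; t < 10; t++) {
  elevators.forEach(function (e) { e.tick(); });
}
```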

And that's it! I'm not quite sure what I'm going to work on next; it's been a while since I've been in a situation where there isn't a project I know I "should" be working on, so I think I'll have to think up something new.

Saturday, January 25, 2014

Automatically Accepting last.fm Friends

After a bit of a break from working on my projects I've finally got some spare time again, so it's time to get back in the saddle. Number 1 on the "kind of annoying thing I should fix" list is related to ScrobbleAlong. I've added a bunch of stations since I first launched the site, and I'm now running about 30 different last.fm accounts for those stations. Fans of the stations want to be friends with them on last.fm, and I'm happy about that, but there is no API for accepting friend requests, so about once a week I've just been logging into all the accounts and accepting all the friends. Being a software developer, I wanted to find a way to automate this boring and repetitive task.

My first thought was to use PhantomJS, a headless browser that lets you write JavaScript code to visit a website and do various DOM manipulations on it, and which I've used somewhat successfully to accept trade offers for my Steam Trash Bot. After a bit of experimentation I realized that it was very fiddly and hard to do sequences of actions with, and some web searching revealed that CasperJS was better suited to what I wanted. CasperJS is a wrapper around PhantomJS that lets you easily write a sequence of navigation and manipulation steps - exactly what I wanted to do!

The sequence of steps I wanted to go through was: for each account I have, log in, go to the friend requests page, accept all the friend requests, then log out. Logging in and logging out was fairly easy; I just needed to tell CasperJS to go to the login page, fill out and submit the form, then submit the logout form at the end. The only trick was that I had to tell it to wait for the redirect after each form submission. Accepting the friend requests was another story, since there is no easy way of getting CasperJS to do something for each result of a selector, but as usual, StackOverflow had an answer that pointed me in the right direction. The trick is to come up with a CSS selector that will find the first unaccepted request, then click the accept button and wait until it is accepted, then try again until no more unaccepted requests are found. On last.fm, when you click the accept button, the div it is in gets hidden but the HTML is still there, so most selectors will not be able to tell whether it's been accepted or not. Thankfully and slightly confusingly, one thing does change with the request: the action of the form changes from nothing to "/ajax/inbox/friendrequest", so the selector "form:not([action='/ajax/inbox/friendrequest']) input[name='accept']" can be used to find unaccepted friend requests.
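For anyone curious, here's roughly what the script looks like for a single account. This is a condensed sketch rather than the real thing: the URLs, form selectors and field names are guesses for illustration, but the unaccepted-request selector is the one worked out above.

```javascript
// Condensed CasperJS sketch: log in, accept pending friend requests one at a
// time, log out. URLs and form field names here are assumptions.
var casper = require("casper").create();
var pending = "form:not([action='/ajax/inbox/friendrequest']) input[name='accept']";

// Count how many unaccepted requests are still on the page.
function countPending() {
  return this.evaluate(function (sel) {
    return document.querySelectorAll(sel).length;
  }, pending);
}

// Click the first unaccepted request, wait for last.fm to mark it accepted
// (the form's action changes), then look for the next one.
function acceptNext() {
  if (!this.exists(pending)) { return; }
  var before = countPending.call(this);
  this.click(pending);
  this.waitFor(function () {
    return countPending.call(this) < before;
  }, acceptNext);
}

casper.start("https://www.last.fm/login", function () {
  // Fill out and submit the login form, then CasperJS waits for the redirect.
  this.fill("form#login", { username: "stationaccount", password: "secret" }, true);
});

casper.thenOpen("http://www.last.fm/inbox/friendrequests");

casper.then(acceptNext);

casper.then(function () {
  // Submitting the logout form ends the session for this account.
  this.fill("form#logout", {}, true);
});

casper.run();
```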

Putting all this together I've written a nice little script that will save me literally minutes every week. Just think of what I can do with all those minutes!