If you have changed or updated a git repository and want to throw away your local changes, then a good way of forcing the latest remote changes onto your working copy is to run a combination of two commands. The first command is git fetch --all, which tells git to download the latest updates from the remote without trying to merge or rebase anything. This is followed by git reset --hard origin/master, where git resets the master branch (assuming you were on the master branch) to become the version of the master branch you just fetched. Here are the two commands in full:
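git fetch --all
git reset --hard origin/master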
After a recent update on Ubuntu I found that I was unable to use ssh due to a strange permissions error to do with the ssh config file. This was quite a problem as I wasn't able to push changes to my git server. The error was as follows:
I have started to use virtual machines to develop sites rather than installing a local web server. This allows me to replicate the exact setup of the server I will be deploying to with ease. For each virtual machine I set up a shared folder, which allows me to store the files locally whilst being able to run the code on the virtual machine. One thing I missed was the ability to use xdebug to debug the sites through NetBeans, so I set about setting up the virtual machines to allow me to use xdebug remotely.
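As a rough sketch, the xdebug (version 2) settings on the virtual machine look something like the following; the extension path, host IP and IDE key here are assumptions that will vary with your setup, and NetBeans listens on port 9000 by default.

; Example php.ini settings on the virtual machine (paths and IPs will vary).
zend_extension=/usr/lib/php5/modules/xdebug.so
xdebug.remote_enable=1
; The IP address of the host machine running NetBeans.
xdebug.remote_host=192.168.56.1
xdebug.remote_port=9000
xdebug.idekey=netbeans-xdebug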
Searching all files in a directory and its sub-directories for a particular term comes in handy in all sorts of situations. The grep tool is available on all Linux systems and the basic syntax is as follows.
grep -r -i pattern directory
The -r flag is used to recursively search underneath the given directory and the -i flag is used to ignore case. The pattern is a basic regular expression, which can be switched to the extended syntax by using the -E flag.
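For example, to search every file under a hypothetical /var/www directory for the term database, ignoring case:

grep -r -i "database" /var/www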
I have talked before about running Selenium tests in PHPUnit, but I have only recently come to properly automate things. Getting a Selenium server to start and stop around a test run is relatively easy and can be done in a simple script. My original script for running a directory of PHPUnit tests was as follows.
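A sketch of the start-and-stop wrapper described above might look something like this; the jar filename, the five second wait and the tests directory are assumptions rather than the original script.

#!/bin/bash
# Start the Selenium server in the background and remember its process ID.
java -jar selenium-server-standalone.jar > /dev/null 2>&1 &
SELENIUM_PID=$!

# Give the server a few seconds to start listening for connections.
sleep 5

# Run every PHPUnit test found in the tests directory.
phpunit tests/

# Shut the Selenium server down once the tests have finished.
kill $SELENIUM_PID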
I had a recent requirement to temporarily replace the homepage of a website running Drupal with a simple HTML page. I wanted to do this without making lots of changes to the site templates, so I needed a solution that was easy to turn on and off and would still retain the Drupal site as it was. I found the simplest solution was to add an entry to the DirectoryIndex directive in the site's .htaccess file. Here is the rule I used.
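It looked something like the following, where the name of the temporary page is a placeholder rather than the actual filename used.

# Serve the temporary page first; fall back to Drupal's index.php.
# (holding.html is an example name, not the real file.)
DirectoryIndex holding.html index.php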
I was at a meeting of the Manchester Web Performance Group the other day where Tom Taylor gave a talk about some of the performance testing tools he uses at Laterooms.com. He used a Ruby script to set up some preferences in Firefox, which then ran Selenium to open some web pages and test them with YSlow. The results of the YSlow inspection were then sent to a Show Slow server, where they can be graphed over time.
When Mike Bell approached me several months ago and said we should do a Drupal camp in the North West, I was completely on board with the idea. So for the past few months I have been working with Mike and a group of people from the North West Drupal User Group (NWDUG) to create such an event. The result was DrupalCampNW2012, which was held from Friday 23rd to Sunday 25th November. The venue was the new University of Salford campus buildings in MediaCityUK.
I recently saw an implementation of a Twitter wall that used node.js to run searches on Twitter and post the results on a webpage. I had been wanting to create something using ReactPHP, so I thought this was a good opportunity to have a go. ReactPHP, if you haven't heard of it, is an event-driven, non-blocking I/O framework that is essentially the PHP equivalent of node.js. The major difference is that ReactPHP is written in pure PHP with no extra components, whereas node.js is a collection of different programs, interfaces and languages.
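To give a flavour of what that looks like, here is a minimal sketch of a ReactPHP event loop; it assumes the react/event-loop component is installed via Composer and is not the Twitter wall code itself.

<?php
// A minimal ReactPHP event loop (assumes react/event-loop is installed
// via Composer; a sketch, not the Twitter wall implementation).
require 'vendor/autoload.php';

$loop = React\EventLoop\Factory::create();

// Run a callback every five seconds without blocking the rest of the loop.
$loop->addPeriodicTimer(5, function () {
    echo "A Twitter search could be triggered here\n";
});

// Start the loop; this call blocks until there is nothing left to do.
$loop->run();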