PHPNW10: A Review

This weekend saw the 3rd PHPNW conference and, being a PHP developer working in Manchester, it would have been inexcusable for me not to attend :). After missing my train and pouring my first coffee of the day into my conference pack, it wasn't the best of starts. However, I still managed to turn up in plenty of time, so I didn't miss any of the talks and got to say hello to the people I know from the PHPNW user group and some whom I had met at the PHPNW conference in previous years.

Day 1

Keynote: Teach a Man to Fish: Coaching Development Teams
Lorna Mitchell @lornajane

Image removed.
Photo by phpcodemonkey

The keynote this year was Lorna Mitchell, who talked about personal and professional development. Lorna's style and enthusiasm made this an excellent keynote with just enough joking to keep people interested and awake first thing on a Saturday morning. The talk was crammed with lots of great ideas about how to engage in training for minimal cost, and how that training can be delivered. I particularly liked the ideas of slide share karaoke, link Tuesday and developer lunches and will be suggesting them in my company when I go back to work on Monday.

The essential message behind the talk was that every person has a set of skills, and the aptitude to learn new ones. How we go about learning new skills is up to each individual. An important thing to realise is that this development shouldn't be the sole responsibility of the employer. Lorna also talked about how to use skill matrices to find out what skills the team has and where to concentrate on developing skills so that the team can keep working even if someone leaves.

joind.in page (with talk slides).

Geolocation and Maps with PHP
Derick Rethans @derickr

Having seen Derick's talks in previous years I was expecting something detailed and technical, and he didn't disappoint. Every new slide was either a working map example or the code needed to get that example working. His talk had the most working examples that I have ever seen in a conference talk, which was quite an achievement. Overall this was a brilliant talk.

Derick started the talk by saying the Earth is not quite a sphere; it is shaped more like a pear, but can be approximated by a reference ellipsoid. There are different ways of describing the shape of the Earth, called datums or geodetic systems. Two systems in common use are WGS84, which is what GPS uses, and OSGB36, which is what the UK Ordnance Survey uses. The Earth is split into degrees, which start at the Greenwich meridian, although the two systems above have slightly different starting points.

The Earth is not flat but paper is, so when creating maps we use projections to project the sphere onto the flat surface of the paper. Different projections cause different distortions, making the shape, size and distance of different countries and continents look different.

Coordinates are used to pinpoint a place on a map, but different geoids (coordinate systems) give different coordinates for the same place. For example, comparing OS and GPS shows they are a little bit out in the south of the UK, and more than a kilometre out in the far north. We use transformations to translate between the two coordinate systems, one example of which is the Helmert transformation; however, even this can be up to 7m out in the worst case.

When showing a simple map on a web page there are several services that you can use. Derick gave examples for Google Maps and OpenLayers, which are two of the best in my opinion.

Converting from a location name to longitude and latitude is called geocoding; doing the reverse (i.e. converting from longitude and latitude to a location name) is called reverse geocoding. There are several services available to do this; the examples shown in the talk were Nominatim and Yahoo Geocode.
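
To give a rough idea of what this looks like in code, here is a hedged sketch of geocoding a place name against the Nominatim web service (this is my own example, not Derick's; the endpoint and parameters are assumptions worth checking against the service documentation):

// Geocode a place name with Nominatim and print the first result (illustrative only)
$place = 'Manchester, UK';
$url   = 'https://nominatim.openstreetmap.org/search?format=json&q=' . urlencode($place);

// Nominatim asks for an identifying User-Agent; file_get_contents keeps the sketch short
$context = stream_context_create(array('http' => array('user_agent' => 'phpnw-geocode-demo')));
$results = json_decode(file_get_contents($url, false, $context), true);

if (!empty($results)) {
    printf("%s => lat %s, lon %s\n", $place, $results[0]['lat'], $results[0]['lon']);
}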

One notable example that Derick gave in his talk was using his wireless access point to geolocate himself on a map. In this example he used PHP to extract the MAC addresses and SSID names of all of the access points his computer could see and then used the Google geolocation service to convert these into longitude and latitude. This is possible because when Google did its street mapping it also recorded every wireless network it found as it drove around the country. Derick said that when he tried this at home it was very accurate. He also said that when he moved house it took less than 3 weeks for Google to update the network address, which is pretty impressive.

I think the one important bit of information that I took away from this talk was the difference between spatial and geospatial coordinates. With spatial coordinates a circle at one latitude is different to one at another latitude: one degree covers less distance the further north you go, so as you move the circle north it gets squashed.
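
To put a number on that, the east-west distance covered by one degree shrinks with the cosine of the latitude. A back-of-the-envelope calculation, assuming a perfectly spherical Earth:

// Distance covered by one degree of longitude at a given latitude (spherical Earth, radius ~6371 km)
function kmPerDegreeLongitude($latitude)
{
    $earthRadiusKm = 6371;
    return (M_PI / 180) * $earthRadiusKm * cos(deg2rad($latitude));
}

printf("At the equator: %.1f km\n", kmPerDegreeLongitude(0));    // roughly 111 km
printf("In Manchester:  %.1f km\n", kmPerDegreeLongitude(53.5)); // roughly 66 km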

joind.in page (with talk slides).

Zend Framework: Getting To Grips
Ryan Mauger @bittarman

After not using Zend Framework for a while I was keen to go to this talk in the expectation that it would rekindle my interest in the subject. Ryan started with the message that understanding the dispatch cycle in Zend Framework is of paramount importance when writing applications. A very basic form of the dispatch cycle is Bootstrap -> Routing -> Dispatch, but Ryan took us through more and more complicated examples until he displayed a full page class diagram. I was relieved at that point when Ryan said that he was scared of it too. Two things that came out of this section were that preDispatch and postDispatch won't throw exceptions, and that all modules' bootstraps will be run on each request (unless otherwise configured).

Other topics that were covered were autoloaders, plugins, action helpers, and what situations and circumstances to use each.

Perhaps the most useful part of the talk was when Ryan showed how Zend Framework form decorators work. I must admit that I have used them in the past, but ended up pulling my hair out in frustration over how they worked. Ryan used a simple diagram and a realtime PHP code editor and runner to show how form decorators work, and I finally understood them. I also understood why I had had such a problem with them in the past.

The realtime PHP code editor and runner was called Beachphp and is an extension of eval2. It seemed like a neat tool to use (although I wouldn't put it on a live site) and I will be taking a look when I get a chance.

The final part of the talk looked at database abstraction models in Zend Framework and what might be happening in ZF 2, which looks like it might incorporate Doctrine.

I have to admit that I was a little bit lost by the end of it, but it was still an interesting crash course in the dispatch cycle, autoloaders, action helpers, plugins, models and forms in Zend Framework.

joind.in page (with talk slides).

Unit testing after Zend Framework 1.8
Michelangelo van Dam @DragonBe

Unit testing everything in a Zend Framework application can be a bit daunting, but Michelangelo's relaxed style and extensive knowledge made this complex subject seem nice and easy. When all of the tests passed he would use the expression "lots of green, warm fuzzy feeling inside", which I might use when reporting unit tests to my own managers!

There are different types of testing, although with many Zend Framework applications we use three strategies: controller testing, unit testing and database testing. Unit testing tests the logic of the simpler, more atomic elements of the application. Controller testing tests the Zend Framework application itself: URL mapping, form validation and security. Database testing tests the functionality of the database and involves making sure the CRUD functionality works; it also involves writing, converting and checking UTF-8 characters.

To test the application we create a phpunit.xml file. This defines some command line parameters in an XML file, but also lets us include and exclude files. This is important as we want to exclude the views, because we shouldn't be testing them directly. We then create a TestHelper.php file, which is similar to the bootstrap with the important exception that the bootstrap object is not run. We then need to set up a file called something like ControllerTestCase.php that overrides the setUp() method for the controller tests in order to stop it calling the parent and producing an error. When we use Zend_Tool to create a project, the tests directory will contain a mirror of the Zend Framework project, just without the views.
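
For reference, that shared base class usually ends up looking something like the following sketch of the common pattern (not code from the talk; APPLICATION_PATH is assumed to be defined in TestHelper.php and the Zend autoloader to be set up):

// A minimal shared controller test base class (illustrative)
abstract class ControllerTestCase extends Zend_Test_PHPUnit_ControllerTestCase
{
    public function setUp()
    {
        // Create the application for the 'testing' environment; the parent
        // class decides when to bootstrap it for each test
        $this->bootstrap = new Zend_Application(
            'testing',
            APPLICATION_PATH . '/configs/application.ini'
        );
        parent::setUp();
    }
}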

With controller testing we use $this->dispatch('/'); to browse to the root URL of the application. This makes sure it actually works, i.e. it doesn't produce any 404 or 500 errors. When we run the tests a code coverage report is generated.
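
A controller test built on the base class above might look like this hedged sketch (the controller and assertions are picked for illustration):

class IndexControllerTest extends ControllerTestCase
{
    public function testHomePageDispatchesCleanly()
    {
        // Browse to the root URL and check that routing and dispatch work
        $this->dispatch('/');

        $this->assertResponseCode(200);
        $this->assertController('index');
        $this->assertAction('index');
    }
}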

Form testing is done in two parts: printing the form out to make sure the form elements appear on the page, and testing the form processing and validation. We test it with dispatch() and then test the output. We can also use a data provider to throw an array of data at the form to make sure it fails. When we test the form with a data provider we can add in a bunch of well known security exploits to make sure that our application doesn't produce unwanted results. Additionally, when we find new exploits we can add these to the test data as well.
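
A data provider driven form test might look something like this (the form class, and the assumption that it rejects these particular inputs, are hypothetical):

class ContactFormTest extends PHPUnit_Framework_TestCase
{
    /**
     * @dataProvider invalidDataProvider
     */
    public function testFormRejectsBadInput($email, $message)
    {
        $form = new Application_Form_Contact();   // hypothetical form class
        $this->assertFalse($form->isValid(array(
            'email'   => $email,
            'message' => $message,
        )));
    }

    public function invalidDataProvider()
    {
        // Invalid input plus a couple of well known exploit strings
        return array(
            array('not-an-email', 'hello'),
            array('user@example.com', '<script>alert(1)</script>'),
            array("' OR '1'='1", 'hello'),
        );
    }
}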

When unit testing models it is important to think just about the business logic of the application. Model testing should not include database testing. Database testing checks that records are getting created, updated and so on, and that the correct encoding is being used. It also makes sure that all triggers and stored procedures on the database side actually get run.

There are one or two things to think about when writing database unit tests. The database should be reset to a known state at the end of every test, so if you write a record, make sure you delete it or truncate the table as well. This ensures that tests do not influence each other; otherwise a failure part-way through one test can leave data behind that makes later tests fail. We also need to watch out for unpredictable fields or types, such as auto increment fields or date fields defaulting to CURRENT_TIMESTAMP; there is no way to know what value we will get from the database in these cases.
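
In practice that reset usually lives in setUp() and tearDown(), along these lines (a sketch only; the table, adapter and credentials are my assumptions and should point at a test database, never production):

class UserMapperTest extends PHPUnit_Framework_TestCase
{
    protected $db;

    public function setUp()
    {
        // Connect to a dedicated test database (credentials are illustrative)
        $this->db = Zend_Db::factory('Pdo_Mysql', array(
            'host'     => 'localhost',
            'username' => 'app_test',
            'password' => 'secret',
            'dbname'   => 'app_test',
        ));
        $this->db->query('TRUNCATE TABLE users');
    }

    public function tearDown()
    {
        // Return the table to a known state so tests cannot influence each other
        $this->db->query('TRUNCATE TABLE users');
        $this->db = null;
    }
}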

Once we have worked out all of the model tests we can then create the database tests. It is a good idea to create a mock database and fill it with a sample dataset, perhaps using XML files to store the data. When we use data providers we can add more files to add data or inspect the current data.

Of course, although we would like to, we sometimes can't test everything in the application, so it is important to reach a balance between desire and reality. The desire is to test about 70% of the code base, use test driven development and write clean tests. The reality is that we test what is important or what counts first, or we find the unknown quantities in the application and test those as a priority. We should also combine unit testing with integration testing.

Time saving can be achieved by using a continuous integration (CI) system that will test, create documentation and produce reports for us without us having to do anything.

It is also important never to test against the production database; always test on a test database, and it is a good idea to truncate the tables before starting.

During this session there must have been a leak in the roof as a section of ceiling tile fell into the room, right onto a guy sat two rows from the back. It startled him so much that he let out a yelp of surprise!

If you want a look at the code that Michelangelo went over in the talk then head over to his Zend Framework Unit Testing project on github.

joind.in page (with talk slides).

Practical Applications of Zend_Acl
Rowan Merewood @rowan_m

Image removed.
Photo by phpcodemonkey

The jokes, either told by Rowan or hidden in the content of the slides, made this talk stand out from the rest. It was an interesting and highly technical talk that made quite a few things in the ACL world clear for me. He made me feel better about my own frustrations by saying that ACL was complicated, which I had only hitherto suspected ;). Some of Rowan's slides consisted of lots of code which, although he explained it in detail, I will really need to download and revisit at some point.

Essentially, ACL is part of the gold standard for security, which can be said to be Authentication, Authorisation and Auditing. There is no right answer about when to use ACL; that decision must be made on an application-by-application basis.

An ACL consists of roles, resources, privileges and assertions:

  • Roles:
    A role is a named group of privileges, and roles may inherit from other roles. Be careful when creating roles, especially to avoid circular dependencies. Role structures can become overcomplicated, but the main rule of thumb is to try and keep the role hierarchy nice and flat.
  • Resources:
    These are essentially objects which users can interact with.
  • Privileges:
    This is a simple string that qualifies the operation a role might want to perform against a resource. It should share a vocabulary with your operations; for example, use CRUD-style names to make it clear what each one does.
  • Assertions:
    This is a class that runs some code which takes a role, a resource and a privilege and returns true or false depending on whether the user is able to perform that action (see the sketch after this list). For example, a "user" can "view" a "group photo" if "the user is a member of the group". This does not cover things like a user not being able to add a photo to a group that doesn't exist, which should be in the business logic of the application.
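
As a hedged illustration of that last point, a custom assertion for the group photo example could look roughly like this (the model classes and methods are hypothetical; only the Zend_Acl_Assert_Interface signature is real):

// A sketch of a custom assertion for the "group photo" rule above
class My_Acl_Assert_IsGroupMember implements Zend_Acl_Assert_Interface
{
    public function assert(Zend_Acl $acl,
                           Zend_Acl_Role_Interface $role = null,
                           Zend_Acl_Resource_Interface $resource = null,
                           $privilege = null)
    {
        // Allow the action only when the user belongs to the photo's group
        return $role instanceof My_Model_User
            && $resource instanceof My_Model_GroupPhoto
            && $resource->getGroup()->hasMember($role);
    }
}

// Attached when the rule is defined, for example:
// $acl->allow('user', 'groupPhoto', 'view', new My_Acl_Assert_IsGroupMember());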

The classes involved in Zend Framework ACL are Zend_Acl, Zend_Acl_Role and Zend_Acl_Resource. After setting everything up it is possible to use something like the following:

$acl->isAllowed('user', 'resource');
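
For context, a minimal setup leading to that call might look like the following sketch (role and resource names are made up for illustration):

$acl = new Zend_Acl();

$acl->addRole(new Zend_Acl_Role('guest'))
    ->addRole(new Zend_Acl_Role('member'), 'guest');   // member inherits everything guest can do

$acl->addResource(new Zend_Acl_Resource('photo'));

$acl->allow('guest', 'photo', 'view');
$acl->allow('member', 'photo', array('add', 'comment'));

var_dump($acl->isAllowed('guest', 'photo', 'view'));   // true
var_dump($acl->isAllowed('guest', 'photo', 'add'));    // false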

One tip that Rowan gave when setting up ACL was that it saves a lot of headaches if you give permissions to groups of users, rather than to users directly. With this approach you can always create another group that has exactly the permissions a given user needs.

When to attach ACL rules depends on the application. They can be attached to the controller and action (which is the method used most in tutorials), to the model, to the business logic, or to all of them, but the more times you check the permissions of a user, the bigger the performance hit the application will take.

When unit testing your ACL rules you should pass the Zend_Acl object into your wrapper. Use a factory so that caching can be used without becoming part of the ACL classes themselves.

The final bit of advice Rowan gave was to think about what you want to protect and then test your solution with realistic data, not just simple test data. More realistic data will give more realistic results, whereas simple test data can leave gaps in your logic that you won't be able to spot. It is also important to assume that you are wrong: the ACL you build will almost certainly have security holes in it somewhere.

joind.in page (with talk slides).

Database version control without pain
Harrie Verveer @harrieverveer

Image removed.
Photo by phpcodemonkey

Harrie started the talk off by saying that he had asked Twitter users to give him some jokes to start his talk with that an Englishman would understand and showed a few of the best responses. Although some of them were good, they were mostly terrible, and the best advice he got was just to be himself, so he was.

SVN and Git provide nice source control mechanisms, but there is one thing they miss out completely, and that is the database. After explaining a couple of different strategies and their obvious and insurmountable flaws, Harrie said that there is no silver bullet, which drew a collective sigh from the audience. One of the main practices in use today is called the Simple Patching Strategy.

The Simple Patching Strategy involves writing any changes made to the database into a file, so that the file can then be run to apply the changes to a database of the same version. This file will mainly contain ALTER TABLE commands, although it can also involve dropping or creating tables or even changing configuration data. For this strategy to work, each patch needs to update an options table to set the current patch number, as it is essential to know what version you are on before you try to patch.

Writing these patches is important, but it is also important to create rollback patches that can revert the database to the original version. In order to make rollbacks work the patches themselves must not be destructive: dropping a column will lose data, so adding it back will not get the data back. Backing up the data before patching is a better approach, but it is exactly this problem that we are trying to overcome. The other solution, which Harrie didn't cover, is to patch in a non-destructive way. For example, say we want to move data from a user table to another table: we create the new table and copy the data across, but leave the original data in place. This way, if we need to roll back we don't need to do anything, as all of our data is still present. The only issue is that the old data quickly goes out of date and adds to the size of the database without serving any function. The solution here is to remove it later on, when you are sure that the patch was a success.

It is also important in this strategy to create an Install.sql file containing the full, up-to-date schema. This makes it easier for new developers, or for open source projects, to install the application from scratch. It is also a good idea to create an install file that creates dummy content.

Nasty things will happen if you run a patch twice - so automating things is a massive must. Here is some pseudocode that Harrie showed that will run the patches up to the current version.

write a patch script:

$patch = getDbVersion();

while (patchFileExists($patch + 1)) {
    runPatch(++$patch);   // apply the patch and bump the stored version number
}
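
As a rough illustration, that pseudocode could be fleshed out in PHP along the following lines (the options table, patch directory, credentials and the assumption of one SQL statement per patch file are all mine, not Harrie's):

// A hedged sketch of the patch runner described above
$pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');

function getDbVersion(PDO $pdo)
{
    return (int) $pdo->query("SELECT value FROM options WHERE name = 'db_version'")->fetchColumn();
}

function patchFileExists($version)
{
    return file_exists(__DIR__ . "/patches/patch-$version.sql");
}

function runPatch(PDO $pdo, $version)
{
    // Each patch file is assumed to contain a single SQL statement for simplicity
    $pdo->exec(file_get_contents(__DIR__ . "/patches/patch-$version.sql"));
    $pdo->prepare("UPDATE options SET value = ? WHERE name = 'db_version'")
        ->execute(array($version));
}

$patch = getDbVersion($pdo);
while (patchFileExists($patch + 1)) {
    runPatch($pdo, ++$patch);
}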

The main problem with the Simple Patching Strategy comes with branches and merging: patches are fine if branches happen in order, but this never happens. We could use naming techniques and other strategies, but mess can still occur, e.g. we might end up with the same table names doing different things, so running a patch might break the table. To synchronise two database structures without unexpected data loss, communicating all the steps needed to get from A to B is inevitable and will probably be a manual process.

Harrie also covered a couple of tools that try to overcome the database version problem.

Phing & DbDeploy: we could use Phing to create a migrate-database target and use the dbdeploy Phing task to run the same patching strategy. This uses PHP and SQL and, although not very feature rich, will get the job done.

Liquibase is a Java system that uses XML files to define the schema. This makes it database independent, but we can also put SQL commands inside these files if we have to. Migration is the primary function, but Liquibase also supports updating, reverting, tagging, and will allow you to generate XML from existing tables and patch files. It has wide DBMS support, essentially anything with a JDBC driver, and has good documentation.

DB Schema Manager. This is a Zend Framework plugin created by Akrabat (Rob Allen) that runs SQL commands, but also allows PHP patching of the database. This means we can do things like create encrypted passwords or similar. You can find out more about the DB Schema Manager on Akrabat's website.

Doctrine Migrations is the Symfony solution to this problem and is ORM specific. It uses YAML files to describe the patches, rather than storing them as SQL.

This talk was informative and gave me a few things to think about when dealing with database updates. Overall it was very well researched, although I was thinking throughout that this problem has already been largely solved in Drupal, which was missing from the talk. Perhaps Harrie didn't have time to go through every solution, but it was probably worth a mention.

joind.in page (with talk slides).

Framework Shootout!
Marcus Deglos @manarth

Image removed.
Photo by Stuart Herbert

This session featured a panel of framework experts consisting of Sam de Freyssinet (Kohana), David Zuelke (Agavi), Derick Rethans (Apache Components) and Rob Allen (Zend Framework), all chaired by Marcus Deglos. Although it could be said that David Zuelke dominated the panel a little, I think he brought a certain amount of enthusiasm, controversy and joking that made for an even livelier debate than if he hadn't been there.

The audience were also given the opportunity to tweet their questions about PHP frameworks with the tag #phpnwshootout, which added some good questions to the discussion. Overall this was a good way to end a day of brain mashing technical talks with a friendly discussion about PHP frameworks and how they are important to the rest of the PHP industry and community.

joind.in page.

Finish

Image removed.
Photo by Stuart Herbert

The final session consisted of a round up by @phpcodemonkey. After Jeremy got through thanking everyone, Rick stood up and made sure that Jeremy himself was thanked for organising the event. There was a large and extended round of applause for him, and it was clear that there was a lot of love and respect in the room for him and for his efforts.

The final item was to give out some prizes. Throughout the day there was a bag available that allowed people to drop their names in with the chance of winning a book or tickets to another PHP event. For the first time in three years of going to this conference my name was called out and I won a data.gov.uk t-shirt, which is a no-frills t-shirt. Thanks data.gov.uk!

After all this there was a mad rush to get to the bar and drink it dry, helped along by the fact that the bar was free for the first two hours. I had to leave at about 9:20pm, but the party and Mario Kart tournament were still in full swing.

joind.in page.

Image removed.
Photo by phpcodemonkey

Image removed.
Photo by akrabat

Day 2

Abstracting functionality with centralised content
Michael Peacock @michaelpeacock

I usually have a little bit of trouble getting into Manchester on a Sunday and this was no exception, which meant that I missed the first few minutes of the talk.

From what I did see, I thought that Michael had an interesting concept that seems to be a common theme among content management systems like WordPress and Drupal. He talked about how a centralised content type could be implemented and how things like comments and ratings then only need to be added once.

joind.in page (with talk slides).

Using Zend_Tool
Kathryn Reeve @BinaryKitten

Looking a little worse for wear from the partying the night before (even though she doesn't drink alcohol) and starting a little later than planned, Kat still gave a great introduction to Zend_Tool. I have used Zend_Tool in the past and so have seen some of its capabilities, but Kat's talk was so content rich that I learned a lot more than I knew before.

Zend_Tool is essentially a mini framework within Zend Framework that allows the setup of Zend Framework projects. It comes with pre-set providers and actions, but these can be extended with plugins. The tool is available in the full Zend Framework package (in the bin folder), from SVN or Git, and also in Zend Server.

To use the tool you need to set the environment variable ZEND_TOOL_INCLUDE_PATH=/path/to/lib. This only needs to be temporary, but you can make it permanent if you like. You should also add the bin folder and your PHP binary to your path or the tool won't be able to run. To test that you have everything up and running you can run the following command:

zf show version

This will print out the current Zend Framework version number that you are using. Once this works you can then run the setup commands, for example:

zf --setup storage-directory
zf --setup config-file

There are other configuration options available, the command zf --setup will print a list of the available setup commands.

The general syntax for a Zend_Tool command is below; note that you can use a ? in place of any command, or --help at the start of the command, to get more information about that section. To see a list of all of the actions available just type zf on its own.

zf [--global-opts] action-name [--action-opts] provider-name [--provider-opts] [provider parameters ...]

To create a Zend Framework project in Zend Tool use the following command:

zf create project .

This will add the project to the current directory. You can then do different things with the project like enabling the layout.

zf enable layout

Or create a controller and an action in that controller. These commands will also create unit tests in the tests directory, and will overwrite any unit tests you have already written.

zf create controller controllername
zf create action actionname controllername

It is also possible to extend Zend_Tool with your own providers so that new actions can be run through the zf command. Kat said that Cal Evans was using Zend_Tool to tweet with, so there is a lot of potential in what can be done with it. To do this you create a namespace folder inside the library folder, then create your tool folder inside that namespace folder, and finally place your provider class file in the tool folder. Your class needs to extend Zend_Tool_Framework_Provider_Abstract and you should throw errors using Zend_Tool_Project_Exception.
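
As a hedged example, a provider can be as small as the following sketch (the class name and greeting are made up; the base class and response call follow the standard Zend_Tool provider pattern as I remember it):

// A minimal custom provider; once on the Zend_Tool include path it should be
// callable as: zf say greeting [name]
class My_Tool_GreetingProvider extends Zend_Tool_Framework_Provider_Abstract
{
    public function say($name = 'world')
    {
        // The response object writes output back to the zf command line
        $this->_registry->getResponse()->appendContent('Hello ' . $name . '!');
    }
}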

This was an interesting talk. Kat has a unique and amusing style of presenting that personified system components. She was knowledgeable about the subject and clearly uses Zend Tool in her day to day tasks.

joind.in page.

Turbocharge Your PHP With Nginx
Errazudin Ishak @errazudin

Image removed.
Photo by phpcodemonkey

Nginx is pronounced "engine-X" and is essentially a free, open source and lightweight HTTP server created in Russia. It is faster than Apache, mainly because Apache does a lot of things while Nginx only does 5 things, 4 of which it does faster than Apache. Nginx is an HTTP server at its core, but it can also act as a reverse proxy, an IMAP or POP3 proxy server, a load balancer or even a media streamer. The popular US television site Hulu uses Nginx to stream its content. Nginx can handle lots of requests with less memory and less CPU than Apache uses. It also has support for URL rewriting and virtual hosting.

Errazudin went into a very short analysis of HTTP server market share; Nginx was the only one to gain share (although very small) in the last 6 months and is one of the top 3 most used HTTP servers in the world, after Apache and IIS. Some notable sites using Nginx to serve their content are hulu.com, torrentreactor.net, sourceforge.net, github.com and wordpress.com. PHP is compatible with Nginx and is run using FastCGI.

There was then a detailed look at benchmarks of Nginx alongside Apache, and in every instance Apache was either much worse or a little bit worse than Nginx. In Errazudin's experience Nginx is at its best when handling lots of concurrent users.

I had never heard of Nginx before this talk and Errazudin made a convincing case for its use in production environments. I will certainly be giving it another look when I get a chance.

joind.in page.

PHP through the eyes of a hoster
Thijs Feryn @thijsferyn

Thijs works at Combell, a hosting company based in Europe. His main message was that any hosting provider is a genuine stakeholder in the PHP community, as any web app needs to be hosted.

This wasn't a technical talk, but he did show one little trick that makes the FTP user account the same as the PHP CGI user:

SuexecUserGroup dev dev

Image removed.
Photo by phpcodemonkey

Even though PHP 4 is at the end of its life, and as much as we might wish otherwise, it is far from dead. PHP has become a popular language because it is easy, cheap and stable. This means that everyone can write a PHP script, but not everyone has what it takes to be a real developer. The majority of hacking/abuse cases are PHP related, but the real issue is a combination of the quality of the code, network and server security, and the PHP version and configuration. Developers can sometimes forget that they are responsible for making things work.

MySQL's EXPLAIN is a good way of finding out how a query behaves behind the scenes and is invaluable when hunting for bottlenecks in applications. It should be an essential tool for every developer: knowing how to construct a query is one thing, but knowing how it works behind the scenes can greatly increase performance.

Some other tools that Thijs recommended using to speed up applications were APC, Memcached, Gearman and Varnish. Varnish is especially important when serving different parts of the page statically or dynamically.

joind.in page (with talk slides).

Community works!
Michelangelo van Dam @DragonBe

Image removed.
Photo by phpcodemonkey

This talk by Michelangelo (his second of the event) was more of an inspirational and motivational talk about why people go to conferences and why people set up and attend user groups. His message was that it is all about giving something back to the community, and that all it takes is one person to start something and other people will follow. If no one else has done it then it might as well be you.

Michelangelo had heard a quote once that said that "user groups are the hippies of the new age", which is true in that we all give and contribute to the community and share a love of a subject. I was in total agreement with Michelangelo when he said that he goes to his user group (PHPBenelux) because it is an extension of his family. I had never really thought of it like that before and it was an interesting and revealing take on why I devote so much of my spare time writing technical blogs and attending various user groups around Manchester. He used the phrase "warm and fuzzy feeling inside" to describe how he feels when he is part of a community, which is a fair reflection.

Contributing to a community can mean contributing to open source projects (through writing code, documentation or even translation), writing blog posts, and attending or organising events such as user group meetings.

The take-home message really was: how can you do your own little bit for the community?

joind.in page (with talk slides).

After the morning sessions were over a group of about 20 of us went to the nearest pub to get something to eat and talk about the conference before some of the delegates had to leave for the airport. The sun was out so we all sat in the beer garden and enjoyed the weather, which made a nice finish to the conference. Whilst we were sat there we received word that two guys from Newcastle had liked Michelangelo's talk so much that they decided to create the PHPNE user group and within 42 hours they had set up a PHPNE site and created a PHPNE twitter account.

In conclusion, this PHPNW conference was definitely the best yet and I'm already looking forward to what PHPNW11 will bring. Due to its popularity I imagine that next year's event will probably have to be held at a slightly bigger venue, which is partly a shame because it is such a brilliant venue, but also quite good as it means the event has received real recognition both nationally and internationally. Finally, I would just like to thank Jeremy Coates, the team from Magma Digital, the speakers and the rest of the volunteers for making this such a brilliant event.
