Legacy Code Evaluation Part 2 – Back End Code Evaluation and Tools

Intro

All of us will encounter legacy code, and most of us will ‘inherit’ code to maintain. In our first post on legacy code, we talked about gathering the things you need to start working on a project. In this post, we will talk about how to evaluate the backend code you will most likely have to dig into to build new features or squash bugs.

While you will encounter a significant number of UX or front-end issues when starting on a new project, you are just as likely to trace an issue back to the underlying backend code. The backend not only forms the foundation of your web app, it’s also your application’s last line of defense. At minimum, backend issues will cause confusing results and headaches. At worst, they will expose some of the most vital parts of your web app to exploits.

Most backend code is the result of many hours of hard work; it can be extremely confusing to wrap your head around the basic concepts, let alone the deeper portions of the code. We’ll discuss some basic strategies and explain some of the tools we use to explore code, PHP code in particular, when we take on a project with some ‘baggage’.

Strategies

I like to split the application into small semantic sections. You can break an application down into ‘workflows’, the actions a user may perform with the application, or into ‘roles’, the things a particular actor uses the web app for. While this may sound like a very high-level activity for inspecting backend code, I think it’s crucial for understanding the application itself.

Once you have split the code into sections, trace each workflow end to end. Follow the code from the beginning of an action to the resulting output. Tracing through the code this way will allow you to understand it completely.

Debugging and Tracing Tools

Xdebug

This is one of my favorite tools and the first thing I install when I set up a new environment. Not only does it give me much easier-to-read error output, it also provides some great tools for interactive debugging. Seriously, if you do nothing else I suggest, at least install Xdebug.

Remote Interactive Debugging – MacGDBp & chrome-xdebug-enabler

Interactive debugging is the best tool for finding that super stubborn bug or figuring out why something isn’t working as you expect. With a remote interactive debugger you can set breakpoints and view exactly what the code is doing at a given point in time. You can inspect a variable’s value and watch it change as you step through the lines of code. This is something you should get excited about.

I use MacGDBp, a Cocoa-based debugging tool that lets you set breakpoints and inspect the code when they’re hit. I’ve been told that PHPStorm and even Sublime have some interactive debugging tools, but I have not used them. MacGDBp is a bit dated, and I think almost abandonware, but it is still an indispensable tool in my arsenal. Check out the MacGDBp project docs for more info about the feature set and how to install it.

Personally, I would recommend using MacGDBp along with Xdebug and the Chrome extension ‘chrome-xdebug-enabler’ to interact with your debugger. Using this extension you can toggle debugging on and off as you wish when refreshing the page.
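For reference, the relevant php.ini settings look roughly like this (a minimal sketch for Xdebug 2.x; the extension path and port vary by install):

; Load Xdebug and turn on remote debugging
zend_extension=xdebug.so
xdebug.remote_enable=1
; The machine where MacGDBp (or your IDE) is listening
xdebug.remote_host=localhost
; 9000 is the default DBGp port MacGDBp listens on
xdebug.remote_port=9000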

I should note that while I personally use MacGDBp with vim, many other editors also include remote debugging tools. PHPStorm has this functionality built in, and there are plugins for SublimeText and other full editors. We will explore those in a later post.

PHPLint and PHPCodesniffer

There are two great tools for getting some high level information about a PHP codebase: PHPLint and PHPCodesniffer.

PHPCodesniffer evaluates code against a provided set of rules and standards and outputs a report that can give you a good idea of how ‘bad’ things look under the hood. It’s actually two tools: phpcs, which evaluates the code, and phpcbf, which attempts to fix it. Personally, I like to use some discretion when tweaking code, so I would recommend using phpcbf sparingly.
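Typical usage looks something like this (PSR-2 is just one choice of standard; point it at whatever ruleset and directory fit the project):

# Report coding standard violations
phpcs --standard=PSR2 app/

# Attempt automatic fixes — run this on a clean VCS checkout
# so every change it makes can be reviewed (or reverted)
phpcbf --standard=PSR2 app/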

PHPLint is another tool, very similar to PHPCS, that reports on a codebase based on rules and validation sets. It is also very useful for generating user docs from docblocks, which can be EXTREMELY helpful when you need to collect a lot of information about a large codebase and create some baseline documentation for your team.

Unit Testing

Unit testing is often touted as the best way to prevent new bugs and find conflicting code. I would also note that most people ignore the fact that introducing unit testing to legacy code is a painful process. Code written without testing in mind is often riddled with hard-to-manage dependencies and ‘magically’ required code.
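To make that concrete, here is a minimal sketch of a ‘characterization test’, one way to start testing legacy code by pinning down what it currently does (InvoiceCalculator is a hypothetical legacy class):

<?php
use PHPUnit\Framework\TestCase;

class InvoiceCalculatorTest extends TestCase
{
    public function testTotalMatchesCurrentBehavior()
    {
        $calculator = new InvoiceCalculator();

        // Assert the value the legacy code returns *today* (100 plus 10% tax),
        // so later refactoring can't silently change the behavior
        $this->assertEquals(110, $calculator->total(100, 10));
    }
}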

This is a topic that actually deserves its own blog post, but I do want to mention it in this one.

… to be continued

In my next post, I’ll include some instructions on how to install Xdebug on your local *nix development environment.

Legacy Code Evaluation Part 1 – Intro

All of us will encounter legacy code, and most of us will ‘inherit’ code to maintain. Even on the freshest of projects, 99% of the time you will encounter some ‘baggage’ code that you will have to maintain. This is especially true as a contractor on any PHP project. Deservedly or not, PHP has a bad reputation for code quality.

The good news is that you can mitigate some of the worst aspects of a legacy project by doing some work up front. Making sure you have all the assets you need before you start can prevent embarrassing emails to the client asking for random things, and it generally makes your life easier.

For this first of three blog posts about legacy code, we’ll split this into two lists: things you need, and nice-to-haves.

Things you need:

Server Access 

This should be a no-brainer. You need access to the server to make sure you have the most up-to-date code, and hopefully you will be deploying some updates in the near future. Yes, this may seem very obvious, but you would be surprised how easy it is to forget this step in the excitement of a new project.

Ask the client how they access the server and how the previous team accessed it. Make sure you actually try to log in with these credentials as well. After you successfully get access, you may need to deactivate, or at the very minimum change the passwords of, the previous server users.

Code in Version Control

You will also want access to the most up-to-date version of the codebase, hopefully in some form of version control. Again, while this seems like a no-brainer, it is very important to track the changes you make once you start to dig into the code. It can also provide some insight into what the previous developers were thinking.

Once you have access, you can convert the code to your VCS of choice or continue to use the one provided. You will really want to consider keeping the VCS the same if you are working with other existing teams or with a client who needs access to the code.

It is also worth your time to ensure that the code in the VCS is synced and up to date with what is on the production server. There are many file comparison tools that can compare entire directories, which we will cover in our next post. While you can assume a previous team that kept the codebase in version control kept things up to date… as the Russian proverb goes: Trust, but verify.

Documentation

Ask for any documentation the client can provide, good or bad. Something is better than nothing.

You should also plan on converting this into some form of living document you can edit and share easily. A self-hosted wiki works great for tracking changes, but so does a markdown file stored in the repo itself. Anything easy to edit removes another barrier to updating documentation, an already loathsome task.

You should be documenting everything as you explore the project. Document the file and database structure, document the existing workflows. Ask the client how things “should work” and then note how things are broken. Get it all into a living document so you can track your progress.

Nice to Haves:

A Shared Project Management System

Set up a way to communicate your project status to the customer. This will also provide a great place for you and the client to document bugs and future updates.

A shared system will allow you to manage your project beyond instant messages and emails. It also provides instant feedback to the client, letting them see ‘exactly what they are paying you for’ and making it easier to exchange ideas.

If you’re lucky, the client may already have one (and be trained on how to use it). It’s also very likely the client does not know what a Project Management System is or how to use one, in which case priority number one is showing them how useful it can be.

A Separate Staging and Local Development Environment

Tools like Docker and Vagrant allow you to spin up servers quickly and provide tools to re-create the server after you mess it up. This will become very important as you dig in and break things while trying to determine exactly what is wrong.

Our next post will focus on some frameworks and tools we commonly use to evaluate legacy code. We’ll also provide some basic examples of how to use these tools.

AngularJS 2 & AJAX

Intro

The AJAX paradigm really launched the web into primetime some 10+ years ago, and now it’s a ubiquitous part of the web landscape. Most web apps now resemble a native ‘app’ in their UX, and users expect content to update dynamically without a page refresh. In the past, we used great tools like jQuery to cobble together a basic app to interact with our page and an AJAX backend, but a framework like AngularJS adds some sanity and structure to what could otherwise become a mess of spaghetti code.

The most common use cases for AJAX are dynamic content and form interactions. For example, if you have a large list of tabular data, you can load it via AJAX and show the basic page to the user without having to render the table on the server. Forms can be validated with JS and sent to the server in the ‘background’ so the user doesn’t have to leave the page. There are MANY other use cases for AJAX, but I think these cover the majority you will encounter when developing a web app.

HTTP / XHR

Almost any browser you will encounter, or your clients will use, has access to the HTTP API known as XHR (it’s safe to assume that anything newer than IE 5.5 will give you access to this API). While some browsers offer Fetch support, I will skip that for the purposes of this blog post.

AngularJS Services

In general, you will provide AJAX access via a service. Services are part of the Dependency Injection pattern that makes AngularJS so modular. Encapsulating all the actions that make AJAX calls in one piece of code allows for better re-use and easier testing.
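A hedged sketch of what such a service might look like in Angular 2, using the Http class from @angular/http (the endpoint URL and names are illustrative):

import { Injectable } from '@angular/core';
import { Http, Response } from '@angular/http';
import 'rxjs/add/operator/map';

@Injectable()
export class HeroService {
  constructor(private http: Http) { }

  // All AJAX access for heroes lives here, so components stay clean
  // and tests can swap in a mock service
  getHeroes() {
    return this.http.get('app/heroes.json')
      .map((res: Response) => res.json());
  }
}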

Observables vs Promises

Two often-used tools/patterns for working with asynchronous data are Observables and Promises. While you may be familiar with Promises as a pattern, and most likely have used something like them in the past, Observables were a pretty new concept to me.

In short, Observables are “a push based collection” using the observer pattern. Put simply, this is a queue of objects with some amazing manipulation methods and hooks that let you detect when something occurs within the queue. It’s pretty great for dealing with a dataset that needs to be lazy loaded or may be updated during the life cycle of the application.

Here is a simple example based on the angular.io Tour of Heroes example, showing an observable request.

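A minimal sketch of that pattern (assuming a HeroService like the one above; component fields are illustrative):

// Inside a component: subscribe to the observable the service returns
getHeroes() {
  this.heroService.getHeroes()
    .subscribe(
      heroes => this.heroes = heroes,    // runs on each emitted value
      error => this.errorMessage = error // runs if the stream errors
    );
}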

A promise is “a way to define actions or computations once an async event completes”; an observable is “a way to define computations or actions that happen when one or more events in a stream occur”.

Here is a simple example based on the angular.io Tour of Heroes example, showing a promise-based request.

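A minimal sketch of the promise flavor, again with illustrative names; here the service converts its observable using RxJS’s toPromise:

import 'rxjs/add/operator/toPromise';

// In the service: a one-shot promise instead of a stream
getHeroes() {
  return this.http.get('app/heroes.json')
    .toPromise()
    .then((res: Response) => res.json());
}

// In the component: consume it like any other promise
this.heroService.getHeroes()
  .then(heroes => this.heroes = heroes)
  .catch(error => this.errorMessage = error);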

The key takeaway from comparing these two tools: observables and promises are both great tools for async programming like AJAX, but observables are more like a stream of data than a ‘one time request of data’. We’ll touch more on this in future blog posts.

Here are some code examples that borrow very heavily from the HTTP Client example found on the Angular.io site.

Code Example: Promise

http://plnkr.co/edit/aB69cfM5soJzz9SPf1e7

Code Example: Observable

http://plnkr.co/edit/t6zDqSAjGNvSKNzIQJtR

Understanding AngularJS 2 vs AngularJS 1: Part 2

Components instead of Controllers

While components have been around the web in some shape or form for a while, Angular 2 is built around these amazingly flexible elements.

Components in AngularJS 2 will entirely replace controllers. In fact, they will also replace the concept of a module. They look like a directive, but actually act like a controller.

A component is essentially an HTML element that uses Shadow DOM to encapsulate its own scope and behavior. What this really means: your component has its own variable scope. What happens in your component, stays in your component. You can however inject your components into other components or decorate them to add functionality.

Here is the burrito example from Part 1, implemented in AngularJS 2, showing a component in action.
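A minimal sketch (using the final Angular 2 component syntax; the image source is left as a placeholder, as in the original directive):

import { Component } from '@angular/core';

@Component({
  selector: 'burrito',
  template: '<img src="[ASSET SOURCE]" />'
})
export class BurritoComponent { }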

Decorators via Annotations

Decorators are a simple design pattern that we can now use to update the values and context of a component without introducing new behavior.

Seriously, this is great. You can apply a decorator to your directive and change its values without actually tearing into its guts. Change your component’s color from ‘Red’ to ‘Blue’ at runtime, externally, without hacking away at its internal functionality or exposing some getter after the fact.
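To sketch the idea (this is a hypothetical TypeScript class decorator, not an official Angular API):

// A made-up decorator that sets a component's color from the outside
function Color(value: string) {
  return function (target: any) {
    target.prototype.color = value;
  };
}

@Color('Blue')
class Burrito {
  color: string; // reads 'Blue' at runtime, with no changes to Burrito's guts
}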

Currently, this is available via the annotations provided by TypeScript, but decorators have also been proposed as a future ECMAScript standard.

A Focus on Dependency Injection

AngularJS 1.x certainly used the concept of dependency injection and promoted it with the usage of services and factories. I don’t think I need to spend a lot of time stumping for why and how this makes for better encapsulation and easier re-use of code.

AngularJS 2 will take this a step further and encourage you to use dependency injection with components. Write a login component once and inject it wherever you need it.

Zone.js

Previously, we needed to let the digest/apply loop cycle through our variables, or use the oft-maligned scope.$apply, to let the app know that something changed. Zone.js will now handle all of this.

Zones is able to encapsulate the entire scope of your component and is aware of when asynchronous events start and stop.

While getting away from $apply and not having to worry about your scope is great, here comes the best part: Zone.js can speed up execution as much as 5x compared to AngularJS 1.x code.

Here is a great talk about Zones by Brian Ford given at the 2015 ng-conf.

The bad news: Zones are only available in modern evergreen browsers. This means if you are using IE10 or below, you will not get to use this amazing toolset. Angular 2 will fall back to dirty value checking and polyfills to make sure this is all invisible to you, but you will not see the major speed increase you would get from actually using Zone.js.

TypeScript

TypeScript is a superset of ES6 and gives us some amazing features.

AngularJS 2 will be written in TypeScript, but this doesn’t mean you have to use it. I can say that I plan on using it, as I think this is an area worth investing in.
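If you haven’t tried it, here’s a small taste of what TypeScript layers on top of plain JavaScript (nothing Angular-specific; the names are illustrative):

// Static types catch a whole class of bugs at compile time
class Hero {
  constructor(public id: number, public name: string) { }
}

function describe(hero: Hero): string {
  return `#${hero.id}: ${hero.name}`;
}

describe(new Hero(1, 'Windstorm')); // OK
// describe('not a hero');          // rejected by the compiler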

Ok, this sounds great. How do I prepare to upgrade?

I won’t go too much in depth on this, but there will be upgrade paths and plans if you don’t want to just ‘start over’ with your existing Angular app.

The Official AngularJS 1.x to 2 Upgrade Strategy Doc offers some insight on how to upgrade.

Understanding AngularJS 2 vs AngularJS 1: Part 1

As I’m sure you know if you are reading this, the AngularJS team is planning for a major version release “very soon” that will include some major paradigm shifts. A lot of the things you learned while teaching yourself AngularJS 1.x will just not be present in AngularJS 2.

AngularJS 1.x is stable and will definitely be around for a long while with long term support from the original team. But do you really want to pass up on some of those amazing features (3-5x Performance increases, the ability to use Server Side Rendering, bragging rights)? No, of course not.

Just to make sure we can explain the concepts of Angular 2, let’s briefly review the main concepts of Angular 1.

Modules & Controllers

Controllers are just JavaScript objects that create scope (more on this later), setting up attributes and functions. The controller is the connection point for all scopes, functions, and data sources in the application; think of it as a traffic cop that directs the flow of data back and forth between the parts of your application.

A module can be thought of as the container that holds all the different parts of your AngularJS application. These have the advantage of making your code declarative and well encapsulated.

Directives

Directives are custom HTML elements and attributes used to enable and interact with AngularJS. This cool feature is how we instantiate an app (`ng-app`) or create custom elements that transform data and views with JS logic stored in your models and controllers.

Here is an example of a directive in AngularJS 1.3 that replaces a `<burrito />` element with an image of a burrito:

HTML

<div>
  <burrito></burrito>
</div>

JS

var app = angular.module('plunker', []);

app.controller('MainCtrl', function($scope) { });

app.directive('burrito', function(){
  return {
    restrict: 'E',
    replace: true,
    scope: true,
    template: '<img src="[ASSET SOURCE]" />'
  };
});

Scope

The application model and context, defined by an Angular element. Scopes are defined hierarchically, and can be passed between elements.

It’s important to note that Controllers, Services and Directives all have their own nested scopes. This can get messy when you are passing around data and need to make sure you aren’t polluting your own scope.


Databinding

Data binding is the magic that lets you sync data between your models and your view. In a sentence: your model’s data is bound to the view, and your view is bound to your model’s data.

While this is a great feature, it’s incredibly slow: every digest cycle has to check ALL of these bound variables and update them appropriately.

Here is a quick example:

HTML

<h1>Hello {{name}}!</h1>

<fieldset>
  <label>an example of databinding:</label>
  <br>
  <input type="text" ng-model="name" />
</fieldset>

JS

var app = angular.module('plunker', []);

app.controller('MainCtrl', function($scope) {
  $scope.name = 'World';
});

This is that awesome thing that does that sweet ‘live’ updating of values in your forms. Most likely, this was the huge selling point when you first started learning about AngularJS and the ‘reactive’ web.


We’ll explore where these things start to differ in Angular 2 in our next post.

A Talk About API First Development


It was a pleasure speaking at the WAPRO event by Uncoded earlier this month. I got to speak about API first architectures and open source mashups with Erdiko.

API first is a technology paradigm for building apps that is emerging as an excellent way to think about your application and how it grows. We’ve been big fans of it for a while now.

The early days of web software (websites) were desktop first. With the rise of smartphones and tablets, development moved to a Mobile First paradigm. We’re now seeing a shift to API First.

What does that mean?

It means that you are designing your app at the data and interaction level first. This will inform your mobile UI, desktop UI, and public/partner APIs.

We will write more on this topic in posts to come. Till then, download the slides from my talk in Long Beach, CA:

http://arroyolabs.com/downloads/talks/WAPRO-Talk.pdf

jQuery getSelector Plugin


AngularJS JSONP GET Example


This is an example of a cross-domain GET request using JSONP, the most common way to get data. Like the previous example using CORS, this too requires the server to be ‘aware’ of the request so it can reply with the JSONP callback method.

The “P” in JSONP actually stands for padding, which really means “named function”. You pass along a named JavaScript function that tells the server to wrap its response, and when it responds your data comes neatly wrapped in that helper function. This gets around the Same Origin Policy since <script> tags are OK’d for cross-domain execution, which is how we can include JS libs from CDNs and other sources.

This method does require control over the request destination, or the site itself has to support a JSONP callback. Since the response is just a JavaScript function returned from a ‘black box’, there are some serious security concerns (but you’re not sending sensitive data via GET, right?). Also, we should note that this method has VERY limited support for reporting errors. Most people work around this with response checks and timeouts (which someday we may blog about).
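As a rough sketch, a JSONP GET with AngularJS 1.x looks like this (the URL is a placeholder; note the JSON_CALLBACK token Angular replaces with its generated callback name):

var app = angular.module('jsonpExample', []);

app.controller('MainCtrl', function($scope, $http) {
  $http.jsonp('https://example.com/api/data?callback=JSON_CALLBACK')
    .then(function(response) {
      $scope.data = response.data; // unwrapped from the padding function
    }, function() {
      $scope.error = 'request failed (or never called back)';
    });
});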

AngularJS CORS example using httpbin.org


This is an example of cross-domain GET and POST requests to a server with CORS headers enabled. It’s a very simple one-field form that displays the echoed response from httpbin. I chose httpbin since I knew it had CORS headers enabled, and it seemed like a relatively simple tool to play around with.

As the code shows, GET requests are quite simple and look like a regular $http call. The heavy lifting is all done by the server.

The POST request on the other hand is a little more complicated and requires some manipulation of the headers before the request itself.

First, we need to set the Content-Type to form-urlencoded to let the destination know we are posting form data:

$http.defaults.headers.post["Content-Type"] = "application/x-www-form-urlencoded";

Second, we need to delete the “X-Requested-With” header that is sent along with AJAX requests:

delete $http.defaults.headers.common['X-Requested-With'];
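Putting those two tweaks together, a minimal sketch of the POST half might look like this (controller and field names are illustrative):

var app = angular.module('corsExample', []);

app.controller('MainCtrl', function($scope, $http) {
  // Tell httpbin we are posting url-encoded form data
  $http.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded';
  // Drop the header that flags the request as AJAX
  delete $http.defaults.headers.common['X-Requested-With'];

  $scope.submit = function() {
    $http.post('https://httpbin.org/post', 'name=' + encodeURIComponent($scope.name))
      .then(function(response) {
        $scope.response = response.data; // httpbin echoes the form back
      });
  };
});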

Also, to stop this mistruth from spreading any further: a lot of blog posts about cross-domain POSTing with Angular will show you this code:

$http.defaults.useXDomain = true

It’s not a real config value; you don’t need it. Seriously. Try it with and then without it, and you won’t see a difference. (I think the Dojo toolkit may require it; I’m not sure how it made its way into multiple AngularJS examples.)

Cross Domain AJAX & AngularJS: The Same Origin Policy

We are big fans of AngularJS here at Arroyo Labs, and often our Angular Apps require a lot of AJAX to work: a simple GET request to load data here, a form POST there. You know, typical web stuff.

More than once we’ve made an app that needs to make an AJAX request to a different domain than the one our app lives on, and we’ve hit some roadblocks. For one, your browser doesn’t really allow it (I’ll explain this later), and for the most part it’s a pain compared to hitting a local resource.

“Why would you need to do that? Why aren’t you making locally hosted APIs to access this data?!” you may blurt out while reading this. Well sometimes, you have to.

Sometimes you run into a weird legacy architecture or you can’t access the backend code. You may even want to do all of this work with only front-end level code. What if you need the client’s browser to resolve a domain for you? It’s also possible you have been making AJAX requests to other domains and never encountered an issue until now.

The point is, eventually you will have to make an AJAX request to another domain, so let’s talk about it.

The Same Origin Policy

The Same Origin Policy is a security feature built into your browser (and most everybody else’s) for your safety. It prevents scripts from accessing anything outside of the current ‘site’ of the executing script, and by site I mean the same scheme + hostname + port as the page executing the script.

Take a look at this chart, adapted from the Wikipedia page, for a script executing from http://www.example.com/dir/page.html:

http://www.example.com/dir/page2.html – Success (same scheme, host, and port)
http://www.example.com/dir2/other.html – Success (same scheme, host, and port)
http://www.example.com:81/dir/other.html – Failure (different port)
https://www.example.com/dir/other.html – Failure (different scheme)
http://en.example.com/dir/other.html – Failure (different host)
http://example.com/dir/other.html – Failure (different host; exact match required)

You can’t even make a request to a subdomain like http://foobar.example.com without Same Origin stepping in and waving its finger in your face. Try it, and check your console log: “not allowed by Access-Control-Allow-Origin” all over the place.

Why does this policy exist? Imagine an internet where any script could access any domain’s HTML or make a POST ‘on your behalf’ (to websites you are logged into, like your bank or Twitter) right from your browser. That’s a scary place.

So, how do we get around the Same Origin Policy?

There are a few ways to ‘relax’ the Same Origin Policy so you can actually interact with data from another domain.

1) CORS

Introduced relatively recently, these are headers that tell your browser it’s OK to accept requests from another domain. This is known as Cross Origin Resource Sharing, or CORS. With these headers turned on, you can make POSTs & GETs just like you would with an endpoint on the same server as your script.
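For example, a CORS-enabled response might carry headers along these lines (the exact values depend entirely on the server’s policy):

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST
Access-Control-Allow-Headers: Content-Type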

You will need access to the webserver configuration to make this happen, which is not an option for a site you have no control over. The guys over at enable-cors.org do a good job explaining how to enable this for the most common webservers.

Also, I should note, this is the only method that allows you to send a POST request to another domain.

In fact, this is exactly what our httpbin.org example above demonstrates.

2) Proxy (with CORS)

This solution involves a middleman proxy that does have CORS headers enabled and passes your request along to a server that does not. Yes, this is a hacky way to just make a cURL call to a webservice.

There is the hosted flavor (which is inherently unsafe, since the host can see/log everything you send) or the self-hosted variety.

3) JSONP

This is the most common way to make a GET request to another domain, and is probably the method with the least effort required.

The “P” in JSONP actually stands for padding, which really means “named function”. You pass along a named JavaScript function that tells the server to wrap its response, and when it responds your data comes neatly wrapped in that helper function. This gets around the Same Origin Policy since <script> tags are OK’d for cross-domain execution, which is how we can include JS libs from CDNs and other sources.

This method does require control over the request destination, or the site itself has to support a JSONP callback. Since the response is just a JavaScript function returned from a ‘black box’, there are some serious security concerns (but you’re not sending sensitive data via GET, right?). Also, we should note that this method has VERY limited support for reporting errors. Most people work around this with response checks and timeouts.

4) Cross Document Messaging

This method isn’t really possible with the $http object (the traditional way to make AJAX requests with AngularJS), but it’s still a viable option for some cross-domain communication. I think it’s an interesting concept worth exploring, and it would be a cool way to explore promises and error states.
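A minimal sketch of the idea using the postMessage API (the domains here are placeholders):

// Parent page: listen for messages from an iframe on another domain
window.addEventListener('message', function(event) {
  // Always verify the sender's origin before trusting the data
  if (event.origin !== 'https://widgets.example.com') return;
  console.log('received:', event.data);
});

// Inside the iframe: send data up to the embedding page
parent.postMessage({ hello: 'world' }, 'https://my-app.example.com');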

Conclusion

With some work and concessions, you can make cross domain requests. We’ll provide some working code examples soon.