Friday, August 27, 2010

ReadWriteWeb - Why Cloud Equals Better Outsourcing




The following is a post that was written by Appoxy for ReadWriteWeb:

BusinessWeek recently published an article about changes in outsourcing. It got the cloud part right - massive disruption and change in the IT infrastructure stack, both in technology and in which companies hold the power. But it got the outsourcing part wrong.

There will be big changes for large and middle-tier outsourcing companies. But the large won't necessarily get larger. In fact, the combination of cloud and modern programming frameworks makes it perfect for small developers and medium IT shops to get a leg up on the big consulting firms, putting their models - and margins - at risk.

This post explains why cloud makes for better outsourcing. More specifically, why cloud lets you keep a better eye on outsourced development, lets you more quickly correct issues that might arise, and gives you more security when taking ownership of the work.

Read more >>

Friday, August 20, 2010

Minimum Viable Product = Measure Once, Cut Many Times

Developing with RoR + AWS provides incredible agility, making it possible to quickly develop products that people can react to. This combines well with "minimum viable product" theory -- an approach that is rapidly moving from web 2.0 startups to many companies across the spectrum.
From Wikipedia:

A Minimum Viable Product has just those features (and no more) that allows the product to be deployed. The product is typically deployed to a subset of possible customers, such as early adopters that are thought to be more forgiving, more likely to give feedback, and able to grasp a product vision from an early prototype or marketing information. It is a strategy targeted at avoiding building products that customers do not want, that seeks to maximize the information learned about the customer per dollar spent. "The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort."

The idea is not that different from the long-time approach of developing a prototype, except that now the prototype becomes a production version. It may not be released widely, but it's not necessarily built to be disposable. The prototype is expressly created to test market reaction. Prototypes in the past were developed primarily to assess technical challenges or to create a version for internal reaction only.

In this MVP process, requirements and user interface design are still essential. The difference is that it's no longer a matter of working with internal team members through long documents and multi-week processes. Development gets front-loaded into the process much sooner -- which for architects and developers is a great thing, given how eager they are to roll up their sleeves.

Getting to a minimum viable product means being practical and disciplined about taking the vision, focusing on a market and specific use cases, and reducing what's possible to the essential features and flows. Approaches for distilling requirements are similar to approaches for time management: there are many, often opposing, ways to organize to-do lists, but that's because they map to the different ways people work.

One back-of-the-napkin approach to reducing requirements is to take a data-model view and prioritize and group the entities that you'll be tracking. You'll find there are probably 3-4 major data elements, with the others nesting around these. By mapping the flows and actions between these elements, you should have the primary value of the application. Add in straightforward navigation and minimal visualization and design, and you'll have a rough outline of the first agile cycle. The other data elements will accommodate additional features and capabilities and take care of edge cases -- but you'll want to get to these only if and when you find out they're in demand.
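
To make that concrete, here is a minimal sketch for a hypothetical check-in app (our own example, not from the post): three core entities and the single flow that carries the product's value, expressed as plain Ruby.

    # Hypothetical MVP data model: 3 core entities, 1 primary flow.
    User    = Struct.new(:name)
    Place   = Struct.new(:name, :lat, :lng)
    CheckIn = Struct.new(:user, :place, :at)

    # The primary flow of the MVP: a user checks in at a place.
    def check_in(user, place)
      CheckIn.new(user, place, Time.now)
    end

    visit = check_in(User.new("Ada"), Place.new("Ferry Building", 37.7955, -122.3937))
    puts "#{visit.user.name} was at #{visit.place.name}"

    # Comments, badges, friend graphs and the rest are the "other data elements" --
    # add them only if and when real usage shows they're in demand.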


Ruby's object support and gem structure make it easy to build and extend. Rails provides a great framework for structuring applications. AWS enforces a loosely coupled but solid approach to system architecture. This means you can create and adapt applications quickly.

Which means you can measure once and then start developing. And then, based on market data from real use, you can develop again -- without protracted periods of measurement, market research, and requirements cycles. The application becomes the plan.

MVP + RoR + AWS couldn't make for a better combination. (Unless of course, there was a monkey.) **


** Great commercial and back story on the monkey. We recommend.

Wednesday, August 11, 2010

Clickstreams, footstreams, sensorstreams, tweetstreams, and otherstreams

Came across an interesting term today -- "footstream". Jeff Holden at Whrrl uses it to describe a geolocated data event that has a particular meaning or importance to it. Here's his description from a talk he gave at Where 2.0 in 2009.

People vote with their feet. An individual person visits places that are in some way important to that person... Location-based services [can now] provide us with the ability to capture, in digital form, the places people go. And “places” does not mean just the lat/longs, the cities or zip codes or neighborhoods. ... We can capture which businesses or other points of interest individual people visit. This data set is the real-world analog of a clickstream in the Web domain; in fact, we might call it a “footstream.”

This comparison of footstreams to clickstreams is interesting and apt. The pervasiveness of capturing and analyzing clickstream data was explored in depth in a recent WSJ series. To anyone in analytics, interactive advertising, ecommerce, or other consumer tech industries, the practice of capturing clickstreams is nothing new.

What is new is the proliferation of companies getting the data over the past 2-3 years. Several years ago, a website would typically have just a single embed capturing clickstream data -- its analytics program. And if that data was sent to the service provider, it wasn't used beyond providing the analytics service and meeting internal provider needs. Now sites have upwards of 60 services included within their pages that capture clickstream data and the metadata around it. The ad widgets, recommendation widgets, and other services that appear on a page all use the data from the click to provide the appropriate response. They also store and use this data across their networks and for secondary and tertiary purposes (market research, subsequent service requests, selling it to third parties, etc.).

When you compare footstreams to clickstreams, you can see where geolocation is going. You can see that the data being captured will be used to provide benefit now, and stored and processed in the future both for individual users and for the benefit of third parties. You can also see the magnitude of the issues in dealing with this data. Capturing and processing clickstreams is not a simple matter. When servicing a number of high-traffic sites, it quickly becomes overwhelming -- to the point where it's difficult to make use of the data because of its sheer volume and the complexity of the variables (sites, pages, and query-string parameters are just the tip).

As industries grow around more and more realtime, atomic streams of data -- tweets, smart meter data, sensor data -- we're increasingly seeing patterns in dealing with this deluge of streamed data. Capturing, processing, storing, analyzing, and archiving these streams takes thinking. It also takes horizontal scaling of both servers and data storage, and stateless approaches to web application design and development -- something that developing in the cloud helps with immensely. (In a subsequent post, we'll explore these patterns.)
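
As a rough illustration of the stateless capture-then-queue pattern (our own sketch, not from the post), each event is serialized and handed to a queue so any number of workers can drain it independently. The in-process Queue below stands in for a hosted queue such as Amazon SQS, and the worker loop stands in for a pool of cloud instances.

    # Sketch only: a stateless capture step plus an independent worker.
    require "json"
    require "thread"

    EVENTS = Queue.new   # stand-in for a durable, hosted queue (e.g. SQS)

    # Capture: serialize and enqueue; nothing is kept in the web tier, so any
    # number of front-end servers can run this in parallel.
    def capture(stream, payload)
      EVENTS << JSON.generate("stream" => stream, "at" => Time.now.to_i, "data" => payload)
    end

    # Process: workers drain the queue independently; scaling out means adding workers.
    worker = Thread.new do
      while (raw = EVENTS.pop)
        event = JSON.parse(raw)
        # ...aggregate, update counters, archive to durable storage, etc.
        puts "#{event['stream']}: #{event['data'].inspect}"
      end
    end

    capture("clickstream", "page" => "/pricing", "referrer" => "search")
    capture("footstream",  "place" => "Ferry Building", "lat" => 37.7955)

    EVENTS << nil   # shutdown signal for this toy example
    worker.join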

Products and services built to process email, securities trading, ecommerce transactions, even user-generated video are accustomed to these issues. But a number of industries are just beginning to see what they're in for. The Smart Grid and the Internet of Things have been getting a lot of attention recently, but only this summer has there been much mention of the data-handling issues for these areas. That will change, especially as the streams grow in volume and complexity and as the derivative uses become more apparent.

Every web application is now an event transaction processing application. It's just a matter of what type of datastream you're working with.


Tuesday, August 10, 2010

MiniFB Ruby Gem for Facebook Now Supports Facebook Graph API

Our lightweight gem for interacting with Facebook, mini_fb, has been updated for use with the new Facebook Graph API. This also includes support for the new OAuth authentication, and of course you can still use the old API if you must.
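
For a rough idea of how the Graph API support is used, here's a sketch along the lines of the gem's README (check the gem itself for the exact method names and signatures; FB_APP_ID, FB_SECRET, and redirect_uri are your own application's values):

    require 'mini_fb'   # gem install mini_fb

    # Step 1: send the user to Facebook to authorize your app
    auth_url = MiniFB.oauth_url(FB_APP_ID, redirect_uri, :scope => "publish_stream")

    # Step 2: in your OAuth callback, exchange the code for an access token
    token_hash   = MiniFB.oauth_access_token(FB_APP_ID, redirect_uri, FB_SECRET, params[:code])
    access_token = token_hash["access_token"]

    # Step 3: make Graph API calls with the token
    me = MiniFB.get(access_token, "me")
    puts me["name"]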

If you want to get started quickly, you can check out our demo application on github at http://github.com/appoxy/mini_fb_demo.