The Internet of Things
The “Internet of Things” is all around us, or at least we’d like it to be. From “Just in Time” delivery of bus or train schedules as you leave your house, to sensors tracking air quality in the city, from phones to smart clothing, the Internet of Things is about integrating ourselves with the data available all around us. Free of ‘apps’ and ‘services,’ the Internet of Things will make data available ‘just in time’ wherever and whenever you need it. Whether you get the data from WiFi, RFID, or QR codes, it’s out there, and we think that’s exciting!
Cloud-Host Almost Everything!
If you’re a savvy IT person, it may have been a while since you provisioned your own Linux host on a Virtual Private Server or hosting plan. Cloud hosting isn’t emerging; it has emerged and is taking over. You’re likely to use a cloud-based service such as Heroku, Amazon EC2, SliceHost, or even Microsoft’s Windows Azure. There are private cloud partitioning services to protect your data from theft in co-location. Companies such as CloudBees have been shifting so many services to the cloud that they now offer development and QA platforms that host your SVN or Git repositories, run Hudson or Jenkins, and provide Continuous Integration services.
Big Data and Data Mining
The opportunity for capturing, processing, and utilizing massive quantities of user/customer/machine/device/sensor generated data is becoming ever more affordable, approachable, and consumable. While “cheap, fast, good: pick any two” is still very much a reality in the “Big Data” space, we’re quickly seeing advancements toward offering users the luxury of all three.
Along with its ecosystem, the open-source Hadoop framework has emerged as the go-to platform satisfying “cheap” (cluster nodes consisting of reasonably inexpensive commodity hardware) and “good” (arguably “excellent” in terms of a distributed computing platform). Hadoop’s primary criticism is high latency (i.e., not returning information “fast” enough). Ecosystem projects, such as HBase, have addressed the latency problem, but at the expense of somewhat rigid access constraints (efficient data retrieval and storage requires in-depth, upfront knowledge of data access patterns, for example). Hive alleviates the access constraints by offering ad-hoc query capability, but the underlying processing is done through Hadoop’s batch-oriented, high-latency MapReduce framework. We’re paying very close attention to emerging projects that hope to solve the latency issue while offering intuitive, ad-hoc query capability, ultimately yielding a “Big Data” system that resembles ubiquitous OLTP database systems.
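To make the batch-oriented model concrete, here is a minimal, single-process sketch of the map/shuffle/reduce phases that Hadoop distributes across a cluster. This is an illustration of the programming model only, not Hadoop’s API; the word-count job is the customary example.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group intermediate values by key, as the framework
    # would between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big cluster", "data node"]
result = reduce_phase(shuffle(map_phase(docs)))
# result == {"big": 2, "data": 2, "cluster": 1, "node": 1}
```

Every job, however small, pays the full cost of these phases across the cluster, which is precisely the source of the latency criticism above.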
With the advent of ever-improving “Big Data” collection and processing technologies, it’s not too far of a stretch to anticipate a surge in attempts to maximize the full potential of that data by way of data mining. There are several very good, freely available tools to facilitate such endeavors (in no particular order): RapidMiner, Weka, Rattle, and Orange. We’re paying close attention to the space where “Big Data” systems integrate with these smaller-scoped tools, as well as the techniques utilized in extracting representative samples that can be used with more statistically inclined languages.
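One standard technique for extracting such a representative sample is reservoir sampling (Algorithm R), which draws a uniform sample from a stream too large to hold in memory. A minimal sketch in Python, assuming a plain iterable as the stream:

```python
import random

def reservoir_sample(stream, k, rng=None):
    # Keep a uniformly random sample of k items from a stream of
    # unknown length, using only O(k) memory (Algorithm R).
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Item i survives with probability k / (i + 1).
            j = rng.randint(0, i)  # randint is inclusive on both ends
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(1_000_000), 5, random.Random(42))
```

The resulting handful of rows can then be handed off to a statistics-oriented environment without ever materializing the full dataset.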
The Evolution of Android
From the October 2008 launch of Google’s first Android phone, the G1, the Android software platform and devices have seen nothing short of impressive growth. In mid-2012 KPCB partner Mary Meeker noted an impressive trend. To quote: “[Android phone] adoption is ramping up six times faster than iPhone.”
Looking forward to 2013, Android is continuing to undergo massive adoption amongst smartphone users. Meeker expects that “by the end of 2013 there [will] be 160 million Android devices, 100 million Windows devices, and 80 million iOS devices shipped per quarter.”
In terms of functionality, the platform has matured at a startling rate – especially if we refer back to its humble 1.0/1.5 (Cupcake) beginnings. Back then, multi-tasking was the key differentiating feature from iOS. Now many Android devices come with NFC support, Android Beam, and Google Now.
From a development perspective, Android’s improvement is equally impressive. Community adoption has thrown a lot of great frameworks into the fray, and the Google SDK continues to steadily improve and adapt more readily to tablet and smartphone adoption. While Apple also continues to roll out iOS updates, it still appears that Android is continuing to show even more rapid improvement, and that is exciting.
We’ll continue to look forward to Android updates, with potentially more zeal than we do to those of iOS. It seems to be the platform most likely to throw a curveball and continue to keep its community excited and engaged. We’ll also continue to watch the fragmentation in the marketplace and Google’s approach to helping developers handle it, above and beyond the current support frameworks.
Concurrent Programming Environments
Not so long ago, the only dual-CPU computers were in a server room or a hand-crafted monster gaming rig. But look again: now any smartphone that doesn’t come with 4 CPU cores and 3 GPU cores is yesterday’s news. The days of riding Moore’s Law to ever-higher clock speeds are long gone, and today the focus is on adding more cores while lowering power consumption.
This is all fine for the consumer — more power in your hand and better battery life to boot — but not so easy for the programmer. Typical programming styles and languages don’t do a good job of utilizing multiple cores, and sometimes actively make it difficult to do so properly. It’s not that it can’t be done if you’re careful and knowledgeable — just that few people take the time when common libraries and practices direct them other ways. The lag is still worse for GPU processing — specialized toolkits exist, but only for some languages and they’re hardly suitable for general-purpose tasks.
We’re watching this space as good tools to leverage all this multi-core hardware are finally arriving. Best practices such as the Actor pattern are becoming more widespread, languages like Scala make it more natural to write thread-safe code for multi-core processors, Java is building GPU computing into the VM, and even HTML5 includes the ability to assign processing to other cores via Web Workers. While none of these updates have yet found their way into the toolbox of the everyday programmer, we’re anticipating that the wait won’t be very long.
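The Actor pattern mentioned above can be sketched in a few lines: each actor owns its state privately and mutates it only on its own thread, in response to messages arriving through a mailbox, so no locks are needed. A minimal illustration (the `CounterActor` class and its messages are our own invention, not any particular framework’s API):

```python
import threading
import queue

class CounterActor:
    # A toy actor: _count is touched only by the actor's own thread,
    # which drains messages from a mailbox one at a time.
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0
        self._done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg == "stop":
                self._done.set()
                return
            self._count += msg  # safe: only this thread mutates _count

    def send(self, msg):
        # Callers on any thread just enqueue; they never share state.
        self._mailbox.put(msg)

    def result(self):
        self.send("stop")
        self._done.wait()
        return self._count

actor = CounterActor()
senders = [threading.Thread(target=actor.send, args=(1,)) for _ in range(8)]
for t in senders:
    t.start()
for t in senders:
    t.join()
total = actor.result()
# total == 8: the mailbox serializes all updates, no locks required
```

Libraries like Akka (for Scala and Java) provide the industrial-strength version of this idea, but the core discipline — message passing instead of shared mutable state — fits in a page.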
And good parts there are. Suddenly, with so many developers looking for more powerful browser-based applications, the all-but-death of Flex and the Flash runtime, and browser vendors working hard to support powerful features such as canvas, web sockets, and client-side databases, we have a platform that can run on everything from massive PCs to the smallest smartphones.
This year we’ll finally start to see some real leaders emerging. Speaking of…
Yeoman is a fully featured workflow for building web apps. It includes tools and libraries for (among other things):
- AppCache manifest generation
- Unit testing with PhantomJS
- Image optimization
- Automatic CoffeeScript & Compass compilation, plus script linting
We love things that make our lives easier and our clients happier, and nothing makes clients happier than a faster time to market. That’s what we’re looking for from Yeoman. We’ll continue to watch it closely (and you should probably go try it out… now).
Some honorable mentions…
While we had a lot of other things on our list, here are several more to keep your eyes on…
- Browser capabilities are improving – in just a year or two, we’re all used to having our browsers ask us if they can use our location, take our pictures, and, on mobile devices at least, detect whether we’re moving. Watch for even more features in the future.
- The rise of micro services – instead of trying to unify all of your services under a single application stack, why not just host them on platforms that can easily scale? If you’re thinking cloud platforms, you’re on the right track. Some in the industry believe that scattering smaller services across an amorphous cloud-based hosting model makes each one easier to manage. Some old-schoolers are crying foul and wondering what the single point of failure really is now!
- Repeatable environments and builds – we’ve been watching this space for some time. Tools like Chef and Puppet, easily rebuilt environments, and cloud-tolerant hosting models make it a cinch to restart your applications if they go down. Bugs can be reproduced easily in the same stack that you use for production, scaled down of course.