Discussion on the Future of Tech in 2020 and Beyond by the Chariot Solutions Team


This discussion on the Future of Tech in 2020 and Beyond is a direct transcript from the Chariot Solutions team’s fireside chat held at Venture Cafe Philadelphia on December 12th, 2019.

Tracey Welson-Rossman, CMO/Moderator: “At Chariot Solutions, we are always having conversations about ‘what’s going to be next’ in terms of the type of technologies that we are going to be using. We’d like to give you a little look into what our talented team is thinking about in 2020 and beyond. These are trends that we’re watching for use in software development and determining which ones have staying power.

This really isn’t about predictions; it’s about highlighting the tools and tech our team is interested in following and learning more about, and which ones have the potential to break out. Hopefully we’re able to give you a window into how to track and evaluate new tech for your business.

What is the top tech you’re excited about for 2020?

Don Coleman, CIO: “For the projects we’re doing, I’m going to continue to be involved in the Internet of Things (IoT), where we are gathering data from devices or controlling devices. The device and hardware side is really cool, but what is most interesting to us is the communications side and how we get that data up into the cloud to build these data pipelines. That’s the real stuff we’re doing day-to-day.

In terms of the future, I’m really looking forward to the ability to run machine learning models on actual hardware. Two years ago we were doing that kind of work on phones; now we’re doing it on microcontrollers, which are essentially the CPUs that go into IoT devices.

Sujan Kapadia, Director of Consulting: “When I started my career, data was just one piece of the puzzle, but these days it’s basically become the core piece of most products out there. Data pipelines are a big part of what we do now, and you have to manage the entire lifecycle of data. It doesn’t just land somewhere and sit there for someone else to run reports on. You have to understand your data, where it’s going, how it’s being used, and how it can be leveraged across the business.

We see companies devoting a lot more technology and human resources to just dealing with data. It’s even getting commoditized now. You’re seeing higher-level services doing machine learning, AI, and data processing to the point where people who don’t have math or stats degrees can provide a lot of value. We’re going to continue seeing a lot of that.

We’re even seeing machine learning itself becoming automated, as with AutoML, where the optimization of the model is handled by algorithms instead of human beings. In general, I’d say 75–80% of the work these days is “What are you going to do with the data?”

Steve Smith – Mobile Practice Lead: “From a mobile perspective, there are things I’m excited about and things I’m interested in. I’m excited about wearables and how they can be used by businesses and consumers to interact and do things. In terms of things that I am interested in, it would have to be multi-platform development, where you can do Android, iOS, and desktop all in one magical code base. We’ve been chasing that rabbit for years, but this year there’s especially a lot of talk about progressive web apps, which are web apps that are developed in a special way. They still use HTML, JavaScript, and CSS, but they have certain qualities, like being able to run offline. The reason they’re becoming more prevalent is that with the release of iOS 13, mobile Safari became a little more friendly. If you go to tech talks, you’ll hear some people talk about direct competition with native apps, but they’re really just different tools suited to what you’re trying to accomplish. I think there is going to be a lot more of that talk this year and beyond to figure out the best ways to do things from a business perspective.
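As a concrete illustration of “developed in a special way”: a progressive web app declares itself installable with a small web app manifest (alongside a service worker that handles offline caching). A minimal sketch, with made-up values:

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#0066cc",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

The browser reads this file (linked from the page) to decide whether the app can be installed to the home screen and run like a native app.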

Ken Rimple – Director of Training and Mentoring: “A lot of what I’ve been focusing on for the last 8 years now has been front-end JavaScript clients. That’s all really gone through a big evolution over the years. If you go all the way back, you had tools like jQuery, where if you were a programmer, you’d write jQuery calls to get data to show on your page. Eventually people figured out that writing little tiny scripts everywhere just made a mess of your front-end code, so they started building frameworks.

When I first started at Chariot over 12 years ago, there were server-side web frameworks that everyone was building. Those all moved to the browser, so now there are browser-based frameworks. In the last couple of years, starting with Angular and then moving to React, there are now some pretty sophisticated frameworks that will do these progressive web apps. I’ve been watching that transformation, with the number of lines of code getting lower and lower.

AngularJS was heavyweight and looked like a framework. With Angular, although the build tool was a little more complex, the code was a little leaner and did things quicker. React is completely on the other end of things. You write these tiny little components, and these components all communicate with each other. In the last year, the React framework went through a transformation where components became purely functions. So, if you’re a programmer, you learn that you can write a function that takes input and renders your content. Instead of building big chunks of objects with properties in them, now you’re building functions that render your components.

There’s a new feature in React called hooks. Hooks make it easier to write functional React components than the traditional class-based model, and that takes less code overall. You’re also able to throw away some of the libraries you’re used to using. If you previously used state tools like Redux, you don’t necessarily need them anymore, since you can do basically the same things from a hook like useReducer.
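As a sketch of that idea: the reducer you would hand to useReducer is just a plain function from state and action to new state. The names here are illustrative, and the dispatches are simulated outside React so the mechanics are visible:

```javascript
// A reducer is a pure function: (state, action) => newState.
// Inside a component, React's useReducer would call it for you:
//   const [state, dispatch] = React.useReducer(counterReducer, { count: 0 });
function counterReducer(state, action) {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "decrement":
      return { count: state.count - 1 };
    default:
      return state;
  }
}

// Simulating a few dispatches directly, without React:
let state = { count: 0 };
state = counterReducer(state, { type: "increment" });
state = counterReducer(state, { type: "increment" });
state = counterReducer(state, { type: "decrement" });
console.log(state.count); // 1
```

Because the reducer is an ordinary function, it can be unit-tested without rendering anything, which is part of why this style needs less supporting machinery than a Redux store.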

There is a lot of movement in the front-end around making applications leaner and more functional, but there is also movement in bringing user interface developers closer to the user experience, through tools like Storybook. Storybook lets you run a webserver that mounts all your components and widgets without running a whole application. Those are some of the areas I’m excited about right now.

Keith Gregory – Cloud Practice Lead: “What I’m interested in from a technological perspective is observability, which is an idea that’s been around for a really long time. But the practice has changed dramatically in a cloud deployment. It means understanding what’s happening in your entire application, ranging from your IoT device to the clients running in a browser, not just the more traditional server side of things. In a cloud environment, it becomes very important to correlate what’s happening in every part of your application.

The technology and ideas have been around for a long time, but we’re getting to the point where network bandwidth, disk storage, and compute capability are essentially free. A developer 10 years ago would say, “We can’t turn on high levels of logging in production because we’ll run out of disk space.” You don’t have that issue today, because disk space is effectively infinite in the cloud. Where you need distributed tracing now is in production, because it’s the 2 AM failure of your entire application, which may be split amongst thousands of components, where you need to be able to go in and very quickly find out what happened.

Technology is a big part of that, but it’s also very much a cultural thing. A lot of software developers who’ve been around for a while and were always cautioned “don’t be doing this” have to learn to really instrument their code and put in enough information to gain a complete picture of what’s happening. But they also have to be careful not to do what Facebook did recently, accidentally logging personally identifiable information.

Welson-Rossman: So as you can see, software development has changed and is even more complex today. There are more places we have to think about, from IoT and mobile devices to where we’re storing our data and running our software, so these are some of the discussions that we’re having.

Let’s switch gears and talk a little bit about product development teams and what they look like in 2020 and beyond.

Kapadia: What we’ve been seeing over the last few years, and more lately, are multi-disciplinary teams. You’re seeing people who don’t have traditional software engineering backgrounds, like data scientists and statisticians. You’re working with people who have domain knowledge and are armed with some level of tech-savvy as well. Companies are realizing that it’s not all about sitting behind a monitor and keyboard coding; you’re interacting with the people who actually understand the business.

I think people coming out of school and boot camps these days are getting a more diverse view of what’s going on. More and more, we’re going to see those kinds of teams, and distributed teams. Companies can’t wait for talent to come to their doorsteps, so they’re going to have to reach out across the US and globally for all kinds of people. The internal shift is that if you don’t have the traditional IT department, your engineer has to be able to talk to other human beings, get along with others, and not be separate from the business. We’re seeing more well-rounded people, so if you’re interested in technology, it’s good to know it’s not just about tech anymore; it’s also about being able to work with people. That’s equally important.

Businesses today really need to be open minded and listen to everyone. You can no longer just assume that because people aren’t in technology, they don’t know what they’re talking about. A lot of people have some great ideas, so just stay open minded and meet as many people as you can. Being open minded is the biggest piece of advice I can give.

Smith: Along with that, we’ve been seeing more and more that people are geographically dispersed. So that communication becomes more important. If you send an email or IM, you have to mind what you’re saying much more these days, especially in written form, which can be more easily misinterpreted. On the phone it’s a little easier, but it’s also a different kind of interaction that we’re seeing a lot more of, especially with our clients. Even though we’re talking about software development and technology, it’s clear that communication is going to be a key trend that everyone is paying attention to.

Rimple: In terms of technology, there’s no better time than now to learn things. I attended AWS re:Invent in Las Vegas this year and they had 3,000 talks in one week, and most of them are already online so you can watch these presentations. There’s a ton of stuff up on GitHub for shared code or you can join a user group. There are tons of user groups in the Philly area for example. It’s a good time to be going into development.

Welson-Rossman: There have been a lot of changes over the past 20 years in the open source world. A lot of folks may not understand what open source means to them now, but I think it’s something important to touch on.

Gregory: Many companies today would go out of business if there were no open source. I’ve been watching over the past couple of years as the nature of the open source environment has changed pretty dramatically. Two years ago, for example, MongoDB and Elastic decided to relicense their flagship products, which had previously been released under an open source license. They found that software-as-a-service providers like Amazon were able to take their open source product, release it as a service, and make more revenue. It’s estimated that Amazon makes more revenue from its managed Elasticsearch than Elastic does.

Elastic took dramatic steps by relicensing and mixing open source and non-open source code in their public repositories. So a company that simply took a copy of the code repository and modified it may now be in violation of the license. In response, Amazon took the open source portions and did what is called a “hard fork”. So now Amazon is maintaining its own copy of the Elasticsearch code base.

As I’m seeing this happen, I think it’s going to happen more and more. Companies that have a financial incentive in open source, either because they’re offering it as a service or because their business depends on it, will start putting more effort into maintaining it themselves and not relying on others.

In recent years there have been cases of maintainers, the people who wrote the code, pulling their open source off public repositories. One of the more notorious cases is a package called left-pad, 8 lines of code available to JavaScript programmers that a huge percentage of the world’s websites depended on. The person who wrote it and put it up for general consumption got upset because he was forced to rename one of his other packages over a trademark violation. So he took the left-pad code down and basically broke a large chunk of the Internet.
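For a sense of scale, the utility in question did something along these lines. This is a reconstruction of the idea, not the exact published source:

```javascript
// Pad a string on the left to a target length with a fill character.
// (Illustrative reconstruction of what a left-pad utility does.)
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch || " "; // default to padding with spaces
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad("42", 5, "0")); // "00042"
```

A trivially small dependency like this, pulled in transitively by thousands of packages, is exactly why its removal rippled across so many builds.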

As a business that depends on open source, you’re going to have to ensure that you always have access to the open source packages you depend on. People have become very complacent with things like Linux, which you can get from many different places, or Apache; the Apache Foundation is superb at maintaining its own repositories, and once something goes up there, it never comes down.

The world of open source in my mind started in 1985, then it evolved to be a critical part of business, and it’s going to evolve again over the next few years.

Kapadia: Do you feel that companies are obligated to make those contributions back, especially if they’re incentivized, from the changes they make?

Gregory: That’s an incredibly tough question. On one hand I say yes. If you’re using it, you owe a moral debt to the people who created it. If you fix a problem, you should help them and send the patches upstream. It may be that you don’t have anything to give back. Or it may be that the maintainers don’t want your patches; you never know.

But I think, more importantly, if you’re maintaining open source and using an open source model for your product, you should go into it without expecting anybody to feel that they owe you anything. If you’re releasing a product as open source, you may not get patches, and someone else may decide that it’s important enough for their business that they’re going to take it over. So I think you really have to think before you do an open source release.

Coleman: On the flip side of that, I personally have a lot of small open source projects out there, and people will come demanding things be fixed immediately because they have a deadline next week. So it goes both ways; if you’re getting something for free, there’s more to it than just demanding something get fixed.

Gregory: I have a couple small packages, two of which I discovered last year are in the Linux distribution. I’ve been on the lucky side that most of the things I’ve released are very niche and the people who’ve asked for patches or help have all been great in doing so. But yes again, if you release something open source, learn to say no.

Welson-Rossman: I’d like to circle back around and talk a little bit about mobile, especially augmented reality and machine learning.

Smith: From a mobile perspective, I’m always about user experience and how you get and keep users engaged. One of the advancements that’s come about lately on mobile platforms is the ability to do machine learning and augmented reality on the device itself. There are nice tools that Android and iOS both supply that give us the ability to develop apps that are much more immersive and interactive with what the user is doing. As devices get more powerful, that’s just going to give app developers and businesses the ability to come up with better and deeper ideas and more interactive, personalized applications that will better serve the business and the user.

Welson-Rossman: A lot of people are talking about serverless applications, so what’s the big deal and why should others be interested in them?

Rimple: There’s been a shift for a while now toward virtualizing servers, putting a bunch of servers on a physical host to save hardware. With the advent of moving things to the cloud, where you don’t know where the hardware lives or how big it is, you don’t really care so much about servers anymore.

So there have been a couple of big movements over the last couple of years. One is Docker, or containerized development. Instead of building some Linux-based machine where you install an operating system and a whole stack of software and run that as a server, now it’s “I’m going to bring my application up in pieces.” You make little tiny purpose-driven containers, and this is the biggest innovation of the past five years.

The interesting thing about Docker is that I can run it on my Windows, Mac, and Linux machines, but we can also run the same Docker engine up on high-end servers with Amazon ECS, which is Docker in the cloud. We can even let Amazon run the engines for us, which is called Fargate. So there’s this concept of having a scalable platform where, as long as you architect your application in units that can be deployed by themselves, you can build these Docker containers and scale up.
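To show how small and purpose-driven one of these containers can be, here is a minimal Dockerfile sketch for a Node.js service. The base image, file names, and port are assumptions for this example:

```dockerfile
# Illustrative Dockerfile for a small Node.js service.
# The same image runs unchanged on a laptop, on ECS, or on Fargate.
FROM node:18-alpine

WORKDIR /app

# Copy dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code and declare how to start it
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

From there, `docker build -t my-service .` produces the image and `docker run -p 3000:3000 my-service` starts it, identically on every machine.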

Coleman: One of the really cool things about containers is that instead of having to figure out what you did, how you set this server up, and how to duplicate it, with tools like Docker you’re taking your infrastructure and running it from a configuration. So if someone spins up a Docker container, we know we have the exact same thing. Once we test and deploy that to production, we know it’s going to be the same thing. And since we’re able to version our software, it gives us a lot more repeatability.

Rimple: We have clients now that bring us a Docker container and we just fire it up. It used to be that we would have to wrestle someone to the ground to sit there and watch them run their development tools so we could put together a decent build for them. Now we have companies coming to us saying they run everything on Docker, which is the common thing to do now. In the next year or two, we expect most of our clients to be using Docker for a lot of their projects. It just makes onboarding a new developer and going into a new environment really easy.

The next thing is that we’re seeing the beginning of an adoption curve that I believe will be rather large. I’m not sure if it’ll replace anything, but it will likely augment things, and that’s the concept of serverless. Serverless doesn’t mean there is no server; it just means that you don’t manage a server. So you write code, generally in the form of functions, you put these functions into a serverless platform, and you say, “Go run it in the cloud.” You tell it roughly, “I think this function will need this much horsepower, this much memory, this many concurrent users,” and you do that across your application.
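A minimal sketch of what one of those functions looks like in Node.js. The event shape here is illustrative (modeled on an API Gateway-style proxy event), and the names are made up:

```javascript
// Sketch of a serverless (Lambda-style) function.
// The platform invokes it per request; you manage no server.
const handler = async (event) => {
  const params = event.queryStringParameters || {};
  const name = params.name || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
};
// In a real deployment you would export this (e.g. exports.handler = handler)
// and the platform would spin it up on demand, billing only for run time.

// Invoking it locally to see the shape of the response:
handler({ queryStringParameters: { name: "chariot" } }).then((res) =>
  console.log(res.statusCode, JSON.parse(res.body).message)
);
```

Because the function is stateless and self-contained, the platform can run zero copies when idle and hundreds in parallel under load.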

The benefit of serverless is that you only pay for the time that the CPUs are active. It spins up and runs your function, and then you’re only paying for the compute time that you racked up. Then when they’re done, they shut down and you pay nothing. I was recently at a talk on this and there was an entire theater of developers learning about these. So it’s clearly something everyone wants to do or is starting to do.

The goal is that as you build more of these serverless architectures, the infrastructure in your cloud system is all managed by the cloud provider. The promise is that you pay less and, more importantly, as your application needs to scale, your services ramp up to match what you need. You’ve got a more predictable relationship between performance and cost, and I’m very interested in that right now.

Coleman: I think the biggest thing is really looking at where we can put machine learning where we haven’t used it before, particularly deep learning on microcontrollers and IoT devices. I know we’ve thrown the term machine learning around a lot, but one way to think about it is that classically, when you’re running code to do something, you have some data, you figure out an algorithm, and you run that data through that algorithm to get an answer.

The thing with machine learning is that we have lots and lots of data, and we kind of know the answer we want. We’re then able to run that through machine learning and train it to develop the algorithm for us. So we’re able to take data and the answer that we want and create a trained model that figures out some of these really complex formulas that would take a long time to get right.
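A toy sketch of that inversion, with no ML library at all: given example inputs and the answers we want, “training” produces the function. Here the training is a simple least-squares line fit, and the data is made up:

```javascript
// Classical programming: data + algorithm -> answers.
// Machine learning flips it: data + answers -> a learned "algorithm".
// Toy illustration: learn y = a*x + b from examples by least squares.
function fitLine(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((s, x) => s + x, 0) / n;
  const meanY = ys.reduce((s, y) => s + y, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY);
    den += (xs[i] - meanX) ** 2;
  }
  const a = num / den;          // learned slope
  const b = meanY - a * meanX;  // learned intercept
  return (x) => a * x + b;      // the "trained model"
}

// Training data where the hidden rule is y = 2x + 1
const model = fitLine([0, 1, 2, 3], [1, 3, 5, 7]);
console.log(model(10)); // 21
```

Real models like the motion classifiers below learn far more complex functions the same way: from examples plus known answers, never from a hand-written formula.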

We use those for classifying motions. An Apple Watch, for example, uses that to figure out what types of strokes you’re doing when you’re swimming, classifying those motions. It’s a lot of those types of things that we can now do on microcontrollers running on tiny amounts of power, which even two years ago we weren’t able to do. It’s really exciting!