Solutions provider takeaway: As cloud computing takes hold, your customers may begin clamoring for cloud-based applications. So your first task is to get a grip on the various cloud applications and the programming languages that enable these applications.
About the book
This chapter excerpt on Applications for Clouds (download PDF) is taken from the book Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center. The book gives solutions providers a detailed background in cloud computing and explains the different types of clouds available. Find out which factors to consider before cloud adoption, including security, application integration, and structural limitations, as well as the future of the cloud.
As you prepare for these new customer demands, part one of this chapter excerpt may help. It outlines types of cloud-based apps and the pros and cons of high- and low-level programming languages.
Cloud applications can take many forms and can be created using many development systems. In this chapter, we'll learn about the tools available to build cloud applications and the variety of applications, from personal productivity to enterprise back-end support, that are available in the cloud. In this chapter, we'll learn the following:
- The development environment's path to clouds. Just how are coding systems changing to take advantage of the new environment? What kinds of changes are happening that will help programmers "bootstrap" themselves into the clouds?
- The role of software development kits (SDKs) and application programming interfaces (APIs) in rapid development. As development systems change, just how much help are cloud providers giving us in making changes?
- How abstraction is starting to leave the browser behind. Software as a Service (SaaS) started us on the road to clouds, but the browser by itself couldn't break us of the "thick application" habit until "rich Internet applications" burst onto the scene, bringing us the best of both worlds.
- How far we have come with higher- and higher-level languages and the beginnings of abstraction. We now have an unparalleled choice of development systems, which has become a double-edged sword and led to some unintended consequences.
- Commercial off-the-shelf, government off-the-shelf versus stovepipes. Users are much less willing to pay for ultra-customized applications that perform only a single function. Just how are the new collections of systems meeting our needs?
- Storage clouds. How are cloud storage systems luring users? Thinly veiled storage clouds are being used to connect mobile users to cloud providers, and storage might very well be the way many corporations dip their toes into the world of clouds.
- Is Google getting closer to a "true cloud"? Google's amazingly close tie to the Android phone system leads us to believe that clouds will become even less application-specific, perhaps paving the way for cloud computing to become more of a commodity.
Just what does a cloud application look like, and what makes it different from the applications running on your desktop PC? With grids and high-performance computing (HPC) clusters in their lineage, cloud applications also need to have their functions split in several pieces; in the case of clouds, however, the user interface is typically the portion closest to the user, and some sort of back-end process makes up the bulk of the heavy-lifting component. A common misconception is that cloud applications must be very general in function and closely resemble traditional websites in form. How did these ideas arise? They came from the early days of Web-based applications, when active user interface tools were limited and most database applications were simple screen-scraping versions of green-screen applications (basic monochrome terminal applications ported to the Web with minimal enhancements to the user interface).
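The split described above -- a thin user interface closest to the user, with a back-end process doing the heavy lifting -- can be sketched with Python's standard library alone. The `/sum` endpoint, the free-port choice, and the summing workload are all hypothetical stand-ins for a real back-end service:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Back end: the heavy-lifting component (here it just sums 0..n).
class ComputeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        n = int(self.path.split("=")[-1])          # parse /sum?n=100
        body = json.dumps({"sum": sum(range(n + 1))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ComputeHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Front end: a thin user-interface layer that only displays the result.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/sum?n=100") as resp:
    answer = json.loads(resp.read())["sum"]
server.shutdown()
print(answer)  # -> 5050
```

The user-facing piece never computes anything itself; it only asks and renders, which is exactly the division of labor cloud applications inherit from their grid and HPC ancestry.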
Extending the opinions about green-screen computing, early impressions of cloud apps were also shaped by the thin- versus thick-application debates. However, as new development environments such as AIR (Adobe Integrated Runtime) started appearing, the traditional line between thick and thin applications became blurred. The new reality is that Web-based applications neither have to be limited to the boundaries of a Web browser nor necessarily mean a compromise in the user interface. Much of the sense of what is possible in cloud applications is based on newer developments in the tools used to build those applications.
Browser versus desktop (aka thick versus thin)
The browser has become the preferred way for delivering many applications because it allows easy deployment across operating systems and simplified application maintenance. Plus, the modern programming languages used in the browser enable rapid application design and development.
Adobe AIR complements the browser by providing the same application development and deployment benefits while adding desktop integration, local data access, and enhanced branding opportunities. An emerging design pattern for rich Internet applications (RIAs) is to deliver a browser-based version of an RIA for all users and a desktop version for more active users.
Not long ago, there was a great debate in the Java programming world about whether it was possible to bring up contextual information simply by hovering the cursor over a spot on a Web page. Although we're all very familiar with this feature now, it wasn't long ago that Java didn't directly support it, and frustratingly, it took several years to get Java to the point where it could rival traditional programming environments for functionality. Now, however, with the browser so tightly integrated into the operating system (be it Windows, Mac, or Linux), the language's ability to extend through to the base hardware makes possible even applications that demand services from a number of different hardware-based systems. Applications such as Web-based video conferencing are now commonly used, and Web-based application sharing like that from Adobe (Connect), WebEx, or Microsoft's Live Meeting is regularly accepted as a key component of collaboration in distributed organizations.
Access to services both complex and simple is gained through the architectural structure of browser plug-ins. These sub-applications connect to and extend the functionality of the browser in known, well-defined ways. This regular architecture carries a number of benefits and a couple of significant risks. Among the benefits are simple installation, small memory and CPU footprint, and rapid function extension. The drawbacks tend to be security-related, since users can often add a browser helper or plug-in without understanding the full ramifications of the act. Just look at how many users have Yahoo, Google, and MSN toolbars in their browser and can't explain how they got there!
Plug-ins and code generators
The behavior and impact of plug-ins is complicated by the fact that some plug-in code will act without user intervention to take rather profound actions. For example, choosing certain toolbar plug-ins has been known to change home pages, the choice of video player, the choice of music/MP3 player, and so on. Too many users don't really read what the plug-in is for, and in some rare cases they have unknowingly shared some very private information. Simpler plug-ins are often installed on demand as an alternative to the huge overhead of a full installer session for traditional applications. Moreover, some complex plug-ins are much better behaved -- such as the NetExtender SSL-VPN application from SonicWall -- and will remove themselves and their history upon logout. This makes the plug-in a way to deliver functionality with a very close-to-zero footprint.
Add to this the revolution represented by systems such as Ruby on Rails, Flex, and Ajax, all of which serve as programming abstraction layers, and you have a dramatic shift in the essential nature of the chunk of code we call an application. Anything these systems can do can also be done in a lower-level language (e.g., Perl or PHP instead of Ruby on Rails), but you will spend considerably more time developing and debugging such a program. Another hidden advantage of programming abstraction layers is that these systems tend to force some standardization, which also increases the possibility of reusing code. You lose a bit of control, but the payback is increased development speed and increased standardization.
What we do lose is operating speed. In order to accommodate any potential situation, systems like this must carry libraries and functions to cover most situations. This extra baggage is a big contributor to "code creep," or "code bloat." So while putting a message onto the screen can be done in a few lines of a high-level language like Python, those few lines can potentially expand to several hundred or even thousands of lines of code as various libraries are brought into memory to handle housekeeping. All this extra overhead happens because the higher-level language inserts extra code to handle any type of eventuality, regardless of whether it will ever be used. Our view is that although abstraction layers do provide quite a few benefits, code bloat (larger and larger applications) is one of the major reasons the capacity gains predicted by Moore's law are absorbed so quickly: every time computing capabilities take a jump, applications tend to fill the empty space. From a philosophical code-development point of view, an example from the other end of the spectrum is small-code purists such as Drew Major, one of the principal authors of Novell NetWare.
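As a rough illustration of that hidden baggage, a few lines of Python can put a message on the screen, while importing a single convenience library quietly drags in a long list of support modules we never asked for (the exact count varies by Python version and build):

```python
import sys

before = set(sys.modules)                  # modules loaded so far
print("Hello from a high-level language")  # the "few lines" that do the work

import http.server                         # pull in one convenience library...
hidden = set(sys.modules) - before         # ...and see what tagged along
print(len(hidden))                         # support modules loaded implicitly
```

On a typical CPython build the second print reports dozens of modules -- HTML handling, email parsing, socket plumbing -- brought into memory "just in case," which is exactly the housekeeping overhead described above.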
The advantages of low-level languages
It was the four engineers known as the "SuperSet," with Drew Major at their head, who developed the kernel of Novell NetWare. NetWare got its amazing speed in part because the SuperSet wrote the kernel in assembly language instead of a much higher-level language. In fact, some of the fastest code on the planet is still written in low-level languages, simply because a low-level language doesn't need to accommodate every possible eventuality. It's much easier to tune a program into a speed demon when there is less "stuff" to sift through.
Assembly language and C are still among the most popular languages for writing machine-control systems and device drivers because of their extremely concise nature. The downside is that such concise programs also tend to be very difficult to write and in some cases are considered an art form. These time- and talent-intensive efforts are almost always reserved for systems that are timing-sensitive, such as video encoding/decoding systems, flight-control systems, or any application that has serious ramifications if things get out of sync. Programmers capable of writing such low-level code are rare and extremely expensive. It's no wonder that abstraction layers such as Ruby on Rails, Python, SPSS, SAS, etc., have been developed.
Due to the smaller and more concise nature of the code, low-level languages like C are also popular for embedded computing. This type of concise code is especially useful for extremely small processors such as the class of devices called PICs (peripheral interface controllers), which are small enough and inexpensive enough to be found in devices ranging from watches and remote controls all the way up to automobiles. These stripped-down computers are also unique in that you can "burn" the program onto the PIC so that it can't possibly be erased. More advanced versions also have flash memory (just like the flash memory cards used in digital cameras), so that new, updated versions of the program can be swapped in.
What those high-level abstraction layers give us, then, is extra code that makes the building blocks easier to fit together. Much like Lego bricks, each block has an expected pattern designed to fit the block above it. Many of the examples we've given thus far deal with individual systems and their applications, but the principles are identical when applied to cloud computing, and many of the application platforms we've discussed are used for both local applications and cloud-delivered apps.
With the massive surge in processor power available in a modern computer system, it's now possible to trade tight, concise programming code for speed of development and ease of maintainability. Maybe that's what "junk DNA" is all about: It may be nature's programming-code leftovers, there to accommodate other situations.
About the authors
Brian J. S. Chee is a senior contributing editor at InfoWorld magazine. Chee currently does research for University of Hawaii School of Ocean and Earth Sciences and Technology and has a unique insight into networking trends and the emergence of new technology.
Curtis Franklin Jr. is a senior writer at NetWitness and has been an information technology writer for more than 25 years.
We previously mentioned the Adobe AIR environment as an example of how far abstraction layers have progressed in the world of application development environments. AIR allows a developer to write for a single environment that is abstracted from the underlying operating system and hardware. What Adobe promises is a wrapper environment that allows both Java and Flash programmers to ignore whether they're writing for Microsoft Windows or the Apple Macintosh. AIR also gives the developer options to develop in HTML/Ajax, Adobe Flash, and Flex. While all of these languages were intended to be Web-based, AIR has certainly been applied to all sorts of unique applications. We regularly use Klok (free software from mcgraphix) to keep track of time spent on various projects. Klok runs on several different platforms, limited only by the platforms AIR is currently available for. Whether the Adobe team will fulfill AIR's promise and extend it further into the Linux world, and perhaps even the mobile world, remains to be seen.
Adobe's AIR isn't the only commercial abstraction layer on the market, but it is the first heavily supported platform we've seen that works equally well on the two major commercial operating systems. While the Microsoft .NET environment has done much to reduce the work necessary to produce amazingly complex systems, it is limited to the Microsoft operating system family, ignoring the fact that the bulk of the public Web servers in the world run Apache, most of them installed on Linux or a Unix derivative. The Microsoft Silverlight environment, while not as encompassing as AIR, shows potential in how it can manipulate Web media and is capable of handling the complexities of digital rights management for video-on-demand systems such as Netflix.
A brief history of high-level languages
To understand better what these systems are providing and why their popularity has exploded, we need to go back in history a bit to some of the very first programming languages. The original assembler for the IBM 360/30 was very straightforward but tedious, requiring an intimate knowledge of the computing hardware and how instructions worked on data. Writing utilities in Assembler provided access to the most primitive instructions and capabilities of the tape and disk systems; at the same time, programmers had the ability to display messages and accept input from the console. However, every single piece of code had to be in each punched-card deck, and code reuse meant literally lifting sections of cards out of one deck and placing them into another.
In the late 1970s, IBM released a new version of BAL (Basic Assembly Language), a macro assembler that allowed programmers to take advantage of a library of prewritten snippets of code (macros) to do certain repetitive tasks. Examples of the repetitive tasks that could be automated included rewinding a nine-track tape reel, skipping to the third data set on the tape, etc. The ability to use tested and standardized code in programs immediately freed programmers from huge amounts of writing and debugging. Even though a program lost portability (unless the other system had the same macro assembler and version), what was gained was the ability to develop much more complex programs in a fraction of the time required by the earlier assembler.
It also meant that others on a programming team didn't have to adapt their code to reuse these functions. That alone had the effect of moving teams toward much more standardized coding. The macro assembler gained widespread use just in time to prepare programmers for a new programming language called COBOL (Common Business-Oriented Language), which in one "print format" line of code did what used to take hundreds of lines of assembler and days of debugging. At this point, you'll notice that we haven't discussed hardware abstraction. When everyone was using systems from a single large vendor, hardware abstraction was much less of an issue, surfacing primarily when shifting code from one storage subsystem to another.
One of this chapter's co-authors (Brian Chee) heard a lecture in the mid-1970s by U.S. Navy Commander Grace Hopper (a driving force behind COBOL), in which she talked about things like cost analysis and how COBOL could potentially save the Navy hundreds of thousands of dollars in data-processing costs and make data processing available to dramatically more people. Little did she know just how big a leap we would make before she finally retired from the Navy as Rear Admiral Grace Hopper. As the computing world developed, new programming languages sprang up: FORTRAN (formula translator) for scientists, and then a veritable Tower of Babel (BASIC, PL/I, LISP, APL, SNOBOL, and C were just a few). Each new language reached out to a broader or more specialized audience by making it easier to create more complex programs with less effort. SPSS, for instance, stands for "Statistical Package for the Social Sciences" and was initially targeted at the need for certain types of statistics in the social sciences; it eventually grew into one of the most popular statistics systems on mainframes and minicomputers, eventually reaching the PC. With IBM's recent purchase of SPSS, we predict that the SPSS suite of statistics modules will find its way into cloud modules in coming years.
SPSS had a huge impact on the scientific community because it was a very early example of a system that was nearly completely divorced from traditional programming languages. Instead of needing to write an extremely complex program to do standard-deviation calculations, now all you had to do was feed it data and ask for it in a single instruction line. Nearly a 3,000:1 reduction in coding effort was commonplace, and it allowed nonprogrammers to do complex statistical analysis without needing a degree in computer science. Philosophically, this was an important predecessor to the later cloud applications, because it allowed users to develop complex data analysis routines without having to learn the intricacies of a "real" programming language. At a certain level, this extended the concept of an " abstraction layer" up the stack, to the point of allowing abstraction for the user, rather than simply for the computing system.
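That one-line-versus-longhand contrast can be sketched in Python with its standard `statistics` module; the sample scores here are invented purely for illustration:

```python
import math
import statistics

scores = [82, 75, 93, 88, 67, 90, 78]  # invented sample data

# The SPSS-style experience: ask for the statistic in a single line
# instead of programming the algorithm.
sd = statistics.stdev(scores)

# What that one line hides -- the same sample standard deviation longhand.
mean = sum(scores) / len(scores)
variance = sum((x - mean) ** 2 for x in scores) / (len(scores) - 1)
print(math.isclose(sd, variance ** 0.5))  # -> True
```

The caller never sees the mean, the squared deviations, or the Bessel correction; the abstraction has moved up the stack to the user, just as the text describes.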
Database abstraction and putting the database on the Web
Another huge step along the way to Cloud City happened quietly in the mid-1990s at a small Honolulu computer distributorship called Aspect Computing, where James Laurel and Richard Chan faced a dilemma: They wanted to have a home life, but their livelihood was linked to computer retailers that often needed to obtain information on equipment stock at some very odd hours. What they really wanted was a way to leverage this new thing called the World Wide Web so that these computer dealers could query the Aspect Computing inventory system even when the shop was closed. Laurel also wanted to create an abstraction layer that would allow the system to have security but would be flexible enough that the system could be reused for new applications as yet unimagined.
The product they developed, WebDB, eventually became a commercial product that for the first time allowed Web hosts to provide a peek into databases from a Web client. To put this achievement into proper perspective, keep in mind that at the time it was very rare for any database application to be able to handle queries over a network, and those that could required that the network link appear as a mounted disk drive (i.e., through something like Novell NetWare, the disk had to appear as a drive letter). Networked database applications of the day all had to have access to files stored on this drive mount; no peer-to-peer or client-to-server database apps were available outside of development labs. Apparently, this technology was enough of a paradigm shift that Microsoft bought Aspect Computing in 1996.
Moving beyond drive letters was a critical piece of the overall puzzle in allowing virtual and cloud applications to be created. In a way, this was yet another abstraction layer, in that network database access became part of the "plumbing" of the Internet, allowing applications to perform queries and correlations across dozens or even hundreds of databases -- far more than could be accommodated with the old "drive-mapped" methods -- and leading eventually to true client-server computing.
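A minimal sketch of that "plumbing" idea, using Python's standard DB-API with an in-memory SQLite database (the inventory table and its rows are invented stand-ins for the Aspect system): the client hands the database a query through the database's own interface and gets back only the result set, with no mapped drive or shared file in sight.

```python
import sqlite3

# An in-memory inventory database standing in for the remote system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (item TEXT, qty INTEGER)")
conn.executemany("INSERT INTO stock VALUES (?, ?)",
                 [("router", 12), ("switch", 0), ("modem", 7)])

# The query goes through the database's own access layer -- no drive
# letter, no direct file handle -- and only the result set comes back.
rows = conn.execute(
    "SELECT item, qty FROM stock WHERE qty > 0 ORDER BY item").fetchall()
print(rows)  # -> [('modem', 7), ('router', 12)]
```

Swap the SQLite connection for an ODBC or network connection string and the application code barely changes -- which is precisely what made database access part of the plumbing rather than part of the application.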
A standard way of programming database access was another vital link, as abstraction layers such as ODBC (Open Database Connectivity) became standardized across multiple operating systems. Another standardization effort began in 1993, early in the Web's history, on a mailing list used to discuss Web programming projects. The result was the Common Gateway Interface (CGI), which became the basis for a great number of other programming efforts designed to create Web applications. It laid out a methodology for linking programs outside the normal purview of the Web server so that complex application results could be delivered to the Web. This sideways step also allowed developers to extend the capabilities of Web applications in previously unforeseen ways.
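A CGI program in this model is just an ordinary script living outside the Web server: the server passes the request in environment variables, runs the script, and relays whatever it prints back to the client. A minimal Python sketch (the `name` query parameter is a hypothetical example):

```python
#!/usr/bin/env python3
# A minimal CGI program: the Web server sets environment variables such
# as QUERY_STRING, runs this script, and relays its output to the client.
import os

query = os.environ.get("QUERY_STRING", "")  # e.g. "name=dealer42"
name = query.partition("=")[2] or "world"
body = f"Hello, {name}!"

print("Content-Type: text/plain")  # the header block comes first...
print()                            # ...a blank line ends the headers...
print(body)                        # ...then the response body
```

Everything a browser sees is generated by a program the Web server merely launched -- the "sideways step" that let complex back-end results flow out through an otherwise static Web server.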
Printed with permission from CRC Press. Copyright 2010. Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center by Brian J. S. Chee and Curtis Franklin Jr. For more information about this title and other similar books, please visit http://www.crcpress.com/.