While the COVID-19 pandemic continues to ravage communities and the economy, many companies in ecommerce, logistics, online learning, food delivery, online business collaboration, and other sectors are experiencing massive spikes in demand for their products and services. For many of these companies, the evolving usage patterns caused by shelter-in-place and lockdown orders have created surges in business. These surges have pushed applications to their limits or beyond, potentially resulting in outages and delays that frustrate customers.

If your company is experiencing a dramatic increase in business and application load, what can you do? How can you rapidly increase the performance and scalability of your applications to ensure a great customer experience without breaking the bank? Here are six tips for rapidly scaling applications the right way.

Tip 1: Understand the full challenge

Addressing only part of the problem may not achieve the desired results. Be sure to consider all of the following.

Technical issues – Application performance under load, as experienced by end users, is determined by the interplay between latency and concurrency. Latency is the time required to complete a specific operation, such as responding to a user request. Concurrency is the number of simultaneous requests a system can handle. When concurrency cannot scale, a significant increase in demand drives up latency because the system cannot respond to all requests as they arrive. The result is a poor customer experience as response times stretch from fractions of a second to several seconds or worse, potentially to the point where some requests are never answered. Ensuring low latency for a single request is essential, but by itself it will not solve the challenge created by surging concurrency. You must scale the number of concurrent users while maintaining the required response time. Further, applications must be able to scale seamlessly across hybrid environments that may span multiple cloud providers and on-premises servers.
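The interplay between latency and concurrency follows directly from Little's law (requests in flight = arrival rate × latency). A back-of-the-envelope sketch, with illustrative numbers not taken from this article:

```python
def required_concurrency(arrival_rate_rps: float, latency_s: float) -> float:
    """Little's law: concurrent requests in flight needed to sustain this load."""
    return arrival_rate_rps * latency_s

def max_throughput(concurrency_limit: int, service_time_s: float) -> float:
    """Highest sustainable request rate before queues grow without bound."""
    return concurrency_limit / service_time_s

# A service taking 200 ms per request with 100 concurrent workers saturates
# at 500 requests/second:
print(max_throughput(100, 0.2))

# To absorb a surge to 1,000 rps at the same 200 ms latency, you need
# 200 concurrent workers:
print(required_concurrency(1000, 0.2))
```

Once demand exceeds that saturation point, you must either cut latency or add concurrency, which is the trade-off the rest of this article addresses.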

Timing – A strategy that will take years to implement, such as rearchitecting an application from scratch, isn’t helpful for addressing immediate needs. The solution you adopt should enable you to begin scaling in weeks or months.

Cost – Few companies approach this challenge without budget constraints, so a strategy that minimizes both upfront investment and ongoing operational costs can be critical.

Tip 2: Plan for both the short term and the long term

Even as you address the challenge of increasing concurrency while maintaining latency, do not rush into a short-term fix that can lead to a costly dead end. If a complete redesign of the application isn’t planned or feasible, adopt a strategy that will enable the existing infrastructure to scale massively, as needed.

Tip 3: Choose the right technology

Open source in-memory computing solutions have proven to be the most cost-effective way to rapidly scale up system concurrency while maintaining or improving latency. Apache Ignite, for example, is a distributed in-memory computing platform that runs on a cluster of commodity servers. It pools the available CPUs and RAM of the cluster and distributes data and compute to the individual nodes. Whether running on-premises, in a public or private cloud, or in a hybrid environment, Ignite can be inserted as an in-memory data grid (IMDG) between the existing application and data layers without major modifications to either. Ignite also supports ANSI-99 SQL and ACID transactions.
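Conceptually, an IMDG sits between the application and the database as a read-through/write-through cache. A minimal, framework-free Python sketch of that pattern (the class and variable names are illustrative, not Ignite APIs):

```python
class InMemoryDataGrid:
    """Toy read-through/write-through cache standing in for an IMDG."""

    def __init__(self, backing_store: dict):
        self.backing_store = backing_store  # stands in for the disk-based database
        self.cache = {}                     # stands in for pooled cluster RAM

    def get(self, key):
        if key not in self.cache:           # read-through on a cache miss
            self.cache[key] = self.backing_store[key]
        return self.cache[key]              # subsequent reads served from memory

    def put(self, key, value):
        self.cache[key] = value             # update the in-memory copy
        self.backing_store[key] = value     # write-through keeps the database consistent

db = {"user:1": "Alice"}                    # existing data layer
grid = InMemoryDataGrid(db)                 # inserted between app and data layers
grid.put("user:2", "Bob")
print(grid.get("user:1"), grid.get("user:2"))  # Alice Bob
```

Because the grid fronts the existing store rather than replacing it, neither the application layer nor the data layer needs major changes, which is what makes this approach fast to adopt.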

With an Apache Ignite in-memory data grid in place, relevant data from the database is “cached” in the RAM of the compute cluster and is available for processing without the delays caused by normal reads and writes to a disk-based data store. The Ignite IMDG uses a MapReduce approach and runs application code on the cluster nodes to execute massively parallel processing (MPP) across the cluster with minimal data movement across the network. This combination of in-memory data caching, sending compute to the cluster nodes, and MPP dramatically increases concurrency and reduces latency, providing up to a 1,000X increase in application performance compared to applications built on a disk-based database.
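The "send compute to the data" idea behind MapReduce can be sketched in a few lines of pure Python. The partitioning below is simulated in one process; on a real Ignite cluster each slice lives in a different node's RAM and the map step runs there:

```python
from functools import reduce

# Data partitioned across "nodes"; each node maps over only its local slice,
# so only the small per-node results cross the network to be reduced.
partitions = [
    [1, 2, 3, 4],   # node 0's local data
    [5, 6, 7, 8],   # node 1's local data
    [9, 10],        # node 2's local data
]

def map_on_node(local_data):
    """Compute runs where the data lives; here, a sum of squares."""
    return sum(x * x for x in local_data)

partial_results = [map_on_node(p) for p in partitions]  # in parallel on a real cluster
total = reduce(lambda a, b: a + b, partial_results)
print(total)  # 385, the sum of squares of 1..10
```

Shipping three integers across the network instead of ten rows is a trivial saving here, but the same ratio applied to terabytes of cached data is where the MPP performance gains come from.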

The distributed architecture of Ignite makes it possible to increase the compute power and RAM of the cluster simply by adding new nodes. Ignite automatically detects the additional nodes and redistributes data across all nodes in the cluster, ensuring optimal use of the combined CPU and RAM. The ability to add nodes to the cluster easily also enables massive scalability to support rapid growth. Finally, the IMDG ensures data consistency by writing changes made to the data in the IMDG by the application layer back to the source data stores.
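The redistribution that happens when a node joins can be illustrated with a toy key-to-node mapping. This is not Ignite's actual affinity function; it only shows that a topology change reassigns keys across the cluster:

```python
import zlib

def node_for(key: str, node_count: int) -> int:
    """Toy partition mapping via a stable hash; purely illustrative."""
    return zlib.crc32(key.encode()) % node_count

keys = [f"key-{i}" for i in range(1000)]
before = {k: node_for(k, 3) for k in keys}  # three-node cluster
after = {k: node_for(k, 4) for k in keys}   # a fourth node joins
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys moved to a new node")
```

Naive modulo hashing like this moves most keys on every topology change. Ignite's default affinity function is instead based on rendezvous hashing, which moves only roughly 1/N of the keys when a node is added, keeping the cost of scaling out low.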

Apache Ignite can also future-proof your infrastructure by powering two increasingly important strategies.

Digital Integration Hub (DIH) – A DIH architecture can power real-time business processes that require 360-degree data views. It provides a common data access layer for aggregating and processing data from streaming sources and from on-premises and cloud-based systems, including databases, data lakes, data warehouses, and SaaS applications. Multiple customer-facing business applications can then access the aggregated data and process it at in-memory speeds without moving the data over the network. The DIH automatically synchronizes changes made to the data by the consuming applications back to the underlying data stores while reducing or eliminating the need for API calls to those data sources.

Hybrid transactional/analytical processing (HTAP) – HTAP is high-speed processing of the same in-memory dataset for both transactions and analytics. This eliminates the need for a time-consuming extract, transform, and load (ETL) process to periodically copy data from an online transactional processing (OLTP) system to a separate online analytical processing (OLAP) system. Powered by an in-memory computing platform, HTAP enables running pre-defined analytics queries on the operational data without impacting overall system performance.
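HTAP in miniature: transactional point writes and an analytics aggregate run against the same in-memory dataset, with no ETL copy into a separate OLAP store. A purely illustrative sketch (amounts in cents to keep the arithmetic exact):

```python
# One in-memory dataset serves both workloads.
orders = {}  # order_id -> amount in cents

def record_order(order_id: int, amount_cents: int) -> None:
    """OLTP side: a transactional point write."""
    orders[order_id] = amount_cents

def total_revenue_cents() -> int:
    """OLAP side: an analytics aggregate over the same live data."""
    return sum(orders.values())

record_order(1, 1999)
record_order(2, 500)
print(total_revenue_cents())  # 2499 -- reflects all writes immediately, no ETL lag
```

The analytics query sees every transaction the instant it commits, which is exactly what a periodic OLTP-to-OLAP copy cannot provide.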

Tip 4: Consider an open source stack

To continue creating a cost-effective, rapidly scalable infrastructure, consider these other proven open source solutions:

  • Apache Kafka or Apache Flink for building real-time data pipelines for delivering data from streaming sources, such as stock quotes or IoT devices, into the Apache Ignite in-memory data grid.
  • Kubernetes for automating the deployment and management of applications that have been containerized with Docker or another container solution. Putting applications in containers and automating their management is key to successfully building real-time, end-to-end business processes in a distributed, hybrid, multi-cloud world.
  • Apache Spark for processing and analyzing large amounts of distributed data. Spark takes advantage of the Ignite in-memory computing platform to more effectively train machine learning models using the huge amounts of data being ingested via a Kafka or Flink streaming pipeline.
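The data flow these components form, from a stream into an in-memory grid, can be reduced to a few lines of pure Python. No Kafka, Flink, or Ignite APIs are used here; the names and values are illustrative only:

```python
# Stand-in for a streaming source such as a Kafka topic: an iterable of
# (symbol, price) ticks.
ticks = [("AAPL", 310.1), ("MSFT", 180.4), ("AAPL", 311.0)]

latest_price = {}  # stand-in for the Ignite in-memory data grid

for symbol, price in ticks:       # the consumer side of the pipeline
    latest_price[symbol] = price  # upsert the latest value into the "grid"

print(latest_price)  # {'AAPL': 311.0, 'MSFT': 180.4}
```

In production, the loop body would be a Kafka consumer or Flink sink writing into Ignite, and downstream applications (or Spark jobs) would query `latest_price` at memory speed rather than re-reading the stream.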

Tip 5: Build, deploy, and maintain it right

With the desire to deploy these solutions in an accelerated timeframe – and with the consequences of delays potentially very high – it is essential to make a realistic assessment of the in-house resources available for the project. If your company lacks the expertise or the availability, don’t hesitate to consult with third-party experts. You can easily obtain support for all these open source solutions on a contract basis, making it possible to gain the required expertise without the cost and time required to expand your in-house team.

Tip 6: Learn more

Many online resources are available to help you get up to speed on the potential of these technologies and determine which strategies may be right for your organization. Start by exploring the following.

Apache Ignite

Apache Kafka and Apache Flink

Kubernetes

Apache Spark

Whether your goal is to ensure an optimal customer experience in the face of surging business activity or to start planning for growth in the post-pandemic economic recovery, an open source infrastructure stack powered by in-memory computing is a cost-effective path to combining unprecedented speed with massive scalability to enable real-time business processes.

Nikita Ivanov is co-founder and CTO of GridGain Systems, where he has led the development of advanced and distributed in-memory data processing technologies. He has more than 20 years of experience in software application development, building HPC and middleware platforms, and contributing to the efforts of companies including Adaptec, Visa, and BEA Systems.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.
