There’s no doubt that cloud adoption remains one of the most relevant topics for organizations globally in 2021, and I don’t see it slowing. It is estimated that more than 80% of North American organizations run 30% or more of their applications in a public cloud offering such as AWS, GCP, or Azure, and predictions indicate this adoption will only increase.
Most organizations using cloud offerings run net-new workloads there, or easy-to-move, low-hanging-fruit workloads. Migrating legacy applications to the cloud is a common struggle. A “brute force” manual migration is the most common approach, and many organizations choose not to move these legacy applications at all, letting them age out naturally.
But what do you do with a legacy application that delivers real value and rich outcomes to the organization, an application that is heavily relied upon?
One such legacy application that organizations work with daily is SAS, a 50+ year old statistics and analytics platform. Customers come to us looking to adopt a modern, portable, cloud-ready solution that delivers the same outcomes as their SAS implementation. Our answer: PySpark.
Apache Spark is the leader in advanced analytics, and its portability means it can run on-premises or in any public cloud environment on the planet. However, you cannot simply lift years of SAS code into PySpark and expect it to work. That code was developed over many years in a proprietary programming language that does not translate easily into anything else on the market today.
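To give a feel for the gap, here is a purely illustrative sketch: a trivial SAS DATA step next to one possible PySpark equivalent. The table and column names are invented for this example, and even this simplest of cases requires knowledge of a different API; real SAS estates full of BY-group processing, RETAIN logic, and macros are far harder to translate by hand.

```python
# Hypothetical SAS original (names invented for illustration):
#   data work.high_value;
#       set work.sales;
#       where amount > 1000;
#       revenue_cad = amount * 1.25;
#   run;

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas-to-pyspark-sketch").getOrCreate()

sales = spark.table("sales")  # assumes a registered 'sales' table

high_value = (
    sales
    .where(F.col("amount") > 1000)                      # SAS WHERE clause
    .withColumn("revenue_cad", F.col("amount") * 1.25)  # derived column
)

high_value.write.mode("overwrite").saveAsTable("high_value")
```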
So what do you do? Starting over on a new platform without your data pipelines simply isn’t feasible. You can take the manual approach, hiring teams to re-code, or outsourcing to an organization that provides the labour and takes the same manual “brute force” approach. These sound like reasonable alternatives, but consider that the average programmer converts only 300-400 lines of code per day, and you likely have hundreds of thousands, or millions, of lines to process. That is a lot of labour, a long timeline, and a big price tag. Manual work also invites mistakes, and each developer brings their own coding style, introducing inconsistencies that can become a problem down the road.
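A quick back-of-envelope calculation, using the figures above and an illustrative one-million-line codebase, makes the scale clear:

```python
# Rough effort estimate for a manual SAS conversion; the codebase size
# is an assumption for illustration, not a measured figure.
lines_of_code = 1_000_000          # illustrative large legacy SAS estate
lines_per_dev_per_day = 350        # midpoint of the 300-400 lines/day figure
working_days_per_year = 250

dev_days = lines_of_code / lines_per_dev_per_day
print(f"{dev_days:,.0f} developer-days")                          # ~2,857
print(f"{dev_days / working_days_per_year:.1f} developer-years")  # ~11.4
```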
There is an easier way. WiseWithData has developed SPROCKET, the world’s only AI solution that automates the process of converting legacy SAS code into true PySpark code. This speeds up the migration process by as much as 40x, letting you migrate quickly with little to no interruption to your business, and it ensures the converted code is optimized to take advantage of the features found within Apache Spark.
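As one illustration of what “optimized for Spark” can mean in practice: naive hand-ports of row-by-row SAS logic often land on Python UDFs, which Spark’s optimizer cannot see into. The sketch below, reusing the hypothetical table from earlier, contrasts that with an equivalent built-in expression the engine can fully optimize.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("optimization-sketch").getOrCreate()
sales = spark.table("sales")  # illustrative table name, as above

# Naive port: row-by-row Python UDF, opaque to the Catalyst optimizer and
# paying Python serialization overhead on every row.
tier_udf = F.udf(lambda amt: "high" if amt and amt > 1000 else "low", StringType())
naive = sales.withColumn("tier", tier_udf("amount"))

# Idiomatic equivalent: a built-in expression the engine can compile,
# optimize, and run entirely inside the JVM.
optimized = sales.withColumn(
    "tier", F.when(F.col("amount") > 1000, F.lit("high")).otherwise(F.lit("low"))
)
```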
The result: a much more cost-effective migration, completed in a significantly shorter time frame, delivering improved performance on a modern platform supported by every cloud provider.
One last point on migrating from legacy platforms: moving directly to the cloud introduces significant risk and may not deliver the outcomes your lines of business expect or are used to, making all of your efforts appear wasted. A walk-don’t-run strategy is often the best approach: migrate the code to your new Spark platform on-premises before moving to the cloud. This eases the transition, and it gives you an on-prem vs. cloud benchmark. You’ll thank me for this later when you’re troubleshooting connectivity or performance, which is bound to happen.
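A minimal sketch of what such a benchmark might look like, assuming the same dataset is staged in both environments; the paths and column names here are placeholders:

```python
import time

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# A repeatable pipeline you can run on-prem first, then re-run unchanged
# in the cloud and compare wall-clock times.
spark = SparkSession.builder.appName("onprem-vs-cloud-benchmark").getOrCreate()

start = time.perf_counter()
result = (
    spark.read.parquet("/data/benchmark/sales")  # same dataset in both environments
    .groupBy("region")
    .agg(F.sum("amount").alias("total"))
)
result.write.mode("overwrite").parquet("/data/benchmark/out")
elapsed = time.perf_counter() - start

print(f"pipeline wall-clock: {elapsed:.1f}s")  # record per environment
```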
If you are interested in learning more about a SAS to PySpark migration, our approach, and how we can deliver a compelling ROI, ongoing cost savings, and a timeframe that meets your needs, let us know by sending us an email at hello@wisewithdata.com.