This session presents a simple, human-centered approach to creating test suites that target multiple points of contact in a data solution. Enterprises commonly pick GUI-heavy data processing solutions because they make the data workflow easy to understand. However, those solutions are still unable to verify the simplest use case: "If I put data into a solution that processes data, then I should get the desired result."
FIS will demonstrate and teach you how to build a unique testing solution on top of Apache Spark. With this solution, FIS can actually prove to users in their organization that when they put data in, they get the correct result out. They can also enlist their entire team, from product owner to developer, to write complete unit tests. The flexibility Spark enables allows you to take unique paths in building robust, understandable data flows. The transformational element is the ability to do this in milliseconds, rather than waiting for the entire pipeline to finish.
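The core idea described above can be sketched in a few lines. This is a minimal illustration, not FIS's actual framework: each pipeline step is factored into a plain function over records, so any team member can assert "data in, desired result out" on a tiny in-memory sample in milliseconds, without launching the full pipeline. The function name `normalize_amounts` and the record shape are hypothetical.

```python
def normalize_amounts(rows):
    """Hypothetical pipeline step: convert amounts in cents to dollars
    and drop records with missing or negative amounts."""
    return [
        {**row, "amount": row["amount"] / 100.0}
        for row in rows
        if row.get("amount") is not None and row["amount"] >= 0
    ]

# The simplest use case, expressed as a unit test on an in-memory sample:
sample = [
    {"id": 1, "amount": 1050},
    {"id": 2, "amount": None},  # invalid: dropped
    {"id": 3, "amount": -5},    # invalid: dropped
]
assert normalize_amounts(sample) == [{"id": 1, "amount": 10.5}]
```

In a Spark context, the same discipline might be applied by writing each step as a function from DataFrame to DataFrame and chaining the steps with `DataFrame.transform`, so the identical functions are exercised both by fast local tests and by the production job.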
Session hashtag: #SFdev27
Aaron is the Director of Engineering serving FIS Global's big data initiative within its Digital Finance Division. Aaron has over 20 years of experience developing Business Intelligence solutions and has served as a technical editor for several Big Data books. Aaron holds a master's degree in Information Technology from the University of Wisconsin.
Zachary Nanfelt is a Business Intelligence Software Engineer at FIS. Coming from a software engineering background, he started his career in Big Data in 2013, when he worked hand-in-hand with the CTO of a previous company to scrape together a Hadoop cluster at a hackathon. His latest work has involved leading the Spark development for a project that replaced a legacy ETL system, which was about to collapse under its load, with a more modern, scalable approach built on Apache Spark. When not coding, Zachary enjoys climbing rocks and going for long bike rides.