The OneLogic cloud platform leverages Amazon Web Services (AWS).
AWS cloud provides a broad set of infrastructure services that are delivered as a utility: on-demand, available in seconds, with pay-as-you-go pricing. AWS provides access to affordable and scalable infrastructure and enables us to focus on optimising algorithm execution and data management.
We make use of compute and storage resources primarily within AWS, with a focus on Big Data-oriented services (S3, Lambda, EC2, EMR, RDS, etc.). Ancillary services such as code management are also used. Wherever possible, we leverage AWS core services to support the OneLogic application stack and use DevOps techniques to keep operational costs low.
The ALA cloud platform is split into three areas:
- Data Ingestion and Store
- Intelligence Engine
- Query Engine
Ingestion processes handle input sources such as financial history, price data and sentiment feeds. These sources are ingested by serverless processes that deposit data in S3 and DynamoDB, keeping ingestion as cost-effective as possible whilst avoiding constraints on the volume of data we capture.
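As a rough illustration of this pattern, the sketch below shows a Lambda-style ingestion handler that writes a raw feed payload to object storage and an index entry to a key-value table. The event shape, key layout and the in-memory stand-ins for S3 and DynamoDB are all assumptions for illustration, not the actual OneLogic implementation.

```python
import json
import hashlib
from datetime import datetime, timezone

# In-memory stand-ins for the S3 bucket and DynamoDB table the real
# handler would write to (names and shapes here are illustrative).
RAW_BUCKET = {}    # object key -> raw payload bytes
INDEX_TABLE = {}   # content digest -> metadata item

def ingest_handler(event, context=None):
    """Lambda-style entry point: persist the raw feed payload and
    record an index entry so downstream clusters can find it."""
    payload = json.dumps(event["records"]).encode("utf-8")
    digest = hashlib.sha256(payload).hexdigest()
    day = datetime.now(timezone.utc).strftime("%Y/%m/%d")
    key = f"{event['source']}/{day}/{digest}.json"

    RAW_BUCKET[key] = payload          # in practice: s3.put_object(...)
    INDEX_TABLE[digest] = {            # in practice: table.put_item(...)
        "source": event["source"],
        "s3_key": key,
        "record_count": len(event["records"]),
    }
    return {"statusCode": 200, "s3_key": key}

result = ingest_handler({
    "source": "price-feed",
    "records": [{"symbol": "ABC", "price": 101.5}],
})
```

Because the handler is a plain function with no server to manage, capacity scales with the number of incoming events and cost accrues only per invocation, which is the property the serverless approach relies on.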
Once organised and indexed through ingestion, data is made available to different analytics clusters for onward processing. These clusters are referred to as the ALA Intelligence Engine.
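To make the division of labour concrete, the sketch below shows the kind of batch aggregation an Intelligence Engine cluster might run over ingested records. The record shape and the choice of per-symbol price averaging are hypothetical examples; a real job would run at cluster scale (e.g. on EMR) rather than in a single process.

```python
from collections import defaultdict

def average_price_by_symbol(records):
    """Aggregate ingested price records into a per-symbol average."""
    totals = defaultdict(lambda: [0.0, 0])
    for rec in records:
        totals[rec["symbol"]][0] += rec["price"]
        totals[rec["symbol"]][1] += 1
    return {sym: total / count for sym, (total, count) in totals.items()}

averages = average_price_by_symbol([
    {"symbol": "ABC", "price": 100.0},
    {"symbol": "ABC", "price": 102.0},
    {"symbol": "XYZ", "price": 50.0},
])
```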
The platform is wrapped by management services that support day-to-day operations, and fronted by API gateways through which web and app access is securely managed and federated.
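The gateway's role can be sketched as a simple authorisation check applied before any request reaches the platform. The token store, scope names and response shape below are purely illustrative; in practice this sits in the API gateway layer (e.g. a Lambda authoriser) backed by a federated identity provider.

```python
# Hypothetical token registry standing in for a federated identity
# provider; real tokens would be verified, not looked up in a dict.
VALID_TOKENS = {"demo-token": {"user": "analyst", "scopes": {"query:read"}}}

def authorise(headers, required_scope):
    """Gateway-style check: allow the request only if the bearer
    token is known and carries the required scope."""
    token = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    claims = VALID_TOKENS.get(token)
    if claims is None or required_scope not in claims["scopes"]:
        return {"statusCode": 403, "body": "Forbidden"}
    return {"statusCode": 200, "user": claims["user"]}

ok = authorise({"Authorization": "Bearer demo-token"}, "query:read")
denied = authorise({}, "query:read")
```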
The architecture shown in the diagram below supports modular addition of any number of ingestion feeds and processing algorithms whilst maintaining a scalable, resilient and consistent infrastructure. We make use of AWS Machine Learning services for comprehension and Natural Language Processing (NLP), as well as for machine learning tasks on ingested data. AWS Machine Learning is highly scalable: it can generate billions of predictions daily and serve them in real time at high throughput.
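To show where NLP fits in the pipeline, the toy scorer below stands in for the managed NLP service, scoring the sentiment of an ingested headline with a simple lexicon lookup. The lexicon and scoring scheme are illustrative only; the real service applies trained models rather than word counting.

```python
# Tiny illustrative sentiment lexicon (not a real model).
POSITIVE = {"surge", "beat", "growth", "record"}
NEGATIVE = {"drop", "miss", "loss", "decline"}

def sentiment_score(text):
    """Score a headline: +1 per positive word, -1 per negative word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

score = sentiment_score("Profits surge to a record high")
```

A score like this, produced per headline as sentiment feeds arrive, is the kind of low-latency, high-throughput prediction the paragraph above describes.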