Trading Technology


Context Relevant Automates Machine Learning for Data Scientists

Context Relevant is using machine learning tools to help data scientists automate quant behavior that previously required rare experts and months to tackle.

It's one thing to secure talented data scientists; it's entirely another to equip them for success. Enter Context Relevant, a fast-growing and in-demand startup offering automated predictive analytics software to help data experts build solutions to financial services' most complex big-data problems.

As any executive knows, an unfortunately large part of any job is the busywork and preparation that precede the prized end results. Cutting back on that low- or no-value-added work has been the focus of many tools and services, but the relatively new and complex role of the data scientist has been left a bit shortchanged. Shouldn't firms be enabling data scientists to be as productive as possible?

Data scientists spend an inordinate amount of time loading high-quality data and setting up sampling and filtering strategies before running anything through algorithms. To make the volume manageable, millions, even trillions, of rows of data may be pared down to tens of thousands -- a still-cumbersome volume that nevertheless reduces the model's accuracy.
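In sketch form, the pare-down step amounts to something like the following. This is a minimal, generic illustration with made-up row counts, not Context Relevant's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw dataset: one million rows, three feature columns.
n_rows = 1_000_000
X = rng.normal(size=(n_rows, 3))

def random_row_sample(X, frac, rng):
    """Uniformly sample a fraction of rows -- a stand-in for the
    hand-tuned sampling and filtering strategies described above."""
    k = max(1, int(len(X) * frac))
    idx = rng.choice(len(X), size=k, replace=False)
    return X[idx]

# Pare one million rows down to roughly twenty thousand (2%).
sample = random_row_sample(X, 0.02, rng)
print(sample.shape)  # (20000, 3)
```

The tradeoff the article describes is visible here: the sample is small enough to iterate on quickly, but any pattern confined to the discarded 98% of rows is lost to the model.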

Context Relevant's solution uses machine learning and off-the-shelf behavioral analytics models to help the data scientist speed through the foundational processes. The software integrates data from multiple sources (encouraging the import of raw, uncleansed data) and identifies patterns in the data based on models and heuristics. It automatically pinpoints the data features most relevant to optimizing the model or strategy. Models are tested in real-time, potentially improving on each run based on auto-detected problems and incremental changes made to optimize results.
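As a rough illustration of what automated feature relevance scoring can look like (a generic sketch, not Context Relevant's method), one simple approach ranks candidate features by how strongly each correlates with the prediction target:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 5 candidate features, but only features 0 and 3
# actually drive the target variable.
X = rng.normal(size=(5000, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=5000)

def rank_features(X, y):
    """Rank feature columns by absolute Pearson correlation with the
    target -- a simple stand-in for automated feature selection."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(range(X.shape[1]), key=lambda j: scores[j], reverse=True)

ranking = rank_features(X, y)
print(sorted(ranking[:2]))  # features 0 and 3 rank highest: [0, 3]
```

Real systems use far richer relevance measures than a single correlation score, but the principle is the same: surface the informative features automatically instead of asking the data scientist to find them by hand.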

To build the platform, explains Stephen Purpura, co-founder and CEO of Context Relevant, distributed-systems teams from top firms, including financial services organizations, were brought in to "assemble what we believe is best core data science application... A variety of institutions' understanding of data science applications came together to solve this one problem."

The company claims its results outperform manually developed models and can be built in a fraction of the time (hours versus weeks or months). To maximize speed, the software platform runs on Lynx, not Hadoop, Purpura says.

"We clocked automation of quant behavior at 40,000 times faster than existing solutions. That automation and time is a huge benefit. For example, we watch the market in real-time -- the entire market in coordination with news events and in coordination with customer behavior to identify the right products to sell at the right time. And not only products, but the right hedges based on portfolios. It's being done at a pace never seen before."

Context Relevant is also working with a financial organization to rate which bonds available on the market are good values relative to others, based on changing dynamics and the risk profiles of underlying securities. "In addition to that, the technology can recommend hedges if you make a purchase into the product. This is happening in real-time as market conditions change. From a sales perspective, it can offer performance never before possible within seconds."

As another example, insurance companies may want to monitor how the market is changing and how that impact flows into 401(k)s. Purpura says the software can help identify which people are likely to shift their 401(k)s based on market dynamics, letting firms better target opportunities.

Rapid growth
Context Relevant recently announced it has raised $21 million in Series B funding led by the San Francisco-based venture capital firm Formation 8. Madrona Venture Group, Bloomberg Beta, Vulcan Capital, and several prominent Seattle-area angels also joined in the round, according to a press release. One of the largest banks and one of the largest insurance companies in the United States also joined.

The Series B brings total funding to-date for Context Relevant to $28 million.

The company's rapid growth speaks to the demand for efficiency and automation in this area. Last week the company announced the appointment of Neil Zane, SVP for the Technology Partnership Development team at Bank of America, and Chris Mueller, former CFO and vice chairman of telecom leader 360Networks, to Context Relevant’s Advisory Board. According to the press release, they join Richard Clarke, White House national security expert and chairman and CEO of Good Harbor Risk Management; Mike Kail of Netflix; Darren Vengroff, chief scientist at RichRelevance; David Farber, professor at Carnegie Mellon University; and Gary Kazantsev of Bloomberg.

"Take any of the biggest banks in the world," Purpura tells us. "No matter how advanced you think they are, they will all say they struggle with making their data useful for analysis… It's certain every one of these organizations has figured out getting more value out of information is the No. 1 priority. Now that they have made that decision, the second question is how to get there…

"You're seeing the best organizations on Wall Street partnering with us because we give them the path to get more out of data every day. They use [the platform] for an incredible range of things. We're 27 months old, and this explosion happened so fast."

Becca Lipman is Senior Editor for Wall Street & Technology. She writes in-depth news articles with a focus on big data and compliance in the capital markets. She regularly meets with information technology leaders and innovators and writes about cloud computing, datacenters, ...

Comments
anon6541378980, 7/21/2014 | 1:30:37 PM
Re: Big Data Replacing Legacy Systems
The idea of applying machine learning to the financial markets has been around for quite some time. There was a flurry of literature in the late 1980s and early 1990s; for example, a well-cited 1990 article used neural networks to predict the TOPIX. Recently, there has been another wave of literature focused on applying machine learning, both individual algorithms (decision trees, random forests, SVMs, Naive Bayes, logistic regressions, etc.) and ensembles, to the stock and FX markets.
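The ensembles mentioned above combine several individual models' predictions; a majority vote over classifiers is the simplest case. Here's a toy sketch with made-up predictions (the labels and models are hypothetical, purely for illustration):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions by majority vote,
    the simplest form of classifier ensembling."""
    votes = zip(*predictions)  # per-sample votes across models
    return [Counter(v).most_common(1)[0][0] for v in votes]

# Three hypothetical models' predictions on four samples:
model_preds = [
    ["buy", "sell", "buy", "hold"],
    ["buy", "sell", "sell", "hold"],
    ["sell", "sell", "buy", "buy"],
]
print(majority_vote(model_preds))  # ['buy', 'sell', 'buy', 'hold']
```

Voting tends to outperform any single weak model when the models' errors are not strongly correlated, which is why ensembles recur throughout the literature the comment describes.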

Quantitative funds have utilized machine learning in their trading for quite some time with varying levels of success. Rebellion Research has been in the media quite a bit recently, and its fund has performed well over the past couple of years; it uses machine learning for feature selection and price forecasting. Two Sigma is another fund utilizing machine learning.

Machine learning is already making its way to individual investors. Additionally, traders have long been employing R and MATLAB to utilize machine learning.

For big data and machine learning to reach the average individual investor, it needs to be easier to gather clean, reliable, and disparate data, and the data has to be straightforward to analyze.

We use big data technologies to give our traders access to a huge database of clean and reliable market data, ranging from US equities to the FX market to Bitcoin. We have a variety of structured data sources (libraries of technical, macroeconomic, and fundamental indicators) as well as unstructured data (social indicators like StockTwits sentiment and public surveys).

These are data sources that are normally too expensive for any one investor or difficult to quantify and analyze in a traditional database.

To make this available to individual investors, we have removed the need for programming. We use machine-learning techniques to analyze the data behind the scenes, and we display the results in a visual, interactive interface. The most important aspect of machine learning is selecting quality features/indicators. Big data allows us to give traders access to any indicator they want to analyze over any asset.
Greg MacSweeney, 7/21/2014 | 6:56:13 AM
Re: Big Data Replacing Legacy Systems
Machine learning is something that financial firms are only starting to explore. When it comes to individual investors, big data and machine learning are used even less at this point. How does your company use big data to help individual investors?
justinc123, 7/16/2014 | 2:50:40 PM
Big Data Replacing Legacy Systems
Good article. It seems machine learning and big data analytics are seeping into larger firms and institutions. It is a slow process, though. It takes a long time for a firm like BoA to upgrade its legacy systems, for example. 

We are using similar technology but aimed at the individual traders. There really is not anything out there for individual investors on this level. You can get early access to our platform here. We hope the individual investor community will be as open to machine learning and big data analytics as the more sophisticated firms are. In order to compete, I think they will have to.

I strongly believe there is information to be gained in unstructured data like StockTwits. I also believe there is an enormous amount of data out there, too much to be analyzing without using a technique like machine learning.

 