High data volumes and throughput are defining features of the CMS experiment in its search for new physics. The aim of this project is to develop prototype systems that speed up and improve the quasi-real-time analyses performed by the triggers during the data-acquisition stage of the experiment. This matters because the High-Luminosity LHC upgrade is expected to increase the raw data throughput significantly. Two options are explored to improve trigger-farm performance: GPU parallelization of the razor variable analysis, and inference based on distributed machine-learning algorithms.
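To make the GPU option concrete, the sketch below shows how the per-event razor computation maps naturally onto massively parallel hardware: each CUDA thread takes one event, reduced to two "megajets" and a missing transverse energy vector, and evaluates the standard razor variables M_R = sqrt((|p1| + |p2|)^2 - (p1z + p2z)^2) and R^2 = (M_T^R / M_R)^2, with M_T^R = sqrt((E_T^miss (pT1 + pT2) - MET . (pT1 + pT2)) / 2). This is a minimal illustrative sketch, not the project's actual trigger code: the kernel name compute_razor, the structure-of-arrays event layout, and the toy kinematics are all assumptions.

```cuda
// razor_gpu.cu -- illustrative sketch of per-event razor variable
// computation on a GPU (hypothetical names and layout, not CMS code).
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// One thread per event: each event is reduced to two megajets
// (px, py, pz) plus the missing transverse energy vector (metx, mety).
__global__ void compute_razor(int n,
                              const float3 *j1, const float3 *j2,
                              const float2 *met,
                              float *mr, float *rsq)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 a = j1[i], b = j2[i];
    float pa = sqrtf(a.x * a.x + a.y * a.y + a.z * a.z);
    float pb = sqrtf(b.x * b.x + b.y * b.y + b.z * b.z);

    // M_R = sqrt((|p1| + |p2|)^2 - (p1z + p2z)^2)
    float sump = pa + pb, sumz = a.z + b.z;
    float m_r = sqrtf(fmaxf(sump * sump - sumz * sumz, 0.f));

    // M_T^R = sqrt((E_T^miss (pT1 + pT2) - MET . (pT1 + pT2)) / 2)
    float2 m = met[i];
    float emiss = sqrtf(m.x * m.x + m.y * m.y);
    float pta = sqrtf(a.x * a.x + a.y * a.y);
    float ptb = sqrtf(b.x * b.x + b.y * b.y);
    float dot = m.x * (a.x + b.x) + m.y * (a.y + b.y);
    float mtr = sqrtf(fmaxf(0.5f * (emiss * (pta + ptb) - dot), 0.f));

    mr[i]  = m_r;
    rsq[i] = (m_r > 0.f) ? (mtr / m_r) * (mtr / m_r) : 0.f;
}

int main()
{
    const int n = 1 << 20;             // one million toy events
    float3 *j1, *j2; float2 *met; float *mr, *rsq;
    cudaMallocManaged(&j1,  n * sizeof(float3));
    cudaMallocManaged(&j2,  n * sizeof(float3));
    cudaMallocManaged(&met, n * sizeof(float2));
    cudaMallocManaged(&mr,  n * sizeof(float));
    cudaMallocManaged(&rsq, n * sizeof(float));

    for (int i = 0; i < n; ++i) {      // arbitrary toy kinematics
        j1[i]  = make_float3(100.f, 0.f,  50.f);
        j2[i]  = make_float3(-80.f, 20.f, -30.f);
        met[i] = make_float2(-20.f, -20.f);
    }

    int threads = 256, blocks = (n + threads - 1) / threads;
    compute_razor<<<blocks, threads>>>(n, j1, j2, met, mr, rsq);
    cudaDeviceSynchronize();

    printf("event 0: M_R = %.2f GeV, R^2 = %.4f\n", mr[0], rsq[0]);
    cudaFree(j1); cudaFree(j2); cudaFree(met);
    cudaFree(mr); cudaFree(rsq);
    return 0;
}
```

Because each event is independent, the workload is embarrassingly parallel, which is what makes the trigger-farm use case attractive: throughput scales with the number of threads rather than with single-core clock speed.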