proposing to build a Spark-like big data and compute platform, fed by an entirely new event-streaming bus, modeled closely on what the tech giants have all built to solve their data problems at scale. It would allow hundreds, even thousands, of CPU cores to be thrown at the computations, so analyses that currently take days or weeks could be done in minutes or hours. Maxine is familiar with these techniques. Their use exploded after the famous 2004 Google MapReduce research paper was published, which described the techniques Google used to massively parallelize the indexing of the entire web.
  




