Monument offers useful predictions with no code at all!

H2O can get you to a similar end, though it is a very different tool — more akin to Scikit-learn, SparkML, or packages of your choice from GitHub.

Monument is strictly no-code and everything is “batteries included” — a much lower hurdle for enterprise-grade analysis than firing up a code development environment. We enable automatic model selection with our Autopilot feature, and we also offer AutoML.

Ideal use cases for Monument are:

1. One-off analyses with quick turnaround, and
2. Rapid enterprise deployment, anywhere between raw data and BI/reporting.

Overall, H2O is oriented toward people who expect to write and maintain code…

A COVID active cases and deaths model for New Jersey, built in Monument.

COVID has dramatically altered all of our lives. The corresponding flurry of information — and sometimes misinformation — has made these changes more acute and harder to unpack.

To empower citizens, journalists, and policymakers, Monument has assembled the easy-to-follow instructions below for building your own machine learning models of COVID’s spread, mortality rate, and other important factors.

We have laid out the instructions in such a way to make them easy to modify for your own analyses. As always, if you have any questions, feel free to contact us at

Step 1: What we want to test

The specific questions we’re answering here are:

  1. What is the…

Quickly build production-ready Machine Learning workflows with model serving.

NEW YORK, N.Y. — Monument is pleased to announce the rollout of “model serving” capabilities. Model serving enables users to train algorithms on historical data, save the parameterized model, and deploy this model on new data that the model has not previously seen.

Model serving complements the “training” and “validation” steps in the Machine Learning workflow. It is used for two principal workflows: “testing” and “live”:

  1. In testing, the new data contains the values of the target column that is being predicted. …
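Monument handles this pattern without code, but the save-and-deploy idea is easy to picture. Below is a minimal sketch using Python’s standard pickle module and a toy stand-in model (both of which are our own assumptions for illustration, not Monument internals): train on historical data, serialize the parameterized model, then load it later to score unseen data.

```python
import pickle

class MeanModel:
    """Toy 'model': predicts the mean of the training targets."""
    def fit(self, ys):
        self.mean_ = sum(ys) / len(ys)
        return self
    def predict(self, n):
        return [self.mean_] * n

# "Training" step: fit on historical data.
model = MeanModel().fit([10.0, 12.0, 14.0])

# "Serving" step 1: save the parameterized model.
blob = pickle.dumps(model)

# "Serving" step 2: later, load the saved model and apply it to unseen data.
served = pickle.loads(blob)
print(served.predict(2))  # [12.0, 12.0]
```

In the “testing” workflow you would compare `served.predict(...)` against the known target values; in the “live” workflow no targets exist yet, so the predictions themselves are the product.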

“Let’s not lose our cool — then we’re no better than the machine!” (Copyright Paramount Pictures.)

Stop putting prices on things by guessing. You’re leaving money on the table. A great use case for Machine Learning is building pricing engines, and no coding or data science expertise is required.

Whether you’re buying or selling a product, Machine Learning can help you gauge whether a given price is too high or too low. This holds for any product, particularly commodities: computers, cars, steel, corn, fertilizer, airline tickets, concert tickets — whatever.

All you need to get started is some data to train an algorithm and a no-code Machine…
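As a rough illustration of the idea (our own sketch, with invented numbers rather than anything from Monument), a pricing engine boils down to fitting price against product attributes and comparing the model’s estimate to an asking price:

```python
# Historical sales: (square_feet, sale_price) -- illustrative numbers only.
sales = [(800, 160_000), (1000, 200_000), (1200, 240_000), (1500, 300_000)]

xs = [s for s, _ in sales]
ys = [p for _, p in sales]
n = len(sales)

# Ordinary least squares for price = a * square_feet + b.
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in sales) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def fair_price(square_feet):
    """Model's estimate of a fair price for a given size."""
    return a * square_feet + b

asking = 275_000
estimate = fair_price(1100)
print(estimate)           # 220000.0 -- this toy data is exactly linear
print(asking > estimate)  # True: the asking price looks high
```

A real pricing engine would use many attributes and a richer model, but the decision at the end is the same: compare the quoted price to the model’s estimate.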

Two enter, one remains… (Source: Wikipedia)

Pop quiz. In the chart above of eleven historical datapoints, which of the two lines would most accurately predict the location of a new, twelfth datapoint? The blue line seems great — it fits your known historical datapoints perfectly! The diagonal black line seems “dumb” in comparison, as it only manages to intersect three of the eleven historical datapoints.

It may surprise some readers that the diagonal black line can predict the location of the next datapoint more accurately than the blue line.

Deciding how to choose between these…
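The effect is easy to reproduce. The sketch below uses our own illustrative data, not the actual points from the chart: it fits both a perfect-interpolation curve (the analogue of the blue line) and a plain least-squares line to eleven points, then checks which one lands closer to a held-out twelfth point.

```python
# 11 "historical" points: an underlying straight line y = x plus
# alternating noise of +/-0.5 (invented data for illustration).
xs = list(range(11))
ys = [x + 0.5 * (-1) ** x for x in xs]

def interpolate(x):
    """The 'blue line': the unique degree-10 polynomial through all 11
    points (zero training error), evaluated via Lagrange's formula."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The "dumb" straight line: ordinary least squares fit.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# A new, twelfth datapoint generated by the same underlying process.
x_new, y_new = 11, 11 - 0.5

poly_error = abs(interpolate(x_new) - y_new)
line_error = abs(slope * x_new + intercept - y_new)
print(poly_error > 100 * line_error)  # True: the perfect-fit curve misses badly
```

The interpolating polynomial memorizes the noise and oscillates wildly outside the training data, while the straight line captures the underlying trend — the classic overfitting trade-off.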

Take it easy.

1. It’s easier to get started.

Data science does not have to mean months or years of preparation. With a no-code platform you can go from zero coding or data science experience to running your first algorithm in minutes — literally!

2. It’s easier to visualize your results.

It’s hard to find meaning in arrays of numbers, and visualizing your data is its own coding hurdle. No-code tools chart your results automatically, so you can see what your work has produced.

3. It’s easier to experiment with different methods.

Regressions, Neural Networks, Kalman Filters. If you’re learning to code with, for example, Python, you’re going to deal with an entirely new learning curve for each…

Apply a classification algorithm in seconds!

NEW YORK, N.Y. — The Monument team is pleased to announce the addition of classification and regression methods to our zero-code machine intelligence platform. Users can now estimate classes or levels in their data, and Monument automatically adjusts to the data set.

These classification and regression algorithms open up entirely new use cases across a variety of industries, such as fraud detection, value appraisal, customer categorization, and machine-failure prediction.

The classification algorithms are Support Vector Machine (SVM), Light Gradient Boosted Machine (LightGBM), and Logistic Regression (LogReg). These features expand on Monument’s robust suite of time-series algorithms.
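Monument runs these methods without any code, but for readers curious what one of them, Logistic Regression, does under the hood, here is a minimal from-scratch sketch on a toy dataset (all numbers invented for illustration):

```python
import math

# Toy binary data: one feature, where class 1 tends to have larger values.
X = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
y = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight w and bias b by gradient descent on the logistic loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    gw = gb = 0.0
    for xi, yi in zip(X, y):
        err = sigmoid(w * xi + b) - yi  # prediction error for this point
        gw += err * xi
        gb += err
    w -= lr * gw / len(X)
    b -= lr * gb / len(X)

def predict(xi):
    """Classify as 1 when the predicted probability is at least 0.5."""
    return int(sigmoid(w * xi + b) >= 0.5)

print([predict(xi) for xi in X])  # [0, 0, 0, 1, 1, 1] on this separable data
```

SVM and LightGBM draw their decision boundaries very differently (maximum-margin separation and boosted trees, respectively), but all three reduce to the same interface: features in, class label out.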


“Credit Cards and Cash” by Sean MacEntee is licensed under CC BY 2.0

Predicting the likelihood of future events like fraud or payment defaults is a classic use case for machine learning. With its drag-and-drop interface, Monument makes it easier than ever to tackle this classification problem. In this tutorial, in a matter of minutes, we will use real-world data on credit card defaults to train an algorithm to detect payment defaults.

Obtaining & Inspecting The Data

In the “Data Folder” of the University of California-Irvine repository linked above, there is a file called default of credit card clients.xls. When we open it up in a spreadsheet, it looks like this:

Monument enables you to quickly apply algorithms to data in a no-code interface. But after you drag the algorithms onto data to generate predictions, you need to decide which algorithm, or combination of algorithms, is most reliable for your task.

In the ocean temperature tutorial, we cleaned open remote sensing data and fed the data into Monument in order to forecast future ocean temperatures. In that case, we used visual inspection to evaluate the accuracy of different algorithms, which was possible because the historical data roughly formed a sine curve. …

The NOAA Ocean Data in this Tutorial Covers Florida & the Gulf of Mexico.

In this tutorial, we’re going to show you how to take open source data from the National Oceanic and Atmospheric Administration (NOAA), clean it, and forecast future temperatures using no-code machine learning methods.

This particular data comes from the Harmful Algal BloomS Observation System (HABSOS). There are several interesting questions to ask of this data — for instance, what is the relationship between algal blooms and water temperature fluctuations? For this tutorial, we’re going to start with a basic question: can we predict what temperatures will be over the next five months?

The first part of this tutorial deals with acquiring…
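For a sense of what a five-month forecast involves, here is a seasonal-naive baseline in plain Python. The series below is a synthetic stand-in for a cleaned monthly temperature record, not real HABSOS data; the baseline simply repeats what happened twelve months earlier, which is the sanity check any fancier model should beat.

```python
import math

# Synthetic stand-in for a cleaned monthly sea-temperature series (deg C):
# three years of a rough seasonal cycle (illustrative, not real NOAA values).
history = [25 + 4 * math.sin(2 * math.pi * m / 12) for m in range(36)]

def forecast(series, horizon, period=12):
    """Seasonal-naive forecast: each future month repeats the value
    observed one full period (12 months) earlier."""
    return [series[len(series) - period + h] for h in range(horizon)]

next_five = forecast(history, horizon=5)
print([round(t, 1) for t in next_five])
```

Because the stand-in series is perfectly periodic, the baseline here reproduces last year’s values exactly; on real, noisy ocean data its errors give you a yardstick for judging the algorithms Monument applies.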


Predictions to keep you two steps ahead.
