Tidy Word Vectors Revisited

Julia Silge revisits her Hacker News word vectorization problem:

So hooray! We have found word vectors again, a bit faster, with clearer and easier-to-understand code. I do argue that this is a real benefit of this approach; it’s based on counting, dividing, and matrix decomposition and is thus much easier to understand and implement than anything with a neural network. And the results?

Click through to see the new method, as well as some basic analogy testing.
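The quoted description, counting co-occurrences, dividing to get pointwise mutual information, then factoring the resulting matrix, can be sketched in a few lines. The snippet below is not Julia Silge's R/tidytext code; it is a minimal Python illustration of the same count/divide/decompose idea, with a toy corpus, window size, and dimension chosen only for the example.

```python
# Minimal sketch (not the post's R/tidytext code): word vectors from
# co-occurrence counts, PMI ("counting and dividing"), and a truncated SVD.
from collections import Counter
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# Hypothetical tokenized corpus; in practice this would be the Hacker News text.
docs = [["tidy", "word", "vectors", "are", "fun"],
        ["word", "vectors", "from", "counting"]]

window = 4  # sliding-window width (assumed; the post tunes its own)

# 1. Count word and skipgram (co-occurrence) frequencies.
unigrams, skipgrams = Counter(), Counter()
for tokens in docs:
    unigrams.update(tokens)
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            pair = tuple(sorted((tokens[i], tokens[j])))
            skipgrams[pair] += 1

vocab = {w: i for i, w in enumerate(unigrams)}
total_uni = sum(unigrams.values())
total_skip = sum(skipgrams.values())

# 2. "Dividing": pointwise mutual information for each co-occurring pair.
rows, cols, vals = [], [], []
for (w1, w2), n in skipgrams.items():
    pmi = np.log((n / total_skip) /
                 ((unigrams[w1] / total_uni) * (unigrams[w2] / total_uni)))
    for a, b in ((w1, w2), (w2, w1)):  # keep the matrix symmetric
        rows.append(vocab[a]); cols.append(vocab[b]); vals.append(pmi)

pmi_matrix = csr_matrix((vals, (rows, cols)), shape=(len(vocab), len(vocab)))

# 3. Matrix decomposition: a truncated SVD gives low-dimensional word vectors.
k = 2  # tiny dimension for the toy corpus; a real run would use many more
u, s, _ = svds(pmi_matrix, k=k)
word_vectors = u * s  # one k-dimensional vector per vocabulary word
```

With real text, similarity between the resulting vectors (e.g., cosine similarity) is what drives the analogy tests mentioned above.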

