> Pymemri is a python library for creating <b>Plugins</b> for the Memri Personal online datastore [(pod)](https://gitlab.memri.io/memri/pod). Pymemri has a PodClient to communicate with the pod, and tools to build and test plugins.
[](https://gitlab.memri.io/memri/pymemri/-/pipelines/latest)
Plugins connect to and add information to your Pod. Plugins that **import your data from external services** are called **Importers** (Gmail, WhatsApp, etc.). Plugins that **connect new data to the existing data** are called **Indexers** (face recognition, spam detection, object detection, etc.). Lastly, there are plugins that **execute actions** (sending messages, uploading files). This repository is built with [nbdev](https://github.com/fastai/nbdev), which means that the repo structure has a few differences compared to a standard python repo.
## Installing
### As a package
```bash
pip install pymemri
```
### Development
To install the Python package and correctly set up nbdev for development, run:
```bash
pip install -e . && nbdev_install_git_hooks
```
The last command configures git to automatically clean metadata from your notebooks before a commit.
## Quickstart: Pod Client
All interaction between plugins and the pod goes via the Pymemri `PodClient`. To use this client in development, we first need to have a pod running locally. The quickest way to do this is to install from the [pod repo](https://gitlab.memri.io/memri/pod), and run `./examples/run_development.sh`.
If you have a running pod, you can define and add your own item definitions:

``` python
from pymemri.pod.client import PodClient

# `Dog` is assumed to be a custom Item subclass defined in your own schema;
# see the pymemri documentation on defining schema items.

# Connect to the pod and add the Dog item definition
client = PodClient()
client.add_to_schema(Dog)

# Add a Dog to the pod
dog = Dog("bob", 3)
client.create(dog)
```
## Quickstart: Running a plugin
After installation, users can use the plugin CLI to manually run a plugin. For more information on how to build a plugin, see `run_plugin`.
<b>With the pod running, run in your terminal:</b>
```bash
store_keys
run_plugin --metadata "example_plugin.json"
```
This stores a random owner key and database key on your disk for future use, and runs the pymemri example plugin. If everything works correctly, the output should read `Plugin run success.`
The Python integrators are written in [nbdev](https://nbdev.fast.ai/) ([video](https://www.youtube.com/watch?v=9Q6sLbz37gk&t=1301s)). With nbdev, it is encouraged to write code in
[Jupyter Notebooks](https://jupyter.readthedocs.io/en/latest/install/notebook-classic.html). Nbdev syncs all the notebooks in `/nbs` with the python code in `/pymemri`. Tests are written side by side with the code in the notebooks, and documentation is automatically generated from the code and markdown in the notebooks and exported into the `/docs` folder. Check out the [nbdev quickstart](wiki/nbdev_quickstart.md) for an introduction, **watch the video linked above**, or see the [nbdev documentation](https://nbdev.fast.ai/) for all functionality and tutorials.
``` python
%load_ext autoreload
%autoreload 2
# default_exp template.config
```
``` python
# export
from typing import List
from fastcore.script import call_parse, Param
import inspect
import os
import importlib
import json
from pathlib import Path
from pymemri.plugin.pluginbase import get_plugin_cls
from pymemri.pod.client import PodClient
```
``` python
# hide
# test imports
from pymemri.plugin.pluginbase import PluginBase
from pprint import pprint
import os
```
# Plugin Configuration
Often, plugins require some configuration before running. For example, you might want to run the plugin on data from a specific service, or run a specific version of your machine learning model. To configure your plugin when running it from the front-end, plugins require a `config.json` in the root of the plugin repository. This file contains a declarative definition, which our front-end app uses to display a configuration screen.
This module contains utilities to directly generate this config file from the plugin definition, by inferring all arguments from the plugin `__init__` method. All configuration fields are textboxes by default, which can be changed to different types in the future. For a full example, see our guide on [building plugins](https://docs.memri.io/guides/build_and_deploy_your_model/).
As an example, we create a test plugin with various configuration arguments. The `create_config` method is called on the plugin and generates a declarative configuration field for each plugin argument. Note that arguments starting with an underscore (`_`) and untyped arguments are skipped in the annotation.
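The inference rule described above can be sketched with the standard library's `inspect` module. `ExamplePlugin` and `infer_config_fields` below are illustrative stand-ins, not pymemri's actual implementation:

``` python
import inspect

class ExamplePlugin:
    """Hypothetical plugin; the argument names are illustrative."""
    def __init__(self, service: str = "whatsapp", model_version: str = "v1",
                 _state=None, untyped=None):
        self.service = service
        self.model_version = model_version

def infer_config_fields(plugin_cls):
    # One textbox field per typed __init__ argument; arguments starting
    # with '_' and untyped arguments are skipped, mirroring the rules above.
    fields = []
    for name, param in inspect.signature(plugin_cls.__init__).parameters.items():
        if name == "self" or name.startswith("_"):
            continue
        if param.annotation is inspect.Parameter.empty:
            continue  # untyped argument: skip
        default = None if param.default is inspect.Parameter.empty else param.default
        fields.append({"name": name, "type": "textbox", "default": default})
    return fields

print(infer_config_fields(ExamplePlugin))
# [{'name': 'service', 'type': 'textbox', 'default': 'whatsapp'},
#  {'name': 'model_version', 'type': 'textbox', 'default': 'v1'}]
```

Only `service` and `model_version` survive the filtering; `_state` is dropped for its leading underscore and `untyped` for lacking a type annotation.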
Pymemri offers a range of plugin templates to set up testing, docker and CI for you. This way, you can focus on building your plugin, and be sure it works within the Memri ecosystem.
All plugins are hosted on our [GitLab](https://gitlab.memri.io/). In order to make your own plugin from a template,
1. Create an account on [GitLab](https://gitlab.memri.io/)
2. Create a _public_ [blank repository](https://gitlab.memri.io/projects/new#blank_project)
3. Clone the repository
4. Run the `plugin_from_template` CLI inside the repository folder.
The CLI will infer most settings for you from your git account and repository name, only a template name and optional description are required.
```bash
plugin_from_template --template classifier_plugin --description "My Classifier Plugin"
```
To make sure all settings are correct, you can inspect `metadata.json`, which holds all information like your plugin name, and python package name.
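For illustration, a `metadata.json` for the plugin created above might carry fields like these. The key names here are hypothetical; inspect the file generated in your repository for the actual ones:

``` python
import json

# Hypothetical metadata.json contents; the real keys are written by
# `plugin_from_template` and may differ.
metadata = {
    "name": "my-classifier-plugin",         # plugin name
    "packageName": "my_classifier_plugin",  # python package name
    "description": "My Classifier Plugin",
}
print(json.dumps(metadata, indent=2))
```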
-----------------
All plugin templates are hosted [here](https://gitlab.memri.io/memri/plugin-templates). You can list the available templates with:
```bash
plugin_from_template --list
```
The CLI has options to customize the plugin name, package name and other aspects of your plugin. For advanced use, run:
```bash
plugin_from_template --help
```
## Utility functions -
``` python
# export
# hide
# If the owner of the repository is one of these groups, the CLI requires an additional `user` argument
```

With the `plugin_from_template` CLI, you can easily create a plugin where all CI pipelines, docker files, and test setups are configured for you. Multiple templates are available; to see the complete list, use: