architxt.cli#
Functions

| Function | Description |
| --- | --- |
| clear_cache |  |
| compare |  |
| inspect | Display overall statistics. |
| instance_generator | Generate synthetic database instances. |
| simplify |  |
| simplify_llm |  |
|  | Launch the web-based UI using Streamlit. |
- architxt.cli.clear_cache(*, force=typer.Option(False, help='Force the deletion of the cache without asking.'))[source]#
- Return type:
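As a usage sketch (not part of the reference itself), the cache can also be cleared from Python by calling the command function directly, assuming the Typer-decorated callable accepts plain keyword arguments. `force` is passed explicitly because the default shown in the signature is a Typer Option placeholder rather than a concrete boolean.

```python
from architxt.cli import clear_cache

# Delete the cache without the interactive confirmation prompt.
# `force` is passed explicitly: the signature default is a typer.Option
# placeholder, not a concrete boolean.
clear_cache(force=True)
```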
- architxt.cli.compare(file1=typer.Argument(..., exists=True, readable=True, help='Path of the first data file to load.'), file2=typer.Argument(..., exists=True, readable=True, help='Path of the second data file to load.'), *, tau=typer.Option(0.7, help='The similarity threshold.', min=0, max=1))[source]#
- Return type:
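A minimal sketch of comparing two data files programmatically, under the same assumption that the command can be called as a plain function. The file paths are placeholders, and `tau` is given explicitly because its signature default is a Typer Option placeholder.

```python
from pathlib import Path

from architxt.cli import compare

# Compare two exported data files with a 0.7 similarity threshold.
# The paths below are placeholders; point them at your own files.
compare(
    file1=Path("run_a.data"),
    file2=Path("run_b.data"),
    tau=0.7,
)
```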
- architxt.cli.inspect(files=typer.Argument(..., exists=True, readable=True, help='Path of the data files to load.'))[source]#
Display overall statistics.
- Return type:
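A hedged example of inspecting data files from Python, assuming the plural `files` argument accepts a sequence of pathlib.Path objects; the file names are placeholders. The import is aliased only to avoid shadowing the standard-library inspect module.

```python
from pathlib import Path

from architxt.cli import inspect as inspect_files  # avoid shadowing stdlib inspect

# Display overall statistics for one or more data files
# (placeholder paths; `files` is assumed to take a sequence of paths).
inspect_files(files=[Path("forest_a.data"), Path("forest_b.data")])
```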
- architxt.cli.instance_generator(*, sample=typer.Option(100, help='Number of sentences to sample from the corpus.', min=1), output=typer.Option(None, help='Path to save the result.'))[source]#
Generate synthetic database instances.
- Return type:
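A small sketch of generating synthetic database instances with an explicit sample size and output path. Both values are illustrative, and the options are passed explicitly because the signature defaults are Typer Option placeholders.

```python
from pathlib import Path

from architxt.cli import instance_generator

# Generate synthetic database instances from a 500-sentence sample and
# write the result to a file (illustrative values).
instance_generator(
    sample=500,
    output=Path("instances.out"),
)
```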
- architxt.cli.simplify(files=typer.Argument(..., exists=True, readable=True, help='Path of the data files to load.'), *, tau=typer.Option(0.7, help='The similarity threshold.', min=0, max=1), epoch=typer.Option(100, help='Number of iterations for tree rewriting.', min=1), min_support=typer.Option(20, help='Minimum support for tree patterns.', min=1), workers=typer.Option(None, help='Number of parallel worker processes to use. Defaults to the number of available CPU cores.', min=1), output=typer.Option(None, help='Path to save the result.'), debug=typer.Option(False, help='Enable debug mode for more verbose output.'), metrics=typer.Option(False, help='Show metrics of the simplification.'), log=typer.Option(False, help='Enable logging to MLFlow.'), log_system_metrics=typer.Option(False, help='Enable logging of system metrics to MLFlow.'))[source]#
- Return type:
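The sketch below runs the tree-rewriting simplification on a single data file. All keyword arguments are spelled out because the defaults shown in the signature are Typer Option placeholders, not concrete values; the input and output paths are placeholders.

```python
from pathlib import Path

from architxt.cli import simplify

# Run the tree-rewriting simplification on one data file.
# Every option is passed explicitly: the defaults in the signature are
# Typer Option placeholders, not usable values.
simplify(
    files=[Path("corpus.data")],      # placeholder input path
    tau=0.7,                          # similarity threshold
    epoch=100,                        # tree-rewriting iterations
    min_support=20,                   # minimum support for tree patterns
    workers=None,                     # default to all available CPU cores
    output=Path("simplified.out"),    # placeholder output path
    debug=False,
    metrics=True,                     # print simplification metrics
    log=False,                        # no MLFlow logging
    log_system_metrics=False,
)
```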
- architxt.cli.simplify_llm(files=typer.Argument(..., exists=True, readable=True, help='Path of the data files to load.'), *, tau=typer.Option(0.7, help='The similarity threshold.', min=0, max=1), min_support=typer.Option(20, help='Minimum support for vocab.', min=1), refining_steps=typer.Option(0, help='Number of refining steps.'), output=typer.Option(None, help='Path to save the result.'), intermediate_output=typer.Option(None, help='Path to save intermediate results.'), debug=typer.Option(False, help='Enable debug mode for more verbose output.'), metrics=typer.Option(False, help='Show metrics of the simplification.'), log=typer.Option(False, help='Enable logging to MLFlow.'), log_system_metrics=typer.Option(False, help='Enable logging of system metrics to MLFlow.'), model_provider=typer.Option('huggingface', help='Provider of the model.'), model=typer.Option('HuggingFaceTB/SmolLM2-135M-Instruct', help='Model to use for the LLM.'), max_tokens=typer.Option(2048, help='Maximum number of tokens to generate.'), local=typer.Option(True, help='Use local model.'), openvino=typer.Option(False, help='Enable Intel OpenVINO optimizations.'), rate_limit=typer.Option(None, help='Rate limit for the LLM.'), estimate=typer.Option(False, help='Estimate the number of tokens to generate.'), temperature=typer.Option(0.2, help='Temperature for the LLM.'))[source]#
- Return type:
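A comparable sketch for the LLM-assisted variant, using the documented default model identifier and running it locally. The paths are placeholders and, as above, every option is passed explicitly because the signature defaults are Typer Option placeholders.

```python
from pathlib import Path

from architxt.cli import simplify_llm

# LLM-assisted simplification with the documented default local model.
simplify_llm(
    files=[Path("corpus.data")],                 # placeholder input path
    tau=0.7,                                     # similarity threshold
    min_support=20,                              # minimum support for vocab
    refining_steps=0,                            # no extra refining passes
    output=Path("simplified_llm.out"),           # placeholder output path
    intermediate_output=None,                    # skip intermediate dumps
    debug=False,
    metrics=True,                                # print simplification metrics
    log=False,                                   # no MLFlow logging
    log_system_metrics=False,
    model_provider="huggingface",
    model="HuggingFaceTB/SmolLM2-135M-Instruct",
    max_tokens=2048,
    local=True,                                  # run the model locally
    openvino=False,                              # no OpenVINO optimizations
    rate_limit=None,                             # no rate limiting
    estimate=False,                              # do not only estimate token usage
    temperature=0.2,
)
```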
Modules