This is a sentence-transformers model finetuned from Shuu12121/CodeModernBERT-Owl-v1. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
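As a rough illustration of the architecture above (a ModernBERT encoder followed by mean pooling into a 768-dimensional vector), the same embedding can be reproduced with plain transformers and torch. This is a minimal sketch, not the library's own implementation; the model id below is the same placeholder used in the usage example, not a confirmed Hub id.

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "sentence_transformers_model_id"  # placeholder, replace with the actual Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

texts = ["def add(a, b): return a + b", "Adds two numbers and returns the sum."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=1024, return_tensors="pt")

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens, mirroring pooling_mode_mean_tokens=True above.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # torch.Size([2, 768])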
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'#\nDeletes the cluster, including the Kubernetes endpoint and all worker\nnodes.\n\nFirewalls and routes that were configured during cluster creation\nare also deleted.\n\nOther Google Compute Engine resources that might be in use by the cluster,\nsuch as load balancer resources, are not deleted if they weren\'t present\nwhen the cluster was initially created.\n\n@overload delete_cluster(request, options = nil)\nPass arguments to `delete_cluster` via a request object, either of type\n{::Google::Cloud::Container::V1::DeleteClusterRequest} or an equivalent Hash.\n\n@param request [::Google::Cloud::Container::V1::DeleteClusterRequest, ::Hash]\nA request object representing the call parameters. Required. To specify no\nparameters, or to keep all the default parameter values, pass an empty Hash.\n@param options [::Gapic::CallOptions, ::Hash]\nOverrides the default settings for this call, e.g, timeout, retries, etc. Optional.\n\n@overload delete_cluster(project_id: nil, zone: nil, cluster_id: nil, name: nil)\nPass arguments to `delete_cluster` via keyword arguments. Note that at\nleast one keyword argument is required. To specify no parameters, or to keep all\nthe default parameter values, pass an empty Hash as a request object (see above).\n\n@param project_id [::String]\nDeprecated. The Google Developers Console [project ID or project\nnumber](https://cloud.google.com/resource-manager/docs/creating-managing-projects).\nThis field has been deprecated and replaced by the name field.\n@param zone [::String]\nDeprecated. The name of the Google Compute Engine\n[zone](https://cloud.google.com/compute/docs/zones#available) in which the\ncluster resides. This field has been deprecated and replaced by the name\nfield.\n@param cluster_id [::String]\nDeprecated. The name of the cluster to delete.\nThis field has been deprecated and replaced by the name field.\n@param name [::String]\nThe name (project, location, cluster) of the cluster to delete.\nSpecified in the format `projects/*/locations/*/clusters/*`.\n\n@yield [response, operation] Access the result along with the RPC operation\n@yieldparam response [::Google::Cloud::Container::V1::Operation]\n@yieldparam operation [::GRPC::ActiveCall::Operation]\n\n@return [::Google::Cloud::Container::V1::Operation]\n\n@raise [::Google::Cloud::Error] if the RPC is aborted.\n\n@example Basic example\nrequire "google/cloud/container/v1"\n\n# Create a client object. The client can be reused for multiple calls.\nclient = Google::Cloud::Container::V1::ClusterManager::Client.new\n\n# Create a request. To set request fields, pass in keyword arguments.\nrequest = Google::Cloud::Container::V1::DeleteClusterRequest.new\n\n# Call the delete_cluster method.\nresult = client.delete_cluster request\n\n# The returned object is of type Google::Cloud::Container::V1::Operation.\np result',
'def delete_cluster request, options = nil\n raise ::ArgumentError, "request must be provided" if request.nil?\n\n request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::Container::V1::DeleteClusterRequest\n\n # Converts hash and nil to an options object\n options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h\n\n # Customize the options with defaults\n metadata = @config.rpcs.delete_cluster.metadata.to_h\n\n # Set x-goog-api-client, x-goog-user-project and x-goog-api-version headers\n metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \\\n lib_name: @config.lib_name, lib_version: @config.lib_version,\n gapic_version: ::Google::Cloud::Container::V1::VERSION\n metadata[:"x-goog-api-version"] = API_VERSION unless API_VERSION.empty?\n metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id\n\n header_params = {}\n if request.name\n header_params["name"] = request.name\n end\n\n request_params_header = header_params.map { |k, v| "#{k}=#{v}" }.join("&")\n metadata[:"x-goog-request-params"] ||= request_params_header\n\n options.apply_defaults timeout: @config.rpcs.delete_cluster.timeout,\n metadata: metadata,\n retry_policy: @config.rpcs.delete_cluster.retry_policy\n\n options.apply_defaults timeout: @config.timeout,\n metadata: @config.metadata,\n retry_policy: @config.retry_policy\n\n @cluster_manager_stub.call_rpc :delete_cluster, request, options: options do |response, operation|\n yield response, operation if block_given?\n end\n rescue ::GRPC::BadStatus => e\n raise ::Google::Cloud::Error.from_error(e)\n end',
'device(deviceType, deviceId = 0) {\n\t return new DLDevice(deviceType, deviceId, this.lib);\n\t }',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
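Beyond pairwise similarity, the embeddings can drive semantic code search. The sketch below ranks a few candidate code snippets against a natural-language query; the query, the snippets, and the model id placeholder are all illustrative, not part of the actual training data.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id

query = "Deletes the cluster and all of its worker nodes."
corpus = [
    "def delete_cluster(request, options = nil) ... end",
    "def create_cluster(request, options = nil) ... end",
    "function memoize(func, resolver) { ... }",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Rank every snippet by cosine similarity to the query.
hits = util.semantic_search(query_emb, corpus_emb, top_k=3)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])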
Training dataset columns: sentence_0, sentence_1, and label

|  | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | float |
| details |  |  |  |
Samples:

| sentence_0 | sentence_1 | label |
|---|---|---|
| Set the column title | setHeader = function(column, newValue) { |  |
| Elsewhere this is known as a "Weak Value Map". Whereas a std JS WeakMap | makeFinalizingMap = (finalizer, opts) => { |  |
| Creates a function that memoizes the result of | function memoize(func, resolver) { |  |
The model was trained with MultipleNegativesRankingLoss using these parameters:
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
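For reference, this loss configuration corresponds roughly to the following construction in sentence-transformers; a sketch only, where util.cos_sim is the library's cosine-similarity function and 20.0 is the scale applied to the in-batch similarity matrix.

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("Shuu12121/CodeModernBERT-Owl-v1")  # base checkpoint named in this card

# In-batch negatives: each (docstring, code) pair is a positive, and every other
# code snippet in the same batch serves as a negative.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)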
The following training hyperparameters were non-default:

- per_device_train_batch_size: 150
- per_device_eval_batch_size: 150
- num_train_epochs: 1
- fp16: True
- multi_dataset_batch_sampler: round_robin

All training hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 150
- per_device_eval_batch_size: 150
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin

Training loss by epoch and step:

| Epoch | Step | Training Loss |
|---|---|---|
| 0.0188 | 500 | 0.2957 |
| 0.0375 | 1000 | 0.1174 |
| 0.0563 | 1500 | 0.1148 |
| 0.0750 | 2000 | 0.104 |
| 0.0938 | 2500 | 0.0977 |
| 0.1125 | 3000 | 0.0944 |
| 0.1313 | 3500 | 0.0885 |
| 0.1500 | 4000 | 0.083 |
| 0.1688 | 4500 | 0.0817 |
| 0.1875 | 5000 | 0.077 |
| 0.2063 | 5500 | 0.0764 |
| 0.2250 | 6000 | 0.0725 |
| 0.2438 | 6500 | 0.0698 |
| 0.2625 | 7000 | 0.0663 |
| 0.2813 | 7500 | 0.0644 |
| 0.3000 | 8000 | 0.0606 |
| 0.3188 | 8500 | 0.0587 |
| 0.3375 | 9000 | 0.0596 |
| 0.3563 | 9500 | 0.0566 |
| 0.3750 | 10000 | 0.0536 |
| 0.3938 | 10500 | 0.0514 |
| 0.4125 | 11000 | 0.0532 |
| 0.4313 | 11500 | 0.0501 |
| 0.4500 | 12000 | 0.0478 |
| 0.4688 | 12500 | 0.0483 |
| 0.4875 | 13000 | 0.0461 |
| 0.5063 | 13500 | 0.0444 |
| 0.5251 | 14000 | 0.0443 |
| 0.5438 | 14500 | 0.0402 |
| 0.5626 | 15000 | 0.0417 |
| 0.5813 | 15500 | 0.0386 |
| 0.6001 | 16000 | 0.0421 |
| 0.6188 | 16500 | 0.0368 |
| 0.6376 | 17000 | 0.036 |
| 0.6563 | 17500 | 0.0352 |
| 0.6751 | 18000 | 0.0339 |
| 0.6938 | 18500 | 0.0336 |
| 0.7126 | 19000 | 0.0334 |
| 0.7313 | 19500 | 0.0312 |
| 0.7501 | 20000 | 0.0325 |
| 0.7688 | 20500 | 0.0317 |
| 0.7876 | 21000 | 0.0284 |
| 0.8063 | 21500 | 0.0281 |
| 0.8251 | 22000 | 0.0294 |
| 0.8438 | 22500 | 0.0283 |
| 0.8626 | 23000 | 0.0277 |
| 0.8813 | 23500 | 0.0268 |
| 0.9001 | 24000 | 0.0254 |
| 0.9188 | 24500 | 0.0249 |
| 0.9376 | 25000 | 0.0255 |
| 0.9563 | 25500 | 0.0251 |
| 0.9751 | 26000 | 0.0244 |
| 0.9938 | 26500 | 0.0249 |
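Putting the loss and the non-default hyperparameters together, a run along these lines could be reproduced with the sentence-transformers trainer. This is a minimal sketch under stated assumptions: the two-row dataset is invented for illustration, and the real training used the full docstring/code corpus summarized above.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("Shuu12121/CodeModernBERT-Owl-v1")

# Tiny stand-in dataset using the card's column names (sentence_0 = docstring, sentence_1 = code).
train_dataset = Dataset.from_dict({
    "sentence_0": ["Deletes the cluster.", "Creates a function that memoizes the result of func."],
    "sentence_1": ["def delete_cluster(request, options = nil) ... end", "function memoize(func, resolver) { ... }"],
})

# Default scale (20.0) and similarity (cos_sim) match the loss configuration above.
loss = losses.MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=150,
    per_device_eval_batch_size=150,
    num_train_epochs=1,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",  # assumption: the string form is accepted here
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()

Because the loss uses in-batch negatives, the relatively large batch size of 150 directly increases the number of negatives each (docstring, code) pair is contrasted against. The BibTeX entries below cover Sentence-BERT (the sentence-transformers framework) and the paper behind MultipleNegativesRankingLoss.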
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Base model: Shuu12121/CodeModernBERT-Owl-v1