What are embedding_model.onnx and melspectrogram.onnx? #230
Unanswered
sangheonEN asked this question in Q&A
When I create the `F` object below, it uses the embedding_model.onnx and melspectrogram.onnx models provided by openwakeword:

`F = openwakeword.utils.AudioFeatures()`

Since these models were not trained on my dataset but are the pretrained ones that ship with openwakeword, could that keep performance from being fully optimized during training?
If so, could you tell me how to create embedding_model.onnx and melspectrogram.onnx models that fit my dataset?
Code content:

```python
class AudioFeatures():
    """
    A class for creating audio features from audio data, including melspectrograms
    and Google's `speech_embedding` features.
    """
    def __init__(self,
                 melspec_model_path: str = "",
                 embedding_model_path: str = "",
                 sr: int = 16000,
                 ncpu: int = 1,
                 inference_framework: str = "onnx",
                 device: str = 'cpu'
                 ):
        """
        Initialize the AudioFeatures object.

        Args:
            melspec_model_path (str): The path to the model for computing melspectrograms from audio data
            embedding_model_path (str): The path to the model for Google's `speech_embedding` model
            sr (int): The sample rate of the audio (default: 16000 Hz)
        """
```
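For reference, judging from the constructor signature above, custom models could presumably be supplied by pointing the two path arguments at your own ONNX files. This is only a sketch: the file names below are placeholders for models you would have to produce yourself, not files that ship with openwakeword.

```python
import openwakeword.utils

# Minimal sketch based on the AudioFeatures constructor shown above.
# "my_melspectrogram.onnx" and "my_embedding_model.onnx" are hypothetical
# files produced separately; openwakeword does not provide them.
F = openwakeword.utils.AudioFeatures(
    melspec_model_path="my_melspectrogram.onnx",
    embedding_model_path="my_embedding_model.onnx",
    sr=16000,
    inference_framework="onnx",
    device="cpu",
)
```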