Rename Audios to Audio
alexrudall committed Aug 14, 2023
1 parent 1cd4be9 commit 433aa68
Showing 9 changed files with 29 additions and 29 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -9,7 +9,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Changed

-- [BREAKING] Move audios related method to Audios model from Client model. You will need to update your code to handle this change, changing `client.translate` to `client.audios.translate` and `client.transcribe` to `client.audios.transcribe`
+- [BREAKING] Move audio related method to Audio model from Client model. You will need to update your code to handle this change, changing `client.translate` to `client.audio.translate` and `client.transcribe` to `client.audio.transcribe`.

## [4.3.2] - 2023-08-14

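To illustrate the breaking change described in the CHANGELOG entry above, here is a minimal migration sketch based on that entry and the README examples further down; the client setup and the `path_to_file` placeholder are assumptions for illustration, not part of this commit.

```ruby
require "openai"

# Assumed setup, as shown in the README; use your own token.
client = OpenAI::Client.new(access_token: "your-token")

# Before 5.0.0 the audio methods lived on the client itself:
# response = client.translate(parameters: { model: "whisper-1", file: File.open("path_to_file", "rb") })

# From 5.0.0 they are reached through the audio accessor:
response = client.audio.translate(
  parameters: {
    model: "whisper-1",
    file: File.open("path_to_file", "rb")
  }
)
puts response["text"]
```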
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -1,7 +1,7 @@
PATH
  remote: .
  specs:
-    ruby-openai (4.3.2)
+    ruby-openai (5.0.0)
      faraday (>= 1)
      faraday-multipart (>= 1)

4 changes: 2 additions & 2 deletions README.md
@@ -417,7 +417,7 @@ Whisper is a speech to text model that can be used to generate text based on aud
The translations API takes as input the audio file in any of the supported languages and transcribes the audio into English.

```ruby
-response = client.audios.translate(
+response = client.audio.translate(
  parameters: {
    model: "whisper-1",
    file: File.open("path_to_file", "rb"),
@@ -431,7 +431,7 @@ puts response["text"]
The transcriptions API takes as input the audio file you want to transcribe and returns the text in the desired output file format.

```ruby
-response = client.audios.transcribe(
+response = client.audio.transcribe(
  parameters: {
    model: "whisper-1",
    file: File.open("path_to_file", "rb"),
2 changes: 1 addition & 1 deletion lib/openai.rb
@@ -7,7 +7,7 @@
require_relative "openai/finetunes"
require_relative "openai/images"
require_relative "openai/models"
require_relative "openai/audios"
require_relative "openai/audio"
require_relative "openai/version"

module OpenAI
2 changes: 1 addition & 1 deletion lib/openai/audios.rb → lib/openai/audio.rb
@@ -1,5 +1,5 @@
module OpenAI
-  class Audios
+  class Audio
    def initialize(access_token: nil, organization_id: nil)
      OpenAI.configuration.access_token = access_token if access_token
      OpenAI.configuration.organization_id = organization_id if organization_id
4 changes: 2 additions & 2 deletions lib/openai/client.rb
@@ -57,8 +57,8 @@ def moderations(parameters: {})
    json_post(path: "/moderations", parameters: parameters)
  end

-  def audios
-    @audios ||= OpenAI::Audios.new
+  def audio
+    @audio ||= OpenAI::Audio.new
  end

  def azure?
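A small aside on the accessor above: because it memoizes with `||=`, repeated calls return the same `OpenAI::Audio` instance. A quick sketch of that behaviour; the local variable names and the `equal?` check are illustrative, not taken from the diff.

```ruby
require "openai"

client = OpenAI::Client.new

# The first call builds an OpenAI::Audio object; later calls return the cached one.
audio_api = client.audio
audio_api.equal?(client.audio) #=> true, same object identity
```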

Two further changed files have large diffs that are not rendered by default.

@@ -1,20 +1,20 @@
RSpec.describe OpenAI::Client do
describe "#audios" do
describe "#audio" do
describe "#transcribe" do
context "with audio", :vcr do
let(:filename) { "audio_sample.mp3" }
let(:audio) { File.join(RSPEC_ROOT, "fixtures/files", filename) }

let(:response) do
OpenAI::Client.new.audios.transcribe(
OpenAI::Client.new.audio.transcribe(
parameters: {
model: model,
file: File.open(audio, "rb")
}
)
end
let(:content) { response["text"] }
let(:cassette) { "audios #{model} transcribe".downcase }
let(:cassette) { "audio #{model} transcribe".downcase }

context "with model: whisper-1" do
let(:model) { "whisper-1" }
@@ -34,15 +34,15 @@
        let(:audio) { File.join(RSPEC_ROOT, "fixtures/files", filename) }

        let(:response) do
-          OpenAI::Client.new.audios.translate(
+          OpenAI::Client.new.audio.translate(
            parameters: {
              model: model,
              file: File.open(audio, "rb")
            }
          )
        end
        let(:content) { response["text"] }
-        let(:cassette) { "audios #{model} translate".downcase }
+        let(:cassette) { "audio #{model} translate".downcase }

        context "with model: whisper-1" do
          let(:model) { "whisper-1" }
