UserWarning: FP16 is not supported on CPU; using FP32 instead
warnings.warn("FP16 is not supported on CPU; using FP32 instead")

Q: Are you trying to use Whisper on your Mac (CPU-only) instead of a Google Colab GPU?

Short answer: pass fp16=False to transcribe():

model.transcribe(file, fp16=False)

Example:

import time

import whisper

# "base" is the smallest multilingual model; larger ones are much slower on CPU.
model = whisper.load_model("base")

start = time.time()
file = "my_video.mp4"

print(f"transcribing: {file}")

# fp16=False forces FP32 inference and silences the warning on CPU-only machines.
base_result = model.transcribe(file, fp16=False, language='English')

# split() handles any whitespace, including newlines in the transcript
wc = len(base_result["text"].split())

end = time.time()
total_seconds = end - start

print(f"total words: {wc}")
print(f"total seconds: {total_seconds}")
print(f"words/second: {wc}/{total_seconds}={wc/total_seconds}")

text = base_result["text"]

print(text)
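If the same script runs on both a GPU box and a CPU-only laptop, you can pick the flag from the device instead of hard-coding it. A minimal sketch; the helper name fp16_for is made up for illustration, and the commented usage assumes torch is installed (Whisper depends on it):

def fp16_for(device: str) -> bool:
    # FP16 is only supported on CUDA GPUs; on CPU Whisper falls back to
    # FP32 anyway, so passing fp16=False there just avoids the warning.
    return device == "cuda"

# Usage (hypothetical, assuming torch/whisper are installed):
# import torch, whisper
# device = "cuda" if torch.cuda.is_available() else "cpu"
# model = whisper.load_model("base", device=device)
# result = model.transcribe("my_video.mp4", fp16=fp16_for(device))

print(fp16_for("cpu"))   # False: stay in FP32 on CPU
print(fp16_for("cuda"))  # True: FP16 is fine on a CUDA GPU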

Source: https://github.com/openai/whisper/discussions/301