Initial commit

Author: Paillat
Date: 2023-05-15 10:11:04 +02:00
commit 5410752853
24 changed files with 742 additions and 0 deletions

3
.gitattributes vendored Normal file

@@ -0,0 +1,3 @@
# Auto detect text files and perform LF normalization
* text=auto
*.exe filter=lfs diff=lfs merge=lfs -text

160
.gitignore vendored Normal file

@@ -0,0 +1,160 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
#Venv directory
youtuber/
#results
videos/
test/
ideas/

BIN
Sigmar-Regular.ttf Normal file

Binary file not shown.

BIN
bcg.png Normal file

Binary file not shown.

Size: 514 KiB

25
generators/ideas.py Normal file

@@ -0,0 +1,25 @@
import openai
import os
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
subject = os.getenv("SUBJECT")

# Load the ideas prompt template and fill in the channel subject.
with open('prompts/ideas.txt') as f:
    prompt = f.read().replace('[subject]', subject)


async def generate_ideas():
    # Feed the existing ideas back into the prompt so the model avoids repeats.
    with open('ideas/ideas.json', 'r') as f:
        ideas = f.read()
    prmpt = prompt.replace('[existing ideas]', ideas)
    print(prmpt)
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prmpt},
        ],
    )
    return response['choices'][0]['message']['content']
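A minimal driver sketch (hypothetical, not part of this commit) for running the coroutine on its own; it assumes OPENAI_API_KEY and SUBJECT are set in .env and that ideas/ideas.json already exists:

```python
# Hypothetical driver -- generate_ideas() is a coroutine, so it needs an event loop.
import asyncio
from generators.ideas import generate_ideas

print(asyncio.run(generate_ideas()))
```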

112
generators/miniature.py Normal file

@@ -0,0 +1,112 @@
import os
import random

import openai
from dotenv import load_dotenv
from PIL import Image, ImageDraw, ImageFont

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

'''
Purpose of this file is to generate a thumbnail ("miniature") for the video.
It has a function that takes a path, title, and description and generates a thumbnail.
It uses Pillow to generate the image, and OpenAI to generate text1 and text2.
text1 is a short text (max 2 words) to put at the top of the image.
text2 is a 3-word text to put in the middle of the image.
The function returns the path of the image.
First open bcg.png. Then create a new image and add a random gradient to it from top
to bottom, then put the PNG on top of the gradient. Then add text1 and text2 to the image.
'''
prompt = '''Generate 2 short texts OF MAX 2-4 WORDS each to put on the top of the miniature of the video. Here are some examples:
For the title "Python Exception Handling" the text1 could be "No more crashes!" and the text2 could be "Easy!"
The second text is often shorter than the first one.
Answer without anything else, just with the 2 texts. Answer with text1 on the first line and text2 on the second line. Nothing else.
Here is the title of the video: [TITLE]
Here is the description of the video: [DESCRIPTION]'''


def rand_gradient(image):
    # Random per-channel divisors give each thumbnail a different gradient.
    randr = random.SystemRandom().randint(1, 20)
    randg = random.SystemRandom().randint(1, 20)
    randb = random.SystemRandom().randint(1, 20)
    textcolor1 = textcolor2 = [0, 0, 0]
    position1 = [image.size[0] // 5, image.size[1] // 5]
    position2 = [image.size[0] // 5, image.size[1] // 2]
    for i in range(image.size[0]):
        for j in range(image.size[1]):
            colors = [i // randr, j // randg, i // randb]
            # Sample the gradient at the two text anchor points so each text
            # color can later be inverted against its local background.
            if i == position1[0] and j == position1[1]:
                textcolor1 = colors
            if i == position2[0] and j == position2[1]:
                textcolor2 = colors
            image.putpixel((i, j), (colors[0], colors[1], colors[2]))
    return image, textcolor1, textcolor2


def generate_miniature(path, title, description):
    prmpt = prompt.replace("[TITLE]", title).replace("[DESCRIPTION]", description)
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": prmpt},
        ],
    )
    content = response['choices'][0]['message']['content']
    text1 = content.split("\n")[0]
    text2 = content.split("\n")[1]
    return generate_image(path, text1, text2)


def generate_image(path, text1, text2):
    bcg = Image.open("bcg.png")
    img = Image.new('RGBA', (1920, 1080))
    img, textcolor1, textcolor2 = rand_gradient(img)
    font1 = ImageFont.truetype("./Sigmar-Regular.ttf", 200)
    font2 = ImageFont.truetype("./Sigmar-Regular.ttf", 200)
    text1words = text1.split(" ")
    text2words = text2.split(" ")
    text1def = ""
    text2def = ""
    # Max characters per line is 7, but if a word is longer than 7 characters,
    # do not split it. However, if 2 or more words fit on one line, keep them together.
    for word in text1words:
        if len(text1def.split("\n")[-1]) + len(word) > 7:
            text1def += "\n"
        text1def += word + " "
    for word in text2words:
        if len(text2def.split("\n")[-1]) + len(word) > 7:
            text2def += "\n"
        text2def += word + " "
    maxlen1 = max(len(line) for line in text1def.split("\n"))
    maxlen2 = max(len(line) for line in text2def.split("\n"))
    # If a line is still too long, shrink the font size proportionally.
    if maxlen1 > 7:
        font1 = ImageFont.truetype("./Sigmar-Regular.ttf", 200 - (maxlen1 - 7) * 10)
    if maxlen2 > 7:
        font2 = ImageFont.truetype("./Sigmar-Regular.ttf", 200 - (maxlen2 - 7) * 10)
    text1def = text1def.upper().strip()
    text2def = text2def.upper().strip()
    # Invert the sampled gradient colors so the text contrasts with the background.
    textcolor1 = [255 - textcolor1[0], 255 - textcolor1[1], 255 - textcolor1[2]]
    textcolor2 = [255 - textcolor2[0], 255 - textcolor2[1], 255 - textcolor2[2]]
    imgtext1 = Image.new('RGBA', (1920, 1080))
    imgtext2 = Image.new('RGBA', (1920, 1080))
    drawtext1 = ImageDraw.Draw(imgtext1)
    drawtext1.text((imgtext1.size[0] // 8 * 2, 0), text1def, font=font1,
                   fill=(textcolor1[0], textcolor1[1], textcolor1[2]))
    imgtext1 = imgtext1.rotate(-5, expand=True)
    drawtext2 = ImageDraw.Draw(imgtext2)
    drawtext2.text((int(imgtext2.size[0] // 8 * 2.5), imgtext2.size[1] // 5 * 2), text2def,
                   font=font2, fill=(textcolor2[0], textcolor2[1], textcolor2[2]))
    imgtext2 = imgtext2.rotate(5, expand=True)
    # Paste the background art, then the two rotated text layers, onto the gradient.
    img.paste(bcg, (0, 0), bcg)
    img.paste(imgtext1, (0, 0 - img.size[1] // 8), imgtext1)
    if len(text1def.split("\n")) > 2:  # if text1 wraps onto a third line, push text2 down
        img.paste(imgtext2, (0, img.size[1] // 8), imgtext2)
    else:
        img.paste(imgtext2, (0, 0), imgtext2)
    img.save(path + "/miniature.png")
    return path + "/miniature.png"


if __name__ == "__main__":
    # Quick manual test; writes test/miniature.png.
    generate_image("test", "Master python loops", "Effortlessly")
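The line-wrapping rule in generate_image can be checked in isolation (hypothetical snippet, not part of this commit):

```python
# Greedy wrap at 7 characters per line; words longer than 7 characters are never split.
text = "Master Python Loops"
out = ""
for word in text.split(" "):
    if len(out.split("\n")[-1]) + len(word) > 7:
        out += "\n"
    out += word + " "
print(out.upper().strip())  # MASTER / PYTHON / LOOPS, one word per line in this case
```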

128
generators/montage.py Normal file

@@ -0,0 +1,128 @@
import json
import os
import random

import deepl
import pysrt
import requests
from dotenv import load_dotenv
from moviepy.audio.fx.all import volumex, audio_fadein, audio_fadeout
from moviepy.audio.io.AudioFileClip import AudioFileClip
from moviepy.editor import concatenate_videoclips, CompositeAudioClip, concatenate_audioclips
from moviepy.video.VideoClip import ImageClip

from generators.speak import generate_voice, voices

load_dotenv()
unsplash_access = os.getenv("UNSPLASH_ACCESS_KEY")
unsplash_url = "https://api.unsplash.com/photos/random/?client_id=" + unsplash_access + "&query="
deepl_access = os.getenv("DEEPL_ACCESS_KEY")
translator = deepl.Translator(deepl_access)


def prepare(path):
    with open(path + "/script.json", 'r', encoding='utf-8') as f:
        script = json.load(f)
    if not os.path.exists(path + "/slides"):
        os.mkdir(path + "/slides")
    if not os.path.exists(path + "/audio"):
        os.mkdir(path + "/audio")
    with open("prompts/marp.md", 'r', encoding='utf-8') as f:
        marp = f.read()
    # One narrator per video, chosen once so every regenerated slide uses the same voice.
    chosen_voice = random.choice(voices)
    for i in range(len(script)):
        audio_path = path + "/audio/audio" + str(i) + ".mp3"
        if not os.path.exists(audio_path):
            generate_voice(audio_path, script[i]['spoken'], chosen_voice)
        if "image" in script[i]:
            if not os.path.exists(path + "/slides/assets"):
                os.mkdir(path + "/slides/assets")
            # Fetch a random stock photo from Unsplash for the slide background.
            url = unsplash_url + script[i]['image']
            r = requests.get(url)
            real_url = r.json()['urls']['raw']
            with open(path + "/slides/assets/slide" + str(i) + ".jpg", 'wb') as f:
                f.write(requests.get(real_url).content)
            content = marp + f"\n\n![bg 70%](assets/slide{i}.jpg)"
            with open(path + "/slides/slide" + str(i) + ".md", 'w', encoding='utf-8') as f:
                f.write(content)
        elif "markdown" in script[i]:
            with open(path + "/slides/slide" + str(i) + ".md", 'w', encoding='utf-8') as f:
                f.write(marp + "\n\n" + script[i]['markdown'])
        elif "huge" in script[i]:
            # Use Marp's <!-- fit --> directive to scale the text to the slide.
            with open(path + "/slides/slide" + str(i) + ".md", 'w', encoding='utf-8') as f:
                f.write(marp + "\n\n# <!-- fit --> " + script[i]['huge'])
    # Render each Markdown slide to a PNG with the Marp CLI.
    for i in range(len(script)):
        markdown_path = "./" + path + "/slides/slide" + str(i) + ".md"
        command = f"marp.exe {markdown_path} -o {path}/slides/slide{i}.png --allow-local-files"
        os.system(command)
    return script


def convert_seconds_to_time_string(seconds):
    # Format float seconds as an SRT timestamp: HH:MM:SS,mmm.
    milliseconds = int((seconds - int(seconds)) * 1000)
    seconds = int(seconds)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours:02}:{minutes:02}:{seconds:02},{milliseconds:03}"


def subs(length, total, text, srt, index):
    # The clip of duration `length` ends at `total`, so it starts at `total - length`.
    start = convert_seconds_to_time_string(total - length)
    stop = convert_seconds_to_time_string(total)
    sub = pysrt.SubRipItem(index=index, start=start, end=stop, text=text)
    srt.append(sub)
    return srt


def translate(target, text):
    return translator.translate_text(text, target_lang=target).text


def mount(path, script):
    num_slides = len(os.listdir(path + "/audio"))
    clips = []
    srt = pysrt.SubRipFile()
    srt_fr = pysrt.SubRipFile()
    total_length = 0
    for i in range(num_slides):
        audio = AudioFileClip(path + "/audio/audio" + str(i) + ".mp3")
        # Concatenate so each narration is padded with a second of silence on both sides.
        complete_audio = concatenate_audioclips([
            AudioFileClip("silence.mp3").set_duration(1),
            audio,
            AudioFileClip("silence.mp3").set_duration(1)
        ])
        length = complete_audio.duration
        total_length += length
        srt = subs(length, total_length, script[i]['spoken'], srt, i)
        srt_fr = subs(length, total_length, translate("FR", script[i]['spoken']), srt_fr, i)
        slide = ImageClip(path + "/slides/slide" + str(i) + ".png").set_duration(length)
        slide = slide.set_audio(complete_audio)
        clips.append(slide)
    # Pick a random background track; the .txt files next to the music hold the credits.
    randmusic = random.choice(os.listdir("musics"))
    while randmusic.endswith(".txt"):
        randmusic = random.choice(os.listdir("musics"))
    randpath = "musics/" + randmusic
    music = AudioFileClip(randpath)
    # Loop the track if it is shorter than the video, then trim it to length.
    if music.duration < total_length:
        loops = int(total_length / music.duration) + 1
        music = concatenate_audioclips([music] * loops)
    music = music.set_duration(total_length)
    music = audio_fadein(music, 20)
    music = audio_fadeout(music, 20)
    music = volumex(music, 0.2)
    final_clip = concatenate_videoclips(clips, method="compose")
    existing_audio = final_clip.audio
    final_audio = CompositeAudioClip([existing_audio, music])
    final_clip = final_clip.set_audio(final_audio)
    # NVENC encoding requires an ffmpeg build with NVIDIA hardware support.
    final_clip.write_videofile(path + "/montage.mp4", fps=60, codec="nvenc")
    srt.save(path + "/montage.srt")
    srt_fr.save(path + "/montage_fr.srt")
    with open(randpath.split(".")[0] + ".txt", 'r', encoding='utf-8') as f:
        music_credit = f.read()
    return music_credit
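A quick sanity check on the subtitle timing helper (hypothetical snippet, not part of this commit; importing generators.montage builds the Unsplash URL and DeepL client at module level, so UNSPLASH_ACCESS_KEY and DEEPL_ACCESS_KEY must be set):

```python
# 3661.5 s = 1 h, 1 min, 1 s, 500 ms in SRT's HH:MM:SS,mmm format.
from generators.montage import convert_seconds_to_time_string

assert convert_seconds_to_time_string(3661.5) == "01:01:01,500"
```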

22
generators/script.py Normal file

@@ -0,0 +1,22 @@
import os

import openai
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")


async def generate_script(title, description):
    # Fill the script prompt template with this idea's title and description.
    with open('prompts/script.txt') as f:
        prompt = f.read()
    prompt = prompt.replace("[title]", title)
    prompt = prompt.replace("[description]", description)
    response = await openai.ChatCompletion.acreate(
        model="gpt-4",
        messages=[
            {"role": "user", "content": prompt}
        ],
    )
    return response['choices'][0]['message']['content']

23
generators/speak.py Normal file

@@ -0,0 +1,23 @@
from TTS.api import TTS

# Multi-speaker English VITS model trained on the VCTK dataset.
model_best_multi = "tts_models/en/vctk/vits"
# Friendly aliases for the VCTK speaker IDs used by this project.
fakenames = {
    "Alexander": "p230",
    "Benjamin": "p240",
    "Amelia": "p270",
    "Katherine": "p273"
}
voices = ["Alexander", "Benjamin", "Amelia", "Katherine"]


def generate_voice(path, text, speaker="Alexander"):
    # Init TTS; the model is loaded on every call.
    tts = TTS(model_best_multi, gpu=True)
    # Accept either a friendly alias or a raw VCTK speaker ID.
    speaker = fakenames[speaker] if speaker in fakenames else speaker
    tts.tts_to_file(text=text, file_path=path, speaker=speaker, speed=1)
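A minimal usage sketch (hypothetical, not part of this commit); it assumes Coqui TTS can download the VCTK VITS model on first use and that a CUDA GPU is available (gpu=True above):

```python
# Hypothetical usage -- synthesize one line with the "Amelia" alias (VCTK speaker p270).
from generators.speak import generate_voice, voices

generate_voice("sample.mp3", "Hello, and welcome to this new video.", speaker=voices[2])
```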

137
generators/uploader.py Normal file

@@ -0,0 +1,137 @@
#!/usr/bin/python
'''Uploads a video to YouTube.'''
import json
import os
import random
import time
from http import client

import google.oauth2.credentials
import httplib2
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from googleapiclient.http import MediaFileUpload

# Tell the underlying HTTP library not to retry; retries are handled below.
httplib2.RETRIES = 1
MAX_RETRIES = 10
RETRIABLE_EXCEPTIONS = (httplib2.HttpLib2Error, IOError, client.NotConnected,
                        client.IncompleteRead, client.ImproperConnectionState,
                        client.CannotSendRequest, client.CannotSendHeader,
                        client.ResponseNotReady, client.BadStatusLine)
RETRIABLE_STATUS_CODES = [500, 502, 503, 504]
CLIENT_SECRETS_FILE = 'env/client_secret.json'
# Only real OAuth scopes belong here; the thumbnails.set endpoint is covered by
# youtube.force-ssl and is not itself a scope.
SCOPES = ['https://www.googleapis.com/auth/youtube.upload',
          'https://www.googleapis.com/auth/youtube.force-ssl']
API_SERVICE_NAME = 'youtube'
API_VERSION = 'v3'
VALID_PRIVACY_STATUSES = ('public', 'private', 'unlisted')


# Authorize the request and store authorization credentials.
def get_authenticated_service():
    if os.path.exists('env/credentials.json'):
        with open('env/credentials.json') as json_file:
            data = json.load(json_file)
        credentials = google.oauth2.credentials.Credentials(
            token=data['token'],
            refresh_token=data['refresh_token'],
            token_uri=data['token_uri'],
            client_id=data['client_id'],
            client_secret=data['client_secret'],
            scopes=data['scopes']
        )
    else:
        flow = InstalledAppFlow.from_client_secrets_file(
            CLIENT_SECRETS_FILE, SCOPES)
        credentials = flow.run_local_server()
        with open('env/credentials.json', 'w') as outfile:
            outfile.write(credentials.to_json())
    return build(API_SERVICE_NAME, API_VERSION, credentials=credentials)


def initialize_upload(youtube, options):
    tags = None
    if options['keywords']:
        tags = options['keywords'].split(',')
    body = dict(
        snippet=dict(
            title=options['title'],
            description=options['description'],
            tags=tags,
            categoryId=options['category']
        ),
        status=dict(
            privacyStatus=options['privacyStatus']
        )
    )
    # Call the API's videos.insert method to create and upload the video.
    insert_request = youtube.videos().insert(
        part=','.join(body.keys()),
        body=body,
        media_body=MediaFileUpload(options['file'], chunksize=-1, resumable=True)
    )
    resumable_upload(insert_request)


# Upload in chunks, retrying transient failures with exponential backoff.
def resumable_upload(request):
    response = None
    error = None
    retry = 0
    while response is None:
        try:
            print('Uploading file...')
            status, response = request.next_chunk()
            if response is not None:
                if 'id' in response:
                    print('Video id "%s" was successfully uploaded.' % response['id'])
                else:
                    exit('The upload failed with an unexpected response: %s' % response)
        except HttpError as e:
            if e.resp.status in RETRIABLE_STATUS_CODES:
                error = 'A retriable HTTP error %d occurred:\n%s' % (e.resp.status, e.content)
            else:
                raise
        except RETRIABLE_EXCEPTIONS as e:
            error = 'A retriable error occurred: %s' % e
        if error is not None:
            print(error)
            retry += 1
            if retry > MAX_RETRIES:
                exit('No longer attempting to retry.')
            max_sleep = 2 ** retry
            sleep_seconds = random.random() * max_sleep
            print('Sleeping %f seconds and then retrying...' % sleep_seconds)
            time.sleep(sleep_seconds)
            # Reset so a later successful chunk does not keep triggering the backoff.
            error = None


if __name__ == '__main__':
    sample_options = {
        'file': './test.mp4',
        'title': 'Test Title',
        'description': 'Test Description',
        'category': 22,
        'keywords': 'test, video',
        'privacyStatus': 'private'
    }
    youtube = get_authenticated_service()
    try:
        initialize_upload(youtube, sample_options)
    except HttpError as e:
        print('An HTTP error %d occurred:\n%s' % (e.resp.status, e.content))

55
main.py Normal file

@@ -0,0 +1,55 @@
import asyncio
import json
import logging
import os

from generators.ideas import generate_ideas
from generators.miniature import generate_miniature
from generators.montage import mount, prepare, translate
from generators.script import generate_script

logging.basicConfig(level=logging.INFO)


async def main():
    if input("Do you want to generate new ideas? (y/n) ") == "y":
        ideas = await generate_ideas()
        if not os.path.exists('ideas'):
            os.makedirs('ideas')
        with open('ideas/ideas.json', 'w', encoding='utf-8') as f:
            f.write(ideas)
    with open('ideas/ideas.json', 'r', encoding='utf-8') as f:
        ideas = json.load(f)
    for i in range(len(ideas)):
        print(str(i) + ". " + ideas[i]['title'])
    idea = int(input("Which idea do you want to generate a script for? (enter the number): "))
    idea = ideas[idea]
    title = idea['title']
    # Build a filesystem-safe folder name from the first 25 characters of the title.
    title = title[:25]
    path = "videos/" + title
    path = path.replace(" ", "_").replace(":", "")
    if os.path.exists(path + "/script.json"):
        if input("There is already a script for this idea. Do you want to overwrite it? (y/n) ") != "y":
            print("Exiting...")
            exit(1)
    if not os.path.exists(path):
        os.makedirs(path)
    # Always (re)generate here so `script` is defined before it is written out.
    script = await generate_script(idea['title'], idea['description'])
    with open(path + "/script.json", 'w', encoding='utf-8') as f:
        f.write(script)
    script = prepare(path)
    credits = mount(path, script)
    with open(path + "/meta.txt", 'w', encoding='utf-8') as f:
        f.write(f"Title: {idea['title']}\nDescription: {idea['description']}\nMusic credits: {credits}")
    with open(path + "/meta_FR.txt", 'w', encoding='utf-8') as f:
        transtitle = translate('FR', idea['title'])  # use the unformatted title
        transdesc = translate('FR', idea['description'])
        f.write(f"Titre: {transtitle}\nDescription: {transdesc}\nCrédits musicaux: {credits}")
    generate_miniature(path, title=idea['title'], description=idea['description'])
    print(f"Your video is ready! You can find it in {path}.")


if __name__ == "__main__":
    asyncio.run(main())

3
marp.exe Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48a6cd215f23f7c2ec59fc2e42e9dbc078dd614370f241492fbe53b5683a8abe
size 116473098

BIN
montage.mp4 Normal file

Binary file not shown.



@@ -0,0 +1,3 @@
Lost In Thought by Ghostrifter bit.ly/ghostrifter-yt
Creative Commons — Attribution-NoDerivs 3.0 Unported — CC BY-ND 3.0
Music promoted by https://www.chosic.com/free-music/all/

BIN
musics/When-I-Was-A-Boy.mp3 Normal file

Binary file not shown.


@@ -0,0 +1,4 @@
When I Was A Boy by Tokyo Music Walker | https://soundcloud.com/user-356546060
Music promoted by https://www.chosic.com/free-music/all/
Creative Commons CC BY 3.0
https://creativecommons.org/licenses/by/3.0/


@@ -0,0 +1,4 @@
Sin and Sensitivity (Rendition of Bachs "Air") by Aila Scott • Johann Sebastian Bach | https://ailascott.com
Music promoted by https://www.free-stock-music.com
Creative Commons / Attribution 4.0 International (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/

BIN
nobg_logo_2.png Normal file

Binary file not shown.

Size: 331 KiB

20
prompts/ideas.txt Normal file

@@ -0,0 +1,20 @@
You will generate a list of ideas of videos about [subject]. You will suggest topics for videos that will be created and posted on YouTube.
You will output the list of ideas in a json format. The following fields will be included:
- title
- description
Here is an example of the output:
```
[
{
"title": "Python Tutorial for Beginners",
"description": "A video for beginners to learn Python. The following topics are covered: variables, data types, functions, classes, and more."
},
{
"title": "DRAMA: MrBeast did WHAT?!",
"description": "MrBeast did something crazy. You won't believe what he did. Watch the video to find out."
}
]
```
You will not answer anything else in your message. Your answer will only be the json output, without any other text. This is very important. No code block, nothing like "Here are .....". Just the json. You will generate 10 ideas. You will never repeat yourself.
Here are the existing ideas, which you should not repeat:
[existing ideas]

6
prompts/marp.md Normal file

@@ -0,0 +1,6 @@
---
marp: true
theme: default
class: invert
backgroundImage: url(https://images.unsplash.com/photo-1651604454911-fdfb0edde727)
---

37
prompts/script.txt Normal file

@@ -0,0 +1,37 @@
Here is a YouTube video title:
"[title]"
Description:
"[description]"
What you will do is generate a script for the video. You can think of the video as being separated into different "slides", each with its own json part. For each json part you can add different "values". The values are:
1. "spoken" (the text that should be spoken in the slide).
2. "image" (optional, 2 to 4 search words in the format word1+word2+word3 to search a stock photo). Images cannot be detailed or technical programming things, no result will be found. Images can be used only at the beginning and at the end of the video. The rest should be markdown or huge text!!!!
3. "markdown": anything in markdown format. It will be shown on the screen. You can use it to add titles, subtitles, lists, etc. and CODE SNIPPETS!
4. "huge": huge is used to show a huge text on the screen. It is useful for transitions.
You need to add either an image or a markdown or a huge text. You cannot add more than one of them.
Images can be used only at the beginning and at the end of the video. The rest should be markdown or huge text!!!!
Use only 2 images per video, one at the beginning and one at the end. The rest should be markdown or huge text.
Your video will be detailed, long and very complete. Here is an example:
[
    {
        "spoken": "Hello, and welcome to this new video",
        "image": "hello+welcome+greetings"
    },
    {
        "spoken": "Let's get started by writing a hello world program",
        "huge": "1. Hello World"
    },
    {
        "spoken": "This is sample code. In this example sample code we first import os. Os is a library that allows us to do things with the operating system. Then we open a file called hello.txt. We read the file and we store it in a variable called hello. Then we print the variable hello. You can see the code on the screen.",
        "markdown": "```python\nimport os\nwith open(\"hello.txt\") as f:\n    hello = f.read()\nif __name__ == \"__main__\": print(hello)\n```"
    },
    {
        "spoken": "This is a latex formula which is very important. It is the formula of the integral of x squared from 0 to infinity. You can see it on the screen.",
        "markdown": "$$\n\\int_0^\\infty x^2 dx\n$$"
    }
]
IF YOU EXPLAIN SOMETHING IN THE markdown, YOU NEED TO EXPLAIN IT IN THE SPOKEN TOO. You cannot just write text in the markdown; also repeat that text in the spoken.
At the end, remind viewers to Like, Share and Subscribe if they found the video useful.
There is NO maximum length for the spoken, so ALWAYS add the code explanation in the same slide the code is in. The spoken cannot be alone.
You will not answer anything else in your message. Your answer will only be the json output, without any other text. This is very important. No code block, nothing like "Here are .....". Just the json.

BIN
silence.mp3 Normal file

Binary file not shown.