Add English ability CSV/JSON files and update helper-scripts and HTMLs to use them
luceleaftea committed Jan 18, 2023
1 parent 5d77ccc commit 1ef0596
Showing 33 changed files with 1,066 additions and 407 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -35,12 +35,12 @@ To the best of my ability, I attempt to follow [Semantic Versioning](https://sem
I try to increment version numbers with this general logic:

* Major - The schema for the CSV / JSON was changed, large file organizational changes were made, etc.
* Minor - New data was added, new scripts were added or updated, etc. (Scripts receiving breaking changes is not always guaranteed to warrant a major patch bump, use with caution!)
* Minor - New data was added, new scripts were added or updated, etc. (Scripts receiving breaking changes are not always guaranteed to warrant a major patch bump, use with caution!)
* Patch - Errors in the data or helper scripts were fixed

If you'd like a stable experience, please use the main branch and pin a specific tagged version. I try to keep the develop branch as clean as possible, but even that is broken or has big changes in-flight from time to time. For bigger changes to the data set or during spoiler seasons, I spin off feature branches to work in. You are welcome to use them while I am working on them, but please be aware things can break at any time!

Unlike code packages, I do not go back and support past major/minor releases with bugfixes, so if you want the most up-to-date data, you will always need to be on the latest version, even if that version has breaking changes. The versioning system is purely to give you a heads up so that you don't update and find your project blowing up unexpectedly!
Unlike many code packages, I do not go back and support past major/minor releases with bugfixes, so if you want the most up-to-date data, you will always need to be on the latest version, even if that version has breaking changes. The versioning system is purely to give you a heads up so that you don't update and find your project blowing up unexpectedly!


## Changelogs and Contribution Credit
10 changes: 10 additions & 0 deletions csvs/english/ability.csv
@@ -0,0 +1,10 @@
Unique ID Name
HCCWPfFrNzCwPDntPw6gH Action
kQNwpnQqhC8RGPcpDKdJD Attack Reaction
GJdGNktHCbbzmHNkkNfDz Instant
7jJ7hBjrMQd6fcWcqTb6D Once Per Turn Action
9DbtWdGfndKG7CCDPmb9t Once Per Turn Attack Reaction
KzGJdT7tz9LfLhWfbnFwb Once Per Turn Defense Reaction
7mcpJ8zrDHJWkf9r68bhH Once Per Turn Effect
t7KdJFMrpcTDBBnGWf6wK Once per Turn Instant
cTkzhh6mR7NHCWnBRnbWW Twice per Turn Instant
71 changes: 34 additions & 37 deletions documentation/csv-schemas.md
@@ -2,22 +2,16 @@

The CSVs are tab delimited and use " as string indicators.
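As a minimal sketch of these settings in practice, the following Python snippet (not part of this commit) reads the new English ability CSV with the tab delimiter and `"` quote character described above; the relative path assumes the repository layout shown in this commit:

```python
import csv
from pathlib import Path

# Read the English ability CSV using the delimiter and quote character documented above.
ability_csv = Path("csvs/english/ability.csv")

with ability_csv.open(newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter='\t', quotechar='"')
    header = next(reader)       # ["Unique ID", "Name"]
    for unique_id, name in reader:
        print(unique_id, name)  # e.g. HCCWPfFrNzCwPDntPw6gH Action
```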

## Set
## Ability
| Field Name | Intended Data Type | Explanation | Example |
| --- | --- | --- | --- |
| Identifier | string | The set code. | WTR |
| Name | string | The full name of the set. | Welcome to Rathe |
| Editions | string[] | The list of editions printed. | Alpha, Unlimited |
| Edition Unique IDs | string[] | The unique identifiers for this set's editions within the data set. This is generated by a script, do not manually fill this out. | ntpBR6gq8FMwbrc7nD8Nz – F, jHWMhDgqDnK6RpCJtcn9C – U |
| Initial Release Dates | datetime[] | The initial release date for the set in ISO 8601 format and UTC timezone, correlated to the Editions field. | 2019-10-11T00:00:00.000Z, 2020-11-06T00:00:00.000Z |
| Out of Print Dates | datetime[] | The Out of Print (OOP) announcement date for the set in ISO 8601 format and UTC timezone, correlated to the Editions field. If the set is still in print, use `null` instead of a date. | 2019-10-11T00:00:00.000Z, 2021-12-01T00:00:00.000Z |
| Start Card Id | string | The id of the first card in the set. | WTR000 |
| End Card Id | string | The id of the last card in the set. | WTR225 |
| Product Pages | string[] | The list of urls for fabtcg.com product pages, correlated to the Editions field. | https://fabtcg.com/products/booster-set/welcome-rathe/, https://fabtcg.com/products/booster-set/welcome-rathe-unlimited/ |
| Collector's Center | string[] | The list of urls for fabtcg.com collector's center pages, correlated to the Editions field. | https://fabtcg.com/collectors-centre/welcome-rathe/, https://fabtcg.com/collectors-centre/welcome-rathe/ |
| Card Galleries | string[] | The list of urls for fabtcg.com card gallery pages, correlated to the Editions field. | https://fabtcg.com/resources/card-galleries/welcome-rathe-booster/, https://fabtcg.com/resources/card-galleries/welcome-rathe-unlimited-booster/ |

| Unique ID | string | The unique identifier for this ability within the data set. This is generated by a script, do not manually fill this out in the English CSV. Other languages' CSVs should use the same IDs as the corresponding ability in the English CSV. | HCCWPfFrNzCwPDntPw6gH |
| Name | string | Name of the ability. | Action |
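Since translated ability CSVs are expected to reuse the English Unique IDs, a cross-language lookup could be keyed on that field. A minimal sketch (the `csvs/french/ability.csv` path is hypothetical; no non-English ability CSVs are added in this commit):

```python
import csv

def load_abilities(path):
    """Return a {unique_id: name} mapping from a tab-delimited ability CSV."""
    with open(path, newline='') as csvfile:
        reader = csv.reader(csvfile, delimiter='\t', quotechar='"')
        next(reader)  # skip the header row
        return {unique_id: name for unique_id, name in reader}

english = load_abilities("csvs/english/ability.csv")
french = load_abilities("csvs/french/ability.csv")  # hypothetical translated file

# Pair each English ability name with its translation via the shared Unique ID.
for unique_id, english_name in english.items():
    print(unique_id, english_name, french.get(unique_id))
```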

## Artist
| Field Name | Intended Data Type | Explanation | Example |
| --- | --- | --- | --- |
| Name | string | Name of the artist. | Saad Irfan |

## Card
| Field Name | Intended Data Type | Explanation | Example |
@@ -61,15 +55,19 @@ The CSVs are tab delimited and use " as string indicators.
| Variation Unique IDs | string[] | The unique identifiers for this card's variations within the data set. This is generated by a script, do not manually fill this out. | d87QcRDq6rLqD86pG6H8C – 1HP397 – N, LHFWfJHzHMJkLnMBmbJQp – WTR192 – A, HbtNt6M8NkMWLTLfJFWRW – WTR192 – U, RLM9MbnpqjJCwQtLBnKRg – UPR210 – N |
| Image URLs | string[] | Links to images from fabtcg.com's [image galleries](https://fabtcg.com/resources/card-galleries/) for a set/edition combination. If there is an alternate art, make a separate entry in this array and tack on the shorthand for the alternate art type. Format: `{Image URL} - {Card Identifier} - {Set Edition Shorthand} (- {Alternate Art Shorthand})` (Using Channel Lake Frigid in this example.) | https://storage.googleapis.com/fabmaster/media/images/ELE146.width-450.png - ELE146 - F, https://product-images.tcgplayer.com/fit-in/400x558/248564.jpg - ELE146 - F - AA, https://storage.googleapis.com/fabmaster/media/images/U-ELE146.width-450.png - ELE146 - U |

Note: Cards are organized by what main set they were initially released in, in order of release. If they were never released in a main set, they are organized by non-set/promo in order of release.

Note: Cards are organized by the main set in which they were initially released in that language, in order of release. If they were never released in a main set, they are organized by non-set/promo in order of release, with non-set/promo cards above main set cards.
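The multi-part `Image URLs` entries described in the Card table above can be split back into their components. A minimal sketch, assuming the ` - `-separated format shown in the example:

```python
def parse_image_url_entry(entry):
    """Split one Image URLs entry into URL, card identifier, edition, and optional alt-art shorthand."""
    parts = [part.strip() for part in entry.split(" - ")]
    return {
        "url": parts[0],
        "card_identifier": parts[1],
        "edition": parts[2],
        "alternate_art": parts[3] if len(parts) > 3 else None,
    }

entry = "https://product-images.tcgplayer.com/fit-in/400x558/248564.jpg - ELE146 - F - AA"
print(parse_image_url_entry(entry))
# {'url': '...248564.jpg', 'card_identifier': 'ELE146', 'edition': 'F', 'alternate_art': 'AA'}
```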

## Rarity
## Edition
| Field Name | Intended Data Type | Explanation | Example |
| --- | --- | --- | --- |
| Shorthand | string | Shorthand representation of the rarity, intended for quick typing and correlating between other CSVs. | M |
| Text | string | Full name of the rarity. | Majestic |
| Shorthand | string | Shorthand representation of the edition. | U |
| Name | string | Name of the edition. | Unlimited Edition |

## Foiling
| Field Name | Intended Data Type | Explanation | Example |
| --- | --- | --- | --- |
| Shorthand | string | Shorthand representation of the foiling. | R |
| Name | string | Name of the foiling. | Rainbow Foil |

## Icon
| Field Name | Intended Data Type | Explanation | Example |
@@ -78,37 +76,36 @@ Note: Cards are organized by what main set they were initially released in, in o
| Name | string | Name of the icon. | Attack |
| Image URL | string | Url to the icon. | TODO |


## Keyword
| Field Name | Intended Data Type | Explanation | Example |
| --- | --- | --- | --- |
| Unique ID | string | The unique identifier for this keyword within the data set. This is generated by a script, do not manually fill this out in the English CSV. Other languages' CSVs should use the same IDs as the corresponding keyword in the English CSV. | 77JKJwz6k8BRrGbmRDPTT |
| Name | string | Name of the keyword. | Battleworn |
| Description | string | Description of the keyword's meaning. | TODO |


## Type
| Field Name | Intended Data Type | Explanation | Example |
| --- | --- | --- | --- |
| Name | string | Name of the type. | Attack |


## Foiling
## Rarity
| Field Name | Intended Data Type | Explanation | Example |
| --- | --- | --- | --- |
| Shorthand | string | Shorthand representation of the foiling. | R |
| Name | string | Name of the foiling. | Rainbow Foil |

| Shorthand | string | Shorthand representation of the rarity, intended for quick typing and correlating between other CSVs. | M |
| Text | string | Full name of the rarity. | Majestic |

## Edition
## Set
| Field Name | Intended Data Type | Explanation | Example |
| --- | --- | --- | --- |
| Shorthand | string | Shorthand representation of the edition. | U |
| Name | string | Name of the edition. | Unlimited Edition |
| Identifier | string | The set code. | WTR |
| Name | string | The full name of the set. | Welcome to Rathe |
| Editions | string[] | The list of editions printed. | Alpha, Unlimited |
| Edition Unique IDs | string[] | The unique identifiers for this set's editions within the data set. This is generated by a script, do not manually fill this out. | ntpBR6gq8FMwbrc7nD8Nz – F, jHWMhDgqDnK6RpCJtcn9C – U |
| Initial Release Dates | datetime[] | The initial release date for the set in ISO 8601 format and UTC timezone, correlated to the Editions field. | 2019-10-11T00:00:00.000Z, 2020-11-06T00:00:00.000Z |
| Out of Print Dates | datetime[] | The Out of Print (OOP) announcement date for the set in ISO 8601 format and UTC timezone, correlated to the Editions field. If the set is still in print, use `null` instead of a date. | 2019-10-11T00:00:00.000Z, 2021-12-01T00:00:00.000Z |
| Start Card Id | string | The id of the first card in the set. | WTR000 |
| End Card Id | string | The id of the last card in the set. | WTR225 |
| Product Pages | string[] | The list of urls for fabtcg.com product pages, correlated to the Editions field. | https://fabtcg.com/products/booster-set/welcome-rathe/, https://fabtcg.com/products/booster-set/welcome-rathe-unlimited/ |
| Collector's Center | string[] | The list of urls for fabtcg.com collector's center pages, correlated to the Editions field. | https://fabtcg.com/collectors-centre/welcome-rathe/, https://fabtcg.com/collectors-centre/welcome-rathe/ |
| Card Galleries | string[] | The list of urls for fabtcg.com card gallery pages, correlated to the Editions field. | https://fabtcg.com/resources/card-galleries/welcome-rathe-booster/, https://fabtcg.com/resources/card-galleries/welcome-rathe-unlimited-booster/ |
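The Set table's date arrays mix ISO 8601 timestamps with `null` for still-in-print editions. A minimal sketch of handling one such cell, assuming each array is a comma-separated string inside a single CSV field:

```python
from datetime import datetime

def parse_date_array(raw):
    """Parse a comma-separated list of ISO 8601 UTC dates, keeping None for "null" entries."""
    dates = []
    for value in raw.split(","):
        value = value.strip()
        if value == "null":
            dates.append(None)
        else:
            # e.g. "2019-10-11T00:00:00.000Z" -> timezone-aware datetime
            dates.append(datetime.fromisoformat(value.replace("Z", "+00:00")))
    return dates

print(parse_date_array("2019-10-11T00:00:00.000Z, null"))
```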

## Artist
## Type
| Field Name | Intended Data Type | Explanation | Example |
| --- | --- | --- | --- |
| Name | string | Name of the artist. | Saad Irfan |


| Unique ID | string | The unique identifier for this type within the data set. This is generated by a script, do not manually fill this out in the English CSV. Other languages' CSVs should use the same IDs as the corresponding type in the English CSV. | FwdBrBrwHWcPN78TcRCMT |
| Name | string | Name of the type. | Attack |
1 change: 1 addition & 0 deletions documentation/json-schemas.md
@@ -1,5 +1,6 @@
# JSON Schemas

* [Ability Schema](https://the-fab-cube.github.io/flesh-and-blood-cards/web/json-schemas/ability-schema.html)
* [Artist Schema](https://the-fab-cube.github.io/flesh-and-blood-cards/web/json-schemas/artist-schema.html)
* [Card Schema](https://the-fab-cube.github.io/flesh-and-blood-cards/web/json-schemas/card-schema.html)
* [Card (Flattened) Schema](https://the-fab-cube.github.io/flesh-and-blood-cards/web/json-schemas/card-flattened-schema.html)
8 changes: 4 additions & 4 deletions helper-scripts/generate-artists/main.py
@@ -38,7 +38,7 @@ def create_artists_csv_from_card_csv(language, artist_column_index):


create_artists_csv_from_card_csv("english", 19)
create_artists_csv_from_card_csv("french", 13)
create_artists_csv_from_card_csv("german", 13)
create_artists_csv_from_card_csv("italian", 13)
create_artists_csv_from_card_csv("spanish", 13)
create_artists_csv_from_card_csv("french", 9)
create_artists_csv_from_card_csv("german", 9)
create_artists_csv_from_card_csv("italian", 9)
create_artists_csv_from_card_csv("spanish", 9)
1 change: 1 addition & 0 deletions helper-scripts/generate-csv-htmls/generate.sh
@@ -1,5 +1,6 @@
#!/bin/bash

pyenv exec poetry run csvtotable ../../csvs/english/ability.csv ../../web/csvs/ability.html -d $'\t' -q $'"' -o
pyenv exec poetry run csvtotable ../../csvs/english/artist.csv ../../web/csvs/artist.html -d $'\t' -q $'"' -o
pyenv exec poetry run csvtotable ../../csvs/english/card.csv ../../web/csvs/card.html -d $'\t' -q $'"' -o
pyenv exec poetry run csvtotable ../../csvs/english/edition.csv ../../web/csvs/edition.html -d $'\t' -q $'"' -o
30 changes: 30 additions & 0 deletions helper-scripts/generate-json/generate_json_file/ability.py
@@ -0,0 +1,30 @@
import csv
import json
from pathlib import Path

def generate_json_file(language):
    print(f"Generating {language} ability.json from ability.csv...")

    ability_array = []

    csvPath = Path(__file__).parent / f"../../../csvs/{language}/ability.csv"
    jsonPath = Path(__file__).parent / f"../../../json/{language}/ability.json"

    with csvPath.open(newline='') as csvfile:
        reader = csv.reader(csvfile, delimiter='\t', quotechar='"')
        next(reader)

        for row in reader:
            ability_object = {}

            ability_object['unique_id'] = row[0]
            ability_object['name'] = row[1]

            ability_array.append(ability_object)

    json_object = json.dumps(ability_array, indent=4, ensure_ascii=False)

    with jsonPath.open('w', newline='\n', encoding='utf8') as outfile:
        outfile.write(json_object)

    print(f"Successfully generated {language} ability.json\n")
30 changes: 20 additions & 10 deletions helper-scripts/generate-json/main.py
@@ -1,6 +1,7 @@
from os import makedirs
from os.path import exists

import generate_json_file.ability
import generate_json_file.artist
import generate_json_file.card
import generate_json_file.card_flattened
@@ -26,6 +27,8 @@
print(english_json_dir_path + " does not exist, creating it")
makedirs(english_json_dir_path)

# English JSON files #
generate_json_file.ability.generate_json_file("english")
generate_json_file.artist.generate_json_file("english")
generate_json_file.card.generate_json_file()
generate_json_file.card_flattened.generate_json_file("english")
@@ -37,26 +40,22 @@
generate_json_file.set.generate_json_file("english")
generate_json_file.type.generate_json_file("english")

# Non-English JSON files #
generate_json_file.artist.generate_json_file("french")
generate_json_file.artist.generate_json_file("german")
generate_json_file.artist.generate_json_file("italian")
generate_json_file.artist.generate_json_file("spanish")

# generate_json_file.ability.generate_json_file("french")
# generate_json_file.ability.generate_json_file("german")
# generate_json_file.ability.generate_json_file("italian")
# generate_json_file.ability.generate_json_file("spanish")

generate_json_file.keyword.generate_json_file("french")
generate_json_file.keyword.generate_json_file("german")
generate_json_file.keyword.generate_json_file("italian")
generate_json_file.keyword.generate_json_file("spanish")

generate_json_file.card_non_english.generate_json_file("french")
generate_json_file.card_non_english.generate_json_file("german")
generate_json_file.card_non_english.generate_json_file("italian")
generate_json_file.card_non_english.generate_json_file("spanish")

generate_json_file.card_flattened.generate_json_file("french")
generate_json_file.card_flattened.generate_json_file("german")
generate_json_file.card_flattened.generate_json_file("italian")
generate_json_file.card_flattened.generate_json_file("spanish")

generate_json_file.set.generate_json_file("french")
generate_json_file.set.generate_json_file("german")
generate_json_file.set.generate_json_file("italian")
@@ -67,4 +66,15 @@
generate_json_file.type.generate_json_file("italian")
generate_json_file.type.generate_json_file("spanish")

# These rely on the other Non-English JSON files being generated first
generate_json_file.card_non_english.generate_json_file("french")
generate_json_file.card_non_english.generate_json_file("german")
generate_json_file.card_non_english.generate_json_file("italian")
generate_json_file.card_non_english.generate_json_file("spanish")

generate_json_file.card_flattened.generate_json_file("french")
generate_json_file.card_flattened.generate_json_file("german")
generate_json_file.card_flattened.generate_json_file("italian")
generate_json_file.card_flattened.generate_json_file("spanish")

print("Finished generating JSON data")
12 changes: 12 additions & 0 deletions helper-scripts/generate-unique-ids/helper-functions.js
@@ -0,0 +1,12 @@
const nanoid = require('nanoid')
const nanoidDictionary = require('nanoid-dictionary')
const customNanoId = nanoid.customAlphabet(nanoidDictionary.nolookalikesSafe)

function capitalizeFirstLetter(string) {
    return string.charAt(0).toUpperCase() + string.slice(1);
}

module.exports = {
    customNanoId,
    capitalizeFirstLetter
}