2024-11-27_last_log.txt
[27.11.2024 22:10] Read previous papers.
[27.11.2024 22:10] Generating top page (month).
[27.11.2024 22:10] Writing top page (month).
[27.11.2024 23:10] Read previous papers.
[27.11.2024 23:10] Get feed.
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.17465
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.17116
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.16819
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.17686
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.15296
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.14740
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.17673
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.17451
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.17467
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.15411
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.16856
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.17223
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.17691
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.16173
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.16801
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.14721
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.16754
[27.11.2024 23:10] Get page data from previous paper. URL: https://huggingface.co/papers/2411.17383
[27.11.2024 23:10] Downloading and parsing papers (pdf, html). Total: 18.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.17465.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.17465.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.17465.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.17116.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.17116.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.17116.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.16819.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.16819.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.16819.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.17686.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.17686.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.17686.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.15296.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.15296.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.15296.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.14740.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.14740.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.14740.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.17673.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.17673.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.17673.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.17451.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.17451.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.17451.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.17467.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.17467.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.17467.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.15411.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.15411.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.15411.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.16856.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.16856.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.16856.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.17223.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.17223.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.17223.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.17691.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.17691.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.17691.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.16173.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.16173.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.16173.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.16801.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.16801.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.16801.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.14721.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.14721.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.14721.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.16754.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.16754.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.16754.json), skip HTML parsing.
[27.11.2024 23:10] Success.
[27.11.2024 23:10] Downloading and parsing paper https://huggingface.co/papers/2411.17383.
[27.11.2024 23:10] Extra JSON file exists (./assets/json/2411.17383.json), skip PDF parsing.
[27.11.2024 23:10] Paper image links file exists (./assets/img_data/2411.17383.json), skip HTML parsing.
[27.11.2024 23:10] Success.
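The repeated "skip PDF parsing" / "skip HTML parsing" lines above reflect a cache check: if an extracted JSON file or image-links file for a paper already exists on disk, the pipeline reuses it instead of re-parsing. A minimal sketch of that logic, assuming hypothetical extract_pdf_to_json and extract_image_links helpers and the ./assets layout shown in the log:

    import os
    from datetime import datetime

    def log(msg: str) -> None:
        # Timestamped output in the same format as this log file.
        print(f"[{datetime.now():%d.%m.%Y %H:%M}] {msg}")

    def extract_pdf_to_json(paper_id: str, path: str) -> None: ...  # hypothetical helper
    def extract_image_links(paper_id: str, path: str) -> None: ...  # hypothetical helper

    def parse_paper(paper_id: str) -> None:
        """Parse one paper's PDF and HTML unless cached extractions exist."""
        json_path = f"./assets/json/{paper_id}.json"
        img_path = f"./assets/img_data/{paper_id}.json"
        if os.path.exists(json_path):
            log(f"Extra JSON file exists ({json_path}), skip PDF parsing.")
        else:
            extract_pdf_to_json(paper_id, json_path)
        if os.path.exists(img_path):
            log(f"Paper image links file exists ({img_path}), skip HTML parsing.")
        else:
            extract_image_links(paper_id, img_path)
        log("Success.")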
[27.11.2024 23:10] Enriching papers with extra data.
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 0. Building Graphical User Interface (GUI) assistants holds significant promise for enhancing human workflow productivity. While most agents are language-based, relying on closed-source API with text-rich meta-information (e.g., HTML or accessibility tree), they show limitations in perceiving UI visual...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 1. Inference with Transformer-based Large Language Models (LLMs) on long sequences is both costly and slow due to the quadratic complexity of the self-attention mechanism. We introduce Star Attention, a two-phase block-sparse approximation that improves computational efficiency by sharding attention ac...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 2. Recent advances in image editing, driven by image diffusion models, have shown remarkable progress. However, significant challenges remain, as these models often struggle to follow complex edit instructions accurately and frequently compromise fidelity by altering key elements of the original image....
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 3. To accelerate the inference of heavy Multimodal Large Language Models (MLLMs), this study rethinks the current landscape of training-free token reduction research. We regret to find that the critical components of existing methods are tightly intertwined, with their interconnections and effects rema...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 4. As a prominent direction of Artificial General Intelligence (AGI), Multimodal Large Language Models (MLLMs) have garnered increased attention from both industry and academia. Building upon pre-trained LLMs, this family of models further develops multimodal perception and reasoning capabilities that ...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 5. While high-quality texture maps are essential for realistic 3D asset rendering, few studies have explored learning directly in the texture space, especially on large-scale datasets. In this work, we depart from the conventional approach of relying on pre-trained 2D diffusion models for test-time opt...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 6. Sketching serves as a versatile tool for externalizing ideas, enabling rapid exploration and visual communication that spans various disciplines. While artificial systems have driven substantial advances in content creation and human-computer interaction, capturing the dynamic and abstract nature of...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 7. Vision-language generative reward models (VL-GenRMs) play a crucial role in aligning and evaluating multimodal AI systems, yet their own evaluation remains under-explored. Current assessment methods primarily rely on AI-annotated preference labels from traditional VL tasks, which can introduce biase...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 8. Self-supervised learning has emerged as a promising approach for acquiring transferable 3D representations from unlabeled 3D point clouds. Unlike 2D images, which are widely accessible, acquiring 3D assets requires specialized expertise or professional 3D scanning equipment, making it difficult to s...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 9. The advent of large Vision-Language Models (VLMs) has significantly advanced multimodal tasks, enabling more sophisticated and accurate reasoning across various applications, including image and video captioning, visual question answering, and cross-modal retrieval. Despite their superior capabiliti...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 10. Autoregressive models have demonstrated remarkable success across various fields, from large language models (LLMs) to large multimodal models (LMMs) and 2D content generation, moving closer to artificial general intelligence (AGI). Despite these advances, applying autoregressive approaches to 3D ob...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 11. Subject-driven image inpainting has emerged as a popular task in image editing alongside recent advancements in diffusion models. Previous methods primarily focus on identity preservation but struggle to maintain the editability of inserted objects. In response, this paper introduces DreamMix, a dif...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 12. We reveal that low-bit quantization favors undertrained large language models (LLMs) by observing that models with larger sizes or fewer training tokens experience less quantization-induced degradation (QiD) when applying low-bit quantization, whereas smaller models with extensive training tokens su...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 13. Despite advances in Large Multi-modal Models, applying them to long and untrimmed video content remains challenging due to limitations in context length and substantial memory overhead. These constraints often lead to significant information loss and reduced relevance in the model responses. With th...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 14. We present BootComp, a novel framework based on text-to-image diffusion models for controllable human image generation with multiple reference garments. Here, the main bottleneck is data acquisition for training: collecting a large-scale dataset of high-quality reference garment images per human sub...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 15. Molecule discovery is a pivotal research field, impacting everything from the medicines we take to the materials we use. Recently, Large Language Models (LLMs) have been widely adopted in molecule understanding and generation, yet the alignments between molecules and their corresponding captions rem...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 16. The proliferation of AI techniques for image generation, coupled with their increasing accessibility, has raised significant concerns about the potential misuse of these images to spread misinformation. Recent AI-generated image detection (AGID) methods include CNNDetection, NPR, DM Image Detection,...
[27.11.2024 23:10] ********************************************************************************
[27.11.2024 23:10] Abstract 17. The automatic generation of anchor-style product promotion videos presents promising opportunities in online commerce, advertising, and consumer engagement. However, this remains a challenging task despite significant advancements in pose-guided human video generation. In addressing this challenge, ...
[27.11.2024 23:10] Read previous papers.
[27.11.2024 23:10] Generating reviews via LLM API.
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#training", "#data", "#games", "#optimization", "#cv", "#agents", "#graphs", "#dataset"], "emoji": "🖥️", "ru": {"title": "ShowUI: A revolution in building intelligent graphical interfaces", "desc": "The paper presents ShowUI, a model for building graphical user in
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#long_context", "#inference", "#optimization", "#architecture"], "emoji": "⭐", "ru": {"title": "Star attention: speeding up LLMs without losing accuracy", "desc": "The paper presents the Star Attention method for improving computational efficiency in Transformer-based large language models (LL
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#diffusion", "#video", "#cv", "#multimodal"], "emoji": "🎬", "ru": {"title": "Image editing through the lens of video: a new take on an old task", "desc": "This paper proposes a new approach to image editing using video generation models. The authors re
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#optimization", "#inference", "#benchmark", "#multimodal"], "emoji": "🚀", "ru": {"title": "Accelerating MLLMs: a new paradigm for token reduction", "desc": "This paper presents a new approach to accelerating inference in multimodal large language models (MLLMs). The authors propose a unif
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#benchmark", "#agi", "#survey", "#multimodal"], "emoji": "🧠", "ru": {"title": "A comprehensive approach to evaluating multimodal language models", "desc": "This paper surveys evaluation methods for multimodal large language models (MLLMs). The authors examine four key
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#3d", "#games", "#architecture", "#cv", "#diffusion"], "emoji": "🎨", "ru": {"title": "A revolution in 3D texture generation: training a diffusion model directly in UV space", "desc": "The paper presents a new approach to generating texture maps for 3D objects. The authors trained a larg
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#multimodal", "#agents"], "emoji": "✏️", "ru": {"title": "SketchAgent: conversational sketching with language models", "desc": "The paper presents SketchAgent, a language-driven sketch generation method. It lets users create and modify sketches through dialogue
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#optimization", "#hallucinations", "#reasoning", "#benchmark", "#multimodal", "#alignment"], "emoji": "🧠", "ru": {"title": "VL-RewardBench: a new standard for evaluating vision-language reward models", "desc": "The paper presents VL-RewardBench, a new benchmark for evaluating gener
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#3d", "#dataset", "#transfer_learning", "#synthetic"], "emoji": "🧊", "ru": {"title": "Self-supervised 3D representations without semantics: geometry over meaning", "desc": "The paper presents a new self-supervised approach for learning three-dimensional representations from unlabeled point clo
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#training", "#multimodal", "#games", "#cv", "#dataset", "#reasoning"], "emoji": "🔬", "ru": {"title": "A new approach to compositional image captioning with enhanced multimodal language models", "desc": "The paper presents FINECAPTION, a new model capab
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#multimodal", "#3d", "#architecture", "#games", "#agi"], "emoji": "🧊", "ru": {"title": "SAR3D: Fast generation and deep understanding of 3D objects", "desc": "The paper presents SAR3D, a new framework for generating and understanding 3D objects using an autoregressive approach. S
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#diffusion", "#multimodal", "#open_source", "#cv"], "emoji": "🎨", "ru": {"title": "DreamMix: Inserting and editing objects in images via text", "desc": "DreamMix is a diffusion-based generative model for inserting objects into images. It allows not onl
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#low_resource", "#dataset", "#open_source", "#inference"], "emoji": "🧠", "ru": {"title": "Quantization reveals the secrets of language model training", "desc": "The study shows that low-bit quantization is less harmful to undertrained large language models (LLMs) than
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#architecture", "#video", "#long_context", "#multimodal", "#dataset"], "emoji": "🎥", "ru": {"title": "SALOVA: a smart assistant for analyzing long videos", "desc": "SALOVA is a new system for processing long videos with large multimodal models. It addresses the problem of lim
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#data", "#diffusion", "#synthetic", "#cv", "#dataset"], "emoji": "👚", "ru": {"title": "BootComp: Generating human images with controllable garments without manual data collection", "desc": "The paper presents BootComp, a new framework for generating human images with controllable
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#alignment", "#multimodal", "#dataset", "#interpretability", "#reasoning", "#training"], "emoji": "🧪", "ru": {"title": "MolReFlect: Precise alignment of molecules and text using LLMs", "desc": "The paper presents MolReFlect, a new method for improving the alignment between molecules
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#open_source", "#cv", "#benchmark", "#security", "#ethics", "#dataset"], "emoji": "🕵️", "ru": {"title": "A new approach to detecting AI-generated images", "desc": "The paper addresses the problem of detecting images generated by artificial intelligence (AI) in the con
[27.11.2024 23:10] Using data from previous issue: {"categories": ["#training", "#video", "#cv", "#diffusion"], "emoji": "🎬", "ru": {"title": "AnchorCrafter: AI creates realistic promo videos with human-product interaction", "desc": "The paper presents AnchorCrafter, a new diffusion-based system for generating videos featuring hum
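The "Using data from previous issue" lines above indicate that reviews already generated for an earlier issue are looked up in the previous data file rather than regenerated via the LLM API. A sketch of that lookup under assumed names (prev_data keyed by paper URL, a hypothetical generate_review fallback, reusing the log helper from the sketch above):

    import json

    def review_for(paper: dict, prev_data: dict, generate_review) -> dict:
        """Return the stored review for this paper if present, else generate one."""
        prev = prev_data.get(paper["url"])
        if prev is not None:
            # Log a truncated preview of the reused payload, as in the lines above.
            log("Using data from previous issue: "
                + json.dumps(prev, ensure_ascii=False)[:300])
            return prev
        return generate_review(paper)  # hypothetical LLM API call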
[27.11.2024 23:10] Loading Chinese text from previous data.
[27.11.2024 23:10] Renaming data file.
[27.11.2024 23:10] Renaming previous data. hf_papers.json to ./d/2024-11-27.json
[27.11.2024 23:10] Saving new data file.
[27.11.2024 23:10] Generating page.
[27.11.2024 23:10] Renaming previous page.
[27.11.2024 23:10] Renaming previous data. index.html to ./d/2024-11-27.html
[27.11.2024 23:10] [Experimental] Generating Chinese page for reading.
[27.11.2024 23:10] Chinese vocab [{'word': '图形用户界面', 'pinyin': 'tú xíng yòng hù jiē miàn', 'trans': 'graphical user interface'}, {'word': '助手', 'pinyin': 'zhù shǒu', 'trans': 'assistant'}, {'word': '模型', 'pinyin': 'mó xíng', 'trans': 'model'}, {'word': '视觉', 'pinyin': 'shì jué', 'trans': 'vision'}, {'word': '语言', 'pinyin': 'yǔ yán', 'trans': 'language'}, {'word': '动作', 'pinyin': 'dòng zuò', 'trans': 'action'}, {'word': '旨在', 'pinyin': 'zhǐ zài', 'trans': 'aim to'}, {'word': '生产力', 'pinyin': 'shēng chǎn lì', 'trans': 'productivity'}, {'word': '引导', 'pinyin': 'yǐn dǎo', 'trans': 'guide'}, {'word': '标记', 'pinyin': 'biāo jì', 'trans': 'mark'}, {'word': '选择', 'pinyin': 'xuǎn zé', 'trans': 'selection'}, {'word': '交错', 'pinyin': 'jiāo cuò', 'trans': 'interleave'}, {'word': '流', 'pinyin': 'liú', 'trans': 'flow'}, {'word': '规模', 'pinyin': 'guī mó', 'trans': 'scale'}, {'word': '高质量', 'pinyin': 'gāo zhì liàng', 'trans': 'high quality'}, {'word': '指令', 'pinyin': 'zhǐ lìng', 'trans': 'command'}, {'word': '跟随', 'pinyin': 'gēn suí', 'trans': 'follow'}, {'word': '数据集', 'pinyin': 'shù jù jí', 'trans': 'dataset'}, {'word': '计算', 'pinyin': 'jì suàn', 'trans': 'computation'}, {'word': '成本', 'pinyin': 'chéng běn', 'trans': 'cost'}, {'word': '训练', 'pinyin': 'xùn liàn', 'trans': 'training'}, {'word': '效率', 'pinyin': 'xiào lǜ', 'trans': 'efficiency'}, {'word': '零样本', 'pinyin': 'líng yàng běn', 'trans': 'zero-shot'}, {'word': '截图', 'pinyin': 'jié tú', 'trans': 'screenshot'}, {'word': '定位', 'pinyin': 'dìng wèi', 'trans': 'localization'}, {'word': '任务', 'pinyin': 'rèn wu', 'trans': 'task'}, {'word': '准确率', 'pinyin': 'zhǔn què lǜ', 'trans': 'accuracy'}, {'word': '环境', 'pinyin': 'huán jìng', 'trans': 'environment'}, {'word': '表现', 'pinyin': 'biǎo xiàn', 'trans': 'performance'}, {'word': '出色', 'pinyin': 'chū sè', 'trans': 'outstanding'}, {'word': '公开', 'pinyin': 'gōng kāi', 'trans': 'public'}, {'word': 'GitHub', 'pinyin': 'GitHub', 'trans': 'GitHub'}]
[27.11.2024 23:10] Renaming previous Chinese page.
[27.11.2024 23:10] Renaming previous data. zh.html to ./d/2024-11-26_zh_reading_task.html
[27.11.2024 23:10] Writing Chinese reading task.
[27.11.2024 23:10] Writing result.
[27.11.2024 23:10] Renaming log file.
[27.11.2024 23:10] Renaming previous data. log.txt to ./logs/2024-11-27_last_log.txt
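The closing "Renaming previous data" lines show a simple date-stamped rotation: before each new artifact is written, the current one is moved into an archive directory under the issue date. A sketch of that pattern (destination paths taken from the log, the helper name assumed, log helper reused from the first sketch):

    import os

    def rotate(src: str, dst: str) -> None:
        """Move the current artifact to its date-stamped archive path."""
        log(f"Renaming previous data. {src} to {dst}")
        os.replace(src, dst)

    # Usage mirroring this run:
    # rotate("hf_papers.json", "./d/2024-11-27.json")
    # rotate("index.html", "./d/2024-11-27.html")
    # rotate("log.txt", "./logs/2024-11-27_last_log.txt")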