Do Not Let AI Drive You: Keep the Ability to Build Independently
How to avoid copy-paste dependence when using AI for coding: Feynman technique, deliberate practice, retrieval practice, and a practical self-check workflow.
Build a Hugo Blog with GitHub Pages in 10 Minutes

Subtitle / Abstract: This guide takes you from zero to a deployed Hugo blog on GitHub Pages with GitHub Actions. It is beginner-friendly and explains the key moving parts.
Target readers: Hugo beginners; developers who want a quick technical blog; users of GitHub Pages and GitHub Actions; anyone who wants free static hosting.
Background / Motivation: Common pain points when publishing a blog: ...
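The GitHub Actions deployment the guide promises can be sketched as a workflow file. This is a minimal, hedged sketch, not the guide's exact setup: the action versions, the file path `.github/workflows/hugo.yml`, and the use of a theme submodule are assumptions.

```yaml
# .github/workflows/hugo.yml — minimal sketch; versions and paths are assumptions
name: Deploy Hugo site to Pages
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive   # pull in the theme if it is a git submodule
      - uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: 'latest'
      - run: hugo --minify        # build the site into ./public
      - uses: actions/upload-pages-artifact@v3
        with:
          path: ./public
  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/deploy-pages@v4
```

With Pages set to the "GitHub Actions" source in the repository settings, every push to `main` rebuilds and redeploys the site.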
How to Publish with Hugo: From Markdown to Online Blog

Subtitle / Abstract: This guide explains how to create, manage, and publish Hugo posts: front matter, drafts, images, directory structure, local preview, and deployment.
Target readers: Hugo beginners; developers building a technical blog with Hugo; writers using Markdown + static sites; users of PaperMod, DoIt, and similar themes.
Background / Motivation: After setting up a Hugo site, common questions include: ...
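The front matter and drafts the guide covers look roughly like this; the file path and field values below are illustrative assumptions, not the guide's actual content.

```yaml
# content/posts/my-first-post.md — YAML front matter sketch; values are illustrative
---
title: "My First Post"
date: 2024-01-01T10:00:00+08:00
draft: true            # hidden from the built site until set to false
tags: ["hugo", "blog"]
---
```

While `draft: true`, the post only appears when previewing locally with `hugo server -D`; flipping it to `false` includes the post in the deployed build.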
How to Write a Qualified API Document: From Swagger to Modern OpenAPI

Subtitle / Abstract: Want developers to actually enjoy using your API? This article covers the structure, examples, and best practices of high-quality API documentation based on Swagger/OpenAPI (originally created by Tony Tam).
Target readers: Beginners who want a standard API doc structure; mid-level developers improving maintainability; architects and leads defining API standards.
Background / Motivation: Common problems in API docs: ...
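Documentation of the kind the article describes usually starts from an OpenAPI definition. Here is a minimal illustrative sketch; the endpoint, parameter, and schema are invented for the example, not taken from the article.

```yaml
# openapi.yaml — minimal OpenAPI 3.0 sketch; the API itself is hypothetical
openapi: 3.0.3
info:
  title: Example API
  version: 1.0.0
paths:
  /users/{id}:
    get:
      summary: Get a user by ID
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: integer }
                  name: { type: string }
```

Tools such as Swagger UI can render a definition like this into browsable, interactive documentation.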
Within one system, each conversation thread should correspond to a single assistant: provide each user with one assistant and optimize that assistant. Running many parallel threads per user is too expensive and unnecessary.
Bengio-style ML Task Specification: From Research to Engineering

Subtitle: How to write a reproducible, explainable, and comparable fine-tuning task document based on Yoshua Bengio's methodology.
Reading time: 10 minutes
Tags: ML documentation, fine-tuning, technical standards, deep learning practice
Audience: mid to senior ML engineers, researchers, technical writers

1. Why do we need this document?
In ML projects, teams often run fine-tuning experiments. Months later, nobody can reproduce the results or explain why a particular learning rate or LoRA layer was chosen. ...
Introduction
I want to build an AI system that supports tree-shaped or graph-shaped Q&A, instead of a traditional single-thread chat flow.

Exploration
Open-source framework research: flowise
How to Truly Master a Paper

Conclusion: To truly master a paper, reading it once is not enough. You need to decompose, verify, and reconstruct it, and then express the key points in your own words or implementation. The goal: explain the core contribution in 5 minutes, derive the key formulas by hand, and reproduce a core experiment.
Principles and background: A paper is a compressed expression of a problem. It omits background, intuition, failed attempts, and many details. Mastery requires "decompressing" that information into your own knowledge network: assumptions, derivations, engineering steps, and the limits of the results. Only then can you judge when to use it and when not to. ...
What problem does this paper solve, and what are the results?

AI systems keep growing more capable at general tasks, yet many of today's AI agent applications target narrow, small tasks. NVIDIA argues that small language models (SLMs) are capable enough, better suited, and cheaper, and should be a main direction for future agents. The paper discusses: which tasks current SLMs can already handle; where general language ability still matters; and the limits of SLMs as agents. Conclusion: moving from LLMs to SLMs offers advantages in both capability and cost.