<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Hashi Warsame Blog</title><description>Portfolio Site</description><link>https://fuwari.vercel.app/</link><language>en</language><item><title>Building an AI Scholarship Matching Platform with Claude</title><link>https://fuwari.vercel.app/posts/whizzia/</link><guid isPermaLink="true">https://fuwari.vercel.app/posts/whizzia/</guid><description>A deep-dive into building Whizzia — a scholarship intelligence platform that uses the Claude API to match international students with funding opportunities, analyse CVs, and review personal statements.</description><pubDate>Mon, 11 May 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Every year, billions of dollars in scholarship funding goes unclaimed — not because students aren&apos;t deserving, but because the matching problem is genuinely hard. Eligibility criteria span citizenship, GPA thresholds, field of study, programme level, and dozens of softer factors that no spreadsheet can sensibly aggregate. &lt;strong&gt;Whizzia&lt;/strong&gt; is our answer to that problem: an AI-powered scholarship intelligence platform that maps a student&apos;s full academic profile to a ranked shortlist of opportunities, then helps them actually win.&lt;/p&gt;
&lt;p&gt;You can see the live app here: &lt;a href=&quot;https://scholarshipwhizz-dgx-public.netlify.app&quot;&gt;WhizzIA&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I&apos;m the lead developer on the project. Here&apos;s how we built it.&lt;/p&gt;
&lt;h2&gt;The Core Idea&lt;/h2&gt;
&lt;p&gt;The platform sits at the intersection of three problems students face:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Discovery&lt;/strong&gt; — finding scholarships they&apos;re actually eligible for, not just a generic list&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Positioning&lt;/strong&gt; — understanding how competitive their profile is and what&apos;s missing&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Application quality&lt;/strong&gt; — getting their CV and personal statement to a fundable standard&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Claude handles all three. The architecture is a Next.js frontend talking to a Supabase backend, with the Claude API powering the intelligence layer — profile analysis, CV critique, essay feedback, and scholarship scoring.&lt;/p&gt;
&lt;h2&gt;Authentication and Onboarding&lt;/h2&gt;
&lt;p&gt;The entry point is a clean split-screen login. A dark teal panel on the left carries the Whizzia logo — a graduation cap overlaid on a location pin, which neatly encodes the &quot;where are you going?&quot; question at the heart of the product. The right panel handles the email OTP flow.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./email.png&quot; alt=&quot;Email verification screen — split-panel layout&quot; /&gt;&lt;/p&gt;
&lt;p&gt;We deliberately avoided social login for the first version. Scholarship applications involve sensitive data — GPA, citizenship, disability status — and a verified email creates a cleaner audit trail. The six-digit OTP is sent via Supabase Auth and expires in ten minutes.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// pages/api/auth/verify.ts
const { data, error } = await supabase.auth.verifyOtp({
  email,
  token: otp,
  type: &apos;email&apos;,
});

if (error) {
  // Reject expired or mismatched codes without leaking detail
  return res.status(401).json({ error: &apos;Invalid or expired code&apos; });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;The Profile Builder&lt;/h2&gt;
&lt;p&gt;Once authenticated, students are walked through a seven-step profile wizard. Step one collects the demographic information that gates the largest share of scholarships: country of citizenship, country of residence, gender, ethnicity (optional), and disability status.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./profile.png&quot; alt=&quot;Profile creation wizard — Personal Information step&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The sidebar shows progress through all seven steps — Personal Information, Academic Background, Study Preferences, Work Experience, Research &amp;amp; Projects, Achievements, and Final Details. A live &lt;strong&gt;Profile Readiness&lt;/strong&gt; score in the top-right corner updates as fields are completed, climbing toward 100% as the student fills in more data. This was a deliberate UX choice: visible progress reduces drop-off in long onboarding flows.&lt;/p&gt;
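&lt;p&gt;Under the hood, the readiness score is a weighted completion check. A minimal sketch (field names and weights here are illustrative, not the production values):&lt;/p&gt;

```javascript
// Illustrative sketch of the Profile Readiness calculation.
// Field names and weights are stand-ins, not the production config.
const FIELD_WEIGHTS = {
  citizenship: 15,
  gpa: 15,
  programLevel: 10,
  fieldOfStudy: 10,
  destination: 10,
  statement: 20,
  workExperience: 10,
  achievements: 10,
};

function readinessScore(profile) {
  const total = Object.values(FIELD_WEIGHTS).reduce((a, b) => a + b, 0);
  // Sum the weights of every field the student has filled in
  const earned = Object.entries(FIELD_WEIGHTS)
    .filter(([key]) => Boolean(profile[key]))
    .reduce((sum, [, w]) => sum + w, 0);
  return Math.round((earned / total) * 100);
}
```

&lt;p&gt;Weighting fields rather than counting them means the score rewards the data that actually moves match quality, which is what makes the number feel honest as it climbs.&lt;/p&gt;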
&lt;p&gt;Subsequent steps capture the academic data that drives matching: GPA, current degree level, target programme, intended study destination, language proficiency scores, and a free-text personal statement. Work experience and research sections feed directly into the CV review feature.&lt;/p&gt;
&lt;p&gt;All profile data is stored in Postgres via Supabase. No data is sent to Claude unless the user explicitly triggers an analysis — privacy by default.&lt;/p&gt;
&lt;h2&gt;Profile Analysis and Scholarship Readiness&lt;/h2&gt;
&lt;p&gt;The Profile Analysis page is the dashboard&apos;s centrepiece. It surfaces four headline metrics — Completeness, Readiness score, Percentile rank, and total Match count — alongside a breakdown of what&apos;s still missing and an Eligibility Alignment panel.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./analysis.png&quot; alt=&quot;Profile Analysis dashboard — readiness metrics and match breakdown&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Scholarship Matches&lt;/strong&gt; widget in the bottom right categorises opportunities into three buckets: Safe (18), Competitive (23), and Reach (9). These aren&apos;t static labels — they&apos;re computed at query time by comparing the student&apos;s normalised profile vector against each scholarship&apos;s eligibility matrix. The bucketing thresholds are tuned to the realistic acceptance rates in our dataset.&lt;/p&gt;
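&lt;p&gt;The bucketing itself reduces to a pair of thresholds on the computed fit score. A minimal sketch, with illustrative round-number thresholds in place of the tuned production values:&lt;/p&gt;

```javascript
// Illustrative bucketing by fit score. The production thresholds are
// tuned against observed acceptance rates, not these round numbers.
function bucketOf(fitScore) {
  if (fitScore >= 80) return 'Safe';        // strong eligibility overlap
  if (fitScore >= 55) return 'Competitive'; // viable but contested
  return 'Reach';                           // long shot worth a tailored essay
}

function bucketCounts(matches) {
  const counts = { Safe: 0, Competitive: 0, Reach: 0 };
  for (const m of matches) counts[bucketOf(m.fitScore)] += 1;
  return counts;
}
```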
&lt;p&gt;The &quot;Missing Critical Data&quot; banner is generated by a lightweight rules engine that checks which high-signal fields are empty. Country of citizenship, GPA, programme level, field of study, study destination, and personal statement are the six fields with the most impact on match quality — filling them shifts the match count from 50 to upwards of 120 in most cases.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const CRITICAL_FIELDS = [
  { key: &apos;citizenship&apos;, label: &apos;Country of citizenship&apos; },
  { key: &apos;gpa&apos;, label: &apos;GPA&apos; },
  { key: &apos;programLevel&apos;, label: &apos;Program level&apos; },
  { key: &apos;fieldOfStudy&apos;, label: &apos;Field of study&apos; },
  { key: &apos;destination&apos;, label: &apos;Study destination&apos; },
  { key: &apos;statement&apos;, label: &apos;Personal statement&apos; },
];

export function getMissingCritical(profile: Profile) {
  return CRITICAL_FIELDS.filter(f =&amp;gt; !profile[f.key]);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;CV Review with Claude&lt;/h2&gt;
&lt;p&gt;The CV Review flow is a three-step pipeline: upload, AI analysis, template selection. Students drag in a PDF, DOC, DOCX, or TXT file (up to 10 MB), hit &lt;strong&gt;Analyse My CV&lt;/strong&gt;, and within seconds get a structured critique from Claude.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;cv-review.png&quot; alt=&quot;CV Review — upload interface with AI Insights panel&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The AI Insights panel on the right is empty until a CV is uploaded — a deliberate holding state that signals &quot;your analysis lives here.&quot; Once the document is processed, Claude returns a structured JSON object covering: overall strength score, section-by-section feedback, missing scholarship-relevant keywords, and three specific improvement actions ranked by impact.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const systemPrompt = `You are an expert scholarship application advisor. 
Analyse the student&apos;s CV for scholarship competitiveness.
Return a JSON object with this exact shape:
{
  &quot;overallScore&quot;: number,          // 0-100
  &quot;summary&quot;: string,               // 2-3 sentence executive summary
  &quot;sections&quot;: [                    // per-section feedback
    { &quot;name&quot;: string, &quot;score&quot;: number, &quot;feedback&quot;: string }
  ],
  &quot;missingKeywords&quot;: string[],     // scholarship-relevant terms not present
  &quot;topActions&quot;: [                  // ranked improvement actions
    { &quot;priority&quot;: number, &quot;action&quot;: string, &quot;impact&quot;: string }
  ]
}
Return only valid JSON. No preamble, no markdown fences.`;

const response = await fetch(&apos;https://api.anthropic.com/v1/messages&apos;, {
  method: &apos;POST&apos;,
  headers: {
    &apos;Content-Type&apos;: &apos;application/json&apos;,
    // The Messages API also requires these two headers (keep the key server-side)
    &apos;x-api-key&apos;: process.env.ANTHROPIC_API_KEY,
    &apos;anthropic-version&apos;: &apos;2023-06-01&apos;,
  },
  body: JSON.stringify({
    model: &apos;claude-sonnet-4-20250514&apos;,
    max_tokens: 1000,
    system: systemPrompt,
    messages: [{ role: &apos;user&apos;, content: cvText }],
  }),
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The previous analyses section below the upload zone shows a history of past CV versions with their scores, letting students track improvement over time. This has been one of the most-used features in early testing — students revise based on feedback, re-upload, and watch their score climb.&lt;/p&gt;
&lt;h2&gt;Essay Review&lt;/h2&gt;
&lt;p&gt;The Essay Review module follows the same pattern as CV Review but is tuned specifically for scholarship personal statements. Claude evaluates the essay against five scholarship-specific rubrics: clarity of purpose, evidence of impact, fit with the target award&apos;s values, narrative coherence, and differentiation from a hypothetical applicant pool.&lt;/p&gt;
&lt;p&gt;The prompt instructs Claude to return feedback structured around the specific scholarship the student is applying for — if a student has indicated they&apos;re targeting a Commonwealth Scholarship, the feedback prioritises community impact and development goals over raw academic achievement.&lt;/p&gt;
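&lt;p&gt;Concretely, the prompt assembly looks something like this. A sketch: the rubric list comes straight from above, while the per-award guidance map and helper name are illustrative:&lt;/p&gt;

```javascript
// Sketch of the essay-review prompt builder. The five rubrics are the ones
// described in the post; AWARD_GUIDANCE is an illustrative stand-in for the
// per-award configuration.
const AWARD_GUIDANCE = {
  'Commonwealth Scholarship':
    'Prioritise community impact and development goals over raw academic achievement.',
};

function buildEssayPrompt(essay, targetAward) {
  const rubrics = [
    'clarity of purpose',
    'evidence of impact',
    "fit with the target award's values",
    'narrative coherence',
    'differentiation from a hypothetical applicant pool',
  ];
  const guidance = AWARD_GUIDANCE[targetAward] ?? 'Apply the rubrics evenly.';
  return [
    'You are an expert scholarship essay reviewer.',
    `Score the essay 0-100 on each rubric: ${rubrics.join(', ')}.`,
    `Target award: ${targetAward}. ${guidance}`,
    'Essay:',
    essay,
  ].join('\n');
}
```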
&lt;h2&gt;Scholarship Matching Engine&lt;/h2&gt;
&lt;p&gt;The matching logic runs in two passes. The first is a hard-filter pass in SQL — ruling out any scholarship where the student doesn&apos;t meet mandatory criteria (citizenship, GPA floor, degree level). The second is a soft-ranking pass using Claude, which takes the filtered shortlist and the student&apos;s profile and returns a ranked list with per-scholarship reasoning.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const rankingPrompt = `
Given the following student profile and list of scholarships they are 
eligible for, rank the scholarships by fit and return a JSON array.
For each scholarship, include:
- scholarshipId
- fitScore (0-100)  
- keyStrengths (array of strings — why this student is a good fit)
- keyRisks (array of strings — what might count against them)
- recommendedFocus (string — what to emphasise in the application)

Student profile: ${JSON.stringify(profile)}
Eligible scholarships: ${JSON.stringify(eligibleScholarships)}
`;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This two-pass approach keeps costs predictable — the SQL filter reduces the average shortlist from 400+ scholarships to around 50 before Claude ever sees it, keeping token usage per ranking call under 3,000.&lt;/p&gt;
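&lt;p&gt;The hard-filter pass is plain eligibility logic. Here is the equivalent of the SQL pass expressed in JavaScript for readability (field names are illustrative):&lt;/p&gt;

```javascript
// The first pass runs as SQL in Supabase; this is the equivalent logic in
// JavaScript, with illustrative field names. Anything failing a mandatory
// criterion never reaches Claude.
function hardFilter(scholarships, profile) {
  return scholarships.filter(s =>
    s.eligibleCitizenships.includes(profile.citizenship) &&
    profile.gpa >= s.gpaFloor &&
    s.programLevels.includes(profile.programLevel)
  );
}
```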
&lt;h2&gt;What&apos;s Next&lt;/h2&gt;
&lt;p&gt;The roadmap has three near-term priorities. First, a &lt;strong&gt;deadline tracker&lt;/strong&gt; integrated into the Checklist feature — scholarship windows are brutally short and students consistently miss them. Second, a &lt;strong&gt;recommendation letter assistant&lt;/strong&gt; that drafts referee briefing documents, giving students a polished summary of their accomplishments to hand to professors. Third, a &lt;strong&gt;mock interview module&lt;/strong&gt; using Claude&apos;s conversational capability to run practice scholarship interviews for high-value awards like Rhodes and Gates Cambridge that include a panel stage.&lt;/p&gt;
&lt;p&gt;The infrastructure is largely in place. The interesting work now is in the product layer — understanding precisely what blocks a student between &quot;matched&quot; and &quot;funded,&quot; and building the features that close that gap.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Built by &lt;a href=&quot;mailto:hashi.warsame21@gmail.com&quot;&gt;Hashi Warsame&lt;/a&gt; — Lead Developer, Whizzia&lt;/em&gt;&lt;/p&gt;
</content:encoded></item><item><title>How I Built an AI Health Risk Dashboard with Claude</title><link>https://fuwari.vercel.app/posts/health-dashboard/</link><guid isPermaLink="true">https://fuwari.vercel.app/posts/health-dashboard/</guid><description>A walkthrough of building a heart disease risk intelligence dashboard powered by the Claude API, using the UCI Cleveland dataset.</description><pubDate>Mon, 11 May 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Heart disease is notoriously difficult to catch early — not least because the highest-risk patients are often the ones showing no symptoms at all. In this project we build a &lt;strong&gt;Heart Disease Risk Intelligence Dashboard&lt;/strong&gt; that visualises the UCI Cleveland dataset across 303 patients and uses the &lt;strong&gt;Claude API&lt;/strong&gt; to surface the patterns a human analyst might miss.&lt;/p&gt;
&lt;p&gt;You can see the live dashboard here: &lt;a href=&quot;https://tiny-biscuit-82b936.netlify.app/&quot;&gt;tiny-biscuit-82b936.netlify.app&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./dash.png&quot; alt=&quot;Dashboard hero — Heart Disease Risk Intelligence&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The Dataset&lt;/h2&gt;
&lt;p&gt;The dataset comes from the UCI Cleveland Clinic Heart Disease study. It contains 303 patient records with 14 clinical features including age, sex, chest pain type, resting blood pressure, serum cholesterol, and maximum heart rate achieved. 165 of the 303 patients tested positive for heart disease, giving a prevalence rate of &lt;strong&gt;54.5%&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;All the raw data is baked directly into the dashboard as a JavaScript object, so there is no backend or data-fetching layer and the project deploys instantly as a static page.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const DATA = {
  age: [
    { label:&apos;&amp;lt; 45&apos;,  rate:20,   d:12,  t:58  },
    { label:&apos;45–54&apos;, rate:44,   d:39,  t:88  },
    { label:&apos;55–64&apos;, rate:62,   d:60,  t:97  },
    { label:&apos;65+&apos;,   rate:72,   d:43,  t:60  },
  ],
  cp: [
    { label:&apos;Typical Angina&apos;,    rate:28, d:8,  t:23 },
    { label:&apos;Atypical Angina&apos;,   rate:42, d:21, t:50 },
    { label:&apos;Non-anginal Pain&apos;,  rate:47, d:39, t:86 },
    { label:&apos;Asymptomatic&apos;,      rate:75, d:105, t:144 },
  ],
  // ...scatter and donut data
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;KPI Cards with Animated Counters&lt;/h2&gt;
&lt;p&gt;The four headline numbers at the top of the dashboard animate up from zero on load using GSAP. Each card reads its target value from a &lt;code&gt;data-val&lt;/code&gt; attribute, so adding a new metric is as simple as dropping in a new card element.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;function counter(el, to, sfx, dur = 1.5) {
  const float = String(to).includes(&apos;.&apos;);
  gsap.to({ v: 0 }, {
    v: to, duration: dur, ease: &apos;power3.out&apos;,
    onUpdate: function () {
      el.textContent =
        (float
          ? this.targets()[0].v.toFixed(1)
          : Math.round(this.targets()[0].v)) + sfx;
    }
  });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Scroll-triggered versions of the same function fire when each chart card enters the viewport, so numbers only count up once they&apos;re actually visible.&lt;/p&gt;
&lt;h2&gt;Bar Charts and the Scatter Plot&lt;/h2&gt;
&lt;p&gt;The bar charts are built from plain HTML and CSS — no charting library needed. Each bar starts at &lt;code&gt;width: 0&lt;/code&gt; and expands to its target percentage via a CSS transition triggered by a &lt;code&gt;ScrollTrigger&lt;/code&gt; callback. Bars are coloured red when prevalence exceeds 50% and slate otherwise, giving an immediate at-a-glance risk signal.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;function mkBars(id, items) {
  const el = document.getElementById(id);
  el.innerHTML = items.map(d =&amp;gt; `
    &amp;lt;div class=&quot;group space-y-1.5 cursor-default&quot;&amp;gt;
      &amp;lt;div class=&quot;flex justify-between text-[11px] font-bold uppercase tracking-wider text-slate-500&quot;&amp;gt;
        &amp;lt;span&amp;gt;${d.label}&amp;lt;/span&amp;gt;
        &amp;lt;span class=&quot;font-mono&quot;&amp;gt;${d.rate}% (${d.d}/${d.t})&amp;lt;/span&amp;gt;
      &amp;lt;/div&amp;gt;
      &amp;lt;div class=&quot;h-3 bg-slate-100 rounded-full overflow-hidden&quot;&amp;gt;
        &amp;lt;div class=&quot;h-full rounded-full transition-all duration-1000&quot;
             style=&quot;width:0; background:${d.rate &amp;gt; 50 ? &apos;#dc2626&apos; : &apos;#64748b&apos;};&quot;
             data-w=&quot;${d.rate}&quot;&amp;gt;
        &amp;lt;/div&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;`).join(&apos;&apos;);

  ScrollTrigger.create({ trigger: el, start: &apos;top 95%&apos;, once: true,
    onEnter: () =&amp;gt; el.querySelectorAll(&apos;[data-w]&apos;).forEach((b, i) =&amp;gt;
      setTimeout(() =&amp;gt; b.style.width = b.dataset.w + &apos;%&apos;, i * 80))
  });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The scatter plot is raw SVG. Each patient is a &lt;code&gt;&amp;lt;circle&amp;gt;&lt;/code&gt; element positioned by age on the x-axis and maximum heart rate on the y-axis, coloured red for positive and blue for negative. Points animate in with a staggered GSAP fade and respond to hover with a tooltip showing the individual&apos;s age, heart rate, and diagnosis.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./dash2.png&quot; alt=&quot;Scatter plot — Age vs Max Heart Rate&quot; /&gt;&lt;/p&gt;
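&lt;p&gt;Positioning each circle is a pair of linear scales from data space to pixel space, roughly like this (plot dimensions are illustrative; the age and heart-rate ranges match the dataset):&lt;/p&gt;

```javascript
// Linear scale from a data domain to a pixel range: the mapping used to
// place each patient circle. Plot dimensions here are illustrative.
function scale(value, domainMin, domainMax, rangeMin, rangeMax) {
  const t = (value - domainMin) / (domainMax - domainMin);
  return rangeMin + t * (rangeMax - rangeMin);
}

// Age 29-77 maps to x; max heart rate 71-202 maps to y.
// The y range is flipped because SVG's y-axis grows downward.
function plotPoint(patient, width = 600, height = 300) {
  return {
    cx: scale(patient.age, 29, 77, 40, width - 20),
    cy: scale(patient.thalach, 71, 202, height - 30, 20),
  };
}
```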
&lt;h2&gt;The AI Insights Panel&lt;/h2&gt;
&lt;p&gt;The most interesting part of the dashboard is the AI panel, which calls the &lt;strong&gt;Claude API&lt;/strong&gt; to generate a clinical summary of the dataset on load. The response streams back character-by-character using a typewriter effect, giving it the feel of a live analysis rather than static text.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;async function runAI() {
  const el = document.getElementById(&apos;ai-text&apos;);
  el.innerHTML = `&amp;lt;span class=&quot;text-slate-400&quot;&amp;gt;Crunching 303 patient records...&amp;lt;span class=&quot;cursor&quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;`;

  // `raw` holds the Claude-generated summary text (fetched before this runs);
  // it is typed out character by character below
  setTimeout(() =&amp;gt; {
    el.textContent = &apos;&apos;;
    el.classList.add(&apos;cursor&apos;);
    let i = 0;
    function type() {
      if (i &amp;lt; raw.length) {
        el.innerHTML += raw[i];
        i++;
        setTimeout(type, 6);
      } else {
        el.classList.remove(&apos;cursor&apos;);
      }
    }
    type();
  }, 800);
}

setTimeout(runAI, 1000);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Claude picks out three key findings from the data: the asymptomatic blind spot (75% positive rate among patients showing no symptoms), the early decline in maximum heart rate visible from age 45 onwards, and the 240 mg/dl serum cholesterol threshold above which 72% of positive cases cluster. A &lt;strong&gt;Refresh&lt;/strong&gt; button lets you re-run the analysis at any time.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./Ai.png&quot; alt=&quot;AI Insights panel — live typewriter output&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;Styling and Animation&lt;/h2&gt;
&lt;p&gt;The visual identity leans on a tight palette of off-white, slate, and crimson. A subtle paper texture is applied via an SVG &lt;code&gt;feTurbulence&lt;/code&gt; filter fixed to the viewport, giving the dashboard a tactile editorial feel without any image assets. Every card lifts slightly on hover through a shared &lt;code&gt;hover-card&lt;/code&gt; class.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;.hover-card {
  transition: transform 0.3s cubic-bezier(0.4, 0, 0.2, 1),
              box-shadow 0.3s cubic-bezier(0.4, 0, 0.2, 1);
}
.hover-card:hover {
  transform: translateY(-4px);
  box-shadow: 0 14px 28px -10px rgba(15, 23, 42, 0.12);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Fonts are served from Google Fonts: &lt;strong&gt;Playfair Display&lt;/strong&gt; for the large numerics and headings, &lt;strong&gt;Inter&lt;/strong&gt; for body copy, and &lt;strong&gt;JetBrains Mono&lt;/strong&gt; for the model tag and raw numbers in the charts.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Dataset source: &lt;a href=&quot;https://www.kaggle.com/datasets/cherngs/heart-disease-cleveland-uci&quot;&gt;UCI Cleveland Heart Disease · Kaggle&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Built by &lt;a href=&quot;mailto:hashi.warsame21@gmail.com&quot;&gt;Hashi Warsame&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
</content:encoded></item><item><title>How I built a Neural Network from Scratch in Rust</title><link>https://fuwari.vercel.app/posts/neural-net/</link><guid isPermaLink="true">https://fuwari.vercel.app/posts/neural-net/</guid><description>A step-by-step summary and video guide on demystifying AI by building a neural network.</description><pubDate>Sun, 09 Mar 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;AI often feels like magic, but beneath the surface, it&apos;s just a combination of mathematical foundations and code. In this post, we&apos;ll build a neural network from scratch in &lt;strong&gt;Rust&lt;/strong&gt; — no ML libraries, just math and well-structured code.&lt;/p&gt;
&lt;h2&gt;The Matrix Structure&lt;/h2&gt;
&lt;p&gt;To handle the math, we first define our &lt;code&gt;Matrix&lt;/code&gt; structure. We use a flat &lt;code&gt;Vec&lt;/code&gt; to represent the matrix data, which is more cache-friendly than a vector of vectors. From this base struct we implement the core operations we&apos;ll need throughout the network: dot products, addition, and subtraction.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pub struct Matrix {
    pub rows: usize,
    pub cols: usize,
    pub data: Vec&amp;lt;f64&amp;gt;,
}

impl Matrix {
    pub fn dot_multiply(&amp;amp;self, other: &amp;amp;Matrix) -&amp;gt; Matrix {
        assert_eq!(self.cols, other.rows, &quot;dimension mismatch&quot;);
        let mut data = vec![0.0; self.rows * other.cols];
        for i in 0..self.rows {
            for k in 0..self.cols {
                for j in 0..other.cols {
                    // Row-major flat indexing: element (r, c) lives at r * cols + c
                    data[i * other.cols + j] +=
                        self.data[i * self.cols + k] * other.data[k * other.cols + j];
                }
            }
        }
        Matrix { rows: self.rows, cols: other.cols, data }
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;The Network Architecture&lt;/h2&gt;
&lt;p&gt;Our &lt;code&gt;Network&lt;/code&gt; struct uses the &lt;code&gt;Matrix&lt;/code&gt; type to manage the weights and biases between layers. It also holds the learning rate and the chosen activation function. The &lt;code&gt;feed_forward&lt;/code&gt; method handles transforming input data through each layer in sequence, producing the final prediction.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pub struct Network {
    pub layers: Vec&amp;lt;usize&amp;gt;,
    pub weights: Vec&amp;lt;Matrix&amp;gt;,
    pub biases: Vec&amp;lt;Matrix&amp;gt;,
    pub activation: Activation,
    pub learning_rate: f64,
}

impl Network {
    pub fn feed_forward(&amp;amp;mut self, inputs: Matrix) -&amp;gt; Matrix {
        // Each layer computes activation(weights * input + bias).
        // `map` (element-wise apply) is assumed as a small Matrix helper.
        let mut current = inputs;
        for i in 0..self.layers.len() - 1 {
            current = self.weights[i]
                .dot_multiply(&amp;amp;current)
                .add(&amp;amp;self.biases[i])
                .map(self.activation.function);
        }
        current
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Backpropagation&lt;/h2&gt;
&lt;p&gt;The most critical part of learning is backpropagation. After a forward pass we compute the error between the prediction and the target, then propagate that error backwards through each layer. At each step we calculate the gradient and apply it to the weights and biases via gradient descent, nudging them in the direction that reduces the error.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pub fn back_propagate(&amp;amp;mut self, inputs: Matrix, targets: Matrix) {
    // Error is measured against the network&apos;s prediction, not the raw inputs,
    // so run a forward pass first
    let outputs = self.feed_forward(inputs);
    let errors = targets.subtract(&amp;amp;outputs);

    // Iterate through layers in reverse
    for i in (0..self.layers.len() - 1).rev() {
        // Compute this layer&apos;s gradients from the errors, then apply gradient descent
        self.weights[i] = self.weights[i].add(&amp;amp;gradients);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The key fields driving this process are the layer topology (e.g. &lt;code&gt;[2, 3, 1]&lt;/code&gt;), the weight matrices representing connection strengths, the bias vectors that shift each activation, and the activation function itself which introduces the non-linearity the network needs to learn complex patterns.&lt;/p&gt;
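&lt;p&gt;The shapes follow mechanically from the topology: one weight matrix and one bias vector per layer transition. A sketch of that derivation (the helper is illustrative, not code from the video):&lt;/p&gt;

```rust
// Sketch: deriving weight/bias shapes from a layer topology.
// For layers [2, 3, 1]: weights[0] is 3x2, weights[1] is 1x3;
// each bias is a column vector matching that layer's output size.
fn layer_shapes(layers: &[usize]) -> Vec<((usize, usize), (usize, usize))> {
    (0..layers.len() - 1)
        .map(|i| ((layers[i + 1], layers[i]), (layers[i + 1], 1)))
        .collect()
}
```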
&lt;h2&gt;Training the Model&lt;/h2&gt;
&lt;p&gt;Finally, we loop through our training data over several epochs. On each pass the network runs a forward prediction, measures its error, and backpropagates to improve. After 10,000 epochs the weights have converged and the network reliably predicts the correct outputs.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;fn main() {
    let mut network = Network::new(vec![2, 3, 1], Activation::SIGMOID, 0.5);

    // Train the network 10,000 times
    network.train(inputs, targets, 10000);

    println!(&quot;Training complete!&quot;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Final Result&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;./img4.png&quot; alt=&quot;Video screenshot — Training loop output and XOR results&quot; /&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Watch the full step-by-step video tutorial below:&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;YouTube&lt;/h2&gt;
&lt;p&gt;&lt;iframe width=&quot;100%&quot; height=&quot;468&quot; src=&quot;https://www.youtube.com/embed/DKbz9pNXVdE&quot; title=&quot;Neural Networks From Scratch in Rust&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;/p&gt;
</content:encoded></item><item><title>Building a Web Server from Scratch in Rust</title><link>https://fuwari.vercel.app/posts/server/</link><guid isPermaLink="true">https://fuwari.vercel.app/posts/server/</guid><description>A deep dive into how web servers work under the hood by building one in Rust.</description><pubDate>Mon, 08 Jul 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We often rely on frameworks like Actix and Rocket to build web applications, but do you know what is happening under the hood? In this post, we&apos;ll strip away the abstractions and build a functional web server from scratch in &lt;strong&gt;Rust&lt;/strong&gt; to understand the foundations of HTTP and TCP.&lt;/p&gt;
&lt;h2&gt;The Networking Stack (OSI Model)&lt;/h2&gt;
&lt;p&gt;To understand web servers, we first need to look at the &lt;strong&gt;OSI Model&lt;/strong&gt;, which breaks computer networking into seven layers. For a web server, the two most critical layers are the &lt;strong&gt;Transport Layer (Layer 4)&lt;/strong&gt;, which manages the actual delivery of data using protocols like TCP, and the &lt;strong&gt;Application Layer (Layer 7)&lt;/strong&gt;, where protocols like HTTP define how the client and server communicate.&lt;/p&gt;
&lt;h2&gt;Setting up a TCP Listener&lt;/h2&gt;
&lt;p&gt;The core of every web server is a &lt;strong&gt;TCP Listener&lt;/strong&gt;. We use Rust&apos;s standard library to bind to a port and listen for incoming connections. For each accepted connection, we spawn a new thread to handle it concurrently so the main loop remains free to accept the next client.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;use std::net::TcpListener;

fn main() {
    let listener = TcpListener::bind(&quot;127.0.0.1:8080&quot;).unwrap();

    for stream in listener.incoming() {
        match stream {
            Ok(stream) =&amp;gt; {
                std::thread::spawn(move || handle_client(stream));
            }
            Err(e) =&amp;gt; println!(&quot;Error: {}&quot;, e),
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Handling Client Requests&lt;/h2&gt;
&lt;p&gt;When a client connects, we read the incoming byte stream into a buffer and then parse it into a structured &lt;code&gt;Request&lt;/code&gt; type. This struct captures the HTTP method (e.g., GET, POST), the requested URI, the HTTP version, any headers, and an optional body. From there we can inspect the request and generate an appropriate &lt;code&gt;Response&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pub struct Request {
    pub method: HttpMethod,
    pub uri: String,
    pub version: String,
    pub headers: HashMap&amp;lt;String, String&amp;gt;,
    pub body: Option&amp;lt;String&amp;gt;,
}
&lt;/code&gt;&lt;/pre&gt;
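&lt;p&gt;The first step of that parsing is splitting the request line into its three parts. A minimal sketch with error handling reduced to Option (the full parser also walks the header lines and optional body):&lt;/p&gt;

```rust
// Minimal request-line parser: "GET /index.html HTTP/1.1" splits into
// (method, uri, version). A malformed line yields None instead of panicking.
fn parse_request_line(line: &str) -> Option<(String, String, String)> {
    let mut parts = line.split_whitespace();
    let method = parts.next()?.to_string();
    let uri = parts.next()?.to_string();
    let version = parts.next()?.to_string();
    Some((method, uri, version))
}
```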
&lt;h2&gt;Routing and Middleware&lt;/h2&gt;
&lt;p&gt;To make our server useful, we need a way to map specific URL paths to handler functions. We achieve this with a &lt;code&gt;HashMap&lt;/code&gt; of routes keyed by path and HTTP method. We can also define a &lt;code&gt;Middleware&lt;/code&gt; trait to intercept requests and responses — useful for cross-cutting concerns like logging or authentication without cluttering individual route handlers.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// FutureRequest / FutureResponse are boxed futures resolving to the
// (possibly modified) request or response
pub trait Middleware: Send + Sync {
    fn on_request(&amp;amp;self, request: Request) -&amp;gt; FutureRequest;
    fn on_response(&amp;amp;self, response: Response) -&amp;gt; FutureResponse;
}
&lt;/code&gt;&lt;/pre&gt;
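&lt;p&gt;The route table itself is just a HashMap keyed by method and path. A reduced sketch where handlers are plain functions returning a body string (real handlers take the full Request):&lt;/p&gt;

```rust
use std::collections::HashMap;

// Reduced router sketch: handlers here are plain functions returning a
// response body; the real router's handlers take a full Request.
type Handler = fn() -> String;

struct Router {
    routes: HashMap<(String, String), Handler>,
}

impl Router {
    fn new() -> Self {
        Router { routes: HashMap::new() }
    }

    fn route(&mut self, method: &str, path: &str, handler: Handler) {
        self.routes.insert((method.to_string(), path.to_string()), handler);
    }

    // Look up the handler for (method, path); fall back to a 404 body
    fn dispatch(&self, method: &str, path: &str) -> String {
        match self.routes.get(&(method.to_string(), path.to_string())) {
            Some(handler) => handler(),
            None => "404 Not Found".to_string(),
        }
    }
}
```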
&lt;h2&gt;Running the Server&lt;/h2&gt;
&lt;p&gt;Finally, we instantiate our server, bind it to an address, and register our routes. For handling multiple concurrent connections efficiently, we can swap raw threads for &lt;strong&gt;Tokio&lt;/strong&gt; — Rust&apos;s async runtime — which gives us non-blocking I/O without the overhead of spawning a thread per connection.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#[tokio::main]
async fn main() {
    let addr = SocketAddr::from(([127, 0, 0, 1], 8080));
    let server = ServerBuilder::new()
        .bind(addr)
        .route(&quot;/&quot;, HttpMethod::GET, hello_handler)
        .build();

    server.run().await.unwrap();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Watch the full step-by-step video tutorial below:&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;YouTube&lt;/h2&gt;
&lt;p&gt;&lt;iframe width=&quot;100%&quot; height=&quot;468&quot; src=&quot;https://www.youtube.com/embed/YOUR_VIDEO_ID&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;/p&gt;
</content:encoded></item></channel></rss>