<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Rust on 300.Watts</title><link>https://300watts.me/tags/rust/</link><description>Recent content in Rust on 300.Watts</description><generator>Hugo</generator><language>en</language><managingEditor>morristai01@gmail.com (Morris)</managingEditor><webMaster>morristai01@gmail.com (Morris)</webMaster><copyright>This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.</copyright><lastBuildDate>Sun, 10 Mar 2024 10:04:55 +0800</lastBuildDate><atom:link href="https://300watts.me/tags/rust/index.xml" rel="self" type="application/rss+xml"/><item><title>Async in Traits Just Save Us</title><link>https://300watts.me/posts/async-in-traits-just-save-us/</link><pubDate>Sun, 10 Mar 2024 10:04:55 +0800</pubDate><author>morristai01@gmail.com (Morris)</author><guid>https://300watts.me/posts/async-in-traits-just-save-us/</guid><description>&lt;h2 id="the-challenge-of-implementing-the-future-trait-for-custom-types" class="headerLink"&gt;
 &lt;a href="#the-challenge-of-implementing-the-future-trait-for-custom-types" class="header-mark"&gt;&lt;/a&gt;The Challenge of Implementing the Future Trait for Custom Types&lt;/h2&gt;&lt;p&gt;Developers who venture into crafting their own &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt; implementations in Rust may encounter the intricate task of implementing the &lt;code&gt;Future&lt;/code&gt; trait for their custom types. Rust&amp;rsquo;s approach to &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt; is nuanced, offering a stark contrast to languages like Go, which employ preemptive scheduling. Instead, Rust embraces lazy evaluation and cooperative scheduling, allowing developers to meticulously control the yield points to the executor.
This level of control, however, introduces complexity in implementing the &lt;code&gt;Future&lt;/code&gt; trait for custom types. The intricacies arise because &lt;code&gt;.await&lt;/code&gt; can&amp;rsquo;t be invoked within a &lt;strong&gt;non-async&lt;/strong&gt; function, necessitating the development of a state machine (or similar) for these custom types. This endeavor can be laborious, error-prone, and difficult to maintain, and may prompt developers to opt for &lt;code&gt;BoxFuture&amp;lt;T&amp;gt;&lt;/code&gt;, a choice that can compromise performance.&lt;/p&gt;</description></item><item><title>Rust Profiling Essentials with perf</title><link>https://300watts.me/posts/rust-profiling-essentials-with-perf/</link><pubDate>Mon, 09 Oct 2023 10:04:55 +0800</pubDate><author>morristai01@gmail.com (Morris)</author><guid>https://300watts.me/posts/rust-profiling-essentials-with-perf/</guid><description>&lt;h2 id="what-is-profiling" class="headerLink"&gt;
 &lt;a href="#what-is-profiling" class="header-mark"&gt;&lt;/a&gt;What is profiling?&lt;/h2&gt;&lt;blockquote&gt;
 &lt;p&gt;A: &lt;strong&gt;Sampling&lt;/strong&gt; the program &lt;strong&gt;at specific times&lt;/strong&gt; and performing statistical analysis on the samples.&lt;/p&gt;

&lt;/blockquote&gt;&lt;p&gt;It can be one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reading backtraces&lt;/strong&gt; of the program on every 1000th CPU cycle, and representing the result as a flamegraph.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reading backtraces&lt;/strong&gt; of the program on every 1000th cache miss, and representing the result as a flamegraph.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reading backtraces&lt;/strong&gt; of the program on every 10th memory allocation, and representing the result as a flamegraph.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Getting the return address&lt;/strong&gt; of the program on every 10th memory allocation, and showing counts for every line.&lt;/li&gt;
&lt;/ul&gt;
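The allocation-sampling idea in the last two bullets can be sketched in Rust with a custom global allocator. In this minimal, hypothetical example (names like &lt;code&gt;SamplingAlloc&lt;/code&gt; and &lt;code&gt;SAMPLE_EVERY&lt;/code&gt; are made up), every 10th allocation is counted as a sample point; a real memory profiler would capture a backtrace there, guarded against re-entrant allocation:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical sampling allocator: counts allocations and records every
// SAMPLE_EVERY-th one. A real profiler would capture a backtrace (with a
// re-entrancy guard) at each sample point instead of just counting.
struct SamplingAlloc;

static COUNT: AtomicUsize = AtomicUsize::new(0);
static SAMPLED: AtomicUsize = AtomicUsize::new(0);
const SAMPLE_EVERY: usize = 10;

unsafe impl GlobalAlloc for SamplingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let n = COUNT.fetch_add(1, Ordering::Relaxed) + 1;
        if n % SAMPLE_EVERY == 0 {
            SAMPLED.fetch_add(1, Ordering::Relaxed); // sample point
        }
        System.alloc(layout) // delegate the actual allocation
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: SamplingAlloc = SamplingAlloc;

fn main() {
    for i in 0..100usize {
        let v = vec![i; 8]; // one heap allocation per iteration
        drop(v);
    }
    let count = COUNT.load(Ordering::Relaxed);
    let sampled = SAMPLED.load(Ordering::Relaxed);
    println!("sampled {sampled} of {count} allocations");
}
```

The total printed will exceed 100 because the runtime and &lt;code&gt;println!&lt;/code&gt; also allocate; sampling every Nth event rather than every event is exactly what keeps the overhead of this approach low.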
&lt;h2 id="how-to-trigger-a-sample" class="headerLink"&gt;
 &lt;a href="#how-to-trigger-a-sample" class="header-mark"&gt;&lt;/a&gt;How to trigger a sample?&lt;/h2&gt;&lt;p&gt;Kind of triggers:&lt;/p&gt;</description></item><item><title>Serverless with Rust and Protocol Buffers</title><link>https://300watts.me/posts/serverless-with-rust-and-protocol-buffers/</link><pubDate>Wed, 08 Mar 2023 19:43:00 +0000</pubDate><author>morristai01@gmail.com (Morris)</author><guid>https://300watts.me/posts/serverless-with-rust-and-protocol-buffers/</guid><description>&lt;link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.16.2/dist/katex.min.css" integrity="sha384-bYdxxUwYipFNohQlHt0bjN/LCpueqWz13HufFEV1SUatKs1cm4L6fFgCi1jT643X" crossorigin="anonymous"&gt;
&lt;p&gt;I&amp;rsquo;ve recently been rewriting one of our small services in Rust for fun. One reason is to see whether Rust, as a systems programming language, is ready for cloud development in 2023. Another is that I wonder how it compares to Python or Java for mainstream cloud and data-computing workloads.&lt;/p&gt;
&lt;blockquote&gt;
 &lt;p&gt;💡 There is a lot of discussion over which language to use for a serverless service. In my view, dynamic languages like Python lack compile-time checks, which can lead to more runtime errors than statically typed languages like Rust or Java.&lt;br&gt;
Another reason is that Python and Java need a runtime (CPython, the JVM) to execute your code, so they can&amp;rsquo;t run natively the way compiled Rust does, though I&amp;rsquo;m not sure how large the performance gap is. Furthermore, Rust doesn&amp;rsquo;t use garbage collection, so a program&amp;rsquo;s heap size should ideally fluctuate less at runtime than that of a garbage-collected language. That means the infrastructure controllers underlying Lambda or Cloud Functions should be able to handle invocation and scaling more easily.&lt;br&gt;
In terms of language paradigms, my opinion is that, compared to traditional OOP, the modern &lt;a href="https://en.wikipedia.org/wiki/ML_%2528programming_language%2529" target="_blank" rel="noopener noreferrer"&gt;ML (meta-language) family&lt;/a&gt; is better suited to designing scalable cloud services.&lt;/p&gt;</description></item></channel></rss>