<h1>Devon Burriss' Blog</h1>
<h1>You ain't gonna need YAGNI</h1>
<p>2023-01-07 · Devon Burriss · https://devonburriss.me/yagnyagni/</p>
<p>With some topics, you reliably get experienced software developers arguing on both sides of said topic. When this occurs frequently for a particular topic, I believe it is because both sides are simultaneously right and wrong. This idea can be generalized, but let's take the principle of YAGNI.
How can both sides be right and wrong? Well, because it depends.</p>
<!--more-->
<h2>The story so far</h2>
<img src="../img/posts/2023/76mwsg.jpg" alt="Personally, I find decoupling over-rated. Straightforward makes things easier to change and things are rarely truly decoupled." class="img-rounded pull-left" style="margin-right: 1em;">
<p>When one developer says YAGNI, it is because she has been in a situation where a solution has been over-engineered, resulting in a complicated mess that was difficult to maintain. On the other hand, the developer pushing against YAGNI is probably trying to build some flexibility into the system. He has been in a situation before where the business comes with some last-minute "small change" that completely invalidates the current design. This resulted in a massive amount of work that threw the delivery date out or pressured him into crunching over a weekend.</p>
<p>At this point, I would like to state that I err toward YAGNI. It is a principle though, not a law of nature. It is meant to be a guide toward better outcomes. In my experience, the systems that are easiest to change are those that are easy to understand. The YAGNI principle pushes toward short delivery cycles, where delivering earlier and getting feedback is more valuable than extra flexibility in the system that may never be needed. The underlying assumption here is that we cannot know the future, so get value out now.</p>
<h2>These are not the contexts you are looking for...</h2>
<p>Now let me switch gears a bit and put my architecture hat on. Architecture is about enabling business capabilities via software. It answers questions like "How much effort is it to add feature X?" or "Can we handle 10x the customers next month?".</p>
<p>These are questions about a future state, so YAGNI is not the correct way to think about architectural questions.
Instead, in system architecture, we are making tradeoffs between complexity and future possibilities. These design choices are bets on what is likely to stay the same and what might change. A good design will make tradeoffs that allow the parts that are likely to evolve to do so elegantly.</p>
<p>So why am I talking about architecture in a YAGNI post? When developers are arguing over the application of YAGNI, they have different hats on. The easiest way to come together is to identify what the context of the discussion is. Are they talking about system architecture? Then we are in the realm of trade-offs. Are we talking about a component in code? Then YAGNI is probably the safer bet, since changes are more frequent and unpredictable, and the cost of a change is much smaller than that of a large architectural change.</p>
<h2>Conclusion</h2>
<p>Striving for a straightforward system architecture is a worthy goal. Sometimes sacrifices are made to provide future opportunities. At the level of code design, YAGNI becomes a far more compelling principle.
The above is illustrative of how people can disagree because they are talking about something in different contexts. So the next time you are involved in a disagreement, check your interlocutor's context. You might find you both actually agree when you get specific.</p>
<h1>Telemetry tips</h1>
<p>2022-06-15 · Devon Burriss · https://devonburriss.me/telemetry-tips/</p>
<p>When getting started with a new telemetry platform, you may not know what conventions you need to set and follow. Even if you do, how do you get the rest of the team to follow them too? In this post I will give some tips for making sure the data hitting your telemetry tool is clean and organised so you can make the most of it, while not compromising the readability of your application code.</p>
<!--more-->
<h2>Define metric and tagging conventions early</h2>
<p>Anyone who has used a telemetry tool like Datadog in an organisation where the conventions are not clear will recognise the problem this is trying to solve. As teams start sending data, searching for anything becomes difficult as you sort through metrics ranging from <code>AcmeCoolServiceStartedUp</code> to <code>record_updated</code>. This can quickly get out of hand and make searching for metrics quite frustrating. Since we want to make working with telemetry as easy as possible, I suggest you tackle this as soon as possible.</p>
<p>Here are some of my recommendations:</p>
<ul>
<li>Use lowercase <code>snake_case</code> for metrics and lowercase <code>kebab-case</code> for tags. Reasons: avoids case-sensitive search issues and improves readability.</li>
<li>Consider namespacing your metrics if you foresee multiple complex domains with lots of unique metrics eg. <code>some_domain.some_app_some_metric</code></li>
<li>Make use of key-value pairs in tags eg. <code>env:prod</code></li>
<li>Be clear on reserved tags eg. <code>env:prod</code> and <code>service:app-name</code></li>
<li>Use <code>result:success</code> and <code>result:fail</code> tags on the same metric rather than 2 separate metrics</li>
</ul>
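<p>Applied together, the conventions above might look like this when sending a custom metric. This sketch assumes the Datadog DogStatsd client (used later in this post); the metric and tag names are made up for illustration:</p>
<pre><code class="language-csharp">// Illustrative only: metric and tag names are invented for this example.
// snake_case metric name namespaced by domain, key:value tags in kebab-case.
DogStatsd.Increment(
    "checkout.order_placed",
    tags: new[] { "env:prod", "service:checkout-api", "result:success" });
</code></pre>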
<p>When running in the cloud, like many are, an important part of tagging that is easy to overlook is your infrastructure tagging. A system like <a href="https://docs.datadoghq.com/">Datadog</a> will pull tags from your cloud infrastructure resources and attach them to the metrics sent from that resource. This is why it is important to match your metric tagging and cloud infrastructure tagging conventions.</p>
<p>Infrastructure tags to consider:</p>
<ul>
<li>The environment as an <code>env</code> tag</li>
<li>The <code>service</code> sending telemetry</li>
<li>The <code>version</code> of the deployed application/service</li>
<li>You may want one or more of the following: <code>cost-centre</code> / <code>department</code> / <code>domain</code></li>
<li>The team to contact for support with the resource <code>team:team-name</code></li>
<li>The SLA for the application</li>
<li>The criticality of the application for the health of your system ranging from <code>criticality:very-high</code> to <code>criticality:low</code></li>
<li>The tool used to create the resource eg. <code>tool:farmer</code></li>
<li>Dates like <code>created-at:2022-05-22</code> and <code>updated-at:2022-05-30</code></li>
</ul>
<p>Telemetry tools often require some of these to be useful. <code>env</code>, <code>service</code> and <code>version</code> are part of <a href="https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging">unified service tagging</a> for Datadog. Check what the equivalents are for your telemetry tool.</p>
<p>This is not an exhaustive list but will hopefully give you a starting point.</p>
<h2>Build up tooling to help with standards</h2>
<p>Tooling for helping developers fall into the pit of success with the configuration and tagging of metrics can go a long way.</p>
<p>One such example is a thin wrapper around application setup that enforces the setup and sending of a service name, environment, etc. Often these things can be handled by the host environment but if not it is worth the small effort.</p>
<p>Rather than having each new project require configuring the application telemetry just right, make some sort of template, snippet, or package available to help developers fall into the pit of success.</p>
<pre><code class="language-csharp">// ASP.NET Program.cs
var builder = WebApplication.CreateBuilder(args);
builder.AddTelemetry();
// other setup...
</code></pre>
<p>The example above relies on environment variables being present, but your solution could require that they are passed in. The important part is making it easy to do right and providing clear guidance if something is wrong. For example, the <code>AddTelemetry</code> method will explicitly check that all expected environment variables are present and throw an error with a clear list of all missing environment variable names.</p>
<pre><code class="language-csharp">missing.EnvVarCheck("DD_API_KEY", "Set `DD_API_KEY` with your Datadog API key.");
missing.EnvVarCheck("DD_ENV", "Set `DD_ENV` with the name of the current environment eg. prod");
missing.EnvVarCheck("DD_SERVICE", "Set `DD_SERVICE` with the name of this service.");
// etc.
</code></pre>
<p>The important takeaway is to make it easy to set up correctly, and to give clear feedback when something is wrong.</p>
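<p>To make this concrete, here is a minimal sketch of what such an <code>AddTelemetry</code> extension method could look like. This is a hypothetical implementation, not a real library API, and the required variable names simply mirror the checks above:</p>
<pre><code class="language-csharp">// Hypothetical sketch: fail fast at startup if telemetry config is missing.
public static class TelemetrySetup
{
    private static readonly (string Name, string Help)[] Required =
    {
        ("DD_API_KEY", "Set `DD_API_KEY` with your Datadog API key."),
        ("DD_ENV", "Set `DD_ENV` with the name of the current environment eg. prod"),
        ("DD_SERVICE", "Set `DD_SERVICE` with the name of this service."),
    };

    public static WebApplicationBuilder AddTelemetry(this WebApplicationBuilder builder)
    {
        var missing = Required
            .Where(r => string.IsNullOrEmpty(Environment.GetEnvironmentVariable(r.Name)))
            .ToList();
        if (missing.Any())
            throw new InvalidOperationException(
                "Missing telemetry environment variables:\n" +
                string.Join("\n", missing.Select(m => $"{m.Name}: {m.Help}")));
        // register metrics/tracing/logging with the host here...
        return builder;
    }
}
</code></pre>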
<p>The next helper that you can introduce is some way to make metric names discoverable.</p>
<pre><code class="language-csharp">public static class MetricName
{
public const string TodoListCreated = "observability_project.todo.list_created";
public const string TodoListCount = "observability_project.todo.list_count";
}
</code></pre>
<p>This may seem like a chore but it does have some advantages. You now have a repeatable way for developers to find a metric name via intellisense. You have a single place to change a metric name if you really need to. And an often overlooked benefit is that you have an overview of all metrics that could be sent, and an IDE-enabled way to find where each one is used.</p>
<p>The same arguments can be made for tags. Remember that tag values should always come from a fixed set, and so capturing them in code should be possible. If you are using an almost unconstrained range of values, like a database identifier, as a tag, expect a large bill from your telemetry provider for excessive indexes. Having tags easily discoverable also means you don't end up with multiple tags used for the same thing eg. <code>result:fail</code>, <code>result:failure</code>, and <code>result:error</code>.</p>
<pre><code class="language-csharp">public static class Tag
{
public const string Success = "result:success";
public const string Failure = "result:failure";
}
</code></pre>
<p>If you decide not to go with a static list like above, at least introduce some kind of check that helps point out poorly conforming metrics and tags.</p>
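<p>Such a check could be a simple guard in whatever wrapper you use to send metrics. This is a hypothetical sketch; the regexes encode the snake_case and kebab-case conventions from earlier:</p>
<pre><code class="language-csharp">using System.Text.RegularExpressions;

// Hypothetical guard: reject metric/tag names that break the conventions.
public static class TelemetryConventions
{
    private static readonly Regex MetricName =
        new(@"^[a-z][a-z0-9_]*(\.[a-z][a-z0-9_]*)*$");  // lowercase snake_case, dot-namespaced
    private static readonly Regex TagPair =
        new(@"^[a-z][a-z0-9-]*:[a-z0-9][a-z0-9._-]*$"); // kebab-case key:value

    public static void Validate(string metric, params string[] tags)
    {
        if (!MetricName.IsMatch(metric))
            throw new ArgumentException($"Metric '{metric}' does not follow the snake_case convention.");
        foreach (var tag in tags)
            if (!TagPair.IsMatch(tag))
                throw new ArgumentException($"Tag '{tag}' is not a kebab-case key:value pair.");
    }
}
</code></pre>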
<h2>Enable metric and log correlation</h2>
<p>The true power of modern telemetry solutions is in the correlation of traces and logs, through identifiers that link parent and child processes, even across network boundaries. Hopefully most of you will have seen this in action already, but if not, it unlocks another level of observability in your applications.</p>
<p><img src="../img/posts/2022/2022-06-14-21-44-08.png" alt="APM with linked logs" /></p>
<p>Above you can see we not only have a nice trace representing our request, we also have all the logs linked to said request through the trace. It can often take some work to make sure these trace and span identifiers flow through your system, but when they do it is magic.</p>
<p>For many stacks these traces are injected into HTTP headers and so will carry across network boundaries. Sometimes you need to <a href="https://docs.datadoghq.com/tracing/connect_logs_and_traces/dotnet/?tab=serilog">configure them manually</a>, and depending on your tooling and stack it may be as simple as installing a package. For a message queue, you will probably need to do some manual work to propagate a trace.</p>
<h2>Don't mix telemetry code and application code</h2>
<p>Instead of littering your code with random logs and metric pushes, I suggest starting with "what happened?". Once you have a call defining what happened, you decide what telemetry to send internally. This is better explained with an example:</p>
<pre><code class="language-csharp">DogStatsd.Set("observability_project.todo.list_count", dataStore.Count);
if (dataStore.Count == 0)
{
logger.LogInformation("TODO lists requested but none found.");
return (new List<TodoList>());
}
return dataStore.Values.ToList();
</code></pre>
<p>In the snippet above, we have metric and logging code scattered around our application code. This can be refactored into telemetry calls that say WHAT happened, with the metric sending and logging placed inside those methods.</p>
<pre><code class="language-csharp">if (dataStore.Count == 0)
{
telemetryEvents.NoListsReturned();
return (new List<TodoList>());
}
telemetryEvents.ListsAvailable(dataStore.Count);
return dataStore.Values.ToList();
</code></pre>
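<p>A sketch of what that <code>telemetryEvents</code> dependency might look like, bundling the log and metric for each named event. The class and method names are illustrative, and the <code>DogStatsd</code> calls assume the Datadog StatsD client used earlier:</p>
<pre><code class="language-csharp">// Illustrative sketch: one method per "thing that happened",
// hiding the log + metric details from the application code.
public class TelemetryEvents
{
    private readonly ILogger logger;
    public TelemetryEvents(ILogger logger) => this.logger = logger;

    public void NoListsReturned()
    {
        logger.LogInformation("TODO lists requested but none found.");
        DogStatsd.Set("observability_project.todo.list_count", 0);
    }

    public void ListsAvailable(int count)
    {
        DogStatsd.Set("observability_project.todo.list_count", count);
    }
}
</code></pre>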
<p>A complaint I have heard a few times with this approach is that logging and metrics are not the same so they shouldn't be bundled together this way. There are a few arguments to make here but instead I will ask a few questions. Is the code clearer because of this change? Will the core application logic change less frequently for unrelated reasons like adding a log?</p>
<p>If you are interested, an old colleague, <a href="https://www.erikheemskerk.nl/meaningful-logging-and-metrics/">Erik Heemskerk wrote this up in more detail</a>.</p>
<h2>Instrument where the action happens</h2>
<p>This is more a rule of thumb (no Sopranos here so no fingers will be broken if you ignore it). My suggestion is to try and keep the sending of telemetry to the boundaries of your application. Place them where calls come into your application and where calls go out to databases or other network calls.</p>
<p>What you want to try to do is keep them out of your core domain calculations. Sometimes, if your domain is complex enough, it may be worth the odd call that logs some information. Mostly though, metrics are about something that happened, and it didn't really happen until your application interacts with the outside world.</p>
<h2>Conclusion</h2>
<p>I touched on a few ways you can set a team up for success when using a telemetry tool. There are plenty of things to learn and maybe I will do more posts on the subject. Good luck on your telemetry journey. Feel free to hit me up on <a href="https://twitter.com/DevonBurriss">Twitter</a> with your tips.</p>
<h2>Resources</h2>
<ul>
<li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-opentelemetry-integration/">A great series on OpenTelemetry</a></li>
<li><a href="https://docs.datadoghq.com/getting_started/tagging/">Datadog tagging</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging">Azure naming and tagging</a></li>
<li><a href="https://opentelemetry.io/docs/reference/specification/metrics/semantic_conventions/">Open metrics</a></li>
<li><a href="https://opentelemetry.io/docs/reference/specification/common/attribute-naming/">OpenTelemetry attribute</a></li>
</ul>
<h1>Choosing a telemetry platform</h1>
<p>2022-06-06 · Devon Burriss · https://devonburriss.me/choosing-a-telemetry-platform/</p>
<p>Recently we decided to make the switch from Azure Application Insights as our primary telemetry monitoring tool to <a href="https://docs.datadoghq.com/">Datadog</a>. I wanted to drop a few thoughts on why this was a good choice, so anyone else looking to make this decision can take a few more aspects into consideration.</p>
<!--more-->
<p>When choosing a new tool, it is easy to get caught up in the technical requirements. Don't get me wrong, these are VERY important and it is important to do your homework. There is nothing worse than choosing a tool and only noticing afterward that it has a huge gap in functionality.</p>
<p>For us some of the broad technical requirements were:</p>
<ul>
<li>Good integration with Azure</li>
<li>Web App Service monitoring</li>
<li>MS SQL monitoring</li>
<li>Support for OpenTelemetry tracing</li>
<li>Integrated logging and custom metrics</li>
<li>Security alerting</li>
<li>Alerting and integrations ie. Slack, PagerDuty, etc.</li>
</ul>
<p>The thing is, we were using Application Insights, and it does tick all of these boxes. So why change?</p>
<p>I can summarize it with this statement.</p>
<blockquote>
<p>A tool is useless if no-one wants to use it!</p>
</blockquote>
<p>Let me phrase it another way. To get adoption of any tool or practice (monitoring is both), the best thing you can do is make it easy and immediately beneficial.</p>
<p>I have been at at least 3 companies where Datadog has been introduced. What I have noticed is that before its introduction, teams do not often build dashboards, and so do not use them. Why? Well, in most tools, dashboards are complicated to make, don't look great, and often don't have all the needed data. To be fair, not all of this is because it cannot be done. It doesn't help if the tool always feels like it is getting in the way.</p>
<h2>A dashboard challenge</h2>
<p>I decided to create 2 similar dashboards in both Datadog and Application Insights. The idea was to create a basic application dashboard that gave some high level overview and then a dive into some health metrics.</p>
<h3>Datadog Dashboard</h3>
<p>The following dashboard took me 10 minutes to design and build with no prior knowledge of what I wanted on the board, or what metrics I would use.</p>
<p><img src="../img/posts/2022/2022-06-06-13-21-12.png" alt="Datadog dashboard" /></p>
<h3>Application Insights Dashboard</h3>
<p>The Application Insights dashboard took me just over 15 minutes to create, with the idea of following the same or similar design used for the Datadog one.</p>
<p><img src="../img/posts/2022/2022-06-06-13-24-31.png" alt="Application Insights dashboard" /></p>
<h2>Alerts</h2>
<p>Alerts are a similar story to dashboards. With Datadog it is a single screen that is clearly arranged, with all available metrics across all environments at your fingertips.</p>
<p><img src="../img/posts/2022/2022-06-06-16-51-05.png" alt="Datadog alerting" /></p>
<p>In Application Insights it is this weird process of refining the scope for the metrics you want to alert on.</p>
<p><img src="../img/posts/2022/2022-06-06-16-43-08.png" alt="Application Insights alert" /></p>
<p>And this refining, in the hope of finding your metric later among the choices, continues when setting up conditions.</p>
<p><img src="../img/posts/2022/2022-06-06-16-44-33.png" alt="Select a signal" /></p>
<p>Once you get to it, setting up the condition is fairly intuitive.</p>
<p><img src="../img/posts/2022/2022-06-06-16-46-23.png" alt="AI condition" /></p>
<h3>Critique of the experience</h3>
<p>Datadog allows you to explore the metrics data and resulting graph or alert in real time. This makes for an interactive experience that allows you to learn what is possible while experimenting with different types of visualizations and queries. It also enables the quick creation of graphs and easy tweaking of data for better insights.</p>
<p><img src="../img/posts/2022/2022-06-06-16-57-27.png" alt="Easy search in Datadog" /></p>
<p>In contrast, Application Insights really makes you decide on your design upfront. A change usually means deleting a graph and choosing a new tile type or alert. Editing a graph requires changing between different modes and saving in a way that is really unintuitive. Lastly, and most costly, instead of just searching for metrics you have to choose upfront whether you want metrics from a service, database, AI instance, etc. before being able to explore what metrics are available. Each time you need to pick a resource you have to drill down through the subscription/resource group/resource layers in a tedious and time-consuming way.</p>
<p>Alerts follow the same trend.</p>
<h2>Conclusion</h2>
<p>I am in no way affiliated with Datadog. I am clearly a fan of the product though. I have seen it succeed many times in raising the level of monitoring in a company. I think this is in no small part due to its excellent UI and the way it lets a user explore what is possible. One great thing not mentioned is that the types of products available in the Datadog suite is ever growing and the integration that they have with each other is great. This does also bring up one point to keep in mind. With Datadog becoming a 1 stop shop for metrics, APM, logging, security, etc. it can be overwhelming. I suggest starting with one or two and expanding slowly. So do you find yourself in the position where you wish you had a better handle on not only the errors in your system but also what normal behaviour looks like? Maybe you need to ask whether your tooling is holding you back...</p>https://devonburriss.me/tools-for-arch-docs/Tools for architecture documentation2022-05-24T00:00:00+00:00Devon Burrisshttps://devonburriss.me/tools-for-arch-docs/<p>Keeping documentation up to date can be difficult, and an extra barrier can be if you need extra tools setup "just so" to contribute to the docs. In this post I will give a quick run-through of setting up a <strong><a href="https://code.visualstudio.com/docs/remote/create-dev-container">devcontainer</a></strong> to help with great markdown editing, PlantUML, C4, and Mermaid diagrams. Another part of the documentation is the use of Architecture Decision Records, which will also be supported by the <strong>devcontainer</strong>.</p>
<!--more-->
<p>I recently set up a devcontainer for our "Patterns and Practices" repository at work. This makes it easy for a developer to jump in and contribute without needing to install the required tools.</p>
<p>In this post I won't be diving into the details of setting up a devcontainer. Instead I will show you what the container setup gives you and then point you toward where you can get it. I will call out the parts of the setup as I demo their results.</p>
<p>This setup uses the following tools:</p>
<ul>
<li><a href="https://code.visualstudio.com/docs">VS Code</a></li>
<li><a href="https://graphviz.org/">Graphviz</a></li>
<li><a href="https://plantuml.com">PlantUML</a></li>
<li><a href="https://c4model.com/">C4</a></li>
<li><a href="https://mermaid-js.github.io/">Mermaid</a></li>
<li><a href="https://github.com/npryce/adr-tools/blob/master/INSTALL.md">ADR Tool</a></li>
</ul>
<p>This will not be a thorough examination of each tool, although I will at least motivate why they are on the list.</p>
<h2>Local setup</h2>
<p>I will be starting with a clean VS Code setup to show that the devcontainer is doing all the work.</p>
<p><img src="../img/posts/2022/2022-05-23-21-01-51.png" alt="Clean VS Code setup" /></p>
<p>So the first thing we need to do is make sure that your VS Code is ready to run a dev container:</p>
<ul>
<li>Make sure you have <a href="https://www.docker.com/">Docker</a> installed</li>
<li>Install the Remote - Containers extension</li>
</ul>
<p><img src="../img/posts/2022/2022-05-24-07-37-10.png" alt="Remote Containers extension" /></p>
<p>That's it! Your VS Code is now ready to run a devcontainer.</p>
<h2>Getting the devcontainer</h2>
<p>The example repository can be found at <a href="https://github.com/dburriss/tools-for-arch-docs-example">https://github.com/dburriss/tools-for-arch-docs-example</a>. Either clone this or copy the <em>.vscode</em> and <em>.devcontainer</em> folders into the folder where you will be storing your markdown documentation.</p>
<p>It should prompt you to open in the container, but if not, bring up the command palette and choose "Reopen in Container".</p>
<p><img src="../img/posts/2022/2022-05-24-20-45-35.png" alt="Reopen in Container" /></p>
<p>Next let's look at the various diagrams available to us.</p>
<h2>Graphviz</h2>
<p><a href="https://graphviz.org/">Graphviz</a> is a fairly low-level drawing language (compared to other diagrams shown later). It uses .dot files to define diagrams.</p>
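<p>For reference, a minimal .dot file looks like this (the nodes and labels are made up for illustration):</p>
<pre><code class="language-dot">// example.dot - render with: dot -Tpng example.dot -o example.png
digraph Example {
    rankdir=LR;
    web [label="Web App"];
    db  [label="Database"];
    web -> db [label="reads/writes"];
}
</code></pre>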
<p>Hit <em>Ctrl-Shift-v</em> to see a preview.</p>
<p><img src="../img/posts/2022/2022-05-24-21-54-54.png" alt="Graphviz image" /></p>
<p>Graphviz is really powerful and flexible, which is why PlantUML is actually built on top of it, and why it is included in this list. Personally, I have not used it much for architecture drawings.</p>
<p>Since it is a command-line tool, it can be useful for generating diagrams dynamically.</p>
<p><img src="../img/posts/2022/2022-05-24-21-59-46.png" alt="dot command" /></p>
<h2>PlantUML</h2>
<p>The next diagram we will try out will be a <a href="https://plantuml.com">PlantUML</a> diagram. PlantUML has been around for a while and so supports a wide variety of diagram types.</p>
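<p>As a taste of the syntax, a minimal PlantUML sequence diagram looks like this (the participants are made up for illustration):</p>
<pre><code class="language-plantuml">@startuml
actor User
participant "Web App" as Web
database "DB" as DB
User -> Web : request
Web -> DB : query
DB --> Web : rows
Web --> User : response
@enduml
</code></pre>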
<p>Pressing <em>Alt-d</em> will bring up the render of the PlantUML diagram.</p>
<p><img src="../img/posts/2022/2022-05-24-20-35-31.png" alt="Project structure in PlantUML" /></p>
<p>Pulling up the command palette with <em>ctrl-p</em> then <em>></em> allows you to export.</p>
<p><img src="../img/posts/2022/2022-05-25-19-49-54.png" alt="export command" /></p>
<p>It is good to know that you can control the expected source and output folders in the <em>.vscode/settings.json</em> file:</p>
<pre><code class="language-json">"plantuml.diagramsRoot": "diagrams/src",
"plantuml.exportOutDir": "diagrams/out",
"plantuml.exportFormat": "png"
</code></pre>
<p>PlantUML is useful for the number of standard software diagrams it supports. Check out the docs for a complete list and usage.</p>
<h2>C4</h2>
<p><a href="https://c4model.com/">C4</a> diagrams are my goto when I want to explore or change an existing system or plan a new one. The creator Simon Brown had the insight that architecture diagrams should be like maps, and maps have a certain zoom level (globe vs. country vs. city) and type (topographic vs. political). C4 diagrams have 4 main levels: System Context, Container (running applications), Component, and Code. There are others, but these are the main ones that give it its name.</p>
<p>Since it leverages PlantUML, <em>Alt-d</em> will get you a preview.</p>
<p><img src="../img/posts/2022/2022-05-25-08-55-01.png" alt="Context diagram" /></p>
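<p>Since the PlantUML standard library bundles the C4 macros, a minimal System Context diagram can be written roughly like this (the systems and relationships are made up for illustration):</p>
<pre><code class="language-plantuml">@startuml
!include &lt;C4/C4_Context&gt;
Person(user, "Customer", "Buys things")
System(shop, "Web Shop", "Sells things")
System_Ext(psp, "Payment Provider", "Processes payments")
Rel(user, shop, "Uses")
Rel(shop, psp, "Charges via")
@enduml
</code></pre>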
<p>I highly recommend diving into C4. It is a valuable framework for organizing and structuring architecture diagrams. I have even experimented with functional modeling, and you can judge the result <a href="https://devonburriss.me/functional-modeling/">here</a>.</p>
<h2>Mermaid</h2>
<p><a href="https://mermaid-js.github.io/">Mermaid</a> is the new kid on the block when it comes to software diagramming. It has some of the usual suspects like Sequence and Class diagrams but then has some new ones like User Journey and Gitgraph diagrams.</p>
<p>It also has some interesting tooling options, like being able to embed into markdown via a link to the Mermaid site, as well as support for menus and links on elements. Of interest if you use GitHub to host markdown: they now <a href="https://github.blog/2022-02-14-include-diagrams-markdown-files-mermaid/">support Mermaid in their markdown</a>.</p>
<p><img src="../img/posts/2022/2022-05-25-20-14-42.png" alt="Mermaid git image" /></p>
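<p>As an example of what you would put in a fenced <code>mermaid</code> block in GitHub-flavoured markdown, here is a small sequence diagram (the participants are made up for illustration):</p>
<pre><code class="language-mermaid">sequenceDiagram
    participant User
    participant API
    User->>API: GET /todo-lists
    API-->>User: 200 OK
</code></pre>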
<p>You can also have a standalone file, although I found this to be a bit buggy.</p>
<p><img src="../img/posts/2022/2022-05-25-20-33-53.png" alt="Mermaid file" /></p>
<p>I am interested to see what new diagrams Mermaid releases. Hopefully a C4 diagram set!</p>
<h2>Conclusion</h2>
<p>That is it! What do you think? Using these already? Think you will use them going forward? With the devcontainer it is super easy to get started and give these diagrams a try.</p>
<p>There are a couple of things in the devcontainer I have not covered that you may be interested in:</p>
<h3>Architecture Decision Records</h3>
<p><img src="../img/posts/2022/2022-05-25-20-43-46.png" alt="ADR CLI" /></p>
<p>Useful for recording the history of significant decisions made for an application or system of applications. Warning: requires discipline from the team to log decisions.</p>
<h3>Markdown plugins</h3>
<p>The Markdown Extension Pack comes with plenty of useful plugins for working with markdown from tables to emojis.</p>
<h3>Organization repository responsibility</h3>
<p>I am not going to dive into it but in the repository there is a script called <em>generate-applications-md.fsx</em> that generates a markdown file with a table of all GitHub repositories in an organisation. If you add 2 specific topics to a GitHub repo, namely <code>team-name</code> and <code>domain-name</code>, it will use these to populate the team name and domain. Usage is in comments at the top of the script.</p>
<p><img src="../img/posts/2022/2022-05-25-21-28-43.png" alt="repo table" /></p>
<h2>Plugin docs</h2>
<p>Here is the list of plugins included in this devcontainer.</p>
<ul>
<li><a href="https://marketplace.visualstudio.com/items?itemName=bat67.markdown-extension-pack">Markdown Extension Pack</a></li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=joaompinto.vscode-graphviz">Graphviz (dot) language support</a></li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=jebbs.plantuml">PlantUML</a></li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=vstirbu.vscode-mermaid-preview">Mermaid</a></li>
</ul>
<h1>A simple FP architecture</h1>
<p>2021-12-24 · Devon Burriss · https://devonburriss.me/fp-architecture/</p>
<p>A recurring question I get after discussing the <a href="/what-is-fp">benefits of functional programming</a> (FP) with a developer who is not familiar with FP is, "Ok, that makes sense but how do I actually build a large application out of functions?" In this post I want to look at a simple functional architecture that could serve as a starting point.</p>
<!--more-->
<p>I will not be talking about Functional Reactive Programming (FRP) or Functional Relational Programming (also FRP) in this post. These are far more opinionated architectures, trying to achieve specific goals. Instead, I will describe a simple architecture that builds on the core ideas of functional programming covered in a <a href="/what-is-fp">previous post</a>. Let's refresh those here quickly:</p>
<ol>
<li>The language it is written in should support higher-order functions</li>
<li>More complex code should be built from composing simpler functions together</li>
<li>The programmer should follow the discipline of keeping functions pure as much as possible and push impure functions to the boundaries of the application</li>
</ol>
<p>Pure functions and higher-order functions will be the main ideas at play here, so if those are not familiar terms, go read <a href="/what-is-fp">this</a> first.</p>
<p>The "architecture" is actually embarrassingly simple. We adopt the pattern of wrapping our features in <strong>usecases</strong>. This is the name I prefer but I have seen them referred to as <strong>feature</strong> or even <strong>service</strong> (I dislike this as it is so overloaded already).
A <strong>usecase</strong> is called by the host, where the host is typically a web application or a console application (I don't have much experience with desktop or mobile but I don't see why it would differ). The host is also responsible for providing production implementations of impure functions to the <strong>usecase</strong>.</p>
<p><img src="/img/posts/2021/fp-arch-1.png" alt="Functional arch diagram" /></p>
<p>At this point you might be saying, "Wait, isn't this just Hexagonal/Onion/Clean architecture?". Yes. Maybe it is my background (OOP)? Maybe there are only so many ways to skin a cat? There are more similarities than there are differences.</p>
<p>The important part is the design inside of a <strong>usecase</strong>. A stark difference between OOP and FP seems to be the <em>separation of behaviour and data</em>. A revelation for me while learning FP was this focus on behaviour. All my professional career I had been modelling data and relationships, and trying to layer behaviour over this in ways that allowed me to apply business requirements and still felt as if I was working with models of things in the real world. This focus on "things" means when it comes to implementing actual behaviour we end up splitting it between Aggregates, Repositories, Factories, Services, Managers, and ManagerManagers.</p>
<p><img src="/img/posts/2018/deeply-nested-dep.jpg" alt="scattered logic" /></p>
<blockquote>
<p>Sprinkling important application logic throughout an object graph makes it difficult to reason about. From post <a href="/managing-code-complexity">Managing code complexity</a>.</p>
</blockquote>
<p>Maybe you have worked on a codebase where dependency injection has run wild and each step is called on an object that was injected into the object executing the current step.</p>
<p>The first improvement for me, that I think was really sparked by more functional thinking, was to make sure I had an entry point that described in clear steps how a <strong>usecase</strong> is implemented. Functional programming encourages pipelines of behaviour that inspect data, make a decision, and change data based on this. If you think about it, this ETL process is the core of what the programs we write do. Now this isn't yet functional thinking. I <a href="/managing-code-complexity">wrote about this idea of usecases back in 2018</a> and whether you are applying FP or not it can improve the maintainability of your codebase.</p>
<p><img src="/img/posts/2018/use-case.jpg" alt="usecase" /></p>
<blockquote>
<p>Usecase as the entrypoint into your domain. From post <a href="/managing-code-complexity">Managing code complexity</a>.</p>
</blockquote>
<p>In using these top-level <strong>usecases</strong> I had gained the benefit of clarity about what happens at a high level, as well as an easy entry point to dive into a specific step. In C# codebases I was using fluent method chaining, making it <em>feel</em> even more functional due to the style. I went through the effort of making my classes immutable.
What I did not yet have was the benefits of functional programming. Even though there was a nice clean entry point and a descriptive flow, reasoning and testability were not much improved. Where the functional programming part comes in is in trying to maximize how much of a <strong>usecase</strong> is <a href="/what-is-fp">pure</a>. The problem was I was mixing chained calls in a "functional style" without making much distinction between method calls that were pure and those that were not.</p>
<p><img src="/img/posts/2018/dependencies-on-boundary.jpg" alt="push IO dependencies to the boundary" /></p>
<blockquote>
<p>Impure operations should be pushed to the boundary. From post <a href="/managing-code-complexity">Managing code complexity</a>.</p>
</blockquote>
<p>So what do we get from this approach?</p>
<ol>
<li>A high-level list of all usecases in the system.</li>
<li>An entry point to inspect the steps in each usecase that can be a springboard into specific parts of the codebase.</li>
<li>Use cases only reference pure logic directly and impure functions can be substituted with test doubles.</li>
</ol>
<p>An architecture that is easy to understand and easy to test is not a bad starting point.</p>
<h2>Example code</h2>
<p>Let's look at some code snippets of a small example. Unfortunately, architecture only starts becoming important once size and complexity scales up but then examples can become unwieldy. I am keeping things simple here to illustrate the moving parts. Module names like <code>DataAccess</code> and <code>Usecase</code> are a bit on the nose and in bigger applications would not be a good way to organize functionality. I use the names here to make it clear what is inside functions in these modules.</p>
<p>Imagine a little CLI-based CRM system. The current functionality is to add a new customer and to change that customer's email address. So we expect to have an implementation of a <code>changeEmail</code> usecase.</p>
<pre><code class="language-fsharp">// implementation of the change email usecase
let changeEmail (readCustomer : ReadCustomer) (saveCustomer : SaveCustomer) (cmd : ChangeEmailCommand) : Result<((DomainEvent list) * Customer),string> =
    readCustomer cmd.CustomerId
    |> Result.map (Customer.updateEmail cmd.NewEmail)
    |> Result.bind saveCustomer
</code></pre>
<p>The host provides implementations for the <code>ReadCustomer</code> and <code>SaveCustomer</code> function types. You can see the setup of those below for the CLI tool. Although the details here are not important, something you may notice in the type signature is the events in the return type <code>DomainEvent list</code>. This is a pattern I started using many years ago, even before I started functional programming, when I realized that raising domain events the moment something happens can cause inconsistencies. If an event is handled and mutates state, and then something goes wrong in the main execution path, the events that "happened" don't match your internal state. A safer pattern is to collect domain events as you execute and <a href="/reliability-with-intents">outbox</a> what you need to at the same time you persist your aggregates.</p>
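<p>A sketch of what collecting events could look like (the <code>Customer</code> and <code>DomainEvent</code> shapes here are assumptions for illustration, not the real model):</p>
<pre><code class="language-fsharp">type Customer = { Id : int; Email : string }
type DomainEvent = EmailChanged of int * string

// Return the events alongside the updated aggregate instead of dispatching
// them immediately; the caller persists both in the same transaction
let updateEmail newEmail customer =
    let updated = { customer with Email = newEmail }
    ([ EmailChanged (customer.Id, newEmail) ], updated)
</code></pre>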
<p>This <strong>usecase</strong> is easily testable since the IO parts are passed into the function. You can imagine that as more things need to happen, they can just be appended to the steps in the <strong>usecase</strong>.</p>
<p>If you are not familiar with F#'s <code>Result</code> type check out <a href="https://fsharpforfunandprofit.com/rop/">Railway oriented programming</a>.</p>
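<p>In case the <code>Result.map</code> and <code>Result.bind</code> calls above are unfamiliar, here is roughly how they behave (a standalone sketch, not the usecase itself):</p>
<pre><code class="language-fsharp">let half x = if x % 2 = 0 then Ok (x / 2) else Error "odd number"

// map transforms the value inside Ok; bind chains another Result-returning step
let r1 : Result<int, string> = Ok 8 |> Result.map ((+) 1) // Ok 9
let r2 = Ok 8 |> Result.bind half                         // Ok 4
let r3 = Error "boom" |> Result.bind half                 // Error "boom" (short-circuits)
</code></pre>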
<p>Let's look at how this is used in the console application.</p>
<pre><code class="language-fsharp">let main args =
    // composition root composes IO for the app
    let readCustomer = DataAccess.readCustomer // real impl would take some config
    let saveCustomer = DataAccess.saveCustomer // real impl would take some config
    let changeEmail = Usecase.changeEmail readCustomer saveCustomer
    let newCustomer = Usecase.newCustomer saveCustomer
    // Route the parsed input
    let handle command =
        match command with
        | Commands.NewCustomer cmd -> newCustomer cmd
        | Commands.ChangeEmail cmd -> changeEmail cmd
    // Parse the input to a command
    let cmd = Mapper.inputToCommand args
    // Get a result by routing the command to the correct usecase
    let result = cmd |> Result.bind handle
    // handle result output to CLI
</code></pre>
<p>With our <strong>usecase</strong> available, we can parse input to a command that maps to a <strong>usecase</strong>, in this case the <code>ChangeEmailCommand</code>. This is not an article on F# modelling, and frankly it is a bit rough here, but the point is that the usecase just receives the command and knows nothing of the host.</p>
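<p>For completeness, here is one hypothetical shape for <code>Mapper.inputToCommand</code> (the argument format and messages are invented for illustration):</p>
<pre><code class="language-fsharp">type ChangeEmailCommand = { CustomerId : System.Guid; NewEmail : string }
type Commands =
    | NewCustomer of string
    | ChangeEmail of ChangeEmailCommand

// Parse raw CLI arguments into a command, or an error the host can print
let inputToCommand (args : string array) =
    match args with
    | [| "new"; name |] -> Ok (NewCustomer name)
    | [| "change-email"; id; email |] ->
        match System.Guid.TryParse id with
        | true, guid -> Ok (ChangeEmail { CustomerId = guid; NewEmail = email })
        | _ -> Error (sprintf "invalid customer id: %s" id)
    | _ -> Error "usage: new <name> | change-email <id> <email>"
</code></pre>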
<h2>Tips</h2>
<ol>
<li>Don't use a usecase in another usecase. Rather have functions that can be shared across different usecases easily.</li>
<li>If you need to participate in or kick off sagas, an event list is a useful pattern.</li>
<li>Keep things simple and refactor when things get more complex.</li>
<li>Use specific types as much as possible and build up pipelines inside the usecase that operate on those types.</li>
</ol>
<h2>Conclusion</h2>
<p>In this post we saw how some core ideas of functional programming, like higher-order functions and pure functions, come together in guiding us toward an architecture. <a href="https://blog.ploeh.dk/">Mark Seemann</a> has nice talks on the <a href="https://www.youtube.com/watch?v=US8QG9I1XW0">Pit of Success</a> and <a href="https://www.youtube.com/watch?v=cxs7oLGrxQ4">Dependency rejection</a> that are worth a watch. We noticed how this relates to patterns we may already be familiar with, like Clean Architecture and having a core domain that is independent of implementation details. The value add for me is in having an entry point that describes the steps and in keeping as much of that pure as possible. An alternative would be to handle the IO outside of the usecase entirely and have the host responsible for composing this together in a meaningful way. In my experience this leaves the usecase so trivial that it is not worth having. I find it useful to have something that holds the domain implementation and pointers to what it depends on to execute its behaviour.</p>https://devonburriss.me/useful-fp-language-features/Useful FP language features2021-12-23T00:00:00+00:00Devon Burrisshttps://devonburriss.me/useful-fp-language-features/<p>In a <a href="/what-is-fp">previous post</a> we looked at the big ideas of functional programming. In this post we will look at some features that are often associated with functional programming but that I do not think are core to it.</p>
<!--more-->
<p>Some of these are conflated with functional programming but it turns out that the only language feature needed for functional programming is support for higher-order functions.</p>
<h2>Immutable data</h2>
<p>To work with pure functions, you need to be careful not to change the underlying state of your application. This includes the input to your functions. It is useful if your language can enforce this.</p>
<p>I was presenting to a group of Javascript and C# developers a few weeks ago and I showed the following C# snippet of code.</p>
<pre><code class="language-csharp">// what does this return?
var two = 1 + 1;
return two++;
</code></pre>
<p>Now maybe this is a bit unfair but I think it highlights the problem of reasoning about mutable state as statements are executed. When I polled the audience on this it seemed about a 50/50 split between answers of 2 and answers of 3. If anything, more answers of 3. If you are not sure, it turns out the number 2 is returned. Any subsequent references to <code>two</code> would reference the value 3.</p>
<p>Now granted, the <code>++</code> operator is not the most intuitive, and you need to know the behaviour to expect depending on which side of the variable it is placed. It is useful in illustrating how state can change in ways we might not anticipate.</p>
<p>In the F# example below, you see that a value is immutable. Once its value is set, it cannot be changed.</p>
<pre><code class="language-fsharp">let two = 1 + 1
//let two = 3 // will not compile
//let two <- 3 // will not compile
</code></pre>
<p>Once you have immutable values, it is important to have an easy way to create new values from old ones. An often overlooked area here is having good tools for working with immutable collections.</p>
<pre><code class="language-fsharp">let stock = [ ("chicken", 20);("grain", 50);("potatoes", 30) ] |> Map.ofList
// create a new map from an existing one
let newStock = stock |> Map.change "chicken" (fun vOpt -> vOpt |> Option.map (fun v -> v - 1))
</code></pre>
<p>Above we see that rather than changing the value in the map, a new map is returned with the changed value.</p>
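<p>Because the original map is untouched, both values remain available:</p>
<pre><code class="language-fsharp">let stock = [ ("chicken", 20); ("grain", 50) ] |> Map.ofList
let newStock = stock |> Map.change "chicken" (Option.map (fun v -> v - 1))

let before = stock |> Map.find "chicken"   // still 20
let after = newStock |> Map.find "chicken" // 19
</code></pre>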
<h3>Benefits</h3>
<ul>
<li>Easier to reason about</li>
<li>Fewer bugs due to unexpected state changes</li>
<li>Easier parallel processing</li>
</ul>
<h2>Algebraic data types</h2>
<p>Algebraic data types are comprised of <strong>product</strong> types and <strong>sum</strong> types.</p>
<p>Sidebar: I am not the person to be trying to explain Type Theory. I am not even sure there exists a formal definition of class and how it relates to a type (in a language-agnostic way). If you are an OO programmer, think of a type as a concrete class. So <code>Nullable<T></code> is a class, <code>Nullable<int></code> is a type, and <code>Nullable<decimal></code> is another type. My current thinking of a class is as a parameterized factory for a type, if it is generic; if not, they can be considered equivalent. Experts, let me know in the comments all the ways this is wrong :)</p>
<p><strong>Product types</strong> are either records or tuples which in OO languages are common data structure types.</p>
<pre><code class="language-fsharp">type IntAndBool = {
I : uint
B : bool
}
let p = { I = 0u ; B = true }
// range of possible values
printfn "product %i" (((UInt32.MaxValue |> int64) + 1L) * (2L)) // range of uint * range of bool
// product 8589934592
</code></pre>
<p>Giving us a total possible range of 8589934592 combinations, found by multiplying the possible number of states in each field.</p>
<p>So I bet you can guess where <strong>sum types</strong> get their name from now...</p>
<p><strong>Sum types</strong> are known by many names and appear primarily in functional-first languages (tagged union, discriminated union, and choice type, to name a few). The only OO-leaning language I personally know that has something like <strong>sum types</strong> is TypeScript, with its union types.</p>
<p>These types allow us to define types that can be something, or something else. An example will illustrate this best.</p>
<pre><code class="language-fsharp">type IntOrBool = I of uint | B of bool
let s = B true
printfn "sum %i" (((UInt32.MaxValue |> int64) + 1L) + (2L)) // range of uint + range of bool
// sum 4294967298
</code></pre>
<p>An instance of <code>IntOrBool</code> can be either one type or the other. There is no need to constrain these to combining simple types though. We can model using more complex types.</p>
<pre><code class="language-fsharp">type PostalCode = string
type Address = {
HouseNumber : int
HouseNumberOpt : char option
StreetName : string
City : string
PostalCode : PostalCode
}
type EmailAddress = string
type PhoneNumber = string
type ContactMethod = Email of EmailAddress | Post of Address | Phone of PhoneNumber
</code></pre>
<p>Here you see the <code>ContactMethod</code> type can be <code>EmailAddress</code> OR <code>Address</code> OR <code>PhoneNumber</code>. This gives a far richer and more intuitive way of modelling a domain.</p>
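<p>Consuming such a type also reads naturally, since each case must be handled explicitly (a sketch using a trimmed-down <code>ContactMethod</code>):</p>
<pre><code class="language-fsharp">type EmailAddress = string
type PhoneNumber = string
type ContactMethod =
    | Email of EmailAddress
    | Phone of PhoneNumber

// the compiler warns if a case of ContactMethod is not handled
let describe contact =
    match contact with
    | Email e -> sprintf "send an email to %s" e
    | Phone p -> sprintf "call %s" p
</code></pre>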
<p>A language that supports <strong>sum types</strong> typically provides elegant ways of dealing with two prickly issues in programming.<br />
Too often the absence of something is represented by <code>null</code> ("the billion-dollar mistake", yada yada).
In functional languages the approach is to use a sum type, usually called <code>Option</code> or <code>Maybe</code>.</p>
<pre><code class="language-fsharp">let noValue = None
let someValueThatCouldBeNone = Some 42
printfn "is equal? %b" (noValue = someValueThatCouldBeNone)
// is equal? false
</code></pre>
<p>A similar approach can be taken to exceptions. Instead of throwing an exception that is hopefully handled somewhere, we return from the function that it was possible for an exception to have occurred.</p>
<pre><code class="language-fsharp">let success = Ok 42
let error = Error "Something went wrong calculating the meaning of life"
printfn "is equal? %b" (success = error)
// is equal? false
</code></pre>
<blockquote>
<p>Note: This could be the point where some might be wondering when I am going to start throwing around the word Monad. This article will not. Monad, monoid, etc. are patterns as far as I am concerned. Their origins may be far more formal than the observational origins of OOP patterns like Visitor or Strategy, but they are patterns nonetheless (in my opinion). They are no more necessary for FP than patterns are for OOP. Using them well can improve your code. Using them poorly can make it overly complicated.</p>
</blockquote>
<h3>Benefits</h3>
<ul>
<li>They should be immutable</li>
<li>They should have value equality</li>
<li>More powerful modelling options without resorting to inheritance</li>
</ul>
<h2>Pattern matching</h2>
<p>The final language feature I will point out is pattern matching. This is making its way into C# now, but for me the combination of pattern matching with <strong>sum types</strong> is what I miss most when working in a language that does not support algebraic data types.</p>
<pre><code class="language-fsharp">open System

let calculateMeaning() =
    if ((Random()).Next() % 2) = 0 then Ok 42
    else Error "Something went wrong calculating the meaning of life"

match calculateMeaning() with
| Ok nr -> printfn "The answer to life is %i" nr
| Error err -> printfn "%s" err
</code></pre>
<p>When calculating the meaning of life, the returning result will be of type <code>Result<int,string></code>. We can <code>match</code> on this where we handle each case that is possible. If you have a statically typed language the compiler can tell you when your match is not covering every case.</p>
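<p>To see the exhaustiveness checking in action, consider this sketch with an invented <code>Shape</code> type:</p>
<pre><code class="language-fsharp">type Shape =
    | Circle of float
    | Square of float

let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Square w -> w * w
// Adding `| Rectangle of float * float` to Shape would make the compiler
// warn that this match is incomplete until a Rectangle case is handled
</code></pre>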
<p>If working with <code>Option</code> or <code>Result</code> sounds interesting to you, I suggest checking out <a href="https://fsharpforfunandprofit.com/rop/">Railway oriented programming</a>.</p>
<h3>Benefits</h3>
<ul>
<li>Often results in easier to understand control flow</li>
<li>In some languages, the compiler can tell you if all possibilities are matched against</li>
</ul>
<h2>Conclusion</h2>
<p>In this post we covered a few language features that are nice to have for making your development experience using functional programming productive. These support the ideas of FP and make it faster to write code that is bug free. This post was mostly about addressing things that were not in the <a href="/what-is-fp">previous post</a>. Finally, monads, etc. were not covered at all, since I consider them patterns. Although they are intimately connected with FP, I do not think they are strictly necessary to say you are writing code using the principles of FP.</p>https://devonburriss.me/what-is-fp/What is Functional Programming?2021-12-22T00:00:00+00:00Devon Burrisshttps://devonburriss.me/what-is-fp/<p>A few weeks ago I was preparing a small introduction to functional programming. It turns out, for me at least, to be fairly difficult to define what functional programming is. I distilled it down to 3 things via a process of elimination. In this post I dive into what these 3 things are and what benefits they bring.</p>
<!--more-->
<p>Sidebar: As someone who has been doing OOP for 15+ years at this stage, I find OOP difficult to define too. This was not always the case. Learning functional programming ruined me, as it has ruined many before. As an OO programmer I was sure of my knowledge, my patterns, my design. Then I tried learning something that seemed to turn it all on its head. Not just my knowledge but my self-assuredness in "right" and "wrong" ways to build software. Now with a little more experience in FP, I see many similarities in the problems and how they are solved. For me, a benefit of FP is how few "patterns and practices" need to be understood to write better software. The point of this sidebar though is that words like abstraction and encapsulation are not claimed exclusively by OO. Except for inheritance... OO can have that if it wants it!</p>
<p>In the following sections I discuss my 3 aspects of programming that should be followed to reap the benefits of FP.</p>
<ol>
<li>The language should support higher-order functions</li>
<li>More complex functions should be composed out of simpler functions</li>
<li>The programmer should follow the discipline of making a distinction between pure and impure functions and try to maximize the number of pure functions</li>
</ol>
<blockquote>
<p>I find it interesting that there are 3 moving parts here. The language, how we build code, and how we architect code to interact with the world.</p>
</blockquote>
<h2>Higher-order functions</h2>
<p>A higher-order function is a function that meets at least one of the following two criteria:</p>
<ol>
<li>A function that takes another function as an input</li>
<li>A function that returns a function as its output</li>
</ol>
<p>Although most modern languages support this nowadays, functional-first languages tend to make it feel a lot more natural to use.</p>
<pre><code class="language-fsharp">let isEven x = (x % 2) = 0 // predicate function for determining an even number
let selectEven = List.filter isEven // Use the predicate to return a new function that selects even numbers
let evenInts = selectEven [0..10] // use the function
</code></pre>
<h3>Benefits</h3>
<ul>
<li>Functions are values and so can be passed around</li>
<li>Functions that take functions can be far more flexible as behaviour can be decided by the caller</li>
<li>When returning a function from another function it can be evaluated later (or not at all)</li>
</ul>
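<p>The third benefit is worth a quick sketch of its own — returning a function gives the caller something to evaluate later (names invented):</p>
<pre><code class="language-fsharp">// greaterThan returns a new predicate function; nothing is evaluated
// until the returned function is actually called
let greaterThan n = fun x -> x > n

let over10 = greaterThan 10
let answer = over10 42 // true
</code></pre>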
<h2>Function composition</h2>
<p>Function composition is the combination of simple functions into more complex ones. To compose functions, the output type of one function needs to match the input type of the next function in the composition.</p>
<p>This is probably easiest explained with examples since you have probably used it in both school mathematics and programming.</p>
<p>Say we want to normalize some strings by trimming off the whitespace and making them lower-case. You could do it like this.</p>
<pre><code class="language-fsharp">let trim (s : string) = s.Trim()
let lower (s : string) = s.ToLowerInvariant()
let normalize (s : string) = lower(trim(s))
</code></pre>
<p>This is composition. In a functional-first language like F# we can build this up in a way that structurally matches the order the functions are called.</p>
<pre><code class="language-fsharp">let normalize = trim >> lower
</code></pre>
<p>This creates a new function <code>normalize</code> out of the 2 functions <code>trim</code> and <code>lower</code>. Remember that the output type of <code>trim</code> needs to match the input type of <code>lower</code>. In this case they are both <code>string</code>.</p>
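<p>Composition is not limited to a single type either; the output type just has to keep feeding the next input type. A sketch with an assumed <code>length</code> helper, going from <code>string</code> to <code>int</code> to <code>bool</code>:</p>
<pre><code class="language-fsharp">let trim (s : string) = s.Trim()
let lower (s : string) = s.ToLowerInvariant()
let length (s : string) = s.Length

// string -> string, string -> int, int -> bool: each output feeds the next input
let isLongWord = trim >> lower >> length >> (fun n -> n > 5)
</code></pre>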
<h3>Benefits</h3>
<ul>
<li>Functions are small testable units</li>
<li>Small and generic functions enable reusability</li>
<li>Building more complex functions out of simpler ones helps us build in small steps</li>
</ul>
<h2>Maximize use of pure functions</h2>
<p>This one is fairly uncontroversial. If you have heard people talking about FP then you have likely heard about pure functions.</p>
<p>At this point talk of <strong>referential transparency</strong> comes up. <a href="https://stackoverflow.com/a/9859966/2613363">Referential transparency seems to be a term borrowed from analytical philosophy</a>. If something is referentially transparent, its value is not dependent on some context, like the time it is referenced. From a code perspective, this means that once something is assigned a value, that value does not change over the lifetime of the program's execution. Put more flippantly, "equals equals equals".</p>
<p>Ok. Cool, cool, cool. What does this mean?</p>
<blockquote>
<p>At this point I need to issue a disclaimer: I am not a computer scientist. This is just my understanding on a topic where people tend to throw around terms like it is some kind of intellectual contest.</p>
</blockquote>
<p>The characteristic people are more often seeking with pure functions is <em>side-effect free</em> functions. Side-effect free is much easier to understand than referential transparency. It means that nothing outside the scope of the function is mutated.</p>
<p>So for a function to be <strong>pure</strong>, it needs to satisfy 2 criteria:</p>
<ol>
<li>The function must be referentially transparent</li>
<li>The function must be side effect free</li>
</ol>
<p>Note that I said <em>maximize</em> pure functions. We cannot build programs that interact with the outside world without having side effects. What we strive for in FP is increasing the number of functions that are pure and pushing the side effects to the boundary of our applications. We will dive into this in another post when discussing architecture.</p>
<pre><code class="language-fsharp">// Referentially transparent: Yes
// Side-effect free: No
// Pure: No
let i_am_rt x =
    printfn "I am referentially transparent."
    x

// Referentially transparent: No
// Side-effect free: No
// Pure: No
let i_am_not_rt x = (System.Console.ReadLine() |> int) + x

// Referentially transparent: Yes
// Side-effect free: Yes
// Pure: Yes
let i_am_pure x = x + 1
</code></pre>
<p>In practical terms, honouring referential transparency means you are not reading any data that is not in the input. Being side-effect free means you are not changing input values (by reference) or mutating any state in the program or in outside systems.
It is interesting to note that immutability comes along for the ride with pure functions, at least where it really matters, since inside a pure function you are effectively programming with immutable values.</p>
<h3>Benefits</h3>
<ul>
<li>Calls to a function are idempotent, so they can be repeated without fear of unexpected state updates</li>
<li>If a function does not depend on the output of another function, the order does not matter</li>
<li>Since pure functions depend only on their input, they can be called in parallel without fear of deadlock or data corruption</li>
<li>Pure functions are easy to test because they depend on only the input and must have an output (to be useful)</li>
<li>Since a pure function only depends on input, reasoning about it should be simpler</li>
<li>If the value of a pure function is not used, it can be removed without altering the behaviour of a program</li>
</ul>
<blockquote>
<p>I shouldn't have to say it, but I will. Functional programming is not about AVOIDING mutating state. It is not BAD at mutating state. It is just more opinionated about WHERE those mutations occur.</p>
</blockquote>
<h2>Conclusion</h2>
<p>In this post we looked at some of the core ideas of functional programming.</p>
<p>My opinion then is that for code to claim to be functional:</p>
<ol>
<li>The language it is written in should support higher-order functions</li>
<li>More complex code should be built from composing simpler functions together</li>
<li>The programmer should follow the discipline of keeping functions pure as much as possible and push impure functions to the boundaries of the application</li>
</ol>
<p>These ideas all have a long tradition in mathematics, but hopefully from the benefits listed you can see that they have real practical value if adopted into how you design and implement applications. Depending on what languages you have been exposed to, you may have expected other topics here like immutability and algebraic data types. These constructs being built into the language can really help, but I don't believe they are necessary for programming in a functional way. In the <a href="/useful-fp-language-features">next post</a> we will look at these to see what benefits they bring.</p>https://devonburriss.me/reliable-apis-part-3/Reliable APIs - Part 32021-08-29T00:00:00+00:00Devon Burrisshttps://devonburriss.me/reliable-apis-part-3/<p>The <a href="/reliable-apis-part-2">previous post</a> showed how things can go wrong when not thinking through edge cases carefully, especially where concurrency comes into play. In this post we will look at a truly idempotent endpoint design as well as discuss some alternative designs.</p>
<!--more-->
<p>Posts in this series:</p>
<ol>
<li><a href="/reliable-apis-part-1">Exploring retries, retry implications, and the failure modes they are appropriate for</a></li>
<li><a href="/reliable-apis-part-2">Using Idempotency-Key and a response cache</a></li>
<li>The epic saga of client-side IDs and true idempotence</li>
</ol>
<p>Once again we join our intrepid young developer as they try to implement a truly idempotent endpoint. They seek not idempotence for its own sake but rather to finally claim that the endpoint is reliable.
As a reminder, this is the current design:</p>
<p><img src="../img/posts/2021/2021-08-22-10-38-55.png" alt="Current design" /></p>
<p>We have looked at all the different places this operation can fail in the two preceding posts. Currently, the DB calls are the only out-of-proc calls that have retry policies over them.</p>
<p>Our friend has not been idle during our absence. They have been leveling up knowledge on concurrency, REST, and architecture. The result is a design that is simple but our developer worries the team will think it unorthodox.</p>
<h2>A different perspective</h2>
<p>After standup the team has a short sharing session on designs for the stories they are working on. Our developer pitches the use of client-generated IDs. Although not something the rest of the team has heard of, the name kind of gives it away, and the team balks at the idea.</p>
<p><em>"IDs should be generated on the server!"</em> Why? We already use UUIDs generated in our code. Does it matter where this is generated?<br />
<em>"It's a security risk!"</em> Why? It is internal software in our network where we maintain the client and the server.<br />
<em>"It seems weird!"</em> Why? From a REST point of view, we are just telling the server to create a resource at a more specific URI.</p>
<p>Let me explain further.</p>
<p><a href="https://mermaid-js.github.io/mermaid-live-editor/edit/##eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG4gICAgQ2xpZW50LT4-K0FQSTogQ3JlYXRlIG9yZGVyIHJlcXVlc3Qgd2l0aCBJZFxuICAgIEFQSS0-PkRCIDogRmV0Y2ggc3VwcGxpZXIgaW5mb1xuICAgIEFQSS0-PkRCIDogUGVyc2lzdCByZWNvcmQgd2l0aCBJZCBhbmQgb3V0Ym94XG4gICAgQVBJLS0-Pi1DbGllbnQ6IE9yZGVyIGNyZWF0ZWQgcmVzcG9uc2VcbiAgICBXb3JrZXItPj5EQiA6IEZldGNoIG91dGJveFxuICAgIFdvcmtlci0-PlN1cHBsaWVyIEFQSSA6IFNlbmQgb3JkZXJcbiAgICBXb3JrZXItPj5EQiA6IFVwZGF0ZSBvdXRib3hcbiAgICBsb29wIFBvbGwgZW5kcG9pblxuICAgICAgICBDbGllbnQtPj5BUEk6IENoZWNrIGlmIG9yZGVyIGNyZWF0aW9uIGRvbmVcbiAgICBlbmQiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ"><img src="https://mermaid.ink/img/eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG4gICAgQ2xpZW50LT4-K0FQSTogQ3JlYXRlIG9yZGVyIHJlcXVlc3Qgd2l0aCBJZFxuICAgIEFQSS0-PkRCIDogRmV0Y2ggc3VwcGxpZXIgaW5mb1xuICAgIEFQSS0-PkRCIDogUGVyc2lzdCByZWNvcmQgd2l0aCBJZCBhbmQgb3V0Ym94XG4gICAgQVBJLS0-Pi1DbGllbnQ6IE9yZGVyIGNyZWF0ZWQgcmVzcG9uc2VcbiAgICBXb3JrZXItPj5EQiA6IEZldGNoIG91dGJveFxuICAgIFdvcmtlci0-PlN1cHBsaWVyIEFQSSA6IFNlbmQgb3JkZXJcbiAgICBXb3JrZXItPj5EQiA6IFVwZGF0ZSBvdXRib3hcbiAgICBsb29wIFBvbGwgZW5kcG9pbnRcbiAgICAgICAgQ2xpZW50LT4-QVBJOiBDaGVjayBpZiBvcmRlciBjcmVhdGlvbiBkb25lXG4gICAgZW5kIiwibWVybWFpZCI6eyJ0aGVtZSI6ImRlZmF1bHQifSwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" alt="Idempotent design" /></a></p>
<ol>
<li>The client will generate a UUID as an identifier (ID) that will represent the Order to be created</li>
<li>Instead of sending <code>Idempotency-Key</code> we POST to a unique URI, e.g. <code>/orders/1b2e680a-78ce-41f3-8296-63706432f844</code>. Now we either have an order at a known resource, or we do not.</li>
<li>When persisting this order, we use the ID sent as a unique identifier in the database. We can use the database to enforce uniqueness so any call to persist an order with the same ID will fail. If the persist was successful, we return <code>202 Accepted</code>.</li>
<li>Have a <code>supplier_requests</code> table that represents the <a href="https://devonburriss.me/reliability-with-intents/">intent</a> to send the request to the supplier. This is the <a href="https://microservices.io/patterns/data/transactional-outbox.html">outbox pattern</a>.</li>
<li>A worker is running in the background that picks up and sends the unsent records from <code>supplier_requests</code></li>
<li>Once the worker has completed, it updates the database such that subsequent GET requests to <code>/orders/1b2e680a-78ce-41f3-8296-63706432f844</code> will return <code>200 OK</code> instead of <code>202 Accepted</code>.</li>
</ol>
<p>Although the unorthodoxy of generating the ID on the client side seems to bother some people in the team still, they can't really say why. More importantly, everyone agrees that this design does indeed seem to have the guarantees for resilience that they were aiming at.</p>
<p>Happy with the design and the buy-in, our developer pairs up with one of the more skeptical team members to implement the design. And finally, they can enable the client retry policies.</p>
<p><img src="../img/posts/2021/2021-08-29-11-48-28.png" alt="Final design" /></p>
<h2>Analysis</h2>
<p>As part of the analysis we will go into some implementation details, as well as some possible alternative designs.</p>
<h3>Client-generated ID vs. Idempotency-Key</h3>
<p>If a team is uncomfortable using the client-generated ID, continuing to use <code>Idempotency-Key</code> is a perfectly good solution. In this case you could just insert <code>Idempotency-Key</code> into another table in the same transaction as the order is inserted into the database. It is important that this is in the same transaction, or you lose the idempotency guarantee. You just need to make sure the column has a uniqueness constraint on it.</p>
<blockquote>
<p>A note on the primary key: when using a client-generated ID you need not use it as the primary key for the Order. By indexing it and placing a uniqueness constraint on it, we can use it as a public lookup. We can then use an incrementing numeric database key as the primary key to do joins on. This way your primary key is never exposed, which gives you location independence if you ever need to make major changes to your database to cope with scale.</p>
</blockquote>
<h3>Outbox states</h3>
<p>I wanted to make a few suggestions for your <code>supplier_requests</code> implementation. In my <a href="https://devonburriss.me/reliability-with-intents/">intents</a> post I discuss a more generic outbox but I would not make that jump unless you have a lot of different systems you are interacting with in an application.</p>
<p>Some data to consider keeping on the outbox:</p>
<ul>
<li><code>created_at</code></li>
<li><code>last_touch</code></li>
<li><code>completed_at</code></li>
<li><code>status</code></li>
<li><code>try_count</code></li>
<li><code>message</code></li>
<li><code>order_id</code></li>
</ul>
<p>And a few states to keep track of:</p>
<ul>
<li><code>pending</code>: Has not been picked up by a worker (message relay). Sets the <code>created_at</code> & <code>last_touch</code> columns to the same value.</li>
<li><code>in-progress</code>: Has been picked up by a worker but not completed. Updates the <code>last_touch</code> column.</li>
<li><code>failed</code>: Tried to post to the supplier but either failed with a reason that makes retrying risky, or the retry count was hit. Updates <code>try_count</code>, <code>message</code>, <code>completed_at</code>.</li>
<li><code>completed</code>: Set if POST to supplier is successful. Updates <code>try_count</code>, <code>completed_at</code>.</li>
</ul>
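<p>Here is one possible reading of those states and columns as a small state machine. The dictionary shape and the exact handling of <code>try_count</code> (incremented on each pickup) are my own choices, not a prescription:</p>
<pre><code class="language-python">from datetime import datetime, timezone

# Legal status transitions for an outbox row.
ALLOWED = {
    "pending": {"in-progress"},
    "in-progress": {"completed", "failed"},
}

def new_outbox_row(order_id):
    t = datetime.now(timezone.utc)
    # pending: created_at and last_touch start out with the same value
    return {"order_id": order_id, "status": "pending", "created_at": t,
            "last_touch": t, "completed_at": None, "try_count": 0, "message": None}

def advance(row, status, message=None):
    if status not in ALLOWED.get(row["status"], set()):
        raise ValueError("illegal transition %s -> %s" % (row["status"], status))
    row["status"] = status
    row["last_touch"] = datetime.now(timezone.utc)
    if status == "in-progress":
        row["try_count"] += 1                 # each pickup by a worker counts as a try
    if status in ("completed", "failed"):
        row["completed_at"] = row["last_touch"]
        row["message"] = message
    return row

row = new_outbox_row("order-1")
advance(row, "in-progress")
advance(row, "completed")
assert row["try_count"] == 1 and row["completed_at"] is not None
</code></pre>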
<p>This design assumes all data you need can be fetched from the linked order. The other option is to just keep a serialized payload in the outbox row as a column.</p>
<h3>Workers</h3>
<p>The workers that do the actual sending, known as <em>message relays</em>, need to be singletons so they are not picking up the same outbox message concurrently. This does not mean you can have only one. You could use locking or, preferably, partitioning, where multiple workers process concurrently but over distinct partitions. As a simple example, you could have 2 workers: one processing outbox rows with even row numbers while the other processes the odd ones.</p>
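<p>That partitioning idea is just a modulo over a stable row number, which is what guarantees no two workers ever claim the same message:</p>
<pre><code class="language-python">def partition_for(row_id, worker_count):
    # Each outbox row is owned by exactly one worker, so no locking is needed.
    return row_id % worker_count

# With 2 workers: worker 0 handles even rows, worker 1 handles odd rows.
rows = range(1, 7)
worker_0 = [r for r in rows if partition_for(r, 2) == 0]
worker_1 = [r for r in rows if partition_for(r, 2) == 1]
assert worker_0 == [2, 4, 6]
assert worker_1 == [1, 3, 5]
assert set(worker_0).isdisjoint(worker_1)
</code></pre>
<p>In a real system you would partition on whatever stable key your database or queue gives you; the modulo over a row number is just the simplest version of the idea.</p>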
<h3>A word on cache back-channeling</h3>
<p>For endpoints where multiple hits to either the POST (unlikely) or the GET (more likely) are going to cause significant load on the database, it can be a good idea to actively populate the cache. This is where the outbox represents the <a href="https://devonburriss.me/reliability-with-intents/">intent</a> of steps in a saga rather than the mere passing on of a single message.</p>
<p>If for example we expected a high load on <code>GET /orders/1b2e680a-78ce-41f3-8296-63706432f844</code> our worker could:</p>
<ol>
<li>POST to supplier API</li>
<li>Prepopulate a distributed cache with the order (or response, depending on cache type)</li>
<li>The unhappy path would be to trigger some sort of rollback or user notification of a failure to complete</li>
</ol>
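<p>A sketch of that worker step, where a plain dictionary stands in for the distributed cache and <code>post_to_supplier</code> is a stub for the real call (both are assumptions for illustration):</p>
<pre><code class="language-python">cache = {}      # stands in for a distributed cache such as Redis
db_reads = []   # tracks which GETs actually hit the database

def post_to_supplier(order):
    return True  # assume the supplier answered with a 2XX for the happy path

def worker_step(order):
    if post_to_supplier(order):
        # Back-channel: the worker populates the cache itself, so the
        # expected flood of GETs never needs to touch the database.
        cache[order["id"]] = {"status": 200, "body": order}
    else:
        raise RuntimeError("trigger a rollback or notify the user")  # unhappy path

def get_order(order_id, db_lookup):
    hit = cache.get(order_id)
    if hit is not None:
        return hit
    db_reads.append(order_id)  # only cache misses pay the database cost
    return db_lookup(order_id)

order = {"id": "o-1", "qty": 5}
worker_step(order)
assert get_order("o-1", lambda oid: None)["status"] == 200
assert db_reads == []  # the GET was served entirely from the cache
</code></pre>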
<h3>Optimization warning</h3>
<p>If it is imperative that the endpoint report the saga as completed as quickly as possible, you may be tempted to try to complete it within the initial request. This gets us back into the concurrency problem, where you could have the worker and the API both trying to process the same outbox message. It is possible if you are doing some locking on the database, but honestly it just doesn't seem worth it to me.</p>
<h2>Conclusion</h2>
<p>It took a while to get there, but our young developer finally arrived at a robust API design. In this series we looked at a few subtle ways that things can go wrong. These failure modes are often overlooked when developers are used to dealing with low-volume loads, but can quickly become an issue as your load grows. We also saw how business processes can sometimes mask system errors, and so saw the importance of having good monitoring, metrics, and alerts for not just the health, but the proper operation of our systems.</p>
<p>The solutions presented in this post assume certain properties from your persistent storage, so it is important to think about how you are handling idempotence when selecting your database technology.</p>
<p>Finally, this design was really optimizing for resilience and eventual consistency of the system. Sometimes if speed of processing is more important, you may need to sacrifice some reliability. Unfortunately, when you are making those kinds of tradeoffs you are almost by definition dealing with high loads so... it depends.</p>
<h2>Summary</h2>
<p><strong>Problem:</strong> Duplicate calls</p>
<p><strong>Solutions:</strong> idempotency via unique key in an atomic commit</p>
<p><strong>Consequence:</strong> The database enforces no duplicates</p>
<h2>Resources</h2>
<ul>
<li><a href="https://www.techyourchance.com/client-generated-ids-vs-server-generated-ids/">A very basic discussion of client vs. server IDs</a></li>
<li><a href="https://devonburriss.me/reliability-with-intents/">Flexible design for outbox like saga</a></li>
<li><a href="https://tech.trello.com/sync-two-id-problem/">Interesting discussion of what happens when you can't use client side IDs</a></li>
</ul>https://devonburriss.me/reliable-apis-part-2/Reliable APIs - Part 22021-08-23T00:00:00+00:00Devon Burrisshttps://devonburriss.me/reliable-apis-part-2/<p>In the <a href="/reliable-apis-part-1">previous post</a> we saw how you can end up with duplicates if using a retry-policy on a call to a non-idempotent endpoint. In this post, we will look at correcting this and see a subtle way that this can go wrong.</p>
<!--more-->
<p>Posts in this series:</p>
<ol>
<li><a href="/reliable-apis-part-1">Exploring retries, retry implications, and the failure modes they are appropriate for</a></li>
<li>Using Idempotency-Key and a response cache</li>
<li><a href="/reliable-apis-part-3">The epic saga of client-side IDs and true idempotence</a></li>
</ol>
<p>When we last saw our young developer, they had learned a lesson about the indiscriminate use of retry policies. This led to adding some insightful telemetry to monitor when the system lands in an inconsistent state.</p>
<p>A good thing too! The e-commerce company our developer works at is expanding into another country and to cope with the increase in buying across 2 countries, they are automating the restocking. A sister team has been working with the data science team to develop an intelligent resupply service that will be making use of the supplier ordering API to automatically create orders. Currently, inconsistencies only happen once every week or two but with an increase in load, this will start getting even more annoying for both the development team and purchasers. Our young developer has raised that they want to have this fixed and stable before the automation kicks in.</p>
<p>As a reminder, this is the current design:</p>
<p><img src="../img/posts/2021/2021-08-22-10-38-55.png" alt="Current design" /></p>
<p>Let's see how our young developer is getting along...</p>
<h2>That idempotence thing</h2>
<p>So you stopped using XML and SOAP and started sending JSON, so you figured you had this REST stuff down. If the last few weeks have taught you anything, though, it is that there is far more to API design than the getting-started pages of web frameworks tell you. You do recall this idea of <em>idempotent</em> calls though and this seems like what you are looking for. Searching for solutions, the internet seems to be a dumpster fire of people arguing about whether POST should be idempotent or not. Going to the source and reading the POST section of the <a href="https://datatracker.ietf.org/doc/html/rfc7231#section-4.3.3">RFC</a> you decide on:</p>
<ul>
<li>Respond with <code>201 Created</code> if the resource does not exist</li>
<li>Respond with <code>303 See Other</code> if the resource already exists</li>
</ul>
<p>So apparently a POST can be idempotent. Regardless of the spec, this just seems like a good idea.</p>
<p>The more difficult question is, how to tell if a request is a duplicate? Apparently, the semantic way to handle this would be to use <a href="https://tools.ietf.org/id/draft-idempotency-header-01.html">Idempotency-Key</a>.<br />
The <code>Idempotency-Key</code> is a header you place in a request that uniquely identifies that request. So each create order request you send to your API will have a unique UUID. Now if a request fails, you can retry it with the same <code>Idempotency-Key</code> as the failed request.</p>
<p>For the API, our young developer comes up with the following design. The whole team is really excited about adding Redis to their stack as a cache. Not only will it be used as the <code>Idempotency-Key</code> cache but as a response cache in general.</p>
<p><img src="../img/posts/2021/2021-08-23-06-19-59.png" alt="With cache" /></p>
<p>Before servicing a request, the create order endpoint will check whether the <code>Idempotency-Key</code> is already in the cache and if it is, it will just return the cached response. If it is not in the cache, it will proceed with the rest of the call and, at the end, place the response in the cache.</p>
<p>Now that the endpoint is idempotent, you go ahead and re-enable that retry policy from the client-side.</p>
<h2>Not again!</h2>
<p>The day after deploying your new resiliency changes you get a call from one of the new stock purchasers, Leon. Leon is an older guy who wanted a change from warehousing, an area he had been working in for decades. He mentions that he has noticed some inconsistencies but wants to check them with you since he does not know these new systems. You smile to yourself because Leon does not seem very comfortable on the computer. He double-clicks everything and types with one finger. Leon brings up the application that shows the purchase orders created on our side. He also brings up the portal they use that shows them incoming deliveries from the supplier. It takes a while but eventually he puts these 2 screens next to each other. There are orders that have been created on our side and do not exist at the supplier. Not only that, but duplicates are back!<br />
Leon points out something else interesting. He noticed that his orders seem to be duplicated far more often than the other purchasers. He is worried he is doing something wrong since he knows he isn't great at this computer stuff.<br />
You are pretty sure you know what is wrong and you can't believe you made this mistake again. You explain to Leon that he does not need to double click the button but assure him that the fault is not his but rather yours. Leon not only found a bug earlier than everyone else but because he had checked at the supplier, he was able to fix the orders before deliveries were sent. Thank you, Leon!</p>
<h2>Quick fix</h2>
<p>You are pretty sure you know what is going on. Leon's double-clicking meant that sometimes a second request was making it into the endpoint before the first call had been completed and was cached. Now that you are thinking through it, the current design hardly adds any value at all from a resilience point of view. You are shocked. Annoyed with yourself because the reason you had not looked at this more critically before was that this was the advice of countless posts and libraries on the internet. Maybe people just don't make POST requests idempotent? Or the people giving the advice don't work on distributed systems? Maybe they just don't have telemetry telling them how often this goes wrong? Looking at yours, it indeed confirms Leon's findings. Apparently, you need to invest in even better metrics and alerts.</p>
<p>You implement some quick fixes. Firstly, you disable the retry policy. Again. Next, you add a quick change to the UI that disables the button until a response is received. That should take care of Leon's double-clicking.</p>
<p>Back to the drawing board.</p>
<h2>Analysis</h2>
<p>So what went wrong with our friend's design this time? Basically, concurrency makes everything just a little bit more complex. When walking through a sequence of steps in our program it can be difficult to think about what this means for other executions happening at the same time. The kind of bugs that can arise from this can be rather subtle and confusing.</p>
<p>Here is just one example of 2 requests hitting the endpoint before the cache has been updated.</p>
<p><a href="https://mermaid-js.github.io/mermaid-live-editor/edit/##eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG4gICAgQ2xpZW50LT4-K0FQSTogQ3JlYXRlIG9yZGVyIHJlcXVlc3QgW29yaWddXG4gICAgQVBJLT4-K0NhY2hlIDogQ2hlY2sgZm9yIElkZW1wb3RlbmN5LUtleSBbb3JpZ11cbiAgICBDYWNoZS0-Pi1BUEkgOiBObyBrZXkgZm91bmQgW29yaWddXG4gICAgQ2xpZW50LT4-QVBJOiBEdXBsaWNhdGUgY3JlYXRlIG9yZGVyIHJlcXVlc3QgW2R1cF1cbiAgICBBUEktPj4rQ2FjaGUgOiBDaGVjayBmb3IgSWRlbXBvdGVuY3ktS2V5IFtkdXBdXG4gICAgQ2FjaGUtPj4tQVBJIDogTm8ga2V5IGZvdW5kIFtkdXBdXG4gICAgQVBJLT4-REIgOiBQZXJzaXN0IHJlY29yZCBbb3JpZ11cbiAgICBBUEktPj5TdXBwbGllciBBUEkgOiBTZW5kIG9yZGVyIFtvcmlnXVxuICAgIEFQSS0-PkRCIDogUGVyc2lzdCByZWNvcmQgW2R1cF1cbiAgICBBUEktPj5TdXBwbGllciBBUEkgOiBTZW5kIG9yZGVyIFtkdXBdXG4gICAgQVBJLT4-Q2FjaGUgOiBVcGRhdGUgY2FjaGUgW29yaWddXG4gICAgQVBJLT4-Q2FjaGUgIDogVXBkYXRlIGNhY2hlIFtkdXBdXG4gICAgQVBJLS0-PkNsaWVudDogT3JkZXIgY3JlYXRlZCByZXNwb25zZSBbb3JpZ11cbiAgICBBUEktLT4-LUNsaWVudDogT3JkZXIgY3JlYXRlZCByZXNwb25zZSBbZHVwXVxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9"><img 
src="https://mermaid.ink/img/eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG4gICAgQ2xpZW50LT4-K0FQSTogQ3JlYXRlIG9yZGVyIHJlcXVlc3QgW29yaWddXG4gICAgQVBJLT4-K0NhY2hlIDogQ2hlY2sgZm9yIElkZW1wb3RlbmN5LUtleSBbb3JpZ11cbiAgICBDYWNoZS0-Pi1BUEkgOiBObyBrZXkgZm91bmQgW29yaWddXG4gICAgQ2xpZW50LT4-QVBJOiBEdXBsaWNhdGUgY3JlYXRlIG9yZGVyIHJlcXVlc3QgW2R1cF1cbiAgICBBUEktPj4rQ2FjaGUgOiBDaGVjayBmb3IgSWRlbXBvdGVuY3ktS2V5IFtkdXBdXG4gICAgQ2FjaGUtPj4tQVBJIDogTm8ga2V5IGZvdW5kIFtkdXBdXG4gICAgQVBJLT4-REIgOiBQZXJzaXN0IHJlY29yZCBbb3JpZ11cbiAgICBBUEktPj5TdXBwbGllciBBUEkgOiBTZW5kIG9yZGVyIFtvcmlnXVxuICAgIEFQSS0-PkRCIDogUGVyc2lzdCByZWNvcmQgW2R1cF1cbiAgICBBUEktPj5TdXBwbGllciBBUEkgOiBTZW5kIG9yZGVyIFtkdXBdXG4gICAgQVBJLT4-Q2FjaGUgOiBVcGRhdGUgY2FjaGUgW29yaWddXG4gICAgQVBJLT4-Q2FjaGUgOiBVcGRhdGUgY2FjaGUgW2R1cF1cbiAgICBBUEktLT4-Q2xpZW50OiBPcmRlciBjcmVhdGVkIHJlc3BvbnNlIFtvcmlnXVxuICAgIEFQSS0tPj4tQ2xpZW50OiBPcmRlciBjcmVhdGVkIHJlc3BvbnNlIFtkdXBdXG4gICAgIiwibWVybWFpZCI6eyJ0aGVtZSI6ImRlZmF1bHQifSwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" alt="Concurrent requests to cache" /></a></p>
<p>As you can see in the sequence diagram, the first request comes in and then the second. The second check against the cache happens before the first request completes and updates the cache.</p>
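<p>Because the cache check and the cache update are separate steps, the race can even be reproduced deterministically, without threads, just by interleaving the steps by hand:</p>
<pre><code class="language-python">cache = {}
sent_to_supplier = []

def check_cache(key):
    return cache.get(key)

def process(key):
    # persist + call supplier (elided here), then finally update the cache
    sent_to_supplier.append(key)
    cache[key] = "order-created-response"

key = "idem-123"
assert check_cache(key) is None   # original request: cache miss
assert check_cache(key) is None   # duplicate arrives before the first completes: also a miss
process(key)                      # both requests now proceed...
process(key)
assert sent_to_supplier == [key, key]   # ...and the supplier receives the order twice
</code></pre>
<p>This is the classic check-then-act race: the check-and-update pair is not atomic, so two requests can both observe the "not seen before" state.</p>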
<p>We also still have the problem that a call to the supplier API failing would leave our database in an inconsistent state. Depending on what went wrong we could retry, but what if the process was terminated at that point? A duplicate call could come in again.<br />
What if we updated the cache before the calls? Well then we could end up with either database or external API call failing and from the outside it seeming like it had succeeded.</p>
<h2>Conclusion</h2>
<p>A cache that is not transactional with the state changes inside the service does not really move us closer to a resilient API design. In the next post, we will finally look at a design that does improve reliability.</p>
<h2>Summary</h2>
<p><strong>Problem:</strong> Duplicate calls</p>
<p><strong>Solutions:</strong> idempotency via a response cache</p>
<p><strong>Consequence:</strong> Duplicate calls because cache update is not atomic</p>
<blockquote>
<p>Concurrency is hard</p>
</blockquote>
<h2>Resources</h2>
<ul>
<li><a href="https://stripe.com/blog/idempotency">Stripe blog on Idempotency</a></li>
<li><a href="https://repl.ca/what-is-the-idempotency-key-header/">Intro to Idempotency-Key header</a></li>
</ul>https://devonburriss.me/reliable-apis-part-1/Reliable APIs - Part 12021-08-22T00:00:00+00:00Devon Burrisshttps://devonburriss.me/reliable-apis-part-1/<p>Resiliency is more than just slapping a retry policy on a client and hoping it can handle transient errors. It is building systems whose operations always end in a valid state across the whole system. This does not mean that all operations WILL BE successful, just that they are always handled in an expected way, every time.</p>
<!--more-->
<p>Posts in this series:</p>
<ol>
<li>Exploring retries, retry implications, and the failure modes they are appropriate for</li>
<li><a href="/reliable-apis-part-2">Using Idempotency-Key and a response cache</a></li>
<li><a href="/reliable-apis-part-3">The epic saga of client-side IDs and true idempotence</a></li>
</ol>
<p>To explore this, let's step into a young developer's shoes and consider a simple piece of functionality.</p>
<blockquote>
<p>A stock purchaser is using a system where they look at some analytics on a stock item and decide if they need to purchase more stock and how much. They indicate the quantity on the client application and click the "Order now" button. This sends a POST request to the backend to create an order with the supplier.</p>
</blockquote>
<h2>The naive design</h2>
<p>For this, you coded up the following. A simple call to a backend API that deserializes the request, checks against some predefined rules and looks up the best supplier, persists the order, and finally sends the purchase order off to the supplier API.</p>
<p><img src="../img/posts/2021/2021-08-18-06-04-52.png" alt="Starting design" /></p>
<p>Everything seems to be working well. However, while gathering requirements for a new feature, a stock purchaser mentions that sometimes ordering fails. They then click the button again and everything seems to work fine.</p>
<p><img src="../img/posts/2021/2021-08-18-06-14-00.png" alt="Network errors" /></p>
<h2>The naive fix</h2>
<p>Looking through some logs you notice some HTTP timeouts. You decide to add retry logic to the client in case that call fails. For good measure, you add retry policies to the database calls as well as the external supplier API call.</p>
<p><img src="../img/posts/2021/2021-08-20-07-35-45.png" alt="Naive implementation of retry policies" /></p>
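<p>For illustration, a retry policy of this kind boils down to something like the following deliberately naive retry-on-any-exception loop (a sketch, not any particular library):</p>
<pre><code class="language-python">import time

def retry(times, delay, call):
    """Retry `call` up to `times` attempts, sleeping `delay` seconds between them."""
    for attempt in range(1, times + 1):
        try:
            return call()
        except Exception:
            if attempt == times:
                raise           # out of attempts: surface the failure
            time.sleep(delay)

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise TimeoutError("transient network error")
    return "order created"

assert retry(5, 0, flaky) == "order created"
assert len(attempts) == 3  # two failures, then success
</code></pre>
<p>Note that every attempt re-sends the entire request, which is the seed of the trouble that follows.</p>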
<p>After a few days, the stock purchasers report that they are indeed no longer getting the error that requires them to resubmit the order.</p>
<p><em>A few weeks later...</em></p>
<p>The purchaser contacts you in a panic. The warehouse has reported receiving multiple shipments of the same product, with exactly the same quantity, but as separate shipments. According to the warehouse, this happens now and again but recently the frequency has increased, as has the number of duplicate shipments, with as many as 5 duplicates. 5! Sh!t! That is the exact number of retries in your retry policy!</p>
<h2>Lesson learned</h2>
<p>Feeling a bit bad about the trouble you caused for your stakeholder, you take a step back and remove the retry policy from the client call and the external supplier API call. You reckon it is safe to leave on the query to get supplier data since that does not change state. The persist seems ok too since the database call succeeds or fails reliably.</p>
<p>Sufficiently chastened by your mistake, you decide to add some metrics and tracing to the operations. On top of that, you add some alerting on top of failed calls to the supplier API. Lastly, you add some exception handling to failed supplier calls so that the entry in the database is removed. For now, you will just let your stakeholder know when this happens so they can reorder.</p>
<p><img src="../img/posts/2021/2021-08-22-10-34-49.png" alt="Retries only on DB" /></p>
<p>After a few weeks, it seems your changes are acceptable since this only happens occasionally.</p>
<h2>Analysis</h2>
<p>Our young developer learned some important lessons. Let's go over what happened.</p>
<p>Firstly, our young developer fell for the first fallacy of distributed systems, ala <em>"The network is reliable"</em>.</p>
<p>In my experience, this is a common one for developers to fall into when they are dealing with low volume traffic. The time between failures is long, and if there is a user observing an intermittent failure, they will often just retry.</p>
<p>Adding a retry policy was a good instinct but unfortunately, it requires your API to have particular characteristics. We will get to these characteristics in later posts but first, let's look at each step in the operation, and what effect a retry has.</p>
<p><a href="https://mermaid-js.github.io/mermaid-live-editor/edit/##eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG4gICAgbG9vcCAxLiBDbGllbnQgQVBJIGNhbGxcbiAgICAgICAgQ2xpZW50LT4-K0FQSTogQ3JlYXRlIG9yZGVyIHJlcXVlc3RcbiAgICAgICAgXG4gICAgICAgIGxvb3AgMi4gRmV0Y2ggZnJvbSBEQlxuICAgICAgICBBUEktPj5EYXRhYmFzZTogRmV0Y2ggc3VwcGxpZXIgZGF0YVxuICAgICAgICBlbmRcbiAgICAgICAgXG4gICAgICAgIGxvb3AgMy4gUGVyc2lzdCB0byBEQlxuICAgICAgICBBUEktPj5EYXRhYmFzZTogUGVyc2lzdCBPcmRlclxuICAgICAgICBlbmRcbiAgICAgICAgXG4gICAgICAgIGxvb3AgNC4gRXh0ZXJuYWwgQVBJIGNhbGxcbiAgICAgICAgQVBJLS0-PlN1cHBsaWVyIEFQSTogQ3JlYXRlIG9yZGVyIGF0IHN1cHBsaWVyXG4gICAgICAgIGVuZFxuXG4gICAgICAgIEFQSS0tPj4tQ2xpZW50OiBPcmRlciBjcmVhdGVkIHJlc3BvbnNlXG4gICAgZW5kIiwibWVybWFpZCI6eyJ0aGVtZSI6ImRlZmF1bHQifSwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ"><img src="https://mermaid.ink/img/eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG4gICAgbG9vcCAxLiBDbGllbnQgQVBJIGNhbGxcbiAgICAgICAgQ2xpZW50LT4-K0FQSTogQ3JlYXRlIG9yZGVyIHJlcXVlc3RcbiAgICAgICAgXG4gICAgICAgIGxvb3AgMi4gRmV0Y2ggZnJvbSBEQlxuICAgICAgICBBUEktPj5EYXRhYmFzZTogRmV0Y2ggc3VwcGxpZXIgZGF0YVxuICAgICAgICBlbmRcbiAgICAgICAgXG4gICAgICAgIGxvb3AgMy4gUGVyc2lzdCB0byBEQlxuICAgICAgICBBUEktPj5EYXRhYmFzZTogUGVyc2lzdCBPcmRlclxuICAgICAgICBlbmRcbiAgICAgICAgXG4gICAgICAgIGxvb3AgNC4gRXh0ZXJuYWwgQVBJIGNhbGxcbiAgICAgICAgQVBJLS0-PlN1cHBsaWVyIEFQSTogQ3JlYXRlIG9yZGVyIGF0IHN1cHBsaWVyXG4gICAgICAgIGVuZFxuXG4gICAgICAgIEFQSS0tPj4tQ2xpZW50OiBPcmRlciBjcmVhdGVkIHJlc3BvbnNlXG4gICAgZW5kIiwibWVybWFpZCI6eyJ0aGVtZSI6ImRlZmF1bHQifSwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" alt="Retry loops around each step of the operation" /></a></p>
<h3>1. Client API call</h3>
<p>Putting a retry around the entire operation is problematic because our developer friend was not being very specific about what went wrong. As we will see in the next few paragraphs, a retry may be appropriate or not. In part 2 of this series of posts, we will start to make our API endpoint idempotent. As we will see then, even that is more difficult than it seems at first glance.</p>
<p>What are some of the failure modes the client can experience calling the API though?</p>
<ul>
<li>The URI for the endpoint is wrong. Retries will not help here.</li>
<li>The client sends a bad request. No amount of retries will help.</li>
<li>The service is not up. Retries may help if it comes up in a timely fashion.</li>
<li>The service takes too long to respond and the request times out. A retry may not be appropriate since we do not know if the request was processed. It also may exacerbate high load if that is why the service took too long.</li>
<li>The service errors for an unknown reason. A retry may or may not be appropriate.</li>
<li>The service dies mid-request. We don't know how far the processing of the request got, so a retry may not be appropriate.</li>
</ul>
<p>Let's drill into the various steps that occur due to the API call and see what can go wrong.</p>
<h3>2. Fetch from DB</h3>
<p>The fetching of supplier data from the database is the easiest. If this fails we cannot continue.<br />
A nuanced use of HTTP codes and <code>Retry-After</code> header could allow you to easily indicate to the client that they could retry too.<br />
Since this call changes no state, we could retry this query if it fails due to intermittent network availability.</p>
<h3>3. Persist to DB</h3>
<p>When just considering an atomic database call, we can be fairly confident that the call will succeed or fail in a reliable way.</p>
<p>Something that is often not taken into account is the process prematurely terminating just before, during, or after a database call. From the outside, these are near impossible to distinguish. Your machine dying or restarting is something you should always try to cater for. Depending on how you are deploying, a deployment could kill a service that is servicing traffic. And given a high enough volume, it is guaranteed that a request will be in the state where the database call has succeeded but the external API call has not yet happened. Solving this problem will be covered later in this series, but it is important to note that the client retrying will persist a new record, leaving the current one in an unfinished state where its order was never sent to the supplier.</p>
<h3>4. External API call</h3>
<p>The external API call is the most fraught since how it behaves is not under our control. There is almost no failure mode here that would warrant a retry unless the supplier API explicitly indicated that we could, such as with a <code>503 - Service Unavailable</code> response and the <code>Retry-After</code> header set. An incorrect endpoint or other <code>4XX</code> error is not going to be fixed by retrying. Any ambiguous <code>5XX</code> error response leaves us uncertain about whether we are safe to retry, as retrying may create a duplicate order.</p>
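<p>These failure modes can be summarised as a small decision function. This is my rough classification of the cases above, not an exhaustive retry policy:</p>
<pre><code class="language-python">def should_retry(status, has_retry_after=False, idempotent=False):
    """Rough classification of HTTP failure modes for retrying."""
    if 400 <= status < 500:
        return False        # our request is wrong; retrying cannot help
    if status == 503 and has_retry_after:
        return True         # the server explicitly invited a retry
    if 500 <= status < 600:
        return idempotent   # ambiguous: only safe if the call is idempotent
    return False

assert should_retry(404) is False
assert should_retry(503, has_retry_after=True) is True
assert should_retry(500) is False                  # create-order POST: not idempotent
assert should_retry(500, idempotent=True) is True  # a pure GET: safe to retry
</code></pre>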
<h2>Conclusion</h2>
<p><img src="../img/posts/2021/2021-08-22-10-38-55.png" alt="Final design" /></p>
<p>In this post, we looked at some of the ways that different calls can fail, and looked at whether retrying was appropriate. Our developer friend learned some important lessons. The most important improvement was the improved telemetry and alerting to get insight into when the system is ending up in an inconsistent state. Unfortunately, these kinds of failures are a lot more prevalent in systems than most think. The actual problem is that visibility into systems is usually so poor (or no one is looking) that no one is aware of how often these types of errors actually occur. In a lot of cases, other parts of the business just absorb the inconsistency by having mitigating processes.</p>
<p>The network is not reliable but simply retrying often has unintended consequences. In the <a href="/reliable-apis-part-2">next post</a>, we will start to improve our design so that we can retry with more confidence by trying to make the endpoint idempotent.</p>
<p>I hope this discussion was insightful. If you think I missed anything important for a discussion at this level, please let me know in the comments.</p>
<h2>Summary</h2>
<p><strong>Problem:</strong> Transient network errors</p>
<p><strong>Solutions:</strong> Retry policy on network calls</p>
<p><strong>Consequence:</strong> Duplicate calls</p>
<blockquote>
<p>Only retry idempotent operations</p>
</blockquote>
<h2>Resources</h2>
<ul>
<li><a href="https://datatracker.ietf.org/doc/html/rfc7231#section-4.3.3">POST method</a></li>
<li><a href="https://datatracker.ietf.org/doc/html/rfc7231#section-6.6">Error Codes</a></li>
</ul>https://devonburriss.me/azfunc-prometheus-endpoint/Capturing custom business metrics in Azure Functions2021-02-01T00:00:00+00:00Devon Burrisshttps://devonburriss.me/azfunc-prometheus-endpoint/<p>For years now I have noticed a blind-spot when using serverless functions and observability platforms like Datadog. Custom metrics. Observability tools are constantly improving their integrations with cloud providers but are still not on par with having access to the OS like with VMs or containers. In this post I explore a little proof-of-concept I did to get custom metrics out of Azure Functions.</p>
<!--more-->
<h2>How it started</h2>
<p>A couple years back I explored solving this with a <a href="https://github.com/dburriss/DatadogAzureFunctions">custom binding</a> to Datadog but it was a naive implementation that just called Datadog's HTTP API. About a year ago I had the idea of scraping these metrics using Prometheus but at the time I couldn't find a library that easily allowed me to "speak Promethean". The .NET libraries I found didn't seem to allow you to create or parse Prometheus logs, instead handling things from end-to-end. Usually as middleware.</p>
<h2>Clearing the path</h2>
<p>So about 7 months back I created a small library called <a href="https://github.com/dburriss/fennel">Fennel</a> which has a very simple purpose. Parse Prometheus text to objects, and turn these metric objects into valid Prometheus text. This gave me the building block I needed to easily try my experiment.</p>
<p>You can find my <a href="/prometheus-parser-fennel">post on Fennel here</a>.</p>
<h2>Taking the steps</h2>
<p><img src="../img/posts/2020/azfunc_prom_setup.jpg" alt="Design for scraping metrics from Azure Functions" /></p>
<p>So my idea is fairly simple. In any function that needs to emit metrics, use an Azure Function binding to write them to some store. I chose an Azure Storage Queue for simplicity but I need to post a disclaimer at this point:</p>
<blockquote>
<p>This is demo code hacked together in an evening and does not consider the following very important production quality points:</p>
<ol>
<li>Longer persistence of the metrics</li>
<li>Multiple consumers of the metrics</li>
<li>Enforcing ordering if more than 1 function instance is running</li>
<li>Resilience and sending custom metrics only if state has changed</li>
<li>This ignores a lot of the more complex things Prometheus exporters do</li>
</ol>
<p>The code will be available on my <a href="https://github.com/dburriss/Fennel.MetricsDemo">GitHub</a>.</p>
</blockquote>
<p>As a reminder, the Prometheus format is a text based format.</p>
<pre><code class="language-text"># This is a comment but the following 2 have meaning
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027 1395066363000
http_requests_total{method="post",code="400"} 3 1395066363000
</code></pre>
<p>For this demo I have a small function on a timer trigger to emit metrics.</p>
<pre><code class="language-fsharp">// The builder ensures that a metric has HELP and TYPE information when written to a string
// For implementation: https://github.com/dburriss/Fennel.MetricsDemo/blob/master/Fennel.MetricsDemo/PrometheusLogBuilder.fs
let metricsBuilder =
    PrometheusLogBuilder()
        .Define("sale_count", MetricType.Counter, "Number of sales that have occurred.")

// Function for generating some simple metrics
[<FunctionName("MetricsGenerator")>]
let metricsGenerator([<TimerTrigger("*/6 * * * * *")>]myTimer: TimerInfo, [<Queue("logs")>] queue : ICollector<string>, log: ILogger) =
    let msg = sprintf "Generating sales at: %A" DateTime.Now
    log.LogInformation msg
    let sales = Random().Next(0, 50) |> float
    let metric = Line.metric (MetricName "demo_sale_count") (MetricValue.FloatValue sales) [] (Some(Timestamp DateTimeOffset.UtcNow))
    queue.Add(Line.asString metric)
    log.LogInformation (sprintf "Sales : %f" sales)
</code></pre>
<p>It places a Prometheus text representation of a <code>demo_sale_count</code> event on a queue called <code>logs</code>.</p>
<p>Next, I create a HTTP Azure Function to serve as the <code>/metrics</code> endpoint to be scraped by Prometheus. It pulls the messages off the <code>logs</code> queue and builds up Prometheus text.</p>
<pre><code class="language-fsharp">[<FunctionName("metrics")>]
let metrics ([<HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)>]req: HttpRequest) (log: ILogger) =
    async {
        log.LogInformation("Fetching prometheus metrics...")
        // setup queue client
        let queueName = "logs"
        let connectionString = Environment.GetEnvironmentVariable("AzureWebJobsStorage", EnvironmentVariableTarget.Process)
        let queueClient = QueueClient(connectionString, queueName)
        if queueClient.Exists().Value then
            // receive messages
            let messages = queueClient.ReceiveMessages(Nullable<int>(32), Nullable<TimeSpan>(TimeSpan.FromSeconds(20.))).Value
            log.LogInformation(sprintf "Received %i logs." messages.Length)
            // return message as text
            let processMessage (msg : QueueMessage) =
                let txt = Encoding.UTF8.GetString(Convert.FromBase64String(msg.MessageText))
                queueClient.DeleteMessage(msg.MessageId, msg.PopReceipt) |> ignore
                txt
            let metrics = messages |> Array.map processMessage
            // build up Prometheus text
            let responseTxt = metricsBuilder.Build(metrics)
            // return as Prometheus HTTP content
            let response = ContentResult()
            response.Content <- responseTxt
            response.ContentType <- "text/plain; version=0.0.4"
            response.StatusCode <- Nullable<int>(200)
            return response :> IActionResult
        else
            return NoContentResult() :> IActionResult
    } |> Async.StartAsTask
</code></pre>
</code></pre>
<p>Nothing too interesting here other than the <code>ContentType</code> being <code>text/plain; version=0.0.4</code>, as per the Prometheus specification.</p>
<h2>How it's going</h2>
<p>With the metrics endpoint up, all that is left is to <a href="/local-prometheus-setup">set up a local Prometheus instance</a> to call our Azure Function.</p>
<p>Looking at the Prometheus UI at <code>http://localhost:9090/graph</code>, we can query for <code>sale_count</code> and see the metrics coming in:</p>
<p><img src="../img/posts/2020/prometheus_sale_count.png" alt="Prometheus graph" /></p>
<p>At work we use Datadog and it turns out the <a href="https://www.datadoghq.com/blog/monitor-prometheus-metrics/">Datadog agent has support for scraping a Prometheus endpoint</a>. Once we have the <a href="/prometheus-datadog-agent">Datadog agent setup</a> we can see the metrics flowing into Datadog.</p>
<p><img src="../img/posts/2021/azurefunctiongraph.png" alt="Datadog metric from Prometheus" /></p>
<h2>Conclusion</h2>
<p>This was a quick proof-of-concept to see whether this approach was worth pursuing. I intend to take it further by running this in Azure and having a container with an agent reach out for metrics. It is unfortunate that the workarounds described here are necessary at this point, but if we want a view on business metrics, we need to get creative. What I do like about this approach is that it leverages Azure Function bindings as well as Prometheus' scraping model, so not much had to be re-invented here. I am sure in the future we will see better baked-in solutions for this but for now we work with what we have.</p>
<p><span>Photo by <a href="https://unsplash.com/@_ggleee?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Gleb Lukomets</a> on <a href="https://unsplash.com/s/photos/flame?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></span></p>https://devonburriss.me/prometheus-datadog-agent/Prometheus Datadog Agent2021-01-31T00:00:00+00:00Devon Burrisshttps://devonburriss.me/prometheus-datadog-agent/<p>In the <a href="/local-prometheus-setup">previous post</a> we looked at setting up a local Prometheus container to scrape metrics to test the observability setup of an application locally. A lot of companies I have worked with in recent years are using hosted observability solutions like Datadog. Although Datadog is typically a push based collector, a little known feature is that the agent can scrape from a Prometheus endpoint. In this post we will look at a simple setup for this locally.</p>
<!--more-->
<p>To get started you need to have the <a href="https://docs.datadoghq.com/agent/">Datadog agent installed</a>.<br />
Next, you will need to edit <em>openmetrics.d/conf.yaml</em>. On my Mac this is found in <em>/opt/datadog-agent/etc/conf.d/openmetrics.d/conf.yaml</em>.<br />
Optionally, you can launch the agent's Web UI. On my install it is at <a href="http://127.0.0.1:5002/">http://127.0.0.1:5002/</a>.</p>
<ol>
<li>Click <strong>Checks > Manage Checks</strong></li>
<li>If <em>openmetrics.d/conf.yaml</em> is not available, select <strong>Add Check</strong> from the select box (NOT <em>prometheus.d/conf.yaml</em>)</li>
<li>Configure the YAML values shown below.
You can find the <a href="https://docs.datadoghq.com/integrations/openmetrics/">docs here</a></li>
</ol>
<pre><code class="language-yaml">init_config:

## Every instance is scheduled independent of the others.
instances:

    ## @param prometheus_url - string - required
    ## The URL where your application metrics are exposed by Prometheus.
    #
  - prometheus_url: http://localhost:7071/api/metrics

    ## @param namespace - string - required
    ## The namespace to be prepended to all metrics.
    #
    namespace: azure.functions

    ## @param metrics - list of strings - required
    ## List of metrics to be fetched from the prometheus endpoint, if there's a
    ## value it'll be renamed. This list should contain at least one metric.
    #
    metrics:
      - demo_*
</code></pre>
<p>It is important that you specify which metrics you want to scrape. For this reason it is useful to prefix your metrics with an app name. In the example above I have updated my metrics to all start with <em>demo_</em>.<br />
Once done editing the <em>conf.yaml</em>, restart your Datadog agent.</p>
<p>Using the Web UI to check that metrics are flowing from your application is useful at this point. The most detail can be seen by navigating to <strong>Status > Collector</strong>. Scroll down until you see the Open Metrics section. Check that the metric sample count is increasing (values do not update without a refresh).</p>
<p>Once values are flowing to Datadog, you can go and view them in Datadog.</p>
<p><img src="../img/posts/2021/azurefunctiongraph.png" alt="sale demo graph" /></p>
<h2>Conclusion</h2>
<p>Datadog allows you to have a mix of push and pull metrics if you have applications that were built with different strategies. This is a really nice touch, as it avoids monitoring applications in different places. In my next post I will be showing how you can use this to monitor custom events in Azure Functions from Datadog.</p>https://devonburriss.me/local-prometheus-setup/Local Prometheus setup2021-01-30T00:00:00+00:00Devon Burrisshttps://devonburriss.me/local-prometheus-setup/<p>It is useful to have a local Prometheus instance running to test the instrumentation of your application. If you are running the application on your machine, you need to make sure the Prometheus container can talk to the host machine. This is a short post detailing this setup.</p>
<!--more-->
<h2>Configuration</h2>
<p>Firstly, let's create a Prometheus configuration with the needed <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config">scrape config</a>.</p>
<pre><code class="language-yaml"># A list of scrape configurations.
scrape_configs:
  # The job name assigned to scraped metrics by default.
  - job_name: 'fennel-metricsdemo'
    # How frequently to scrape targets from this job.
    scrape_interval: 5s
    # The HTTP resource path on which to fetch metrics from targets.
    metrics_path: "/api/metrics"
    # List of labeled statically configured targets for this job.
    static_configs:
      # The targets specified by the static config.
      - targets: ['host.docker.internal:7071']
        # Labels assigned to all metrics scraped from the targets.
        labels:
          app: 'demo-app'
</code></pre>
<p>Since we are running this locally, you need to target your local machine. With Docker on Mac I had to target <code>host.docker.internal</code> and my application (an Azure Function) is running locally on port 7071.</p>
<p>Now that we have our configuration, we can use this to start our Docker container, mounting the configuration as a volume.</p>
<pre><code class="language-bash">docker run --rm -it -p 9090:9090 -v /path/to/your/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
</code></pre>
<p>Prometheus should now be up and running, and if your application is emitting metrics, you can see them by navigating to <code>http://localhost:9090/graph</code>.</p>
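<p>Prometheus' expression browser accepts any PromQL query, not just a bare metric name. For example, a query like the following (a sketch; substitute whatever counter name your application emits) plots the per-second rate of the counter over the last five minutes:</p>
<pre><code class="language-text">rate(sale_count[5m])
</code></pre>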
<p><img src="../img/posts/2020/prometheus_sale_count.png" alt="Prometheus sale_count graph" /></p>
<h2>Conclusion</h2>
<p>I will be making use of this in an upcoming post I plan to release soon. In my <a href="/prometheus-datadog-agent">next post</a> though I will look at using Datadog instead of a Prometheus server. I hope you find this useful.</p>
<p><span>Photo by <a href="https://unsplash.com/@_ggleee?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Gleb Lukomets</a> on <a href="https://unsplash.com/s/photos/flame?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></span></p>https://devonburriss.me/meaning-of-meditation/The meaning of meditation2021-01-09T00:00:00+00:00Devon Burrisshttps://devonburriss.me/meaning-of-meditation/<h2>Skills of meditation</h2>
<p>Meditation. Often described as clearing your mind. Following your breath. Relaxing. Although these are the most common descriptions, they are the least interesting activities of meditation. They do not capture the rich depth of the techniques found in the many contemplative traditions throughout history. During this weird time of a global pandemic, I believe these techniques can be especially helpful.</p>
<!--more-->
<p>A clarifying analogy for meditation is that of physical exercise. One way of comparing exercises is by their discipline: weightlifting vs running, yoga vs cross-fit. If you are looking for the benefits of these exercises, the style is less important than the specific exercises you do within them. All of these styles can be good for you, but doing the exercises from each that give the biggest boost to your physical fitness is the best way of approaching them.
Meditation is the same. When comparing the different styles of meditation, the common theme is living a good life. What exactly a good life is can differ between them but this just gives us a buffet of techniques to choose from while trying to improve our own lives.</p>
<p>Let's look at some of the "mental muscles" you may want to target and how these techniques, developed over thousands of years, can help. At this point I want to say that I will be providing examples from meditation traditions I am more familiar with. This is not to say other traditions do not have techniques for cultivating your thoughts in similar ways. I prefer to stick to subjects I have personal experience with.</p>
<p>I came to meditation in a rather roundabout way. Around 2009 I was running 60+ km a week. I heard about some Tai Chi classes nearby and thought I would try it out as a kind of cross-training for my legs that didn't cause so much injury. Although the Taoist Tai Chi and Kung Fu school had its own form of meditation, my teacher also taught mindfulness. She encouraged me to try it and was always willing to talk about my experience and give pointers. Mindfulness appealed to me as it made no esoteric claims about the universe like Taoism did.</p>
<p>My formal meditation practice was an on-and-off affair for the next few years. I would have a good run for a few weeks or months, then get busy and stop. Then I would get stressed, and go back to it, seeking relief. And looking back at it I had a very different view on it than I do now. No matter the level of understanding though, meditation has something to offer.</p>
<p>Then in 2014, I moved to a new city. I not only left my friends, family, and fiance behind; I left my faith. I had grown up in a religious home and had believed my whole life but years of asking questions had led me to an answer. I could no longer believe without sufficient evidence. As chance would have it, I decided to listen to a lecture series on meditation on the 9-hour drive to my new life. To live a good life without faith.</p>
<h3>Reducing suffering</h3>
<p>I think we can all agree that a good place to start with living a good life is by reducing suffering. The word suffering can be misleading because everyone has their own understanding of it. Suffering can be the mental, emotional, or physical anguish that occurs from traumatic events in your life. It can be from well-known sources like stress. Another lesser-acknowledged source is change. Everything changes, and holding on to anything can be an exercise in futility, and a source of great suffering. In 2014 I learned how change can cause suffering. That too was impermanent though, for I finally cultivated the habit of meditation.</p>
<p>One way that formal meditation helps alleviate suffering is by training you to recognise when you are lost in thought. We all know that feeling of remaining angry because you replay in your mind how someone wronged you. The technique often referred to as Mindfulness is the practice of observing a thought, and then bringing attention back to something like the breath. The breath is an easy object to focus on because it is always with you. Once you are comfortable with the practice, anything can be an object of focus, even thoughts themselves.
A common misconception of meditation is that you are trying to keep your mind free of thought. The real benefit is that each time you recognise a thought, you get to do a "rep" and practice letting go of that thought. This is the practice. Notice the distraction and let it fade away. We cannot, in fact, control our thoughts; we can only control where we place our attention.</p>
<p>Epictetus, a well known Stoic philosopher, summarised the source of our suffering well:</p>
<blockquote>
<p>"What upsets people is not things themselves, but their judgements about these things".</p>
</blockquote>
<p>The idea Epictetus is putting forth here is also a core theme in Buddhism. That we suffer because we cling to things. Both the Stoics and Buddhists are often maligned on this point. Accused of not caring. The key point here is that clinging to things, good or bad, will cause suffering because everything is impermanent and we are shaped by what we place our attention on.</p>
<p>And these are the types of meditations of the Stoics. They do not sit and breathe but instead meditate on the wisdom passed down to them, and their thoughts. They visualise how they want to approach situations with wisdom and courage.
One example of a Stoic practice is described by Seneca:</p>
<blockquote>
<p>"Set aside a certain number of days during which you shall be content with the scantiest and cheapest fare, with a coarse and rough dress, saying to yourself the while, ‘Is this the condition that I feared?".</p>
</blockquote>
<p>Many people fear losing everything, or not having every need met. By putting yourself in the situation and reflecting on it, the fear will often dissipate.
This brings me to one last example of a fear that many have. Fear of dying. There is a technique in some Buddhist traditions called Corpse meditation. Even back in South Africa, I could not find a corpse so I had to content myself with imagining myself dead and slowly decaying. Although it seems macabre, it familiarises you with death in a way that reduces the unknown, and so also the fear.
In my apartment in 2014, confronted really for the first time by my mortality, this practice was invaluable. I can attest to its effectiveness.
This practice has another benefit which I will touch on later.</p>
<p>I will end this section with an amusing story.
A General of an army conquers a town and then hears about a Zen master who lives nearby. The General goes to the Zen master and on not being afforded the reverence he feels he deserves, his anger rises and he draws his sword. <em>"Do you not realise you stand before a man who could run you through with this sword without blinking an eye?"</em> shouts the self-important General. Unperturbed, the Zen master responds, <em>"Do you not realise you are standing before a man who could be run through without blinking an eye?"</em>.</p>
<h3>Being kind to others</h3>
<p>A common theme of philosophies that are serious about how to live well is that of treating others well.</p>
<blockquote>
<p>"Call to mind the doctrine that rational creatures have come into the world for the sake of one another, and that tolerance is a part of justice" - Marcus Aurelius</p>
</blockquote>
<p>In Buddhism, there is a technique for actively cultivating feelings of compassion for yourself and the world around you. It is the practice of Metta, otherwise known as loving-kindness. In this practice, you generate a feeling of compassion. You start with those you already have this feeling for and wish them happiness, freedom from suffering, and fulfilment in life. You then expand that to others. This may feel awkward initially but recall that our thoughts about people are just running through neural pathways. Reinforcing these pathways in positive ways can lead to new ways of thinking and feeling.</p>
<p>While in lockdown, this can be an enriching technique to apply to keep your sense of connection with others and may inspire you to reach out to people you might otherwise not.</p>
<h3>Finding meaning</h3>
<blockquote>
<p>"Meditate often on the interconnectedness and mutual interdependence of all things in the universe. For in a sense, all things are mutually woven together and therefore have an affinity for each other." - Marcus Aurelius</p>
</blockquote>
<p>We are social animals, even the more introverted ones like myself. The richness of experience is not only determined by our actions but who we share those actions with. This is easy to take for granted until you are thrust into isolation by the world being in the grip of a pandemic.</p>
<p>If you can, get out into open spaces. Walk around and greet any stranger you can (from a distance). Be mindful during these times and be thankful for the things you can do, the people you see, and the occasional smile you get back. This simple practice has made a huge difference for me during this pandemic and I am grateful for the possibility to move around.</p>
<blockquote>
<p>"All you need are these: certainty of judgment in the present moment; action for the common good in the present moment; and an attitude of gratitude in the present moment for anything that comes your way." - Marcus Aurelius</p>
</blockquote>
<p>Developing a practice of being thankful daily for every little thing you can muster will provide a bulwark against the bad things that come your way.</p>
<p>I said I would come back to the meditation on death. By considering that any day could be your last, it can bring into focus what is important and what your energy should be spent on.</p>
<blockquote>
<p>“Let us prepare our minds as if we’d come to the very end of life. Let us postpone nothing. Let us balance life’s books each day. The one who puts the finishing touches on their life each day is never short of time.” - Seneca</p>
</blockquote>
<p>With our options of things to do drastically diminished these days, it is easy to fritter away our time on things of little substance. Instead of visiting friends and family, we scroll endlessly on social media. Rather than go to the gym, we binge-watch shows.
Not only can meditating on your own mortality make you more cognizant of what you are spending your time on, it can help you to appreciate the moment.</p>
<h3>Conclusion</h3>
<p>I hope I have convinced you that there is more to meditation than following the next breath. To be clear, this practice of Mindfulness is an important part of the whole. It provides the foundation of focus and awareness of distraction that is critical for many other techniques.</p>
<p>Taking that foundation and layering other practices from other traditions can provide a holistic collection of techniques that add practical knowledge. This knowledge can be honed into practical skills.</p>
<p>One last thing. Just like the fear of death can be lessened by analysis of it, so the journey toward happiness is an analysis of your mind. The more you observe how it works, the more you can steer it towards your goals.</p>
<p>Once you have control, only then can you decide where to go.</p>
<h3>Further reading</h3>
<p><a href="https://dailystoic.com/">Daily Stoic</a><br />
<a href="https://samharris.org/how-to-meditate/">Waking Up</a><br />
To get started easily there are many apps like Waking Up, Head Space, and 10 Percent Happier that will take you through guided meditations.</p>
<p><span>Photo by <a href="https://unsplash.com/@fcornish?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Faye Cornish</a> on <a href="https://unsplash.com/s/photos/wisdom?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></span></p>https://devonburriss.me/prometheus-parser-fennel/Creating a Prometheus parser: Fennel2020-12-24T00:00:00+00:00Devon Burrisshttps://devonburriss.me/prometheus-parser-fennel/<p>A year back I ran into the need for a library that provided a model for creating valid Prometheus log lines. The libraries I looked at sent these metrics for export rather than giving me access to the model or allowing me to create the corresponding log string. I had been wanting to play around with FParsec for a while so this seemed like a perfect opportunity to give it a try.</p>
<!--more-->
<blockquote>
<p>This post is part of <a href="https://sergeytihon.com/2020/10/22/f-advent-calendar-in-english-2020/">FsAdvent 2020</a>.</p>
</blockquote>
<p>The result was a library called <a href="https://github.com/dburriss/fennel">Fennel</a>. It can parse Prometheus text to objects, and turn these metric objects into valid Prometheus text.</p>
<p>This was my first time using a library to do a custom parser. In the past when I had needed to parse text I had used a state machine and consumed a character at a time. The idea here is that depending on the state, you expect certain characters to follow. See <a href="https://stackoverflow.com/questions/50896567/fsharp-sequence-processing-with-state/50918243#50918243">here</a> for an example. It turns out this is not too different to how you use a library like FParsec.
Although there is a bit of a learning curve, and not many resources outside of the docs, using <a href="http://www.quanttec.com/fparsec/">FParsec</a> was fun. I am sure there are 100 ways I could improve the parser (feedback welcome... preferably polite) but I am happy with the end result.</p>
<h2>FParsec</h2>
<p>This post is not meant to be a tutorial on FParsec but in case you have never used it, let's look at some of the things it allows you to do.</p>
<p>FParsec gives you a boatload of parsers that can be combined to make new parsers. Parser factory functions like <code>satisfy</code> give you back a <code>Parser<></code> for characters matching the given predicate. The library also gives you some operators. Below, <code><|></code> means try the first parser and, if that fails, try the second.</p>
<p>The example below also uses <code>manyChars2</code>, which uses the first parser for the first char and then the second for all following chars. This matters here because a Prometheus metric name must start with an ASCII letter or an underscore (not a number).</p>
<pre><code class="language-fsharp">let underscoreOrColon = satisfy (fun c -> c = '_' || c = ':')
let ascii_alpha_numeric = (asciiLetter <|> digit)
let pname = manyChars2 (asciiLetter <|> underscoreOrColon) (ascii_alpha_numeric <|> underscoreOrColon)
</code></pre>
<p>These parsers can then be combined in other ways. The code below combines the <code>pname</code> parser with the "zero or more" whitespace parser, but because the period is on the left of the <code>.>></code> operator, it keeps only the result of <code>pname</code> (both <code>.>></code> and <code>.>>.</code> are available). The <code>|>></code> operator returns a parser that takes the result of the parser on the left and applies the function on the right.</p>
<pre><code class="language-fsharp">let private metric_name = pname.>> ws0 |>> MetricName
</code></pre>
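<p>To see one of these parsers in action, FParsec's <code>run</code> function executes a parser against a string. A small sketch using the <code>pname</code> parser defined above (the input string is illustrative):</p>
<pre><code class="language-fsharp">open FParsec

// parsing stops at '{' since it is not a valid metric-name character
match run pname "http_requests_total{method=\"post\"}" with
| Success (name, _, _) -> printfn "Parsed name: %s" name // Parsed name: http_requests_total
| Failure (err, _, _) -> printfn "Parse failed: %s" err
</code></pre>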
<p>This is just a tiny taste of how you can build up a complex parser from simpler ones. Combining these you can start to build up a grammar for your parsers. Next we look at building this further with our Prometheus parser.</p>
<h2>Prometheus parser</h2>
<p>As a reminder, the Prometheus format is a text-based format.</p>
<pre><code class="language-text"># This is a comment but the following 2 have meaning
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027 1395066363000
http_requests_total{method="post",code="400"} 3 1395066363000
</code></pre>
<p>You can read up on the exposition format <a href="https://prometheus.io/docs/instrumenting/exposition_formats/">here</a>.</p>
<p>The model looks like this:</p>
<pre><code class="language-fsharp">// details and types excluded for brevity
type MetricLine = {
    Name : MetricName
    Labels : Label list
    Value : MetricValue
    Timestamp : Timestamp option
}

type Line =
    | Help of (MetricName*DocString)
    | Comment of string
    | Type of (MetricName*MetricType)
    | Metric of MetricLine
    | Blank
</code></pre>
<p>You can see the full model on <a href="https://github.com/dburriss/fennel/blob/master/src/Fennel/Model.fs">the GitHub repository</a>.</p>
<p>Any Prometheus log line can be <em>Help</em> information, a normal <em>comment</em>, <em>type information</em>, a metric, or a blank line. From a parsing point of view, I categorize comments, TYPE lines, and HELP lines all as comments, since they share <code>#</code> as the first character. This is not reflected in the model.</p>
<p>So let's break down the Prometheus text and how it relates to the model above.</p>
<ol>
<li>A line in some Prometheus text can be <em>blank</em> for a Prometheus log <em>line</em></li>
<li>A Prometheus log line can be a <em>comment</em> or a <em>metric</em></li>
<li>A comment can be just a normal <em>comment</em>, <em>TYPE</em> information, or <em>HELP</em> information</li>
<li>A <em>metric</em> requires a <em>name</em> and <em>value</em></li>
<li>A metric can optionally have <em>labels</em> and a <em>timestamp</em></li>
</ol>
<p>Let's look at a few select parsers and see how they match with our description above. We will focus on the comment line of TYPE and how it fits in.</p>
<pre><code class="language-fsharp">// TYPE from point 3
let typeLine = (``TYPE``>>.metric_name.>>.metric_type) |>> Line.Type
let comment = comment_prefix >>.ws0 >>.(typeLine <|> helpLine <|> commentLine)
// Point 2
let line = ws0 >>.(comment <|> metric)
// Point 1
ws0 >>.(line <|> emptyLine)
</code></pre>
<h2>Fennel</h2>
<p>So that was a little under the hood of Fennel. What does it look like from the outside?</p>
<p>We can turn Prometheus log text into strongly typed models.</p>
<pre><code class="language-fsharp">open Fennel
let input = """
# Finally a summary, which has a complex representation, too:
# HELP rpc_duration_seconds A summary of the RPC duration in seconds.
# TYPE rpc_duration_seconds summary
rpc_duration_seconds{quantile="0.01"} 3102
rpc_duration_seconds{quantile="0.05"} 3272
rpc_duration_seconds{quantile="0.5"} 4773
rpc_duration_seconds{quantile="0.9"} 9001
rpc_duration_seconds{quantile="0.99"} 76656
rpc_duration_seconds_sum 1.7560473e+07
rpc_duration_seconds_count 2693
"""
let lines = Prometheus.parseText input
</code></pre>
<p>Each of these lines can match a specific line type:</p>
<pre><code class="language-fsharp">match line with
| Help (name, doc) -> printfn "Help line %A" (name, doc)
| Comment txt -> printfn "Comment line %s" txt
| Type (name, t) -> printfn "Type line %A" (name, t)
| Metric m -> printfn "Metric line %A" m
| Blank -> printfn "Blank line"
</code></pre>
<p>And we can create an object that represents a Prometheus log line.</p>
<pre><code class="language-fsharp">open Fennel
let prometheusString = Prometheus.metric "http_requests_total" 1027. [("method","post");("code","200")] DateTimeOffset.UtcNow
</code></pre>
<h2>Conclusion</h2>
<p>So that was a little peek into creating my first parser.
Have you used FParsec? If not, was this helpful?<br />
Do you have plenty of experience with it? What can I improve?<br />
Leave a comment or create an issue or PR on the repo.</p>
<p><span>Photo by <a href="https://unsplash.com/@_ggleee?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Gleb Lukomets</a> on <a href="https://unsplash.com/s/photos/flame?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></span></p>https://devonburriss.me/converting-fsharp-csharp/Converting between F# and C# types2020-05-05T00:00:00+00:00Devon Burrisshttps://devonburriss.me/converting-fsharp-csharp/<p>Every now and again in F# you run into needing to convert a <code>Seq</code> to something like <code>IList<></code>. Depending on how often you do this, and if you are like me, you will need to search for this or try different things for longer than you would care to admit. So if nothing else, here I am capturing for myself how to tackle some of these conversions.</p>
<!--more-->
<h2>TL;DR</h2>
<p>For the sake of this post being a reference post, I am going to post this class which captures a lot of the conversions. Here I try to capture the C# type alongside the F# type most closely related to it. In most cases, this is a <code>'T array</code>, since it is equivalent to a <code>T[]</code> array in C#. I do encourage you to read the rest of the article at least once, as I will try to break down the types a bit so in the future it should be easier to figure out the conversions yourself.</p>
<p>For many of these, you will need to convert to <code>seq</code> and then to the F# type you want to work with. If that is not acceptable, perhaps do it yourself with a loop.</p>
<table>
<thead>
<tr>
<th>From</th>
<th>To</th>
<th>Conversion</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>IEnumerable<int></code></td>
<td><code>int seq</code></td>
<td>alias for</td>
</tr>
<tr>
<td><code>List<int></code></td>
<td><code>ResizeArray</code></td>
<td>alias for</td>
</tr>
<tr>
<td><code>IEnumerable</code></td>
<td><code>seq</code></td>
<td><code>Seq.cast</code></td>
</tr>
<tr>
<td><code>IEnumerable</code></td>
<td><code>int array</code></td>
<td><code>Seq.cast |> Seq.toArray</code></td>
</tr>
<tr>
<td><code>IEnumerable</code></td>
<td><code>int list</code></td>
<td><code>Seq.cast |> Seq.toList</code></td>
</tr>
<tr>
<td><code>ICollection<int></code></td>
<td><code>int seq</code></td>
<td><code>:> seq<_></code></td>
</tr>
<tr>
<td><code>IList<int></code></td>
<td><code>int seq</code></td>
<td><code>:> seq<_></code></td>
</tr>
<tr>
<td><code>int []</code></td>
<td><code>int array</code></td>
<td>alias for</td>
</tr>
<tr>
<td><code>System.Array</code></td>
<td><code>obj seq</code></td>
<td><code>System.Linq.Enumerable.OfType<obj></code></td>
</tr>
<tr>
<td><code>seq</code>/<code>array</code>/<code>list</code></td>
<td><code>ResizeArray</code></td>
<td><code>ResizeArray</code> ctor</td>
</tr>
<tr>
<td><code>int seq</code></td>
<td><code>IEnumerable</code></td>
<td><code>:> IEnumerable</code></td>
</tr>
<tr>
<td><code>int array</code></td>
<td><code>ICollection<int></code></td>
<td><code>:> ICollection<int></code></td>
</tr>
<tr>
<td><code>ResizeArray</code></td>
<td><code>ICollection<int></code></td>
<td><code>:> ICollection<int></code></td>
</tr>
<tr>
<td><code>ResizeArray</code></td>
<td><code>IList<int></code></td>
<td><code>:> IList<int></code></td>
</tr>
<tr>
<td><code>ResizeArray</code></td>
<td><code>int seq</code></td>
<td><code>:> seq<_></code></td>
</tr>
<tr>
<td><code>ResizeArray</code></td>
<td><code>int array</code></td>
<td><code>.ToArray()</code> instance method</td>
</tr>
<tr>
<td><code>f: unit -> int</code></td>
<td><code>Func<int></code></td>
<td><code>Func<int>(f)</code> ctor</td>
</tr>
<tr>
<td><code>Func<int></code></td>
<td><code>unit -> int</code></td>
<td><code>fun () -> f.Invoke()</code></td>
</tr>
</tbody>
</table>
<blockquote>
<p>Types that can be cast with <code>:> seq<_></code>, like <code>ICollection<></code> and <code>IList<></code>, can be used directly with the <code>Seq</code> module functions like <code>toList</code>, since those interfaces implement <code>IEnumerable<></code>.</p>
</blockquote>
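<p>To make that concrete, here is a minimal, self-contained sketch (my own example, not from the original post) feeding an <code>ICollection<int></code> straight into <code>Seq</code> functions:</p>

```fsharp
open System.Collections.Generic

// ICollection<int> implements IEnumerable<int>, so the Seq module
// functions accept it without any explicit Seq.cast
let col = ResizeArray([ 1; 2; 3 ]) :> ICollection<int>
let asList = Seq.toList col
let asArray = Seq.toArray col
```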
<h3>Example</h3>
<pre><code class="language-fsharp">// This is for demonstration purposes only
type CSharpyType() =
    // seq<int>
    let mutable enumerableTProp = Seq.empty
    // seq<obj>
    let mutable enumerableProp = Seq.empty
    // int []
    let mutable arrayTProp = Array.empty
    // obj []
    let mutable arrayProp = Array.empty
    // ResizeArray<int> (System.Collections.Generic.List<int>)
    let mutable listTProp = ResizeArray()
    // int []
    let mutable ilistTProp = Array.empty
    // int []
    let mutable icollectionTProp = Array.empty
    // unit -> DateTimeOffset
    let mutable dtFun = fun () -> System.DateTimeOffset.UtcNow
    // Convert between expressions: http://www.fssnip.net/ts/title/F-lambda-to-C-LINQ-Expression
    member _.IEnumerableTProp
        with get() : System.Collections.Generic.IEnumerable<int> = enumerableTProp
        and set(v : System.Collections.Generic.IEnumerable<int>) = enumerableTProp <- v
    member _.IEnumerableProp
        with get() : System.Collections.IEnumerable = enumerableProp :> System.Collections.IEnumerable
        and set(v : System.Collections.IEnumerable) = enumerableProp <- v |> Seq.cast
    member _.ArrayTProp
        with get() : int[] = arrayTProp
        and set(v : int[]) = arrayTProp <- v
    member _.ArrayProp
        with get() : System.Array = arrayProp :> System.Array
        and set(v : System.Array) = arrayProp <- v |> System.Linq.Enumerable.OfType<obj> |> Seq.toArray
    member _.ListTProp
        with get() : System.Collections.Generic.List<int> = listTProp
        and set(v : System.Collections.Generic.List<int>) = listTProp <- v
    member _.ICollectionTProp
        with get() : System.Collections.Generic.ICollection<int> = icollectionTProp :> System.Collections.Generic.ICollection<int>
        and set(v : System.Collections.Generic.ICollection<int>) = icollectionTProp <- v |> Seq.toArray
    member _.IListTProp
        with get() : System.Collections.Generic.IList<int> = ilistTProp :> System.Collections.Generic.IList<int>
        and set(v : System.Collections.Generic.IList<int>) = ilistTProp <- v |> Seq.toArray
    member _.FuncProp
        with get() : System.Func<System.DateTimeOffset> = System.Func<System.DateTimeOffset>(dtFun)
        and set(f : System.Func<System.DateTimeOffset>) = dtFun <- fun () -> f.Invoke()
</code></pre>
<h2>Breakdown</h2>
<p>Well done for pushing past just copy-pasting the code you need from above. We will go through the F# types to see which interfaces they implement, and whether they have corresponding types in the .NET BCL.</p>
<h3>System.Collections.Generic.IEnumerable<_></h3>
<p>So as a <code>type</code>, <a href="https://github.com/fsharp/fsharp/blob/3bc41f9e10f9abbdc1216e984a98e91aad351cff/src/fsharp/FSharp.Core/prim-types.fs#L3287"><code>seq<'T></code> is an alias for <code>IEnumerable<'T></code> in FSharp.Core</a>.</p>
<pre><code class="language-fsharp">// FSharp.Core
type seq<'T> = IEnumerable<'T>
</code></pre>
<p>If you are just getting started with F#, you may have noticed that it can be a lot more particular about its types than C#. It can be easy to forget that the following actually works: you can assign an <code>'a list</code> or <code>'a array</code> to a <code>seq</code>.</p>
<pre><code class="language-fsharp">let mutable ss = seq { 1; 2 }
ss <- [1;2]
ss <- [|1;2|]
</code></pre>
<p>This is because <code>seq</code> is <code>IEnumerable<'T></code> and <code>'a list</code> and <code>'a array</code> implement <code>IEnumerable<'T></code>.</p>
<pre><code class="language-fsharp">// FSharp.Core
type List<'T> =
    | ([]) : 'T list
    | (::) : Head: 'T * Tail: 'T list -> 'T list
    interface System.Collections.Generic.IEnumerable<'T>
    interface System.Collections.IEnumerable
    interface System.Collections.Generic.IReadOnlyCollection<'T>
    interface System.Collections.Generic.IReadOnlyList<'T>
</code></pre>
<p>As it turns out, this gets us a very long way in interacting with C#, since <code>IEnumerable</code> and <code>IEnumerable<'T></code> are pretty ubiquitous.</p>
<pre><code class="language-fsharp">let csharp = CSharpyType()
csharp.IEnumerableTProp <- seq { 0..10 }
csharp.IEnumerableTProp <- [0..10]
csharp.IEnumerableTProp <- [|0..10|]
</code></pre>
<p>So, working with <code>IEnumerable<'T></code> in F# is as simple as using <code>seq</code>.</p>
<h3>System.Collections.IEnumerable</h3>
<p>For working with <a href="https://docs.microsoft.com/en-us/dotnet/api/system.collections.ienumerable?view=netcore-3.1">System.Collections.IEnumerable</a> we can make use of a function in the <code>Seq</code> module: <code>Seq.cast</code>. It takes a <code>System.Collections.IEnumerable</code> and turns it into a <code>seq</code>, which is a more natural form to work with in F#.<br />
In terms of assignment, <code>'a seq</code>, <code>'a list</code>, and <code>'a array</code> can all be assigned to it, since they all implement <code>IEnumerable</code>.</p>
<pre><code class="language-fsharp">let csharp = CSharpyType()
csharp.IEnumerableProp <- seq { 0..10 }
csharp.IEnumerableProp <- [0..10]
csharp.IEnumerableProp <- [|0..10|]
</code></pre>
<p>It is worth noting we can also just use them in the usual constructs like:</p>
<pre><code class="language-fsharp">for i in (csharp.IEnumerableProp) do
    printfn "i: %A" i
</code></pre>
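<p>Here is a self-contained sketch of the reading side (the values are my own, hypothetical ones, not from the demo type above): an untyped <code>ArrayList</code>, as an older C# API might return, pulled back into typed F# with <code>Seq.cast</code>:</p>

```fsharp
open System.Collections

// A non-generic collection holding boxed ints
let legacy = ArrayList([| box 1; box 2; box 3 |]) :> IEnumerable
// Seq.cast turns IEnumerable into seq<'T>, casting each element
let ints = legacy |> Seq.cast<int> |> Seq.toList
```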
<h3>int []</h3>
<p>With a typed array, we can just use <code>'T array</code> <a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/arrays">since they are the same</a> across F# and the .NET BCL.</p>
<pre><code class="language-fsharp">let csharp = CSharpyType()
csharp.ArrayTProp <- [|0..10|]
//csharp.ArrayTProp <- seq {0..10} // Compile error: This expression was expected to have type 'int []' but here has type 'seq<int>'
</code></pre>
<p>Make use of whatever you need from the <code>Array</code> module.</p>
<h3>System.Array</h3>
<p>The above is still true when using <code>System.Array</code>.</p>
<pre><code class="language-fsharp">let csharp = CSharpyType()
csharp.ArrayProp <- [|0..10|]
</code></pre>
<p>When assigning an instance of this type to an F# value, you will need to give it an element <code>Type</code>. This can be done using a static method from <code>Linq</code> to get an <code>IEnumerable<'T></code>, i.e. a <code>seq</code>, like so: <code>arr |> System.Linq.Enumerable.OfType<obj></code>. From there you can make use of the functions in the <code>Seq</code> module.</p>
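<p>As a runnable sketch of that (a hypothetical example of mine, not from the post's demo type):</p>

```fsharp
// A System.Array, as a weakly-typed C# API might expose it
let arr = [| 1; 2; 3 |] :> System.Array
// OfType gives us an IEnumerable<obj>, i.e. obj seq, to work with
let total =
    arr
    |> System.Linq.Enumerable.OfType<obj>
    |> Seq.sumBy (fun o -> o :?> int)
```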
<h3>System.Collections.Generic.List<_></h3>
<p>It can be confusing initially since <code>list</code> in F# is not the same as <code>List<></code> in C#. The equivalent of a <a href="https://github.com/fsharp/fsharp/blob/3bc41f9e10f9abbdc1216e984a98e91aad351cff/src/fsharp/FSharp.Core/prim-types.fs#L3129">C# list in F# is <code>ResizeArray</code></a>.</p>
<pre><code class="language-fsharp">// FSharp.Core
type ResizeArray<'T> = System.Collections.Generic.List<'T>
</code></pre>
<p>You can convert F# types to a <code>ResizeArray</code>.</p>
<pre><code class="language-fsharp">csharp.ListTProp <- [0..10] |> ResizeArray
csharp.ListTProp <- [|0..10|] |> ResizeArray
csharp.ListTProp <- seq { 0..10 } |> ResizeArray
</code></pre>
<p>And of course remember that <code>List<'T></code> implements <code>IEnumerable<'T></code> and <code>ICollection<'T></code>, which we will look at next.</p>
<h3>System.Collections.Generic.ICollection<_> & IList<_></h3>
<p>Remember that <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/generics/generics-and-arrays">array</a> and <a href="https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1?view=netcore-3.1"><code>List<'T></code></a> aka <code>ResizeArray</code> already implement <code>IEnumerable<'T></code>, <code>ICollection<'T></code>, and <code>IList<'T></code>.</p>
<pre><code class="language-fsharp">csharp.ICollectionTProp <- [|0..10|]
csharp.ICollectionTProp <- [|0..10|] |> ResizeArray
csharp.IListTProp <- [|0..10|]
csharp.IListTProp <- [|0..10|] |> ResizeArray
</code></pre>
<h3>ResizeArray</h3>
<p>One thing you might be left wondering is how to convert from a <code>ResizeArray</code> back to more natural F# types.</p>
<pre><code class="language-fsharp">let resizeArr = [0..10] |> ResizeArray
let xs = resizeArr :> seq<_> // Implements IEnumerable<T> so we can just cast
let arr = resizeArr.ToArray() // ResizeArray / List<T> has a `ToArray` method. This is an O(n) activity.
let lst = xs |> Seq.toList // Once we have a seq, we can use Seq functions
</code></pre>
<h3>Bonus: System.Func<_></h3>
<p>Another kind of conversion I often find myself doing when working with C# APIs is between <code>Func</code> and F# functions. Converting an F# function to a <code>Func</code> is as simple as passing it into the <code>Func</code> constructor if need be. We can often simply assign an F# function to a <code>Func</code> and the compiler will handle the conversion.</p>
<pre><code class="language-fsharp">csharp.FuncProp <- (fun () -> System.DateTimeOffset.UnixEpoch)
let f = fun () -> csharp.FuncProp.Invoke()
</code></pre>
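<p>The same conversion happens automatically at .NET method call sites: the compiler turns an F# lambda into the expected delegate, which is why LINQ methods can be called directly. A small self-contained sketch (my own example, not from the post):</p>

```fsharp
open System.Linq

// The lambda is converted to Func<int, bool> by the compiler at the call site
let evens = [ 1; 2; 3; 4 ].Where(fun x -> x % 2 = 0) |> Seq.toList

// Going the other way, wrap Invoke to get a plain F# function back
let f = System.Func<int>(fun () -> 42)
let back : unit -> int = fun () -> f.Invoke()
```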
<h2>Conclusion</h2>
<p>So that is my potted run-through of converting between F# and C# types. This was meant to be more of a reference than a post that teaches or tells a story, so I hope the lack of continuity was not too off-putting.</p>
<h1>Reliability with Intents</h1>
<p><a href="https://devonburriss.me/reliability-with-intents/">https://devonburriss.me/reliability-with-intents/</a> · 2019-12-05 · Devon Burriss</p>
<p>If you are using any kind of messaging architecture to notify the outside world of changes inside your system, you may have noticed a reliability problem. Unless you are using distributed transactions to ensure atomic operations, there is an ordering problem between updating state and notifying the rest of the world. In this post, I will look at this problem and a possible solution.</p>
<!--more-->
<blockquote>
<p>This post is part of <a href="https://sergeytihon.com/2019/11/05/f-advent-calendar-in-english-2019/">#FsAdvent 2019</a>. PS. THIS IS NOT PRODUCTION WORTHY CODE! FOR DEMO PURPOSES ONLY!</p>
</blockquote>
<blockquote>
<p>UPDATE: Posting this on Twitter confirmed that I had, as I expected, rediscovered an existing pattern. The example I show here is basically the Transactional Outbox. I will say that the pattern shown here can also function more like a local orchestrator that forms part of a choreography-based saga.</p>
</blockquote>
<h2>The atomic problem</h2>
<p>Oftentimes, when doing an operation in an application, I see a call that puts some kind of message on a queue (or topic) to notify other systems that an event occurred.</p>
<pre><code class="language-fsharp">// save to database
// it could then fail
// put on queue
person
|> Data.createPerson dbConnection None
|> tap (fun _ -> failwith "Failed before sending message") // <-- simulate application crash
|> Result.bind (Message.personCreated queue)
</code></pre>
<p>What happens though if the application crashes right after saving some changes to the database? Your application has changed state but has not, and will not, notify the rest of the world about that change. What if other business processes rely on this?</p>
<p><img src="/img/posts/2019/intents-1.png" alt="persist state then send" /></p>
<blockquote>
<p>If you are thinking that the chances of this happening are vanishingly small, let me float this idea. A 99.99% uptime still means almost an hour of downtime a year. On a high load system in the cloud (chaos monkey as a service), systems can disappear more often than you think.</p>
</blockquote>
<p>I have seen businesses be unaware of this communication loss for months, with the result that customer service calls were routed to teams dependent on the lost messages. The problem is that both sides assume every message is sent, never considering loss. Only once these numbers were monitored did the problem become apparent.</p>
<p>So back to the problem. Of course, reversing the order does not help.</p>
<p><img src="/img/posts/2019/intents-2.png" alt="send then persist state" /></p>
<p>Now you are notifying the world about a change that never happened.</p>
<h2>What is your intention?</h2>
<p>I will mention a few more sophisticated variations in the conclusion but the solution is fairly simple. Separate the intention of sending the notification from the actual sending.</p>
<p>F# discriminated unions give us a nice way to define our intention, since an intent is essentially a small state machine.</p>
<pre><code class="language-fsharp">// domain type
type Person = {
    id: string
    name: string
    email: string
}
// Here the type of case could be the entity, command, or the message to be sent.
// Whatever makes the most sense.
type IntentOfPersonCreated =
    | Pending of Person
    | Complete of Person
</code></pre>
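<p>These two cases form a tiny state machine. As a self-contained sketch (the types are repeated so the snippet runs on its own, and <code>markSent</code> is a hypothetical helper of mine, not from the demo repo):</p>

```fsharp
type Person = { id: string; name: string; email: string }

type IntentOfPersonCreated =
    | Pending of Person
    | Complete of Person

// Pending moves to Complete once the notification is sent;
// Complete is terminal
let markSent intent =
    match intent with
    | Pending person -> Complete person
    | Complete _ as c -> c
```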
<p>We can then save the intention to send the message in a transaction with the state change that is prompting the notification.</p>
<pre><code class="language-fsharp">// save to database with intent
// intent puts on queue
use transaction = dbConnection.BeginTransaction()
let txn = Some transaction
person
|> Data.createPerson dbConnection txn
|> Result.map (fun p -> Data.createPersonIntent dbConnection txn (Pending p))
transaction.Commit()
</code></pre>
<p>Don't get too hung up on what this code is doing. The important part here is that <code>createPerson</code> and <code>createPersonIntent</code> are both called using the same transaction.</p>
<p>Finally, you need to process all persisted intents.</p>
<pre><code class="language-fsharp">let handleIntent connection queue (id, intent) =
    // handle each state of the intent
    match intent with
    | Pending person ->
        Message.personCreated queue person |> ignore
        Data.markCreatePersonIntentDone connection id (Complete person) |> ignore
        printfn "%A intent sent" person
    | Complete _ -> failwith "These should not be queried"

let processIntents (dbConnection: DbConnection) queue =
    let intentsR = Data.getCreatePersonIntents dbConnection
    match intentsR with
    | Error ex -> raise ex
    | Ok intents -> intents |> Seq.iter (handleIntent dbConnection queue)
</code></pre>
<p>Note the state changes in <code>handleIntent</code> where the message is sent and the new state of the <strong>intent</strong> is persisted back. If you expanded the states that these can land in, you could potentially move through multiple states. This would allow for a kind of local orchestrator, in a choreography-based saga.</p>
<p>Now as long as you have a process that is regularly running through and processing the <strong>intents</strong>, you can guarantee that as soon as all infrastructure is healthy, all notifications will be sent at least once.</p>
<p><img src="/img/posts/2019/intents-3.png" alt="transactional persistence of state and intention" /></p>
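<p>To see why replaying gives at-least-once delivery, here is an in-memory simulation (entirely hypothetical; no database or queue involved): every run sends whatever is still pending, completed intents are never re-sent, and a crash between runs only delays delivery.</p>

```fsharp
type Intent =
    | Pending of id: int
    | Done of id: int

let mutable store = [ Pending 1; Done 2; Pending 3 ]
let mutable sent : int list = []

// One pass over the stored intents: "send" pending ones, then persist Done
let processIntents () =
    store <-
        store
        |> List.map (function
            | Pending id ->
                sent <- sent @ [ id ] // "send" the notification
                Done id               // persist the state change
            | Done id -> Done id)

processIntents () // sends intents 1 and 3
processIntents () // idempotent: nothing left to send
```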
<h2>Implementation ideas</h2>
<p>All the DEMO code is <a href="https://github.com/dburriss/intent-blog">available on my GitHub</a> but I wanted to talk about a few implementation details and what you may want to do differently.</p>
<p>This is the table I am storing the <strong>intents</strong> in.</p>
<pre><code class="language-sql">CREATE TABLE IF NOT EXISTS intents (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    iscomplete INTEGER NOT NULL DEFAULT 0,
    intenttype TEXT NOT NULL,
    intent BLOB NOT NULL
);
</code></pre>
<ul>
<li>I am using <code>iscomplete</code> to filter out the <strong>intents</strong> I no longer need to process.</li>
<li><code>intenttype</code> allows me to use this table for multiple <strong>intents</strong> and treat each differently.</li>
<li><code>intent</code> is a JSON string of the serialized <strong>intent</strong>.</li>
</ul>
<p>For production, you will likely want to add some indexes. Another thought I had was a partition key that could be used to process the intents from multiple consumers. This way you could scale out consumers even if the order was important for related <strong>intents</strong>, with a consumer per partition key.</p>
<p>You can check out the usage of this on the <a href="https://github.com/dburriss/intent-blog">GitHub</a> repository, specifically <code>Data.fs</code> but the following code should give a sufficient peek under the hood to get you going.</p>
<pre><code class="language-fsharp">let createIntent (connection: #DbConnection) (transaction: #DbTransaction option) (intent: string) (type': string) =
    let data = [ ("@intent", box intent); ("@intenttype", box type') ] |> dict |> fun d -> DynamicParameters(d)
    let sql = "INSERT INTO intents (intent,intenttype) VALUES (@intent,@intenttype);"
    execute connection sql data transaction

let createPersonIntent (connection: #DbConnection) (transaction: #DbTransaction option) (intent: IntentOfPersonCreated) =
    let intent' = intent |> JsonConvert.SerializeObject
    createIntent connection transaction intent' "create-person"
</code></pre>
<h2>Conclusion</h2>
<p>Of course, increasing the reliability of your system comes at the cost of a bit of added complexity, as well as a latency penalty for the outgoing notifications. I will say that on top of the reliability increase, you also get a fairly good audit log without having moved to Event Sourcing (no I am not saying auditing alone is a good reason to do ES).</p>
<p>Another useful, related design choice is collecting events as your code executes. If you are using a functional style of programming, always returning events is the way to go. If you are using a more imperative style with classic DDD techniques, an aggregate root is a good place to accumulate these events. Erik Heemskerk and I worked together, and he has a great <a href="https://www.erikheemskerk.nl/ddd-persistence-recorded-event-driven-persistence/">post describing this technique</a>.</p>
<p>I did want to acknowledge that the processing of the intents does have some challenges that I have not covered in this post. You want to try to avoid having multiple workers pulling the same kind of <strong>intents</strong> or the number of duplicate messages will explode. Since EXACTLY ONCE message delivery using a push mechanism is a pipe dream, you need to cater for duplicate messages. Having a single instance processing means it can easily go down, so monitoring and restarts are important for the health of your system. A product like <a href="https://www.hangfire.io/">Hangfire</a> may be useful here, or scheduled serverless functions. Your mileage may vary.</p>
<p>Finally, I did want to also point out a <a href="https://www.youtube.com/watch?v=FkDZw9HmwQY&list=FLtCKfk3-Xz9K1kCkvT_v6aQ">great talk of Erik's</a> where he talks about turning this around so consumers come get the events from you. If you want to send out notifications you can write the consumer of your event feed that then notifies... or just tell people to come and fetch and be done with all this headache.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://microservices.io/patterns/data/saga.html">Saga pattern</a></li>
<li><a href="https://microservices.io/patterns/data/transactional-outbox.html">Transactional Outbox</a></li>
</ol>
<h2>Credits</h2>
<ul>
<li>Photo by Jens Lelie on <a href="https://unsplash.com/photos/u0vgcIOQG08">Unsplash</a></li>
</ul>
<h1>Canopy from a FSX Script</h1>
<p><a href="https://devonburriss.me/canopy-from-fsx/">https://devonburriss.me/canopy-from-fsx/</a> · 2018-12-15 · Devon Burriss</p>
<p>Recently I found myself doing a very repetitive task that entailed copying values one at a time off a page, navigating to the next page, then repeating. I would spend 2 hours automating 1 hour of work if said work is sufficiently boring, even if I may never need the automation again. I enjoy coding; I do not enjoy copy-pasting. So I wondered if it was even possible to run Canopy from an F# FSX script file. It turns out it is.</p>
<!--more-->
<h2>F# Scripting</h2>
<p>In case you are new to F#, let us briefly touch on what an FSX file is. F# code can be placed into <code>.fs</code> files in a project and compiled to DLLs. This is how you would write a console application, Windows Service, or a Web Application. Another option, which is great for experimenting, is using <code>.fsx</code> files (and nowadays C# has <code>.csx</code> as well). These are F# scripting files that can be run as standalone scripts using <strong>FSI</strong> (F# Interactive).</p>
<pre><code class="language-powershell">fsi .\basic.fsx
</code></pre>
<p>This requires <code>Fsi.exe</code> be on your <strong>PATH</strong>. For more information see <a href="https://docs.microsoft.com/en-us/dotnet/fsharp/tutorials/fsharp-interactive/">the docs</a>.</p>
<p>Worth mentioning is <a href="http://ionide.io/">Ionide Project's</a> great support for running script files, as well as working with <a href="https://fsprojects.github.io/Paket/">PAKET</a> which we will not go into in detail.</p>
<h2>Setup</h2>
<p>So the first thing you will need is a way to pull down the necessary Nuget packages. See my article on <a href="/up-and-running-with-paket">getting up and running with Paket fast</a> if you need help setting up Paket.</p>
<p>Here is the TL;DR version:</p>
<h3>.NET Core 2.1 SDK and later versions</h3>
<p>You can install it in a specific directory.</p>
<p><code>dotnet tool install --tool-path ".paket" Paket --add-source https://api.nuget.org/v3/index.json</code></p>
<h2>A basic script</h2>
<p>First we use PAKET to pull down the Nuget package we need.</p>
<pre><code class="language-text">source https://www.nuget.org/api/v2
nuget canopy
</code></pre>
<p>And run <code>.\.paket\paket.exe install</code> to download the packages.</p>
<pre><code class="language-fsharp">#r "packages/Selenium.WebDriver/lib/netstandard2.0/WebDriver.dll"
#r "packages/canopy/lib/netstandard2.0/canopy.dll"
open canopy.classic
open canopy.configuration
open canopy.types
chromeDir <- "C:\\tools\\selenium\\" // or wherever you place your Selenium
start chrome
pin FullScreen
url "https://google.com/"
"[name=q]" << "Youtube: BGF Red and Blue"
press enter
</code></pre>
<blockquote>
<p>One gotcha I did run across here is that the order of the <code>#r</code> references here does matter. The <em>WebDriver.dll</em> is required before <em>canopy.dll</em>.</p>
</blockquote>
<p>For more advanced examples see the <a href="https://github.com/dburriss/CanopyFSX/">related Github repository</a>.</p>
<h2>Conclusion</h2>
<p>And that is how easy it is to start using Canopy from an FSX file. This is a great way of automating a repetitive web task where an API is not available, or of exploring interaction with some DOM elements via Canopy in preparation for a UI test. I hope you found this useful. If you have any other use-cases, I would love to hear about them in the comments below.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://lefthandedgoat.github.io/canopy/">Canopy</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/tutorials/fsharp-interactive/">FSI Reference</a></li>
<li><a href="https://www.seleniumhq.org/download/">Selenium Download</a></li>
</ol>
<h1>Review: F# unit testing frameworks and libraries</h1>
<p><a href="https://devonburriss.me/review-fsharp-test-libs/">https://devonburriss.me/review-fsharp-test-libs/</a> · 2018-12-08 · Devon Burriss</p>
<p>In this post I go through a few of the available assertion libraries and two test runners. We will look at running options, assertion style, and the clarity of the error messages.</p>
<!--more-->
<blockquote>
<p>This post is part of FsAdvent 2018.</p>
</blockquote>
<h2>Introduction</h2>
<p>Before we get into reviewing some different options, let me introduce the libraries and frameworks up for review and the criteria I will be looking at. One criterion you may expect here is speed. I will make some small observations on this at the end, but I didn't see enough difference for it to be a factor.</p>
<h3>Frameworks</h3>
<p>We will be looking at 2 frameworks: XUnit and Expecto. Some may disagree with me labeling them as frameworks. That is fine but it is useful to distinguish that both have components that allow you to write tests and hand that over to .NET tooling or Visual Studio to then run those tests. This is in contrast to the assertion libraries that are focused on the actual assertion of the outcome of a test.</p>
<h4>XUnit</h4>
<p>XUnit is a popular unit testing tool in the .NET space. It will be the baseline for a lot of the comparisons and is also necessary for the assertion libraries, as they are not test runners.</p>
<h4>Expecto</h4>
<p>Expecto is an F# testing framework that does a lot. It has an API for running tests, test adapters for runners, assertions, performance tests, and integration with FsCheck for property-based testing. In this post we will only be looking at the basic features: setting up tests and the assertions.</p>
<h3>Assertion libraries</h3>
<p>Both <strong>XUnit</strong> and <strong>Expecto</strong> come with their own assertions. We will be looking at 2 other assertion libraries with different approaches. <strong>FsUnit</strong> brings a fluent style to assertions that many like, and <strong>Unquote</strong> makes use of a cool F# language feature to give detailed error messages.</p>
<h3>Criteria</h3>
<p>When reviewing or comparing anything it is useful to have a concrete list of attributes that are compared.</p>
<ul>
<li><strong>Setup:</strong> what options there are in terms of getting up and running</li>
<li><strong>Style:</strong> test setup style for the frameworks as well as the assertion style</li>
<li><strong>Messages:</strong> format of the error messages and comment on ease of parsing as well as the amount of detail in the message</li>
<li><strong>Runners:</strong> Running from Visual Studio and command line as well as filtering tests</li>
</ul>
<h2>Review</h2>
<p>So let's get into the comparison...</p>
<h3>Setup</h3>
<p>The project used to test out the examples is <a href="https://github.com/dburriss/FsharpUnitTestFrameworks">here on Github</a>.</p>
<div class="container">
<div class="row hidden-xs hidden-sm">
<div class="col-md-2"></div>
<div class="col-md-5"><h4>XUnit</h4></div>
<div class="col-md-5"><h4>Expecto</h4></div>
</div>
<div class="row">
<div class="col-md-2"><b>Templates</b></div>
<div class="col-md-5">
<p>
.NET Core templating comes standard with an xUnit template. Visual Studio also has built in templates for XUnit.<br/>
<code>dotnet new xunit -lang F#</code>
</p>
</div>
<div class="col-md-5">
<p>
You can install the Expecto template<br/>
<code>dotnet new -i Expecto.Template::*</code><br/>
<code>dotnet new expecto -n PROJECT_NAME -o FOLDER_NAME</code>
</p>
</div>
</div>
<div class="row">
<div class="col-md-2"><b>Nuget</b></div>
<div class="col-md-5">
<ul>
<li><a href="https://www.nuget.org/packages/xunit/">xunit</a></li>
</ul>
</div>
<div class="col-md-5">
<ul>
<li><a href="https://www.nuget.org/packages/Expecto/">Expecto</a></li>
</ul>
</div>
</div>
<div class="row">
<div class="col-md-2"><b>VS Adapter</b></div>
<div class="col-md-5">
<ul>
<li><a href="https://www.nuget.org/packages/Microsoft.NET.Test.Sdk/15.9.0">Microsoft.NET.Test.Sdk</a></li>
<li><a href="https://www.nuget.org/packages/xunit.runner.visualstudio/">xunit.runner.visualstudio</a></li>
</ul>
</div>
<div class="col-md-5">
<ul>
<li><a href="https://www.nuget.org/packages/YoloDev.Expecto.TestSdk/">YoloDev.Expecto.TestSdk</a></li>
</ul>
</div>
</div>
</div>
<p>The only issue I had was discovering I had to use <em>YoloDev.Expecto.TestSdk</em> to get Visual Studio integration working instead of <em>Expecto.VisualStudio.TestAdapter</em> (as suggested in the documentation). Easy enough to discover by generating an example project using the template. So not much between them here other than XUnit being available out the box.</p>
<h3>Style</h3>
<p>Let's look at how we set up a test in both XUnit and Expecto, and then we will look at the assertion styles.</p>
<h4>Test setup</h4>
<p><strong>XUnit</strong> looks for the <code>[<Fact>]</code> or <code>[<Theory>]</code> attribute on a function and will run that as a test.</p>
<pre><code class="language-fsharp">[<Fact>]
let ``toEmail with bob gives bob [at] acme [dot] com`` () =
    let name = "bob"
    let expected = "bob@acme.com"
    let actual = toEmail name
    Assert.Equal (expected, actual)
</code></pre>
<blockquote>
<p>F# allows us to use double backticks to name a function with special characters in it.</p>
</blockquote>
<p>So we have the <code>[<Fact>]</code> attribute and a function with our test.</p>
<p>Let's compare this to <strong>Expecto</strong> setup.</p>
<pre><code class="language-fsharp">[<Tests>]
let aTest =
    test "toEmail with bob gives bob [at] acme [dot] com" {
        let name = "bob"
        let expected = "bob@acme.com"
        let actual = toEmail name
        Expect.equal actual expected "emails did not match"
    }
</code></pre>
</code></pre>
<p>Expecto uses the <code>[<Tests>]</code> attribute to mark a value that contains tests, where the tests are defined in a F# computation expression called <code>test</code>.</p>
<p>Although this might seem quite similar, it is in fact quite different. This becomes more apparent if we have multiple tests. Where XUnit is just more functions with the attribute on, Expecto treats the tests more like data.</p>
<pre><code class="language-fsharp">[<Tests>]
let emailtests =
    testList "Email tests" [
        test "toEmail with null gives info [at] acme [dot] com" {
            let name = null
            let expected = "info@acme.com"
            let actual = toEmail name
            Expect.equal actual expected "emails did not match"
        }
        test "toEmail with bob gives bob [at] acme [dot] com" {
            let name = "bob"
            let expected = "bob@acme.com"
            let actual = toEmail name
            Expect.equal actual expected "emails did not match"
        }
    ]
</code></pre>
</code></pre>
<p>Now we are defining our tests in a <code>List</code> given to a <code>testList</code>. Expecto <a href="https://github.com/haf/expecto#writing-tests">has an almost overwhelming number of ways to organize tests</a>. XUnit is simple and straightforward but if you find yourself wanting to take more control of how tests are organized, Expecto might be just what you want. This becomes even more important if you are using it to do property-based testing, performance tests, etc.</p>
<h4>Assertions</h4>
<p>Next we will look at the style of the assertions used by each library.</p>
<div class="container">
<div class="row hidden-xs hidden-sm">
<div class="col-md-6"><h4>XUnit</h4></div>
<div class="col-md-6"><h4>FsUnit</h4></div>
</div>
<div class="row">
<div class="col-md-6"><b class="visible-xs-block visible-sm-block">XUnit</b><pre><code class="fsharp">Assert.Equal (expected, actual)</code></pre></div>
<div class="col-md-6"><b class="visible-xs-block visible-sm-block">FsUnit</b><pre><code class="fsharp">actual |> should equal expected</code></pre></div>
</div>
<div class="row hidden-xs hidden-sm">
<div class="col-md-6"><h4>Unquote</h4></div>
<div class="col-md-6"><h4>Expecto</h4></div>
</div>
<div class="row">
<div class="col-md-6"><b class="visible-xs-block visible-sm-block">Unquote</b><pre><code class="fsharp">test <@ actual = expected @></code></pre></div>
<div class="col-md-6"><b class="visible-xs-block visible-sm-block">Expecto</b><pre><code class="fsharp">Expect.equal actual expected "null should be None"</code></pre></div>
</div>
</div>
<p><strong>XUnit</strong> is pretty standard if you come from an OO background, and its OO roots really show here. Other than that it is easy enough to understand. XUnit's <code>Assert</code> static class contains a stack of useful assertion methods, and since XUnit is very popular in the .NET space, it is easy to find answers.</p>
<p><strong>FsUnit</strong> is for those who like a more fluent style (FP version) of defining assertions. If you are a C# developer and love the style of <a href="https://fluentassertions.com/">FluentAssertions</a>, then you may want to try this out. Honestly, I am not a fan of the FluentAssertions library for its assertion style; I am a fan because of its helpful error messages. In OO I prefer the more succinct XUnit style but use FluentAssertions for its error messages. So if this is a style that appeals to you, try it out!</p>
<p><strong>Unquote</strong> is slightly different as it uses F# quoted expressions (using <code><@ expression @></code>) to evaluate a plain statically typed F# expression and give detailed failure messages based on that evaluation. We will take a look at what this means for the error messages in the next section. There are some <a href="https://github.com/SwensenSoftware/unquote/wiki/UserGuide#assertions">assertion helpers</a> but mostly you just write plain old F# expressions.</p>
<p><strong>Expecto</strong> has its own assertion module <code>Expect</code> which has a bunch of functions available for asserting behavior. This is much akin to XUnit's <code>Assert</code> class except it doesn't carry the same OO legacy and so is much more functional in feel.</p>
<h3>Error message</h3>
<p>Although a fan of TDD, I prefer testing from the boundary of my application and only going as deep as needed. The less your clients (including your tests) know about the internals of your code, the freer you are to make changes without breaking any API contracts. Error messages from your application then become very important, and the more helpful your assertions are at surfacing them, the better.</p>
<pre><code class="language-fsharp">// XUnit / FsUnit / Unquote
[<Fact>]
let ``toEmail with bob gives bob [at] acme [dot] com`` () =
    let name = "bob"
    let expected = "bob@acme.com"
    let actual = toEmail name
    // XUnit
    Assert.Equal (expected, actual)
    // FsUnit
    actual |> should equal expected
    // Unquote
    test <@ actual = expected @>

// Expecto
test "toEmail with bob gives bob [at] acme [dot] com" {
    let name = "bob"
    let expected = "bob@acme.com"
    let actual = toEmail name
    Expect.equal actual expected "emails did not match"
}
</code></pre>
<div class="container">
<div class="row hidden-xs hidden-sm">
<div class="col-md-3"><h4>XUnit</h4></div>
<div class="col-md-3"><h4>FsUnit</h4></div>
<div class="col-md-3"><h4>Unquote</h4></div>
<div class="col-md-3"><h4>Expecto</h4></div>
</div>
<div class="row">
<div class="col-md-3"><p>
<b class="visible-xs-block visible-sm-block">XUnit</b>
Message: Assert.Equal() Failure
↓ (pos 0)
Expected: bob@acme.com
Actual: info@acme.com
↑ (pos 0)
</p></div>
<div class="col-md-3"><p>
<b class="visible-xs-block visible-sm-block">FsUnit</b>
Message: FsUnit.Xunit+MatchException : Exception of type 'FsUnit.Xunit+MatchException' was thrown.
Expected: Equals "bob@acme.com"
Actual: was "info@acme.com"
</p></div>
<div class="col-md-3"><p>
<b class="visible-xs-block visible-sm-block">Unquote</b>
"info@acme.com" = "bob@acme.com"
false
</p></div>
<div class="col-md-3"><p>
<b class="visible-xs-block visible-sm-block">Expecto</b>
Message:
emails did not match.
Expected string to equal:
bob@acme.com
↑
The string differs at index 0.
info@acme.com
↑
String does not match at position 0. Expected char: 'b', but got 'i'.
</p></div>
</div>
</div>
<p>So far there is very little between them. <strong>Unquote</strong> does stand out as different from the others. <strong>FsUnit</strong> has a bit more noise before the important part and doesn't point out the index where things go wrong. That little detail could be helpful in spotting a small <em>tpyo</em> but other than that is not too significant.</p>
<p>Let's look at something with a functional concept in like <code>option</code>.</p>
<pre><code class="language-fsharp">// XUnit / FsUnit / Unquote
[<Fact>]
let ``sanitize with bob gives Some bob`` () =
    let name = "bob"
    let expected = Some name
    let actual = Data.sanitize name
    // XUnit
    Assert.Equal (expected, actual)
    // FsUnit
    actual |> should equal expected
    // Unquote
    test <@ actual = expected @>

// Expecto
test "sanitize with bob gives Some bob" {
    let name = "bob"
    let expected = Some name
    let actual = Data.sanitize name
    Expect.equal actual expected "bob should be Some bob"
}
</code></pre>
<div class="container">
<div class="row hidden-xs hidden-sm">
<div class="col-md-3"><h4>XUnit</h4></div>
<div class="col-md-3"><h4>FsUnit</h4></div>
<div class="col-md-3"><h4>Unquote</h4></div>
<div class="col-md-3"><h4>Expecto</h4></div>
</div>
<div class="row">
<div class="col-md-3"><p>
<b class="visible-xs-block visible-sm-block">XUnit</b>
Message: Assert.Equal() Failure
Expected: Some(bob)
Actual: (null)
</p></div>
<div class="col-md-3"><p>
<b class="visible-xs-block visible-sm-block">FsUnit</b>
Message: FsUnit.Xunit+MatchException : Exception of type 'FsUnit.Xunit+MatchException' was thrown.
Expected: Equals Some "bob"
Actual: was null
</p></div>
<div class="col-md-3"><p>
<b class="visible-xs-block visible-sm-block">Unquote</b>
None = Some "bob"
false
</p></div>
<div class="col-md-3"><p>
<b class="visible-xs-block visible-sm-block">Expecto</b>
Message:
bob should be Some bob. Actual value was <null> but had expected it to be Some "bob".
</p></div>
</div>
</div>
<p>Again most are similar but <strong>Unquote</strong> begins to shine. All the other libraries print <code>None</code> as <code>null</code>.</p>
<h3>Runners</h3>
<p>If you are using Visual Studio you probably want to run your tests from the Test Explorer in the IDE. This works fine for all the listed frameworks as you can see.</p>
<p><img src="/img/posts/2018/test-explorer.jpg" alt="Visual Studio Test Explorer" /></p>
<p>If the command line is more your thing, <code>dotnet test</code> works just fine. This example is a bit of a mess as it is running all the test libraries.</p>
<p><img src="/img/posts/2018/console-tests.jpg" alt="dotnet test console output" /></p>
<h4>Filtering</h4>
<p>Sometimes it is useful to filter to run only certain tests. <strong>XUnit</strong> and <code>dotnet test</code> support this. With the following example you can filter down to just tests marked with this category using <code>--filter</code>.</p>
<pre><code class="language-fsharp">[<Trait("Category", "Smoke")>]
[<Fact>]
let ``get list of numbers`` () =
    let expected = [|1;2;3|]
    let actual = Data.list |> Seq.take 3 |> Seq.toArray
    Assert.Equal<IEnumerable<int>>(expected, actual)
</code></pre>
<blockquote>
<p><code>dotnet test --filter Category=Smoke</code></p>
</blockquote>
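<p>The filter expression supports more than a single equality check. A couple of hedged examples (check the <code>dotnet test</code> documentation for the exact syntax supported by your SDK version):</p>
<pre><code class="language-bash"># run everything except the smoke tests
dotnet test --filter Category!=Smoke
# combine conditions with | (or) and &amp; (and)
dotnet test --filter "Category=Smoke|Category=Regression"
</code></pre>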
<p>Filtering with <strong>Expecto</strong> is a bit different. Remember how we assign a list of tests to a value? We could, for example, run just our data tests by running them from a command line program.</p>
<pre><code class="language-fsharp">// Program.fs
module Program =
    open Expecto
    open TestFrameworks

    let [<EntryPoint>] main args =
        runTestsWithArgs defaultConfig args ExpectoTests.datatests
</code></pre>
<p>And since this is just a normal console application, you can make it as simple or complex as needed. Now testing becomes a <code>dotnet watch run</code>.</p>
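<p>For instance, assuming a second value such as <code>ExpectoTests.emailtests</code> exists alongside the data tests (that name is an assumption for illustration), the entry point could combine several lists into one run:</p>
<pre><code class="language-fsharp">let [<EntryPoint>] main args =
    // combine several test lists into one before running
    let all = testList "all" [ ExpectoTests.datatests; ExpectoTests.emailtests ]
    runTestsWithArgs defaultConfig args all
</code></pre>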
<h2>Conclusion</h2>
<p>So that is our review of a few of the testing libraries available in the F# ecosystem. This is by no means comprehensive in terms of all libraries nor a deep dive into any of these. I do hope that if you have not used some of these, you did glimpse what some of them might offer.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://xunit.github.io/">xunit</a></li>
<li><a href="http://fsprojects.github.io/FsUnit/">FsUnit</a></li>
<li><a href="https://github.com/SwensenSoftware/unquote">Unquote</a></li>
<li><a href="https://github.com/haf/expecto">Expecto</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/computation-expressions">Computation Expressions</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/code-quotations">Quoted expressions</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/core/testing/selective-unit-tests#xunit">Filtering tests</a></li>
<li><a href="https://fscheck.github.io/FsCheck/">Property-based testing with FsCheck</a></li>
<li><a href="https://github.com/fsprojects/Foq/wiki">Foq for Mocking (personally I don't recommend using mock frameworks much)</a></li>
</ol>https://devonburriss.me/how-to-fsharp-pt-10/How to F# - Part 102018-11-24T00:00:00+00:00Devon Burrisshttps://devonburriss.me/how-to-fsharp-pt-10/<p>In this final post in the series we are going to create a fully functioning F# application. Along the way we will discuss the .NET SDK, SQLite, and how to organize your code. If you follow along (which I recommend you do), you will have a working F# console application that accepts input and communicates with a database.</p>
<!--more-->
<p>The code for this tutorial can be found at <a href="https://github.com/dburriss/HowToFsharp">Github</a>.</p>
<h2>Introduction</h2>
<p>First let's discuss what we will be building. We will be creating a console application that accepts some contact information as user input and then persists those contacts to a database.</p>
<p>The data we will be capturing will have the following fields:</p>
<ul>
<li>First name</li>
<li>Last name</li>
<li>Email</li>
</ul>
<p>We will be doing the initial creation of the project from the command line so it doesn't matter if you are using VS Code, Visual Studio, Rider, or any other preferred editor on Windows, Linux, or Mac.</p>
<h2>Project setup</h2>
<p>The first thing we need to check is if we have the <a href="https://www.microsoft.com/net/download">.NET Core SDK installed</a>. Go to <a href="https://www.microsoft.com/net/download">dot.net</a> and download the .NET Core SDK.</p>
<p>And although I hope you tried out some of the samples in the previous posts, this would be the one to follow along with if you have never written an F# application before. To do that you will need an IDE.</p>
<ul>
<li><a href="https://visualstudio.microsoft.com/downloads/">Visual Studio with the F# workload installed</a></li>
<li><a href="https://visualstudio.microsoft.com/downloads/">Visual Studio Code</a> with <a href="http://ionide.io/">Ionide extension installed</a></li>
<li><a href="https://www.jetbrains.com/rider/">Rider</a></li>
</ul>
<p>Once we have the .NET SDK installed, create a folder and navigate to that folder in your terminal (Prompt on Windows or Terminal on *nix).</p>
<p>If you are unsure, there is an awesome <a href="https://www.youtube.com/playlist?list=PLlzAi3ycg2x0TScJb7czq7-4LrQoyTB0I">video series by Compositional IT on YouTube that will get you set up</a>.</p>
<p>On Windows using Powershell I did the following:</p>
<pre><code class="language-powershell">cd C:\dev\personal\
mkdir HowToFsharp
cd .\HowToFsharp\
</code></pre>
<p>So I am in a folder <em>C:\dev\personal\HowToFsharp</em>. You can put the folder anywhere you prefer and call it what you like, it is not too important. Just be sure that you execute the following command in the folder you just created:</p>
<pre><code class="language-powershell">dotnet new --list
</code></pre>
<p><img src="/img/posts/2018/dotnet-list.jpg" alt="dotnet new --list" /></p>
<p>This prints out a list of all the templates you have installed on your machine. We will be creating a console application so we will use the first template on the list above.</p>
<pre><code class="language-powershell">dotnet new console -lang F#
</code></pre>
<p>Running this will generate 2 files and another folder called <code>obj</code> which we won't be looking at. Let's look at the 2 files though.</p>
<p><em>HowToFsharp.fsproj</em>:</p>
<pre><code class="language-xml"><Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="Program.fs" />
  </ItemGroup>
</Project>
</code></pre>
<p>The <code>*proj</code> files like <code>csproj</code> and <code>fsproj</code> are MSBuild XML files that specify how our project is built. The first thing to notice is that it specifies <code>Microsoft.NET.Sdk</code> as an attribute on the root <code>Project</code> element. This automatically layers in the tasks and targets for working with .NET Core.</p>
<p><code>OutputType</code> is quite straightforward. The artefact of this program will be an executable that can be run.</p>
<p>With <code>TargetFramework</code> we indicate which framework we are targeting. <code>netcoreapp2.1</code> is for runnable .NET Core applications. If we wanted to target the full .NET Framework we could specify something like <code>net461</code>. That is not too important for this post, just useful to keep in mind when developing your own applications.</p>
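<p>As a sketch, targeting the full .NET Framework instead would only change the <code>TargetFramework</code> element (this assumes the relevant targeting pack is installed on your machine):</p>
<pre><code class="language-xml"><PropertyGroup>
  <OutputType>Exe</OutputType>
  <!-- hypothetical: target .NET Framework 4.6.1 instead of .NET Core -->
  <TargetFramework>net461</TargetFramework>
</PropertyGroup>
</code></pre>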
<p>Finally we have an <code>ItemGroup</code> with <code><Compile Include="Program.fs" /></code>. This includes our other file of interest, which we will look at next, in the compilation of this project. In an F# project file this matters because it allows us to specify what is compiled and in what order. If you are used to C#, this is different, as the order of files does not matter in C#. Note that as we add more F# files, we will need to add them here so they are compiled as needed.</p>
<p><em>Program.fs</em></p>
<pre><code class="language-fsharp">open System

[<EntryPoint>]
let main argv =
    printfn "Hello World from F#!"
    0 // return an integer exit code
</code></pre>
<p><code>Program.fs</code> becomes the entry point of our application (as is made explicit by the <code>[<EntryPoint>]</code> attribute).</p>
<p>On the command line, if you are in the folder containing the <code>fsproj</code> file, you can run the <code>dotnet run</code> command to build and run this program.</p>
<blockquote>
<p><code>dotnet run</code><br />
Hello World from F#!</p>
</blockquote>
<p>Before we get to writing our program, let's talk about organizing code.</p>
<h2>Organizing Code</h2>
<p>So far in this series we haven't talked about how to organize code. In F# you typically have 3 aspects to bring together when organizing your code. Firstly, you have <em>files</em>. These are files ending with <code>.fs</code>. Pretty straightforward. As mentioned before, <code>.fs</code> files need to be compiled in the order they are used. So if you depend on functions or types from another file, that file must appear ahead of the file using them in the compilation order.</p>
<p>The bread and butter of organizing F# code is <code>module</code>s. A <code>module</code> allows you to group values, types, and functions. This can be useful for thinking of the group as a single abstraction and avoiding naming conflicts. When interoping with other .NET languages, <code>module</code>s show up as static classes.</p>
<p>Lastly, there are <em>namespaces</em>. These are an artefact of the interop with the rest of the .NET library although they can be useful for spanning multiple <code>module</code>s. One important thing to note is that although we can define types in a <code>namespace</code>, we cannot define values. This includes functions.</p>
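<p>A small sketch of that difference (the names here are made up for illustration):</p>
<pre><code class="language-fsharp">namespace Contacts

// fine: types may live directly in a namespace
type Person = { Name: string }

// not allowed: values (including functions) cannot live directly in a namespace
// let defaultName = "unknown"   // this would be a compile error here

module People =
    // fine: values and functions belong inside a module
    let defaultName = "unknown"
    let describe p = sprintf "%s" p.Name
</code></pre>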
<p>We will be coding up the following:</p>
<p><img src="/img/posts/2018/fsharpapp.jpg" alt="Code" /></p>
<p>As we discussed, <code>Program.fs</code> represents our entry point. All other files contain <code>module</code>s in the <code>Contacts</code> <code>namespace</code>.</p>
<h2>Creating our model</h2>
<p>We will start with creating our <code>Domain.fs</code> file. There are multiple ways to organise code with or without namespaces but I am showing my preferred method. We have a <code>namespace</code>, in this case <code>Contacts</code> that all the code falls under. We create our types within that <code>namespace</code>. Any domain logic we need enforced on our types, we place in a <code>module</code> with the same name as the type.</p>
<p><em>Domain.fs</em>:</p>
<pre><code class="language-fsharp">namespace Contacts

open System

type Contact = {
    Id:Guid
    Firstname:string
    Lastname:string
    Email:string
}

[<RequireQualifiedAccess>]
module Contact =
    // string -> bool
    let private isValidEmail (email:string) =
        try
            new System.Net.Mail.MailAddress(email) |> ignore
            true
        with
        | _ -> false

    // Contact -> Result<Contact,seq<string>>
    let validate contact =
        let errors = seq {
            if(String.IsNullOrEmpty(contact.Firstname)) then yield "First name should not be empty"
            if(String.IsNullOrEmpty(contact.Lastname)) then yield "Last name should not be empty"
            if(String.IsNullOrEmpty(contact.Email)) then yield "Email should not be empty"
            if(isValidEmail contact.Email |> not) then yield "Not a valid email"
        }
        if(Seq.isEmpty errors) then Ok contact else Error errors

    // string -> string -> string -> Result<Contact,seq<string>>
    let create fname lname email =
        let c = { Id = Guid.NewGuid(); Firstname = fname; Lastname = lname; Email = email }
        validate c
</code></pre>
<p>Above we have a type <code>Contact</code> and a <code>module</code> <code>Contact</code>. Within the <code>module</code> we have 2 public functions. <code>create</code> creates a contact given the needed values, and uses <code>validate</code> to ensure the contact is valid.</p>
<p>I find this a nice structured way of finding the necessary behavior on a type that is similar to how behavior would be discovered using OO, is still functional, and matches how we work in F# with types like <code>List</code> and <code>Option</code>.</p>
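<p>To make that concrete, here is a hypothetical call site. Note how <code>RequireQualifiedAccess</code> forces the <code>Contact.</code> prefix, which reads much like <code>Option.map</code> or <code>List.filter</code>:</p>
<pre><code class="language-fsharp">// sketch of using the Contact module; the input values are made up
match Contact.create "Bob" "Builder" "bob@acme.com" with
| Ok contact -> printfn "Created contact %s" contact.Email
| Error errs -> errs |> Seq.iter (printfn "Validation error: %s")
</code></pre>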
<h2>Getting input</h2>
<p>Next, let's look at how we can get input from the user console. We will be catering for the following functionality.</p>
<ol>
<li>Print a menu to the console</li>
<li>List all existing saved contacts persisted to the database</li>
<li>Capture new contacts to the database</li>
</ol>
<p>The menu will look like this:</p>
<pre><code class="language-text">====================
MENU
====================
1. Print Contacts
2. Capture Contacts
0. Quit
</code></pre>
<p>With this bit of code, consider reading it from the bottom up. This way of reading code often makes the most sense as the upper functions are helper functions for those lower down.</p>
<pre><code class="language-fsharp">namespace Contacts

[<RequireQualifiedAccess>]
module Input =
    open System

    // string -> string
    let private captureInput(label:string) =
        printf "%s" label
        Console.ReadLine()

    // seq<string> -> unit
    let private printErrors errs =
        printfn "ERRORS"
        errs |> Seq.iter (printfn "%s")

    // unit -> Contact
    let rec private captureContact() =
        printfn "CAPTURE CONTACT"
        Contact.create
            (captureInput "First name: ")
            (captureInput "Last name: ")
            (captureInput "Email: ")
        |> fun r -> match r with
                    | Ok c -> c
                    | Error err ->
                        printErrors err
                        captureContact()

    // (Contact -> unit) -> Choice<unit,unit>
    let private captureContactChoice saveContact =
        let contact = captureContact()
        saveContact contact
        let another = captureInput "Continue (Y/N)?"
        match another.ToUpper() with
        | "Y" -> Choice1Of2 ()
        | _ -> Choice2Of2 ()

    // (Contact -> unit) -> unit
    let rec private captureContacts saveContact =
        match captureContactChoice saveContact with
        | Choice1Of2 _ ->
            captureContacts saveContact
        | Choice2Of2 _ -> ()

    // unit -> unit
    let printMenu() =
        printfn "===================="
        printfn "MENU"
        printfn "===================="
        printfn "1. Print Contacts"
        printfn "2. Capture Contacts"
        printfn "0. Quit"

    // string -> (unit -> Contact list) -> (Contact -> unit) -> unit
    let routeMenuOption i getContacts saveContact =
        match i with
        | "1" ->
            printfn "Contacts"
            getContacts() |> List.iter (fun c -> printfn "%s %s (%s)" c.Firstname c.Lastname c.Email)
        | "2" -> captureContacts saveContact
        | _ -> printMenu()

    // unit -> string
    let readKey() =
        let k = Console.ReadKey()
        Console.WriteLine()
        k.KeyChar |> string
</code></pre>
<p>The first thing you may notice (if you did still start from the top) is the <code>RequireQualifiedAccess</code> attribute. This enforces that calling the functions in the module is done using the fully qualified <code>module</code> name. I often like this as it gives context to the function call names.</p>
<p>Now that you have been found out for starting from the top, let's work our way up from the bottom.</p>
<p><code>readKey</code> is pretty uninteresting. It gets a key as input and returns that as a string. This will be used to get menu choices.</p>
<p><code>routeMenuOption</code> pattern <code>match</code>es on the <code>i</code>. "1" prints out each contact. To do that it calls the <code>getContacts</code> function that is passed in as an argument. This means we are not directly tied to fetching our contacts from the database when using this <code>Input module</code>, we need only supply a function with the signature <code>unit -> Contact list</code>.<br />
"2" is a little more interesting as we call a function <code>captureContacts</code> which is in this <code>Input module</code>. It takes as an argument the function <code>saveContact</code> which has the signature <code>Contact -> unit</code>. So again, the <code>Input module</code> is not dependent on storing contacts in a database. All it requires is a function that will do something with the <code>Contact</code>.</p>
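<p>Because <code>routeMenuOption</code> only depends on function signatures, we could exercise it with in-memory stubs instead of database-backed functions. A sketch (the stub names are made up):</p>
<pre><code class="language-fsharp">// stand-in implementations with the required signatures
let fakeGetContacts () : Contact list = []
let fakeSaveContact (_: Contact) = ()

// routes option "1": prints "Contacts" followed by the (empty) list
Input.routeMenuOption "1" fakeGetContacts fakeSaveContact
</code></pre>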
<p>Let's drill into <code>captureContacts</code> then. It has the signature <code>(Contact -> unit) -> unit</code>, so its argument matches up with our <code>saveContact</code> function. Another interesting part about <code>captureContacts</code> is the <code>rec</code> keyword. This means that the function is recursive. That is a fancy way of saying it calls itself. So what it does is make use of the <code>captureContact</code> function which returns back a <code>Choice</code> type. <code>Choice1Of2</code> means we will capture another contact, <code>Choice2Of2</code> means we will not capture any more contacts.</p>
<p>The rest of the functions <code>printMenu</code>, <code>printErrors</code>, and <code>captureInput</code> should be simple enough to reason about by now.</p>
<h2>Persisting data</h2>
<p>Next we need to setup our data access. I am not going to go over the code too much in this section as it is basically the same as what <a href="https://devonburriss.me/how-to-fsharp-pt-9/">we covered in Part 9</a>.</p>
<h3>Database creation</h3>
<p>What is important is to be able to work with <a href="https://www.sqlite.org/">SQLite</a>. To follow along here you can <a href="https://www.sqlite.org/download.html">download sqlite-tools</a> for free for your platform. You will either need to put <a href="https://www.howtogeek.com/118594/how-to-edit-your-system-path-for-easy-command-line-access/"><code>sqlite3</code> on your <em>PATH</em></a> or call it from where you downloaded it. You could also use a tool like <a href="https://www.jetbrains.com/datagrip/">Jetbrains Datagrip</a>.</p>
<p>Once we have the <code>sqlite</code> binary we can create a new database and connect to it using the following command:</p>
<pre><code class="language-bash">sqlite3 contactsDB.sqlite
sqlite> CREATE TABLE IF NOT EXISTS contacts ( id TEXT PRIMARY KEY, firstname TEXT NOT NULL, lastname TEXT NOT NULL, email TEXT NOT NULL UNIQUE );
</code></pre>
<p>It should look something like this, depending on your operating system and terminal of choice.</p>
<p><img src="/img/posts/2018/sqlite3.jpg" alt="sqlite3 bash" /></p>
<h3>Install nuget packages</h3>
<p>So now we have our database setup, we are going to start with the code to connect to it. First we will install the <a href="https://github.com/StackExchange/Dapper">Dapper</a> package into our project.</p>
<blockquote>
<p>I would usually recommend dependency management <a href="https://fsprojects.github.io/Paket/">Paket</a>. I have a post on <a href="https://devonburriss.me/up-and-running-with-paket/">getting up and running with Paket</a> if you are interested.</p>
</blockquote>
<p>Run the following commands:</p>
<pre><code class="language-bash">dotnet add package Dapper
dotnet add package System.Data.SQLite
dotnet restore
</code></pre>
<p>Your <em>HowToFsharp.fsproj</em> should now contain the following <em>ItemGroup</em> element.</p>
<pre><code class="language-xml"><ItemGroup>
  <PackageReference Include="Dapper" Version="1.50.5" />
  <PackageReference Include="System.Data.SQLite" Version="1.0.109.2" />
</ItemGroup>
</code></pre>
<p>We are going to add some code we saw in <a href="https://devonburriss.me/how-to-fsharp-pt-9/">Part 9</a>. <em>Database.fs</em> is a helper <code>module</code> for using Dapper in a more functional way.</p>
<p><em>Database.fs</em></p>
<pre><code class="language-fsharp">namespace Contacts

module Database =
    open Dapper
    open System.Data.Common
    open System.Collections.Generic

    // DbConnection -> string -> 'b -> Result<int,exn>
    let execute (connection:#DbConnection) (sql:string) (parameters:_) =
        try
            let result = connection.Execute(sql, parameters)
            Ok result
        with
        | ex -> Error ex

    // DbConnection -> string -> IDictionary<string,obj> -> Result<seq<'T>,exn>
    let query (connection:#DbConnection) (sql:string) (parameters:IDictionary<string, obj> option) : Result<seq<'T>,exn> =
        try
            let result =
                match parameters with
                | Some p -> connection.Query<'T>(sql, p)
                | None -> connection.Query<'T>(sql)
            Ok result
        with
        | ex -> Error ex

    // DbConnection -> string -> IDictionary<string,obj> -> Result<'T,exn>
    let querySingle (connection:#DbConnection) (sql:string) (parameters:IDictionary<string, obj> option) =
        try
            let result =
                match parameters with
                | Some p -> connection.QuerySingleOrDefault<'T>(sql, p)
                | None -> connection.QuerySingleOrDefault<'T>(sql)
            if isNull (box result) then Ok None
            else Ok (Some result)
        with
        | ex -> Error ex
</code></pre>
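<p>As a usage sketch, a parameterised query through this helper might look like the following (the <code>connection</code> value and the result type are assumptions for illustration):</p>
<pre><code class="language-fsharp">open System.Collections.Generic

// Dapper binds @email in the SQL to the "email" key in this dictionary
let parameters : IDictionary<string, obj> option =
    Some (dict [ "email", box "bob@acme.com" ])

// hypothetical call; result : Result<seq<ContactEntity>, exn>
// let result =
//     Database.query connection "SELECT id, firstname, lastname, email FROM contacts WHERE email = @email" parameters
</code></pre>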
<p>Next we will use this file in a <code>module</code> we will call <code>Data</code> that will contain the code and queries for saving and listing the contacts in the database.</p>
<p><em>Data.fs</em></p>
<pre><code class="language-fsharp">namespace Contacts

open Contacts
open System
open System.Data.SQLite

[<RequireQualifiedAccess>]
module Data =
    type ContactEntity = { id:string; firstname:string; lastname:string; email:string }

    // string -> SQLiteConnection
    let private conn (dbname:string) =
        let c = new SQLiteConnection(sprintf "Data Source=%s.sqlite" dbname)
        c.Open()
        c

    let private dbname = "contactsDB"

    // unit -> Result<seq<Contact>,exn>
    let all() =
        let db = conn dbname
        Database.query db "SELECT id, firstname, lastname, email FROM contacts" None
        |> Result.map
            (fun ss -> ss
                       |> Seq.map (fun c -> {
                            Id = Guid.Parse(c.id); Firstname = c.firstname; Lastname = c.lastname; Email = c.email
                          }))

    // Contact -> Result<int,exn>
    let insert c =
        let db = conn dbname
        let entity = { id = c.Id.ToString(); firstname = c.Firstname; lastname = c.Lastname; email = c.Email }
        let sql = "INSERT INTO contacts (id, firstname, lastname, email) VALUES (@id, @firstname, @lastname, @email);"
        Database.execute db sql entity
</code></pre>
<p>Of note here is that we use a specific type called <code>ContactEntity</code> to store and retrieve from the database. Here it was necessary because SQLite does not handle the <code>Guid</code> type that we are using for the <code>Id</code>. Even if this were not necessary, it is a good practice to separate the boundaries of your application, like input and storage, from your domain types.</p>
<h2>Tying it all together</h2>
<p>Remember we need to add the files to the <em>HowToFsharp.fsproj</em> for them to be compiled.</p>
<pre><code class="language-xml"><Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="Database.fs" />
    <Compile Include="Domain.fs" />
    <Compile Include="Data.fs" />
    <Compile Include="Input.fs" />
    <Compile Include="Program.fs" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Dapper" Version="1.50.5" />
    <PackageReference Include="System.Data.SQLite" Version="1.0.109.2" />
  </ItemGroup>
</Project>
</code></pre>
<p>Now we have all the building blocks we need to tie our application together. Let's flesh out our entry point to use what we have created so far to get our contacts application working.</p>
<p>When the application starts we want to print the menu and get an input. After completing each action we will print the menu again and get an input.</p>
<p><em>Program.fs</em></p>
<pre><code class="language-fsharp">open Contacts

// unit -> Contact list
let getContacts() =
    Data.all()
    |> fun r -> match r with
                | Ok cs -> cs |> Seq.toList
                | Error e ->
                    printfn "ERROR: %s" e.Message
                    List.empty

// Contact -> unit
let insertContact c =
    Data.insert c
    |> fun r -> match r with
                | Ok i -> printfn "%i records inserted" i
                | Error e -> printfn "ERROR: %s" e.Message

[<EntryPoint>]
let main argv =
    Input.printMenu()
    let mutable selection = Input.readKey()
    while(selection <> "0") do
        Input.routeMenuOption selection getContacts insertContact
        Input.printMenu()
        selection <- Input.readKey()
    0
</code></pre>
<p>So we print the menu and get a menu option, then we go into a loop of doing that after executing each action with <code>Input.routeMenuOption</code>.
Remember that <code>Input.routeMenuOption</code> takes 2 functions as input to fetch all contacts and insert a contact.</p>
<p>In the <code>Data</code> module we have 2 functions that almost fit the bill. <code>Data.all</code> has a signature of <code>unit -> Result<seq<Contact>,exn></code> for fetching all contacts as a result, where the result may be an exception. <code>Data.insert</code> has a signature of <code>Contact -> Result<int,exn></code> with the result of inserting a contact into the database.</p>
<p>At the top of <em>Program.fs</em> we have created 2 functions that wrap the <code>Data module</code> functions, handling errors and then give us the signatures we need for using them in <code>Input.routeMenuOption</code>.</p>
<p>This all just loops in the <code>while</code> loop until the <strong>Quit</strong> option is selected.</p>
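<p>As an aside, the mutable <code>while</code> loop could also be written as a recursive function, which is often the more idiomatic shape in F#. A sketch of what the entry point might look like in that style:</p>
<pre><code class="language-fsharp">[<EntryPoint>]
let main argv =
    // recurse until "0" is selected instead of mutating a selection variable
    let rec loop selection =
        if selection <> "0" then
            Input.routeMenuOption selection getContacts insertContact
            Input.printMenu()
            loop (Input.readKey())
    Input.printMenu()
    loop (Input.readKey())
    0
</code></pre>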
<p>To run our application we execute the <code>dotnet run</code> command like we did near the beginning of this tutorial.</p>
<pre><code class="language-bash">> dotnet run
====================
MENU
====================
1. Print Contacts
2. Capture Contacts
0. Quit
</code></pre>
<h2>Conclusion</h2>
<p>Congratulations on completing your first application! Hopefully you can see that functional programming and F# are not scary, and that it is quite possible to write any kind of application with them.</p>
<p>Here are a few ways you could expand this application:</p>
<ol>
<li>Move the connection string into a json or yaml configuration file</li>
<li>Try use a different database</li>
<li>Try use <a href="https://fsprojects.github.io/SQLProvider/">SQL Provider</a> for the <code>Data</code> layer</li>
<li>Try import contacts from a <a href="https://gist.github.com/dburriss/4fd75fb874efb3ee41d0c31b14387fdf">CSV</a> file</li>
<li>Make this a web api using <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe</a></li>
</ol>
<h3>Next steps</h3>
<p>What are some ways of furthering your learnings in F#?</p>
<ol>
<li>Check out <a href="https://exercism.io/">Exercism</a>, a great way to get some easy practice writing code</li>
<li><a href="https://fsharpforfunandprofit.com/">F# for fun and profit</a> is a wealth of F# knowledge and I started out by just reading a little of that every day. I would encourage you to follow along with the <a href="https://devonburriss.me/fsharp-scripting/">script files</a> rather than just read like I did. Nothing beats actually writing code for learning a new language.</li>
<li><a href="https://www.manning.com/books/get-programming-with-f-sharp">Get Programming with F#</a> by Isaac Abraham is a great getting-started book</li>
<li><a href="https://pragprog.com/book/swdddf/domain-modeling-made-functional">Domain Modeling Made Functional</a> is one of my favorite F# and DDD books; I highly recommend it once you are a little comfortable with F#.</li>
</ol>
<h2>Resources</h2>
<ol>
<li><a href="https://docs.microsoft.com/en-us/visualstudio/msbuild/how-to-use-project-sdk">MSBuild project SDKs</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/core/tools/csproj">MSBuild for .NET Core</a></li>
</ol>
<h1>How to F# - Part 9 (Devon Burriss, 2018-11-04, https://devonburriss.me/how-to-fsharp-pt-9/)</h1>
<p>In almost any software system we want to store data at some point. For decades the bread and butter of persisting data has been databases, and in this post we look at ways of working with a database in F#.</p>
<!--more-->
<h2>Introduction</h2>
<p>SQL databases are very common and have been for decades. In this post we will look at how to interact with a <a href="https://www.sqlite.org/about.html">SQLite database</a> using the <a href="https://github.com/StackExchange/Dapper">Dapper</a> library. Let's briefly go through the technologies we will be touching on today that are not F#. If you already have experience with relational databases and are just here for the F#, you can probably skip this introduction section.</p>
<h3>Structured Query Language</h3>
<p>Structured Query Language (SQL) is a domain-specific language. What is a domain-specific language? It is a language designed for use in one specific domain, in this case working with databases. I am not going to go into the mathematics as I did with functions because frankly the syntax is not nearly as similar, so it doesn't demonstrate much. Suffice to say SQL has its roots in relational algebra. It is a language that is easy to start using but hard to master.</p>
<p>Imagine we want a table of data like this:</p>
<div class="table-responsive"><table class="table table-hover">
<thead> <tr>
<th>id</th> <th>name</th> <th>email</th> </tr> </thead>
<tbody>
<tr> <td>1</td> <td>Sue</td> <td>sue@acme.com</td> </tr>
<tr> <td>2</td> <td>Bob</td> <td>khan@acme.com</td> </tr>
<tr> <td>3</td> <td>Neo</td> <td>neo@metacortex.com</td> </tr>
<tr> <td>4</td> <td>Fen</td> <td>fen@acme.com</td> </tr>
<tr> <td>5</td> <td>An Si</td> <td>we@acme.com</td> </tr>
<tr> <td>6</td> <td>Jan</td> <td>lee@acme.com</td> </tr>
</tbody>
</table></div>
<h4>Creating a table</h4>
<p>So how would we create the structure for the table above? SQL is one of the more readable languages because its keywords describe exactly what they do, and that is a good thing.</p>
<pre><code class="language-sql">CREATE TABLE people (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL UNIQUE
);
</code></pre>
<p>Firstly we specify the name of the table to create, <em>people</em>. Secondly, we specify the columns found in the table.<br />
We have an <code>id</code> that is an integer. We mark it as <code>PRIMARY KEY</code> to indicate that it is the primary way to uniquely identify our record. The database will automatically insert an incrementing identifier for each record we insert.<br />
Next we have <code>name</code> which is a <code>TEXT</code> field indicating we can store a <code>string</code> value. <code>NOT NULL</code> indicates the value cannot be omitted.
Finally, we have <code>email</code> which is similar to <code>name</code> except we have an extra constraint on it that it be <code>UNIQUE</code>. The database will enforce these constraints of <code>NOT NULL</code> and <code>UNIQUE</code>, giving us some measure of protection from bad data.</p>
<h4>Inserting a record</h4>
<p>So we have our table but how do we get data in the database? Unsurprisingly we use <code>INSERT</code>.</p>
<pre><code class="language-sql">INSERT INTO people (name,email) VALUES ("Bob","bob@acme.com");
</code></pre>
<p>We specify the table <em>people</em> as the one we want to insert into and then the columns we will be supplying data for. Then we indicate the values to insert using <code>VALUES</code> where the order of the values matches the order of the columns we specified.</p>
<h4>Updating a record</h4>
<p>What if some data changed since being inserted? Well of course SQL provides an <code>UPDATE</code> command.</p>
<pre><code class="language-sql">UPDATE people SET name='Bobby', email= 'bobby@acme.com' WHERE id = 1;
</code></pre>
<p>So we indicate an <code>UPDATE</code> on a specific table and then <code>SET</code> whichever columns we want to change. You almost always want to specify a condition for which record to update. If you left off the <code>WHERE</code> for this update, it would try to set every <code>name</code> and <code>email</code> to "Bobby" and "bobby@acme.com"; here the <code>UNIQUE</code> constraint on <code>email</code> would make it fail, so our constraint saves us from a potentially devastating loss of data.</p>
<h4>Fetching records</h4>
<p>How would we query data from it? We use a SQL <code>SELECT</code> statement.</p>
<pre><code class="language-sql">SELECT id,name,email FROM people;
</code></pre>
<p>When selecting we start with <code>SELECT</code> then specify the columns we want, then <code>FROM</code> which table.</p>
<p>When selecting data we can also use <code>WHERE</code> to specify specific records.</p>
<pre><code class="language-sql">SELECT id,name,email FROM people WHERE id = 1;
SELECT id,name,email FROM people WHERE email LIKE '%@acme.com';
</code></pre>
<p>The first query will return a single record since <code>id</code> is always unique.<br />
The second query will return all records where <code>email</code> ends with <em>@acme.com</em>, skipping only record number 3 in our example data.</p>
<blockquote>
<p>In this tutorial we will only deal with data in a single table; we will not be going into relationships between tables. Relationships are a very powerful aspect of some databases and worth looking into further.</p>
</blockquote>
<h3>SQLite</h3>
<p>SQLite is a very popular database with some unique characteristics that make it desirable for a tutorial like this. It requires no server; we interact with a database file directly from our process. This means it is very easy to get going with, as it has zero setup.</p>
<p>We will be using the <a href="https://www.nuget.org/packages/System.Data.SQLite/">System.Data.SQLite Nuget package</a> to interact with a local <a href="https://www.sqlite.org">SQLite</a> database. The database is created in our code when we use it for the first time.</p>
<h3>Dapper</h3>
<p><a href="https://github.com/StackExchange/Dapper">Dapper</a> is a very popular mini-ORM. An ORM (Object Relational Mapper) is typically a library used in your code that maps relational data from a database to objects in your programming language of choice. While a full ORM will typically generate all queries, joins, and mappings for you, a mini-ORM will usually require you to still write some SQL and then it will do some mapping by convention for you. We will be visiting some Dapper code soon.</p>
<h2>Now to the good part</h2>
<p>Although Dapper is a great library for flexibly working with databases, it is written in and for C#. So the first thing we are going to do is wrap its functionality in functions that surface Dapper in a more functional way.</p>
<h3>Executing SQL</h3>
<p>Dapper exposes the following C# function that we will be using a lot. It executes a SQL statement against a database connection and allows you to optionally pass an <code>object</code> in for parameters for the SQL statement. Don't worry if this doesn't make complete sense now, it should make more sense when you see an example.</p>
<p>It is known as an <strong>extension method</strong> and is on an instance of <code>IDbConnection</code>.</p>
<pre><code class="language-csharp">public static int Execute(this IDbConnection cnn, string sql, object param = null, IDbTransaction transaction = null)
</code></pre>
<p>So what is the problem here? Well, for one, remember in <a href="/how-to-fsharp-pt-8">part 8</a> we looked at how to handle exceptions more functionally? The above method will throw an exception if something goes wrong. Let's fix that.</p>
<pre><code class="language-fsharp">open Dapper
open System.Data.Common
// DbConnection -> string -> 'b -> Result<int,exn>
let execute (connection:#DbConnection) (sql:string) (parameters:_) =
    try
        let result = connection.Execute(sql, parameters)
        Ok result
    with
    | ex -> Error ex
</code></pre>
<blockquote>
<p>NOTE: I am catching ALL errors here, contrary to my advice in the previous <a href="/how-to-fsharp-pt-8">post on error handling</a>. This is to keep things simple and concentrate on executing SQL.</p>
</blockquote>
<p>So we have a function called <code>execute</code> now with signature <code>DbConnection -> string -> 'b -> Result<int,exn></code>. It makes use of the Dapper extension method <code>Execute</code> but we wrap it in a <code>try..with</code> expression and return a type <code>Result<int,exn></code>.</p>
<p>To use <code>execute</code> we need an instance of a <code>DbConnection</code>. Let's write a small function that returns a database connection, already opened and ready to use.</p>
<pre><code class="language-fsharp">open System.Data.SQLite
// string -> SQLiteConnection
let conn (db:string) =
    let c = new SQLiteConnection(sprintf "Data Source=%s.sqlite" db)
    c.Open()
    c
</code></pre>
<h4>Creation</h4>
<p>So we now have all the building blocks to execute a SQL statement. Let's create a <em>people</em> table in a database called <em>test</em>.</p>
<pre><code class="language-fsharp">// DbConnection -> Result<int,exn>
let createPeopleTable (connection:DbConnection) =
    let sql = "CREATE TABLE IF NOT EXISTS people (
                    id INTEGER PRIMARY KEY,
                    name TEXT NOT NULL,
                    email TEXT NOT NULL UNIQUE
                );"
    execute connection sql None
// create a connection and the table
let dbName = "test"
let connection = conn dbName
createPeopleTable connection
</code></pre>
<h4>Insertion</h4>
<p>So now we have a table called <em>people</em>. Let's insert a record.</p>
<pre><code class="language-fsharp">// DbConnection -> string -> string -> Result<int,exn>
let insertPerson (connection:DbConnection) name email =
    let data = [("@name",box name);("@email",box email)] |> dict |> fun d -> new Dapper.DynamicParameters(d)
    let sql = "INSERT INTO people (name,email) VALUES (@name,@email);"
    execute connection sql data
// insert a person from name and email
insertPerson connection "Sue" "sue@acme.com"
</code></pre>
<p>So in the above code we make use of a type called <code>DynamicParameters</code> from Dapper. Its constructor takes a dictionary, so we create a list of name-value tuples and convert that to a dictionary before passing it to <code>DynamicParameters</code>. Worth noting here is that the constructor of <code>DynamicParameters</code> takes <code>IDictionary<string,obj></code>.</p>
<p>Which brings us to <code>box</code>. It has a signature of <code>'T -> obj</code>, so when applied to the values in the tuples we get type <code>IDictionary<string,obj></code> as needed for the constructor of <code>DynamicParameters</code>.</p>
<blockquote>
<p>This fails with a rather cryptic <em>Insufficient parameters supplied to the command</em> error if you do not call the <code>box</code> function on the values.</p>
</blockquote>
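<p>As a small standalone illustration of <code>box</code> (and its inverse, <code>unbox</code>), separate from the tutorial's database code:</p>

```fsharp
// box converts any value to obj ('T -> obj); unbox casts it back.
let i = 42
let o : obj = box i
let j = unbox<int> o   // throws InvalidCastException if the type is wrong
printfn "%b" (j = i)   // true
```

This is exactly what happens to each tuple value above before it lands in the <code>IDictionary<string,obj></code>.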
<p>Another way of achieving the same, and usually a better option, is to use an actual type to represent the insert data.</p>
<pre><code class="language-fsharp">type CreatePerson = { name:string; email:string }
let insertPerson (connection:DbConnection) (person:CreatePerson) =
    let sql = "INSERT INTO people (name,email) VALUES (@name,@email);"
    execute connection sql person
insertPerson connection { name = "Ali"; email = "ali@acme.com" }
</code></pre>
<h4>Update</h4>
<p>We could of course have both variations with the update as well.</p>
<pre><code class="language-fsharp">// Option 1: multiple arguments
let updatePerson (connection:DbConnection) id name email =
    let data =
        [("@id",box id);("@name",box name);("@email",box email)]
        |> dict |> fun d -> new Dapper.DynamicParameters(d)
    let sql = "UPDATE people SET name=@name, email=@email WHERE id=@id"
    execute connection sql data
// Option 2: a record with all data
[<CLIMutable>]
type UpdatePerson = { id:int; name:string; email:string }
let updatePerson (connection:DbConnection) (person:UpdatePerson) =
    let sql = "UPDATE people SET name=@name, email=@email WHERE id=@id"
    execute connection sql person
// use option 2
let updatedPerson = { id=2; name="Kublai Khan"; email="kublai.k@acme.com" }
updatePerson connection updatedPerson
</code></pre>
<blockquote>
<p>NOTE: We put the <code>[<CLIMutable>]</code> attribute on the type because later on we use this type to return rows from the database. If it is left off you will receive an error: <em>A parameterless default constructor or one matching signature (System.Int64 id, System.String name, System.String email) is required for UpdatePerson materialization</em></p>
</blockquote>
<p>As you can see, option 2 will handle change a lot better than option 1 if more fields need to be added to a person.</p>
<h2>Querying for data</h2>
<p>So far we have looked at SQL that changes state but doesn't return much other than the number of changes. Let's now look at querying for data.</p>
<p>First we need to write our functional wrappers around Dapper. We will create a function for querying for multiple records (<code>query</code>) and another for querying for a single record (<code>querySingle</code>). They make use of Dapper's <code>Query</code> and <code>QuerySingleOrDefault</code> methods respectively.</p>
<pre><code class="language-fsharp">open System.Collections.Generic
// DbConnection -> string -> IDictionary<string,obj> option -> Result<seq<'T>,exn>
let query (connection:#DbConnection) (sql:string) (parameters:IDictionary<string, obj> option) : Result<seq<'T>,exn> =
    try
        let result =
            match parameters with
            | Some p -> connection.Query<'T>(sql, p)
            | None -> connection.Query<'T>(sql)
        Ok result
    with
    | ex -> Error ex
// DbConnection -> string -> IDictionary<string,obj> option -> Result<'T option,exn>
let querySingle (connection:#DbConnection) (sql:string) (parameters:IDictionary<string, obj> option) =
    try
        let result =
            match parameters with
            | Some p -> connection.QuerySingleOrDefault<'T>(sql, p)
            | None -> connection.QuerySingleOrDefault<'T>(sql)
        if isNull (box result) then Ok None
        else Ok (Some result)
    with
    | ex -> Error ex
</code></pre>
<p>Note that for <code>query</code> I specify the return type; this is purely so the return type uses <code>seq<'T></code> instead of <code>IEnumerable<'T></code>. Errors are returned as before, and for <code>querySingle</code> any <code>null</code> is returned as an <code>option</code> type, as we discussed in <a href="/how-to-fsharp-pt-6">part 6</a>.</p>
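<p>The null-to-option conversion inside <code>querySingle</code> can be seen in isolation with a small sketch (<code>toOption</code> is my own helper name, not part of Dapper):</p>

```fsharp
// Boxing first lets us null-check values whose static type is generic.
let toOption (value: 'T) =
    if isNull (box value) then None
    else Some value

printfn "%A" (toOption "hello")        // Some "hello"
printfn "%A" (toOption (null: string)) // None
```

This is why a missing row surfaces as <code>Ok None</code> rather than a sneaky <code>null</code>.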
<p>So let's use <code>query</code> to create a search function for all ACME employees.</p>
<pre><code class="language-fsharp">let findAcmeEmployees (connection:DbConnection) =
    let sql = "SELECT id,name,email FROM people WHERE email LIKE '%@acme.com'"
    query connection sql None
match (findAcmeEmployees connection) with
| Ok people -> printfn "Found %i employees" (Seq.length people)
| Error ex -> printfn "%A" ex.Message
</code></pre>
<p>Lastly, we will demonstrate fetching a single record by <strong>id</strong>.</p>
<pre><code class="language-fsharp">let personById (connection:DbConnection) id =
    let data = [("@id",box id)] |> dict |> Some
    let sql = "SELECT id,name,email FROM people WHERE id = @id"
    querySingle connection sql data
// use the function to fetch person with id 1 and print results out
match (personById connection 1) with
| Ok (Some(person)) -> printfn "Found %i : %s %s" person.id person.name person.email
| Ok None -> printfn "No person found"
| Error ex -> printfn "%A" ex.Message
</code></pre>
<p>See how we handle the different possibilities when evaluating a query result: the happy case where we have no errors and find someone, the case where we have no errors but do not find someone, and finally the error case.</p>
<h2>Cleaning up</h2>
<p>Remember the <code>conn</code> function we created at the beginning of the code walkthrough? It gave us back an open connection because it called <code>Open()</code> on the connection before returning it. If you have performed the operation on the connection but may use it again, call <code>Close()</code> on the connection. If you are done with it, call <code>Dispose()</code>. Once disposed, you cannot use the connection again and will need to create another if needed.</p>
<pre><code class="language-fsharp">let cleanup (connection:DbConnection) =
    connection.Close()
    connection.Dispose()
</code></pre>
<p>Technically, you could just call <code>Dispose()</code> if you are not planning on reusing the connection.</p>
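<p>Worth knowing: F# also has the <code>use</code> keyword, which calls <code>Dispose()</code> for you when the binding goes out of scope. Here is a minimal sketch using a toy disposable type of my own (not the SQLite connection) so the behavior is easy to see:</p>

```fsharp
open System

// A toy disposable that records whether Dispose was called
type Tracker() =
    member val Disposed = false with get, set
    interface IDisposable with
        member this.Dispose() = this.Disposed <- true

let tracker = Tracker()
let doWork () =
    use t = tracker // 'use' instead of 'let': Dispose runs when this scope ends
    printfn "inside doWork, disposed: %b" t.Disposed // false
doWork ()
printfn "after doWork, disposed: %b" tracker.Disposed // true
```

Binding the connection with <code>use connection = conn dbName</code> would give the same automatic cleanup.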
<h2>Conclusion</h2>
<p>We covered quite a lot today but now you know the basics of working with a database in F#. We saw how we can use Dapper to ease passing in parameters and mapping to types. We wrote a functional wrapper around Dapper to handle errors and <code>null</code>s. And we saw how to persist to and query from a database that we created.</p>
<p>What we covered here is a pretty standard way to work with a database. F# actually has some very novel ways of working with databases using <a href="https://docs.microsoft.com/en-us/dotnet/fsharp/tutorials/type-providers/">Type Provider</a>s like <a href="https://github.com/fsprojects/SQLProvider">SQLProvider</a> and <a href="https://github.com/rspeele/Rezoom.SQL">Rezoom.SQL</a>.</p>
<p>In the final <strong>How to F#</strong>, coming soon, we will put everything we have learned together to create your first F# application.</p>
<h2>Resources</h2>
<ol>
<li><a href="http://www.sqlitetutorial.net/download-install-sqlite/">Install SQLite binaries</a></li>
<li><a href="http://www.sqlitetutorial.net/sqlite-create-table/">CREATE TABLE</a></li>
<li><a href="http://www.sqlitetutorial.net/sqlite-insert/">INSERT</a></li>
<li><a href="http://www.sqlitetutorial.net/sqlite-update/">UPDATE</a></li>
<li><a href="https://fsharpforfunandprofit.com/posts/cli-types/#boxing-and-unboxing">Boxing for fun and profit</a></li>
</ol>
<h1>How to F# - Part 8 (Devon Burriss, 2018-10-30, https://devonburriss.me/how-to-fsharp-pt-8/)</h1>
<p>Even with all the pure functions we could ask for, eventually our applications are going to have to interact with the unpredictable outside world. Also, sometimes we just mess up. In this post we look at ways of dealing with errors in our applications.</p>
<!--more-->
<h2>Throwing our toys out the pram</h2>
<p>In South Africa we say "Throwing your toys out the cot", but it means the same thing. When a child is upset, they tend to throw whatever they have in hand to express their distress. When you cannot communicate your intent in another way, this is how you get your parents' attention.</p>
<p>With that backdrop, let's introduce <code>Exception</code>s. When an error occurs, the normal execution of the application stops and an error is raised as an object that contains information about what went wrong. Exceptions can happen, for example, when reading from a file that is not where you expect it to be.</p>
<p>You can also raise exceptions yourself.</p>
<pre><code class="language-fsharp">open System
// int -> int
let doublePositiveNumber x =
    if x < 0 then raise (new ArgumentException("Argument must be positive number"))
    else x*2
let y = doublePositiveNumber 2 // val y : int = 4
let z = doublePositiveNumber (-1) // ERROR: System.ArgumentException: Argument must be positive number
</code></pre>
<p><code>ArgumentException</code> (which is in the <code>System</code> namespace) is a type that inherits from <code>SystemException</code>, which in turn inherits from <code>Exception</code>. We haven't covered object-oriented topics in this series, but basically that means <code>ArgumentException</code> picks up features from the types above it in the hierarchy. All errors that occur during application execution inherit from <code>Exception</code>. We will see an implication of this later when we explore ways of handling exceptions.</p>
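<p>A quick sketch of what this inheritance means in practice: a value typed as the base <code>exn</code> can hold an <code>ArgumentException</code>, and the <code>:?</code> type-test pattern recovers the more specific type:</p>

```fsharp
open System

// ArgumentException inherits from Exception, so an exn value can hold one.
let ex : exn = ArgumentException("Argument must be positive number") :> exn

let describe (e: exn) =
    match e with
    | :? ArgumentException -> "an argument problem"
    | _ -> "some other exception"

printfn "%s" (describe ex) // an argument problem
```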
<p>F# also provides a really easy way to raise an exception with a string message using <code>failwith</code>.</p>
<pre><code class="language-fsharp">let doublePositiveNumber x =
    if x < 0 then failwith "Argument must be positive number"
    else x*2
</code></pre>
<h2>Custom exceptions</h2>
<p>In F# defining custom exceptions is simple (especially compared to C#). Let's define a custom exception of type <code>MustBePositiveException</code> that carries a tuple of type <code>string * int</code>.</p>
<pre><code class="language-fsharp">exception MustBePositiveException of string * int
let doublePositiveNumber x =
    if x < 0 then raise (MustBePositiveException("Argument must be positive number", x))
    else x*2
</code></pre>
<p>We will see soon how we can handle exceptions that occur.</p>
<h2>Handling exceptions</h2>
<p>The semantics of handling exceptions are that we try to do something, with the possibility of one or more exceptions occurring. Let's look at an example.</p>
<pre><code class="language-fsharp">open System
let z =
    try
        doublePositiveNumber (-1)
    with
    | :? Exception as ex -> printfn "ERROR: %s" ex.Message; 0 // don't do this
</code></pre>
<p>We <code>try</code> to execute <code>doublePositiveNumber</code> and when it fails, execution falls through to the <code>with</code> part of the expression. Here we pattern match on the type using <code>:? Exception</code> and return <code>0</code> after printing the exception <code>Message</code>.</p>
<p>So we come to our first tip on exception handling.</p>
<blockquote>
<p>TIP 1: Only handle exceptions you are expecting. Let the exceptional cases bubble up.</p>
</blockquote>
<p>What does this mean in practice? It means you should be more precise than handling <code>Exception</code>. Usually we want to do something drastic (like crash the application or cancel processing that HTTP request) if something happened that we did not cater for at all.</p>
<p>Remember that <code>Exception</code> is the type that just about any exception will inherit from, so by adding that as the type to handle, we effectively catch EVERY exception.</p>
<p>Let's see how we can be more specific about which exceptions we handle.</p>
<pre><code class="language-fsharp">exception NumberTooLarge of string * int // a second custom exception for this example
// int
let z =
    try
        doublePositiveNumber (-1)
    with
    | MustBePositiveException(msg,nr) -> printfn "ERROR with number %i: %s" nr msg; 0
    | NumberTooLarge(msg,nr) -> printfn "ERROR with number %i: %s" nr msg; Int32.MaxValue
</code></pre>
<blockquote>
<p>TIP 2: If you can do something meaningful when an error occurs handle it as close to the exception source as possible.</p>
</blockquote>
<p>So we are now being more precise about handling <code>MustBePositiveException</code>, which is better.<br />
NOTE: If we were raising an error using <code>failwith</code> we would handle with <code>| Failure(msg) -> printfn "%s" msg</code>.</p>
<h2>Handling expected exceptions</h2>
<p>So in the previous example we were catching the <code>MustBePositiveException</code> exception and, after printing, returning <code>0</code>. Is this really a good default behavior? Maybe <code>-1</code>? This is hardly elegant or intent revealing. F# provides a functional solution to this problem in the form of <code>Result</code>. <code>Result</code> is similar to <code>Option</code> and <code>List</code> in that it provides an abstraction for dealing with a problem that follows a specific pattern: the result of a function call that can fail is either a success or a failure of some kind. Let's change our calling code to return this <code>Result</code> type.</p>
<pre><code class="language-fsharp">// Result<int,exn>
let z =
    try
        Ok (doublePositiveNumber (-1))
    with
    | MustBePositiveException(msg,nr) as ex -> Error(ex)
<p>So we call <code>Ok</code> with the result if the call succeeds and <code>Error</code> if it throws an exception. Note the signature of the return type is <code>Result<int,exn></code>. The first generic parameter is an <code>int</code> for the successful case and the second is of type <code>exn</code>, an F# exception. If we had instead just sent back the exception message with <code>Error(msg)</code>, the return type would have been <code>Result<int,string></code>.</p>
<h2>Working with Result</h2>
<p>Let's take a look at a complete example and step through it.</p>
<ol>
<li>We define our function that throws an exception</li>
<li>We call the function within a <code>try</code> expression</li>
<li>We handle the <code>Result</code> with pattern matching</li>
</ol>
<pre><code class="language-fsharp">let doublePositiveNumber x =
    if x < 0 then failwith "Argument must be positive number"
    else x*2
let safeDoublePositiveNumber x =
    try
        Ok (doublePositiveNumber x)
    with
    | Failure(msg) -> Error(msg)
let z = safeDoublePositiveNumber (-1)
match z with
| Ok i -> printfn "The answer is %i" i
| Error msg -> printfn "ERROR: %s" msg
</code></pre>
<blockquote>
<p>Output: ERROR: Argument must be positive number</p>
</blockquote>
<p>This leads to our third tip.</p>
<blockquote>
<p>TIP 3: In the majority of cases you cannot do anything about the exception at the source. Return <code>Result</code> for any expected exceptions and let the calling code decide what to do.</p>
</blockquote>
<p>We could of course remove the need for <code>safeDoublePositiveNumber</code> by never throwing the exception in the first place.</p>
<pre><code class="language-fsharp">let doublePositiveNumber x =
    if x < 0 then Error "Argument must be positive number"
    else Ok (x*2)
</code></pre>
<p>Our final tip.</p>
<blockquote>
<p>TIP 4: Rather than raising exceptions for non-exceptional cases, instead just return <code>Result</code>.</p>
</blockquote>
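<p>Once functions return <code>Result</code>, they compose nicely; this is the idea behind the Railway oriented programming link in the resources. Here is a minimal sketch using <code>Result.bind</code> from the standard library:</p>

```fsharp
let doublePositiveNumber x =
    if x < 0 then Error "Argument must be positive number"
    else Ok (x * 2)

// Result.bind only calls the next function on Ok; an Error short-circuits.
let doubled = doublePositiveNumber 3 |> Result.bind doublePositiveNumber
printfn "%A" doubled // Ok 12

let failed = doublePositiveNumber (-1) |> Result.bind doublePositiveNumber
printfn "%A" failed // Error "Argument must be positive number"
```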
<h2>Conclusion</h2>
<p>This was a brief introduction to exception handling. There are still more concepts to learn here so I do encourage you to go through the links in the resources section if you would like to learn more. You might want to look into <code>finally</code>, which allows execution of code regardless of the <code>try</code> succeeding or not.</p>
<p>Once you are comfortable with the concepts here I also suggest looking at the Railway oriented programming link in the resources.</p>
<p>To review the tips:</p>
<ol>
<li>Only handle exceptions you are expecting. Let the exceptional cases bubble up.</li>
<li>If you can do something meaningful when an error occurs handle it as close to the exception source as possible.</li>
<li>In the majority of cases you cannot do anything about the exception at the source. Return <code>Result</code> for any expected exceptions and let the calling code decide what to do.</li>
<li>Rather than raising exceptions for non-exceptional cases, instead just return <code>Result</code>.</li>
</ol>
<p>So don't be a child. Communicate your errors back rather than throwing your exceptions out the functions (that metaphor aged badly).</p>
<p>Next in the series we will be looking at a common occurrence in software development. <a href="/how-to-fsharp-pt-9">Working with a database</a>.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://en.wikipedia.org/wiki/Wikipedia:Don%27t_throw_your_toys_out_of_the_pram">Throwing your toys out the pram</a></li>
<li><a href="https://en.wikipedia.org/wiki/Inheritance_(object-oriented_programming)">Inheritance</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/exception-handling/">Microsoft docs</a></li>
<li><a href="https://fsharpforfunandprofit.com/posts/exceptions/">Further reading for fun and profit</a></li>
<li><a href="https://fsharpforfunandprofit.com/rop/">Railway oriented programming</a></li>
</ol>
<h2>Credits</h2>
<ol>
<li>Social image by <a href="https://unsplash.com/@chuttersnap">Chuttersnap</a></li>
</ol>
<h1>How to F# - Part 7 (Devon Burriss, 2018-10-28, https://devonburriss.me/how-to-fsharp-pt-7/)</h1>
<p>So after much threatening in past posts, we will finally be diving a little deeper into collections in F#. We will look at a few of the most commonly used functions on the collection modules by manipulating a <code>list</code> of people that we randomly generate.</p>
<!--more-->
<p>So let's go through a few common actions you would want to perform on a collection. We will use <code>list</code> as an example through most of this post, but what we learn applies to <code>array</code> and <code>seq</code> as well. Before we do that, though, let us briefly touch on the <code>map</code> type again.</p>
<h2>All the beautiful people (Creating data)</h2>
<p>To work with lists we will need some data. Often data comes in the form of tables we need to join together. We will start simple though. Let's create two <code>map</code>s, one mapping numbers to first names and the other mapping numbers to last names.</p>
<div class="table-responsive"><table class="table table-hover">
<thead> <tr> <th>#</th> <th>First name</th> <th>#</th> <th>Last name</th> </tr> </thead>
<tbody>
<tr> <td>1</td> <td>Sue</td> <td>1</td> <td>Ali</td> </tr>
<tr> <td>2</td> <td>Bob</td> <td>2</td> <td>Khan</td> </tr>
<tr> <td>3</td> <td>Neo</td> <td>3</td> <td>Jacobs</td> </tr>
<tr> <td>4</td> <td>Fen</td> <td>4</td> <td>Jensen</td> </tr>
<tr> <td>5</td> <td>An Si</td> <td>5</td> <td>Wu</td> </tr>
<tr> <td>6</td> <td>Jan</td> <td>6</td> <td>Lee</td> </tr>
</tbody>
</table></div>
<p>We will use these to generate a list of people later.</p>
<pre><code class="language-fsharp">let fNames = [ (1, "Sue"); (2, "Bob"); (3, "Neo"); (4, "Fen"); (5, "An Si" ); (6, "Jan")] |> Map.ofList
let lNames = [ (1, "Ali"); (2, "Khan"); (3, "Jacobs"); (4, "Jensen"); (5, "Wu" ); (6, "Lee")] |> Map.ofList
// Map<int,string> -> Map<int,string> -> int -> string
let generateName fnames lnames i =
    let random = new System.Random(i) // don't new up Random every time in a real app
    let fo = random.Next(1,7) // random number from 1 to 6 (the upper bound is exclusive)
    let lo = random.Next(1,7) // random number from 1 to 6 (the upper bound is exclusive)
    sprintf "%s %s" (Map.find fo fnames) (Map.find lo lnames)
// int -> string
let nameGen = generateName fNames lNames
</code></pre>
<p>We curry <code>generateName</code> with the <code>map</code>s of <code>fNames</code> (first names) and <code>lNames</code> (last names) transforming a function of signature <code>Map<int,string> -> Map<int,string> -> int -> string</code> into <code>int -> string</code>.</p>
<p>So calling <code>nameGen</code> will give us a random name like <em>"An Ali"</em> or <em>"Neo Jensen"</em>. First we create two <code>map</code>s from <code>list</code>s of <code>int * string</code> tuples using <code>Map.ofList</code>. In the <code>generateName</code> function we randomly pick a first name and a last name from the <code>map</code>s using <code>Map.find</code>, which has the signature <code>'Key -> Map<'Key,'T> -> 'T</code>: given a key and a <code>map</code>, it returns the value found at that key. Since we randomly generate the key, we get a random name each time.</p>
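<p>Note that <code>Map.find</code> throws if the key is missing, so there is also a safe variant, <code>Map.tryFind</code>. A small standalone sketch (with its own tiny map):</p>

```fsharp
let names = [ (1, "Sue"); (2, "Bob") ] |> Map.ofList
let sue = Map.find 1 names        // "Sue"; throws KeyNotFoundException for a missing key
let missing = Map.tryFind 9 names // the safe variant returns an option
printfn "%s" sue                  // Sue
printfn "%A" missing              // None
```

In <code>generateName</code> the keys are always generated within range, so <code>Map.find</code> is safe there.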
<h2>And there was light (Creating a list)</h2>
<p>Although we can create a list with <code>[ expression ]</code>, let's look at the <code>List.init</code> function, which has the signature <code>int -> (int -> 'T) -> 'T list</code>. Let's break this down:</p>
<ol>
<li><code>int</code> - size of the list to create</li>
<li><code>(int -> 'T)</code> - a function that takes in the current position in the list being generated and returns an instance of type <code>'T</code> to place at that position</li>
<li><code>'T list</code> - the list that will be created of type <code>'T</code></li>
</ol>
<p>So we want to create a <code>Person list</code>. We need a function <code>int -> Person</code>. We curry in the <code>nameGen</code> to generate a <code>Person</code> with a randomly generated name.</p>
<pre><code class="language-fsharp">type Person = { Id:int; Name:string }
// (int -> string) -> int -> Person
let generatePerson gen i = { Id = i; Name = gen(i) }
// int -> Person
let personGen = generatePerson nameGen
let people = List.init 10 personGen
</code></pre>
<p>So <code>people</code> will be a list of <strong>10</strong> <code>Person</code> instances.</p>
<pre><code class="language-fsharp">[
{ Id = 0; Name = "Wu Fen" }
{ Id = 1; Name = "Bob Ali" }
{ Id = 2; Name = "Fen Jacobs" }
{ Id = 3; Name = "Bob Jenson" }
{ Id = 4; Name = "An Si Wu" }
{ Id = 5; Name = "Bob Khan" }
{ Id = 6; Name = "An Si Jacobs" }
{ Id = 7; Name = "Bob Wu" }
{ Id = 8; Name = "An Si Ali" }
{ Id = 9; Name = "Neo Jenson" }
]
</code></pre>
<h2>These are not the elements you are looking for (Finding an element)</h2>
<p>Now that we have a list, let's see how we work with it. A common need while programming is to find an element in a collection.</p>
<pre><code class="language-fsharp">let bob = people |> List.find (fun p -> p.Name.StartsWith("Bob"))
</code></pre>
<p>We use <code>List.find</code>, which has the signature <code>('T -> bool) -> 'T list -> 'T</code>. In our case that would be a function <code>(Person -> bool)</code> that returns <code>true</code> if it is the element you are looking for. Now this is all good and well if there is a "Bob" in the list. But it is a randomly generated collection of names; what if we want to find a specific "Bob" and he isn't in the list?</p>
<pre><code class="language-fsharp">let bob = people |> List.find (fun p -> p.Name = "Bob Khan")
</code></pre>
<p>Assuming you do not have a "Bob Khan" in your list, you will get an exception thrown.</p>
<blockquote>
<p>System.Collections.Generic.KeyNotFoundException: An index satisfying the predicate was not found in the collection.</p>
</blockquote>
<p>Remember in <a href="/how-to-fsharp-pt-6">a previous post</a> we dealt with handling cases when there is no data using <code>option</code>. Well, this is one of those times. Let's use a very similar function to <code>List.find</code> called <code>List.tryFind</code>, which has the signature <code>('T -> bool) -> 'T list -> 'T option</code>.</p>
<pre><code class="language-fsharp">let maybeBob = people |> List.tryFind (fun p -> p.Name = "Bob Khan")
</code></pre>
<blockquote>
<p>val maybeBob : Person option = Some {Id = 5; Name = "Bob Khan";}
OR
val maybeBob : Person option = None</p>
</blockquote>
<p>So depending on whether the <code>list</code> contains someone named "Bob Khan" the function will return <code>Some</code> or <code>None</code>.</p>
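<p>A straightforward way to consume that result is to pattern match on it. A small self-contained sketch (re-declaring <code>Person</code> and using a short illustrative list so it runs on its own):</p>
<pre><code class="language-fsharp">type Person = { Id:int; Name:string }
let people = [ { Id = 1; Name = "Bob Ali" }; { Id = 2; Name = "Fen Jacobs" } ]
let maybeBob = people |> List.tryFind (fun p -> p.Name = "Bob Khan")
match maybeBob with
| Some p -> printfn "Found %s" p.Name
| None -> printfn "No match" // this branch runs for the list above
</code></pre>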
<h2>Take what you need (Filtering a list)</h2>
<p>Sometimes we are not looking for a specific element but for multiple elements. Maybe we are looking for elements that match some criteria, or we want to exclude elements based on something. Either way, we want to filter the collection. For <code>list</code>s we use the <code>List.filter</code> function, which has the signature <code>('T -> bool) -> 'T list -> 'T list</code>.</p>
<pre><code class="language-fsharp">let bobs = people |> List.filter (fun p -> p.Name.StartsWith("Bob"))
</code></pre>
<p>So given a function that returns <code>true</code> if the element should be in the <code>list</code>, you will get a new <code>list</code> with the matching elements in it.</p>
<pre><code class="language-fsharp">[
{ Id = 1; Name = "Bob Ali" }
{ Id = 3; Name = "Bob Jenson" }
{ Id = 5; Name = "Bob Khan" }
{ Id = 7; Name = "Bob Wu" }
]
</code></pre>
<p>So in my <code>list</code>, 4 out of 10 people had a first name of "Bob".</p>
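<p>The exclusion case mentioned earlier is just a predicate wrapped in <code>not</code>. A self-contained sketch with illustrative data:</p>
<pre><code class="language-fsharp">type Person = { Id:int; Name:string }
let people = [ { Id = 1; Name = "Bob Ali" }; { Id = 2; Name = "Fen Jacobs" }; { Id = 3; Name = "Bob Wu" } ]
// keep everyone whose name does NOT start with "Bob"
let notBobs = people |> List.filter (fun p -> not (p.Name.StartsWith("Bob")))
// notBobs = [ { Id = 2; Name = "Fen Jacobs" } ]
</code></pre>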
<h2>A change is as good as a holiday (Working with list elements)</h2>
<p>Imagine we have our collection of people but a request comes in that the names be in the format <em>Surname, First Names</em>.
First things first, let's write a function <code>leadingLastName</code> that will take in <em>"Neo Jenson"</em> and transform it to <em>"Jenson, Neo"</em> and <em>"An Si Ali"</em> to <em>"Ali, An Si"</em>.</p>
<pre><code class="language-fsharp">// char -> string -> string[]
let split (sep:char) (s:string) = s.Split([|sep|])
// string -> string
let leadingLastName (name:string) =
    let lastNameToFront (names:string array) =
        match names with
        | [||] -> ""
        | [|x|] -> x
        | [|x;y|] -> String.concat ", " ([|y;x|])
        | _ -> [| yield ([Array.last names; ","] |> String.concat ""); for i = 0 to ((Array.length names) - 2) do yield names.[i] |] |> String.concat " "
    name
    |> split ' '
    |> lastNameToFront
</code></pre>
<p>This uses <code>match</code> to pattern match on the <code>array</code>. Let's break it down quickly:</p>
<ol>
<li><code>[||] -> ""</code> - Matches when the <code>array</code> is empty: the returned name is empty</li>
<li><code>[|x|] -> x</code> - Matches when the <code>array</code> has a single element: a single name like "Cher"</li>
<li><code>[|x;y|] -> String.concat ", " ([|y;x|])</code> - Matches when the <code>array</code> has 2 elements: a first name and surname, so it swaps them and adds a comma</li>
<li><code>_</code> - this one is quite complex but basically it moves the last element to the front and adds a comma after it</li>
</ol>
<p>Next we will use <code>leadingLastName</code> with <code>List.map</code> which has the signature <code>('T -> 'U) -> 'T list -> 'U list</code>. We have seen <code>map</code> (the function not the type) before when we learned about <code>option</code>. Although <code>map</code> can map from a value to a value of any other type, in that case we went from <code>string -> string</code> with name to email. In this case we will also go from <code>string</code> to <code>string</code>. Just remember you can map to different types.</p>
<pre><code class="language-fsharp">let withLeadingLName = people |> List.map (fun p -> {p with Name = (leadingLastName p.Name)})
[
{ Id = 0; Name = "Wu, Fen" }
{ Id = 1; Name = "Ali, Bob" }
{ Id = 2; Name = "Jacobs, Fen" }
{ Id = 3; Name = "Jenson, Bob" }
{ Id = 4; Name = "Wu, An Si" }
...
]
</code></pre>
<p>We supplied <code>map</code> an inline function <code>(fun p -> {p with Name = (leadingLastName p.Name)})</code> that takes a <code>Person</code> and uses <code>leadingLastName</code> to return a new <code>Person</code> with the name changed.</p>
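<p>Since <code>map</code> can change the element type, here is a small self-contained sketch going from <code>Person</code> to <code>string</code> (the data is illustrative):</p>
<pre><code class="language-fsharp">type Person = { Id:int; Name:string }
let people = [ { Id = 0; Name = "Wu, Fen" }; { Id = 1; Name = "Ali, Bob" } ]
// (Person -> string) -> Person list -> string list
let labels = people |> List.map (fun p -> sprintf "%i: %s" p.Id p.Name)
// labels = [ "0: Wu, Fen"; "1: Ali, Bob" ]
</code></pre>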
<h2>Get it sorted (sorting elements)</h2>
<p>Often we care about the order of the elements in a collection. We can use one of the sorting functions to get a new sorted <code>list</code> back. <code>List.sortBy</code> has the signature <code>('T -> 'Key) -> 'T list -> 'T list</code>.</p>
<pre><code class="language-fsharp">let sorted = withLeadingLName |> List.sortBy (fun p -> p.Name)
[
{ Id = 8; Name = "Ali, An Si" }
{ Id = 1; Name = "Ali, Bob" }
{ Id = 6; Name = "Jacobs, An Si" }
{ Id = 2; Name = "Jacobs, Fen" }
{ Id = 3; Name = "Jenson, Bob" }
...
]
</code></pre>
<p>We pass it a function that determines what to sort by, then the list, and we get back the sorted list, in this case sorted by the <code>Name</code> <code>string</code>.</p>
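<p><code>List.sortBy</code> has a few siblings worth knowing. A quick sketch of <code>List.sort</code>, <code>List.sortDescending</code>, and <code>List.sortWith</code>:</p>
<pre><code class="language-fsharp">let nums = [ 3; 1; 2 ]
let ascending = List.sort nums            // [1; 2; 3] - uses the default comparison
let descending = List.sortDescending nums // [3; 2; 1]
// sortWith takes a custom comparer returning negative, zero, or positive
let custom = nums |> List.sortWith (fun a b -> compare b a) // [3; 2; 1]
</code></pre>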
<h2>Family business (Grouping)</h2>
<p>What if our next task was to group the people by their last name? Well, with the built-in <code>list</code> functions this is very simple.</p>
<pre><code class="language-fsharp">// Person -> string
let getLastName (person:Person) = person.Name |> split ',' |> Array.head
let groupedByLName = withLeadingLName |> List.groupBy getLastName
[
("Wu", [{Id = 0; Name = "Wu, Fen";}; {Id = 4; Name = "Wu, An Si";}; {Id = 7; Name = "Wu, Bob";}]);
("Ali", [{Id = 1; Name = "Ali, Bob";}; {Id = 8; Name = "Ali, An Si";}]);
...
]
</code></pre>
<p>We use the <code>List.groupBy</code> function which has the signature <code>('T -> 'Key) -> 'T list -> ('Key * 'T list) list</code>. Lets break that down.</p>
<ol>
<li><code>('T -> 'Key)</code> - a function that will take an element from the <code>list</code> and return a key to group by. In our case it should take a <code>Person</code> and return their last name.</li>
<li><code>'T list</code> - the original list that needs grouping</li>
<li><code>('Key * 'T list) list</code> - a list of tuples where the first element of the tuple is the <strong>key</strong> and the second is a list of elements that matched with that key</li>
</ol>
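<p>To consume the grouped result you can simply map over the key/list tuples. A self-contained sketch (with illustrative names) that counts family sizes:</p>
<pre><code class="language-fsharp">let names = [ "Wu, Fen"; "Ali, Bob"; "Wu, An Si" ]
let grouped = names |> List.groupBy (fun (n:string) -> n.Split(',').[0])
// count the members in each group
let counts = grouped |> List.map (fun (key, members) -> (key, List.length members))
// counts = [ ("Wu", 2); ("Ali", 1) ]
</code></pre>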
<h2>Less random</h2>
<p>As a final demonstration, let's look at a less-used <code>list</code> function. At the beginning of this post we had 2 <code>list</code>s of names. What if we didn't care about the pairing being random? What if we just joined the 1st first name to the 1st last name, and continued like that down the <code>list</code>s?</p>
<pre><code class="language-fsharp">let fNames = [ "Sue"; "Bob"; "Neo"; "Fen"; "An Si" ; "Jan"]
let lNames = [ "Ali"; "Khan"; "Jacobs"; "Jenson"; "Wu"; "Lee"]
let names = List.zip fNames lNames |> List.map (fun (fname,lname) -> sprintf "%s, %s" lname fname)
</code></pre>
<blockquote>
<p>val names : string list = ["Ali, Sue"; "Khan, Bob"; "Jacobs, Neo"; "Jenson, Fen"; "Wu, An Si"; "Lee, Jan"]</p>
</blockquote>
<p>We used the <code>List.zip</code> function that takes 2 lists of equal length and zips them together into a <code>list</code> of tuples, each made up of an element from the 1st <code>list</code> and an element from the 2nd.</p>
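<p>A quick sketch of the pairing, including what happens when the lengths differ:</p>
<pre><code class="language-fsharp">let pairs = List.zip [ 1; 2; 3 ] [ "a"; "b"; "c" ]
// pairs = [ (1, "a"); (2, "b"); (3, "c") ]
// List.zip [ 1; 2 ] [ "a" ] throws an ArgumentException because the lists have different lengths
</code></pre>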
<h2>Conclusion</h2>
<p>So we finally got to dive into working with collections. In this post you learned how to create, map, sort, group, and even zip a <code>list</code>. Remember that the functions we worked with here are also available in the <code>Array</code> and <code>Seq</code> modules.</p>
<p>In a coming post we will be dealing with error handling.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/lists">Lists</a></li>
<li><a href="https://fsharpforfunandprofit.com/posts/list-module-functions/">For fun and profit List module functions</a></li>
</ol>
<h2>Credits</h2>
<ol>
<li>Background image by <a href="https://unsplash.com/@jackreichert">Jack Reichert</a></li>
<li>Social image by <a href="https://unsplash.com/@p">Patrik Göthe</a></li>
</ol>https://devonburriss.me/how-to-fsharp-pt-6/How to F# - Part 62018-10-27T00:00:00+00:00Devon Burrisshttps://devonburriss.me/how-to-fsharp-pt-6/<p>Sometimes when dealing with data, the value you are expecting does not exist. Functional programming has a common abstraction for dealing with this called <strong>Maybe</strong>. In F# this abstraction is known as <code>option</code>.</p>
<!--more-->
<p>Rather than just diving into the functional way of handling no data, let's briefly look at how non-functional languages typically handle the absence of data, namely <code>null</code>.</p>
<h2>What is the problem with null?</h2>
<p>So what problem are we solving by abstracting what it means to have data or not? Well lets look at how things are typically handled in most popular languages. In languages like Java, C#, and Javascript <code>null</code> represents the intentional absence of any object. So why is this a problem? Firstly, <code>null</code> carries no information about the type of data that was expected. Was it a missing <code>string</code> or a <code>Person</code> object? If <code>null</code> is all you have, you by definition have NOTHING! The other problem is in the handling of it. You need to explicitly handle any case where a value may be <code>null</code>.</p>
<pre><code class="language-csharp">// Problem 1: Need to check for null
if(!string.IsNullOrEmpty(email))
{
SendEmail(email);
} else ...
</code></pre>
<p>This means your code can become littered with <code>null</code> checks and if you forget to check and a <code>null</code> sneaks through, your code will throw some kind of <code>NullReferenceException</code>.</p>
<pre><code class="language-csharp">// Problem 2: If you do not check for null your application can blow up
var email = (firstname.ToLower()) + "@acme.com";
</code></pre>
<p>If <code>firstname</code> is <code>null</code>, this statement will throw an exception and possibly crash our application.</p>
<p>The strategies for mitigating these problems are to try to catch all <code>null</code>s at the boundaries of your application and to use the <a href="https://martinfowler.com/eaaCatalog/specialCase.html">Null Object/Special Case</a> pattern. We won't go into these here, but my main criticism is the noise they add to the code.</p>
<h2>Maybe this is here</h2>
<p>The nice thing about the <code>Maybe</code> abstraction is that it is generic, unlike the <strong>Special Case</strong> pattern, and in general it can be much more elegant, saving you from repeatedly checking for <code>null</code>.</p>
<p>As mentioned before, in F# the <strong>Maybe</strong> abstraction (known as a Monad in functional programming theory) is an <code>option</code>. To see how it works we are going to define a function that takes a name as <code>string option</code> and turns it into an email.</p>
<p>First, let's briefly discuss what <code>option</code> actually is. <code>option</code> can have one of 2 values: <strong>Some of 'T</strong> OR <strong>None</strong>. We can optionally have some value of type <code>'T</code>, else we will have <code>None</code>.</p>
<p>Below we see how we define a value with <code>Some</code> or <code>None</code>:</p>
<pre><code class="language-fsharp">let fname1 = Some "Brandon"
let fname2 = None
//string option -> string option
let makeEmail name = Option.map (fun n -> sprintf "%s@acme.com" n) name
let email1 = makeEmail fname1
let email2 = makeEmail fname2
</code></pre>
<blockquote>
<p>val email1 : string option = Some "Brandon@acme.com"
val email2 : string option = None</p>
</blockquote>
<p><code>Option.map</code> has a signature of <code>('T -> 'U) -> 'T option -> 'U option</code>.</p>
<ol>
<li><code>('T -> 'U)</code> - a function that maps from <code>'T</code> to <code>'U</code>. This is a generic function so in our case it is a function of <code>string -> string</code></li>
<li><code>'T option</code> - the input value to map. In our case <code>'T</code> will be the <code>string</code> holding the name</li>
<li><code>'U option</code> - the return value of type <code>'U</code> will be the email <code>string</code></li>
</ol>
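<p>To see what <code>Option.map</code> is saving us, here is the same logic written out with an explicit <code>match</code> (a sketch equivalent to <code>makeEmail</code> above):</p>
<pre><code class="language-fsharp">//string option -> string option
let makeEmailExplicit name =
    match name with
    | Some n -> Some (sprintf "%s@acme.com" n)
    | None -> None
// makeEmailExplicit (Some "Brandon") = Some "Brandon@acme.com"
// makeEmailExplicit None = None
</code></pre>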
<h2>Alternatives</h2>
<p>What if we wanted to have a fallback email in case no name was supplied? That is simple enough:</p>
<pre><code class="language-fsharp">//string option -> string option
let makeEmail name =
    name
    |> Option.orElse (Some "info")
    |> Option.map (fun n -> sprintf "%s@acme.com" n)
</code></pre>
<p>We have changed to a pipeline style now, where the <code>string option</code> is piped through <code>Option.orElse</code>. If the value is <code>Some</code> it passes through unchanged; if it is <code>None</code> it gets the value <code>Some("info")</code>.</p>
<p>Running again we would get the following value for <code>email2</code>:</p>
<blockquote>
<p>val email2 : string option = Some "info@acme.com"</p>
</blockquote>
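<p>A related function worth knowing is <code>Option.defaultValue</code> (available in FSharp.Core 4.4 and later), which unwraps the value with a fallback in a single step. A sketch:</p>
<pre><code class="language-fsharp">//string option -> string
let emailOrDefault name =
    name
    |> Option.map (fun n -> sprintf "%s@acme.com" n)
    |> Option.defaultValue "info@acme.com"
// emailOrDefault (Some "Brandon") = "Brandon@acme.com"
// emailOrDefault None = "info@acme.com"
</code></pre>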
<h2>Handling null</h2>
<p>What if we are getting values from a database but always wrapping them in <code>Some</code>? Then we could be getting values of <code>Some(null)</code>. We can convert <code>Some(null)</code> to <code>None</code> using <code>Option.bind</code>, which has the signature <code>('T -> 'U option) -> 'T option -> 'U option</code>. So we pass it a function of <code>string -> string option</code>, which you can see below is the <code>Option.ofObj</code> function.</p>
<pre><code class="language-fsharp">//string option -> string option
let makeEmail name =
    name
    |> Option.bind Option.ofObj
    |> Option.orElse (Some "info")
    |> Option.map (fun n -> sprintf "%s@acme.com" n)
</code></pre>
<p>Finally, what if we are dealing with data that comes from a C# library and we had not wrapped it in <code>Some</code>? Values could be <code>null</code>. Let's loosen the <code>makeEmail</code> constraints a bit and just accept <code>string</code>; we will then transform it directly to an <code>option</code> type. Unfortunately, since databases and other languages make <code>null</code> an acceptable value, we often do still have to deal with it when stepping outside our process.</p>
<pre><code class="language-fsharp">//string -> string
let makeEmail name =
    // None if the string is null, otherwise Some of the string
    let sanitizeString (s:string) = if (box s = null) then None else Some s
    name
    |> sanitizeString
    |> Option.orElse (Some "info")
    |> Option.map (fun n -> sprintf "%s@acme.com" n)
    |> Option.get
</code></pre>
<p>In the above example I also returned the contained value, so a <code>string</code> instead of a <code>string option</code>.</p>
<p>Now at this point you might ask what the point of using <code>option</code> here is, and I would tend to agree. This is, after all, demo code. I just wanted to point out how to sanitize a possible <code>null</code> value and then use <code>Option.get</code> to get the <code>'T</code> value, in this case <code>string</code>. <code>get</code> will throw an <code>ArgumentException</code> if passed a <code>None</code>.</p>
<h2>Conclusion</h2>
<p>We really just scratched the surface of the functions available on <code>Option</code> but I hope you have seen how it can be used to represent the absence of data. Although when dealing with the outside world (outside your application process) you are still forced to think about the possibility of <code>null</code>, <code>Option</code> has some major advantages over its OO counterparts. For one there is a lot less branching logic. The <code>Option</code> functions will often just handle <code>None</code> elegantly. This is a particular challenge of the <strong>Special case</strong> approach which requires you to think about a specific implementation for every type that can be <code>null</code> and think about what a no operation means.</p>
<p>The most important function to understand here though would have to be <code>map</code>. We will see <code>map</code> over and over in different modules. It allows you to operate in the abstraction you are in without leaving that abstraction but still manipulate the data contained within.</p>
<p>Next up we will finally be <a href="/how-to-fsharp-pt-7">diving into collections</a>.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/values/null-values">Null Values</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/options">Option</a></li>
</ol>https://devonburriss.me/how-to-fsharp-pt-5/How to F# - Part 52018-10-26T00:00:00+00:00Devon Burrisshttps://devonburriss.me/how-to-fsharp-pt-5/<p>In the <a href="/how-to-fsharp-pt-4">previous post</a> we looked at language features that allowed us to control the flow of our applications. In this post we will look at Pattern Matching, which allows for some very powerful control flow, as well as some neat deconstruction of values.</p>
<!--more-->
<p>In this post we will look at a few ways of deconstructing values and end with an in-depth look at <code>match</code> again.</p>
<h2>Deconstructing a tuple</h2>
<p>Let's ease into pattern matching by looking at deconstructing a tuple. Remember, a tuple is a little like a record except it has no named accessor fields. We have the <code>fst</code> and <code>snd</code> functions that get the values for you, but if you have more than 2 elements in your tuple you are on your own. Let's refresh by looking at an example from <a href="/how-to-fsharp-pt-1">part 1</a>:</p>
<pre><code class="language-fsharp">//create a tuple of type bool * int
let myTuple = (true,99)
// use the fst function to get the first value in the tuple
let b1 = fst myTuple
// use the snd function to get the second value in the tuple, with pipe forward operator
let n1 = myTuple |> snd
// use pattern matching to get the values
let (b,n) = myTuple
//val b : bool = true
//val n : int = 99
</code></pre>
<p>Notice how on the last line <code>let (b,n) = myTuple</code> we deconstruct the tuple to individual values. This is a form of pattern matching. The pattern on the left matches the pattern of a tuple that is being assigned to it so F# is able to assign the respective elements from the tuple to each of those elements.</p>
<pre><code class="language-fsharp">let tripleThreat = (true,99,"str")
let (b2,n2,s1) = tripleThreat
</code></pre>
<p>As you would expect, when you add more elements the pattern on the left needs to match.</p>
<h2>Function arguments</h2>
<p>Let's drill into this a bit more. We can use it to assign values while deconstructing a tuple, but what if we want to accept a tuple argument into a function and we only care about the deconstructed values?</p>
<pre><code class="language-fsharp">// bool * int -> unit
let takeATup1 tup =
    let x = fst tup
    let y = snd tup
    if(x) then printfn "%i" (y + 1) else printfn "%i" (y - 1)
let takeATup2 (x,y) =
    if(x) then printfn "%i" (y + 1) else printfn "%i" (y - 1)
let myTuple = (true,99)
takeATup1 myTuple
takeATup2 myTuple
</code></pre>
<blockquote>
<p>Output: 100<br />
Output: 100</p>
</blockquote>
<p>In <code>takeATup1</code> we accept the argument as a tuple value. In <code>takeATup2</code> we pattern match on it to be able to get straight to its constituent elements. So it is possible to deconstruct a tuple in the argument. Wouldn't it be useful if we could deconstruct other types?</p>
<h2>Sum type</h2>
<p>A common pattern in F# is to create specific types to document your code a little better using the type system. Say we had an <code>int</code> that uniquely identifies a row in a spreadsheet table. We could just make it an <code>int</code>, or we could create a special type to represent what it is. Doing this in F# is super easy. Then whenever we need to get that <code>int</code> out to use it, we simply extract it using the same deconstruction technique we saw earlier.</p>
<pre><code class="language-fsharp">type Id = | RowId of int
let getRow (RowId rid) =
    printfn "%i" rid
    (rid,true)
let i = RowId 1
let row = getRow i
</code></pre>
<blockquote>
<p>Output: 1</p>
</blockquote>
<p>Did you notice how, like with tuples, the pattern matches what is used to construct the value in the first place?</p>
<h2>Product type</h2>
<p>As one last example before we switch to <code>match</code>, you can do the same kind of deconstruction with record types.</p>
<pre><code class="language-fsharp">
type Person = { Name:string; BirthYear:int }
let p1 = { Name = "Devon"; BirthYear = 2120 }
let sayHello { Name = name; BirthYear = _ } =
    printfn "Hello %s" name
sayHello p1
</code></pre>
<blockquote>
<p>Output: Hello Devon</p>
</blockquote>
<p>Again it looks like we are constructing the value in the argument. One thing of note is that I used the wildcard symbol <code>_</code> to show that we don't care about the value of <code>BirthYear</code> within the scope of this function.</p>
<h2>Match expression (revisited)</h2>
<p>We covered <code>match</code> in <a href="/how-to-fsharp-pt-4">part 4</a> but are going to revisit it with our new-found knowledge of pattern matching.</p>
<p>To dip our toes in, let's create a function that takes a <code>bool</code> and an <code>int</code>; if the first argument is <code>true</code> it increments the second argument, else it decrements it.</p>
<pre><code class="language-fsharp">// bool -> int -> int
let incDec t n =
    match (t,n) with
    | (true,x) -> x + 1
    | (false,x) -> x - 1
printfn "%i" (incDec true 10)
printfn "%i" (incDec false 10)
</code></pre>
<blockquote>
<p>Output: 11<br />
Output: 9</p>
</blockquote>
<p>Note how we created a tuple in the input expression of the <code>match</code> and then pattern match for the different options.</p>
<p>Before we move on, let's highlight some other patterns and features. Let us add 2 more constraints to our function.</p>
<ol>
<li>If the value is <code>1</code> we ignore the boolean and just return <code>1</code></li>
<li>If the value is less than or equal to <code>0</code> we will return <code>0</code></li>
</ol>
<pre><code class="language-fsharp">let incDec t n =
    match (t,n) with
    | (_,1) -> 1
    | (_,x) when x <= 0 -> 0
    | (true,x) -> x + 1
    | (false,x) -> x - 1
printfn "%i" (incDec true 10)
printfn "%i" (incDec false 10)
printfn "%i" (incDec false 1)
printfn "%i" (incDec false -5)
</code></pre>
<blockquote>
<p>Output: 11<br />
Output: 9<br />
Output: 1<br />
Output: 0</p>
</blockquote>
<p>A few things to note here. Firstly, we used the wildcard <code>_</code> to indicate that we don't care about the value of the boolean; the case matches whether the first element is <code>true</code> or <code>false</code>. Secondly, we used a condition with the <code>when</code> keyword. This requires that the pattern is matched AND that the condition is then met. Thirdly, the order matters here. If we had added the 2 new cases at the end of the <code>match</code> they would never be hit.</p>
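<p>To see why the order matters, here is a sketch with the special cases moved to the end; the compiler warns that those rules will never be matched:</p>
<pre><code class="language-fsharp">let incDecWrong t n =
    match (t,n) with
    | (true,x) -> x + 1
    | (false,x) -> x - 1
    | (_,1) -> 1             // warning: this rule will never be matched
    | (_,x) when x <= 0 -> 0 // warning: this rule will never be matched
// incDecWrong false 1 = 0, not 1, because the special case never fires
</code></pre>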
<h2>Active Patterns</h2>
<p>Active Patterns is a really cool feature that can be used to simplify the <code>match</code> cases by wrapping up some pattern matching into named partitions. I am going to cover partial active patterns here, as I have found them the most useful.</p>
<p>To demonstrate the usage of partial active patterns we are going to code up a little game called FizzBuzz. How it works is you count upward, saying each number, unless:</p>
<ol>
<li>The number is divisible by 3, then you say <em>Fizz</em></li>
<li>The number is divisible by 5, then you say <em>Buzz</em></li>
<li>The number is divisible by both 3 and 5, then you say <em>FizzBuzz</em></li>
</ol>
<pre><code class="language-fsharp">// define partial active patterns
let (|Fizz|_|) i = if ((i%3) = 0) then Some() else None
let (|Buzz|_|) i = if ((i%5) = 0) then Some() else None
// use partial active patterns
let fizzbuzz i =
    match i with
    | Fizz & Buzz -> printf "Fizz Buzz, "
    | Fizz -> printf "Fizz, "
    | Buzz -> printf "Buzz, "
    | x -> printf "%i, " x
// run fizz buzz for numbers 1 to 20
[1..20] |> List.iter fizzbuzz
</code></pre>
<blockquote>
<p>Output: 1, 2, Fizz, 4, Buzz, Fizz, 7, 8, Fizz, Buzz, 11, Fizz, 13, 14, Fizz Buzz, 16, 17, Fizz, 19, Buzz,</p>
</blockquote>
<p>They are called partial active patterns because the definition <code>|Fizz|_|</code> has the wildcard <code>_</code> that allows for a match to not occur. We indicate that the match happened by returning <code>Some</code> and that it did not by returning <code>None</code>. We will encounter <code>Some</code> again in a later post when we tackle handling no data.</p>
<p>Notice how for "FizzBuzz" we used <code>&</code> to check it matched both.</p>
<p>I want to point out something that may not be clear. If we wanted to pattern match and deconstruct the value in the match, we could do that by sending the value back with <code>Some(i)</code>. Then the case would look like this: <code>| Fizz x -> printf "Fizz(%i) " x</code>.</p>
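<p>Here is a small self-contained sketch of that idea, using a parameterized partial active pattern that captures the matched value (the names <code>DivisibleBy</code> and <code>describe</code> are my own, for illustration):</p>
<pre><code class="language-fsharp">// a partial active pattern that takes an argument and returns the matched value
let (|DivisibleBy|_|) d i = if (i % d = 0) then Some i else None
let describe i =
    match i with
    | DivisibleBy 3 x -> sprintf "Fizz(%i)" x
    | x -> sprintf "%i" x
// describe 9 = "Fizz(9)"
// describe 4 = "4"
</code></pre>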
<p>These take some playing around with to get comfortable, but once you are, they are great for cleaning up your <code>match</code> expressions and making them more descriptive.</p>
<h2>Conclusion</h2>
<p>Today we looked into various ways you can use pattern matching to both get values and branch your application logic. We also explored partial active patterns by writing an implementation of the FizzBuzz game.</p>
<p>Next up we deal with <a href="/how-to-fsharp-pt-6">handling the absence of data</a>.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://en.wikipedia.org/wiki/Pattern_matching">Pattern Matching on Wikipedia</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/pattern-matching">Pattern Matching on MS docs</a></li>
<li><a href="https://fsharpforfunandprofit.com/posts/match-expression/">Match Expressions for fun and profit</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/active-patterns">Active Patterns</a></li>
</ol>
<h2>Credits</h2>
<ol>
<li>Social image by <a href="https://unsplash.com/@olav_ahrens">Olav Ahrens Røtne</a></li>
</ol>https://devonburriss.me/how-to-fsharp-pt-4/How to F# - Part 42018-10-25T00:00:00+00:00Devon Burrisshttps://devonburriss.me/how-to-fsharp-pt-4/<p>In the <a href="/how-to-fsharp-pt-3">last post</a> we finished off our dive into functions. In this post we will look at control flow. How do we make a branching decision? How do we loop through something until some condition is met?</p>
<!--more-->
<p>I am going to try keep this post short. The reason for this is that although you will invariably need to use control flow expressions in your code, they are stylistically not very functional and there are usually more functional ways to achieve the same thing. We explore those more functional techniques in this and future posts.</p>
<h2>If then else</h2>
<p>Other than <code>match</code> (covered later), <code>if</code> is probably the next most useful control flow expression we will touch on in this post. The <code>if</code> expression takes a <code>bool</code> and if <code>true</code> proceeds with the <code>then</code> body. Usually there is an <code>else</code>, and we will go through when that is necessary.</p>
<pre><code class="language-fsharp">let b = true
if (b) then printfn "Is true" else printfn "Is false"
</code></pre>
<p>The above will print out <em>Is true</em>, and not print <em>Is false</em>.<br />
Maybe we don't want to print out anything if the value is <code>false</code>. We can do this:</p>
<pre><code class="language-fsharp">let b = true
if (b) then printfn "Is true"
</code></pre>
<p>Now if you changed <code>let b = false</code>, nothing would be printed.</p>
<p>What if we wanted to <strong>return a value</strong> based on some condition though without an <code>else</code>?</p>
<pre><code class="language-fsharp">let v = if(b) then 1 // <- Error: This 'if' expression is missing an 'else' branch.
</code></pre>
<p>At least the error message is pretty clear about what the problem is. With the print example we were returning <code>unit</code> so it didn't matter if nothing was returned. Here the expression has to return a value because we are assigning that value.</p>
<pre><code class="language-fsharp">let v = if(b) then 1 else 0
</code></pre>
<p>So depending on whether <code>b</code> is <code>true</code> or <code>false</code>, <code>v</code> will have a value of <code>1</code> or <code>0</code> respectively.</p>
<p>Let's take a look at something a bit more complex:</p>
<pre><code class="language-fsharp">let divideBy d n = n/d
let numerator = 10
let denominator = 2
let j =
    if (denominator <> 0) then
        printfn "Dividing by %i, not 0" denominator
        let x = numerator |> divideBy denominator
        printfn "The answer is %i" x
        x
    else
        printfn "Dividing by 0"
        0
</code></pre>
<blockquote>
<p>Note that we don't have to assign this expression to a value (here <code>j</code>), but it would be pretty pointless to return a value and not use it. If we don't, the compiler will give a warning: <em>The result of this expression has type 'int' and is implicitly ignored. Consider using 'ignore' to discard this value explicitly, e.g. 'expr |> ignore', or 'let' to bind the result to a name, e.g. 'let result = expr'.</em><br />
It is suggesting we either discard the result with <code>ignore</code> or bind it with <code>let</code>.</p>
</blockquote>
<p>We can have multiple lines in either branch, organized by indentation. Just like with functions, the last expression is what is returned as the value of the <code>if-else</code> expression for each branch.</p>
<h3>Scope</h3>
<p>In the previous post I mentioned scope. This is a good opportunity to demonstrate scope. Check out the assigning of the <code>denominator</code> value below.</p>
<pre><code class="language-fsharp">let divideBy d n = n/d
let numerator = 10
let denominator = 0
if (denominator <> 0) then
    printfn "Dividing by %i, not 0" denominator
    let x = numerator |> divideBy denominator
    printfn "The answer is %i" x
    x
else
    printfn "Dividing by 0"
    let denominator = 1
    printfn "Instead by %i, not 0" denominator
    let x = numerator |> divideBy denominator
    printfn "The answer is %i" x
    x
printfn "Denominator is %i" denominator
</code></pre>
<p>The above prints out the following:</p>
<pre><code class="language-text">Dividing by 0
Instead by 1, not 0
The answer is 10
Denominator is 0
</code></pre>
<p>Notice here how we set a value for <code>denominator</code> within the <code>else</code> branch that shadows the outer one. Once we are back in the scope outside the <code>if</code>, <code>denominator</code> is back to <code>0</code>, even though it was set to <code>1</code> in the <code>else</code> branch. It was <code>1</code> only within the scope of the <code>else</code> branch, as that is where it was set.</p>
<h3>If / elseif / else</h3>
<p>It is (maybe) worth mentioning that you can have more than 2 branches by using <code>elif</code>.</p>
<pre><code class="language-fsharp">if(x = 1) then printfn "x is 1"
elif (x = 2) then printfn "x is 2"
else printfn "x is not 1 or 2"
</code></pre>
<p>We will briefly cover <code>match</code> next. While <strong>if / else</strong> can sometimes be the cleaner solution, once you find yourself reaching for <code>elif</code> you should almost certainly be using <code>match</code> instead.</p>
<h2>Match</h2>
<p>We will hopefully cover pattern matching in more detail in a later entry but no coverage of functional control flow is complete without <code>match</code>.</p>
<p>Let's rewrite the previous example using <code>match</code>:</p>
<pre><code class="language-fsharp">match x with
| 1 -> printfn "x is 1"
| 2 -> printfn "x is 2"
| _ -> printfn "x is not 1 or 2"
</code></pre>
<p>Note that <code>_</code> is a catch-all, much like <code>else</code>.
This is a much cleaner and more functional way to do control flow. A nice benefit is that the compiler warns you if your patterns do not exhaustively cover all possible values of the matched expression.</p>
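<p>To sketch that benefit (using a hypothetical type, since we only cover discriminated unions later in the series): when matching on a type with a fixed set of cases, leaving one out produces an incomplete-match warning.</p>
<pre><code class="language-fsharp">type TrafficLight = Red | Amber | Green

let action light =
    match light with
    | Red -> "stop"
    | Amber -> "slow down"
    | Green -> "go"
// deleting any case above would produce:
// warning FS0025: Incomplete pattern matches on this expression

printfn "%s" (action Amber) // slow down
</code></pre>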
<p>We will hopefully circle around to <code>match</code> again when covering pattern matching as <code>match</code> is far more powerful than demonstrated here.</p>
<p><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/match-expressions">Microsoft docs</a></p>
<h2>for..in</h2>
<p>If you need to loop through an entire collection and do something with each element, you could use the <code>for pattern in enumerable-expression do body-expression</code> syntax. This is like <code>foreach</code> in many other languages. Let's see what that looks like:</p>
<pre><code class="language-fsharp">let numbers = [1..10]
for x in numbers do
    printf "%i " x
</code></pre>
<blockquote>
<p>Output: 1 2 3 4 5 6 7 8 9 10</p>
</blockquote>
<p>See how you can easily create a range of values using the <code>start..finish</code> syntax; we use this to define <code>numbers</code>. Then for each element of the list we print the value, which is bound to <code>x</code>.</p>
<p>We will hopefully cover collections in an upcoming post, but for interest's sake let's see how this would be done in a more functional way.</p>
<pre><code class="language-fsharp">let numbers = [1..10]
numbers |> List.iter (printf "%i ")
</code></pre>
<p>Unsurprisingly the functional approach is to call the <code>iter</code> function from the <code>List</code> module. This <code>iter</code> function has the signature <code>('T -> unit) -> 'T list -> unit</code>. Let's break that down:</p>
<ul>
<li><code>('T -> unit)</code>: a function defining the action to take for each element in the list</li>
<li><code>'T list</code>: the list to iterate through</li>
<li><code>unit</code>: the return type; <code>iter</code> is designed to perform a side effect for each element, not to return a value</li>
</ul>
<p>There are many more functions for working with lists in the <code>List</code> module and matching ones for <code>Array</code> and <code>Seq</code>.</p>
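<p>As a taste of those other functions (a sketch; we cover collections properly later in the series), <code>List.filter</code> and <code>List.map</code> let you transform a list without writing an explicit loop:</p>
<pre><code class="language-fsharp">let numbers = [1..10]
let doubledEvens =
    numbers
    |> List.filter (fun n -> n % 2 = 0) // keep the even numbers
    |> List.map (fun n -> n * 2)        // then double each one
printfn "%A" doubledEvens // [4; 8; 12; 16; 20]
</code></pre>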
<p><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/loops-for-in-expression">Microsoft docs</a></p>
<h2>for..to</h2>
<p>While <code>for..in</code> is for iterating over a collection, <code>for..to</code> allows you to iterate from a start value to another. This is like a <code>for</code> loop in other languages.</p>
<pre><code class="language-fsharp">let ns = [|1..10..100|]
for i = 0 to ((Array.length ns)/2) do
    printf "%i " (Array.get ns i)
</code></pre>
<blockquote>
<p>Output: 1 11 21 31 41 51</p>
</blockquote>
<p>In our example we have an array counting up from 1 towards 100 in increments of 10, so it contains the ten values 1, 11, ..., 91. The loop only runs <code>i</code> from 0 to 5, so we print just the first six elements.</p>
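<p>For completeness: <code>for..to</code> can also count downwards by using <code>downto</code> instead of <code>to</code>.</p>
<pre><code class="language-fsharp">for i = 3 downto 1 do
    printf "%i " i
</code></pre>
<blockquote>
<p>Output: 3 2 1</p>
</blockquote>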
<p><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/loops-for-to-expression">Microsoft docs</a></p>
<h2>while</h2>
<p>What if we want to keep iterating until a certain condition is met? The following code draws random numbers until it draws a <code>7</code>.</p>
<pre><code class="language-fsharp">let random = new System.Random()
let aNumber() = random.Next(1,10)
let mutable n = 0
while (n <> 7) do
    printf "%i " n
    n <- aNumber()
</code></pre>
<blockquote>
<p>Output: 0 9 9 1 6 5 2 2 6 6 2 6 6 1 2 3 6 8 8 1 3 2 2</p>
</blockquote>
<p>We kept going through the <code>while</code> loop until <code>aNumber()</code> returned <code>7</code>.</p>
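<p>For interest, a more functional alternative to the mutable value and <code>while</code> loop is recursion. This is a sketch with a hypothetical <code>rollUntil</code> function, not code from the original post:</p>
<pre><code class="language-fsharp">let random = System.Random()
// draw numbers, counting the attempts, until we draw the target
let rec rollUntil target attempts =
    let n = random.Next(1, 10)
    if n = target then attempts
    else rollUntil target (attempts + 1)

printfn "Rolled a 7 after %i draws" (rollUntil 7 1)
</code></pre>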
<p><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/loops-while-do-expression">Microsoft docs</a></p>
<h2>Conclusion</h2>
<p>In this post we looked at ways to represent branching logic and ways to iterate over values. Remember that much of this is a very imperative approach and as such is not used a lot in the functional paradigm. We looked at some functional techniques for dealing with branching and looping and will continue this in future articles. Next up we look at <a href="/how-to-fsharp-pt-5">Pattern Matching</a>.</p>
<h2>Credits</h2>
<ol>
<li><a href="https://unsplash.com/@spacexuan">Crystal Kwok</a></li>
</ol>https://devonburriss.me/how-to-fsharp-pt-3/How to F# - Part 32018-10-24T00:00:00+00:00Devon Burrisshttps://devonburriss.me/how-to-fsharp-pt-3/<p><a href="/how-to-fsharp-pt-2">Previously</a> we began exploring some theory behind functions. In this post we will look at practical techniques for working with functions.</p>
<h2>Working with functions</h2>
<p>Using functions is unsurprisingly the bread and butter of functional programming, so let us see if we can define a slightly more complex function without bumping into too many new concepts. We are going to define a function that cleans up an input <code>string</code> and then saves it to disk.</p>
<pre><code class="language-fsharp">open System.IO

// some helper string functions
// string -> string
let trim (s:string) = s.Trim()
// string -> string -> unit
let write path content =
    let sanitized = trim content
    File.WriteAllText(path, sanitized)
// use the write function
write "/path/to/file.txt" "Some text to write to file"
</code></pre>
<p>This is our first multi-line function, so let us go through a few things that may not have been immediately obvious from the single-line functions. Firstly, note that the body of the function is defined by the indent. The size of the indent does not matter, as long as it is consistent throughout the scope. We will dive into this a bit more when we touch on scope in a later post on control flow. Secondly, the value of the last expression is what is returned from the function, in this case <code>unit</code>. You don't need to explicitly use <code>return</code> like in many other languages. This is because functions ALWAYS return something, so the compiler can assume that the last expression's result is the return value.</p>
<p>A big part of the flexibility of functional programming comes from being able to easily tie functions together in interesting ways to build up more complex functionality. Let us apply this idea to the <code>write</code> function. We are going to pass a function into the <code>write</code> function that will do the sanitization, thus allowing the client of the function to decide what "sanitized" means.</p>
<pre><code class="language-fsharp">// ('a -> string) -> string -> 'a -> unit
let write sanitizer path content =
    let sanitized = sanitizer content
    File.WriteAllText(path, sanitized)
// use the write function
write trim "/path/to/file.txt" "Some text to write to file"
write (fun (s:string) -> s.Substring(0, 140)) "/path/to/file.txt" "Some text to write to file"
</code></pre>
<p>See how we just passed the <code>trim</code> function in as an argument? This could of course be any function, as we see in the second usage.
Ok, but this signature <code>('a -> string) -> string -> 'a -> unit</code> is getting a bit hairier, so let's break it down. <code>('a -> string)</code> is the signature of the <code>sanitizer</code> function we are now passing into the <code>write</code> function. The F# compiler has inferred that the function doesn't need to be of type <code>string -> string</code> for our <code>write</code> function to work. As long as the <code>sanitizer</code> function returns <code>string</code>, the input can be of any type. This makes it a generic parameter, and in F# a generic parameter is indicated with a leading <code>'</code>. So <code>('a -> string)</code> indicates a function that takes any type and returns a <code>string</code>. The rest of the signature, <code>string -> 'a -> unit</code>, represents the <em>path</em>, the <em>content</em> (now generic to line up with the sanitizer's input), and the return value type.</p>
<h3>Currying</h3>
<p>Now is the time to introduce <em>currying</em>. This has nothing to do with food but instead is a technique named after <a href="https://en.wikipedia.org/wiki/Haskell_Curry">Haskell Curry</a>. <em>Currying</em> is the technique of taking a function that takes multiple arguments and evaluating it as a sequence of single argument functions. If that doesn't make sense, don't worry, it is easier to understand from examples.</p>
<p>We made our <code>write</code> function more flexible by allowing a <code>sanitizer</code> function to be passed in, but now every time we want to use it we need to supply that sanitizer function. What if in an area of my code <em>sanitizing</em> always means <em>trim</em> the string? What if it was expected that we always do this before saving a <code>string</code> to disk? Well then, because <code>write</code> is curried, we can define a new function by <em>partially applying</em> <code>write</code> to an argument.</p>
<pre><code class="language-fsharp">// string -> string -> unit
let sanitizedWrite = write trim
</code></pre>
<p>Now we have a new function <code>sanitizedWrite</code> with the <code>trim</code> function baked in.<br />
Note how we are back to our previous signature of <code>string -> string -> unit</code>, just like before we introduced the <code>sanitizer</code> argument. We are able to optimize for our needs and still leave options open for when <code>write</code> is needed without the <code>trim</code>. Let us look at that case next.</p>
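<p>The same trick works for any curried function. A minimal sketch with a hypothetical <code>add</code> function:</p>
<pre><code class="language-fsharp">// int -> int -> int
let add a b = a + b
// int -> int, with the first argument baked in
let add10 = add 10
printfn "%i" (add10 5) // 15
</code></pre>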
<h3>Identity</h3>
<p>This seems like a good time to introduce a concept whose value may not be immediately obvious: the idea of <em>identity</em>. I will not go into any theory on monoids, monads, or category theory; there is an <a href="http://blog.ploeh.dk/2017/10/04/from-design-patterns-to-category-theory/">awesome series from Mark Seemann that covers this</a>. Suffice it to say, <em>identity</em> is a function that does nothing.</p>
<p>The easiest way to explain <em>identity</em> is with examples:</p>
<ol>
<li>The <em>identity</em> for addition is 0 : 5 + 0 = 5</li>
<li>The <em>identity</em> for multiplication is 1 : 2 * 1 = 2</li>
<li>The <em>identity</em> for <code>string</code> concatenation is "" : "hello" + "" = "hello"</li>
</ol>
<p>In F# <strong>identity</strong> is defined by the function <code>id</code>, which has the signature <code>'a -> 'a</code>. "So what?" you may ask. How could something that does nothing ever be useful? Well, thankfully we have a useful example at hand already (it is almost like I planned it).</p>
<p>Imagine we have another section of our code that needs to write content to a file but has no rules about sanitization. It just needs to write the content as is.</p>
<pre><code class="language-fsharp">// string -> string -> unit
let justWrite = write id
</code></pre>
<p>Of course we could have just passed in our own function <code>fun x -> x</code>, but this is actually quite a common situation when you are passing functions around to extend functionality, so a functional language like F# provides an easy way to do it.</p>
<h3>Piping</h3>
<p>Hopefully now you are starting to feel a bit more comfortable with F# functions. One thing you will start noticing about functional code is the way it tends to flow. When everything has an input and an output, you tend to organize your code into workflows that chain functions together. This can lead to some really readable code once you wrap your head around the idea. It is made possible by the <em>forward pipe operator</em> <code>|></code>, which passes the value on the left as the last argument of the function on the right.</p>
<p>Again, let us look at some examples to try to clarify. I will give multiple examples, first without <code>|></code>, then with it.</p>
<pre><code class="language-fsharp">// trim a string
let trimmed1 = trim " some text "
let trimmed2 = " some text " |> trim
// get first value of a tuple
let name1 = fst ("Devon",37)
let name2 = ("Devon",37) |> fst
</code></pre>
<p>So what does this have to do with pipelines? Let us try to use this to chain a workflow together.</p>
<pre><code class="language-fsharp">Console.ReadLine() // read a line in from the console
|> toUpper // convert the string to uppercase
|> trim // trim the string
|> justWrite "/to/some/file.txt" // write it without trim since we already trimmed
</code></pre>
<p>Above you see a workflow where the input from the previous step is used as the argument to the following. We read in some <code>string</code>, uppercase it, trim it, and then write it to file. I think that is some pretty descriptive code, don't you?</p>
<p>Note that partial application comes in quite useful when using <code>|></code>, since the piped value must line up with the last remaining argument of the function to the right of the <code>|></code>.</p>
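<p>A small sketch of what "lining up" means here: the piped value fills the final argument, so a function with its earlier arguments already supplied slots straight into a pipeline.</p>
<pre><code class="language-fsharp">let divideBy d n = n / d
// 10 fills the final argument n; d is already fixed to 2
let result = 10 |> divideBy 2
printfn "%i" result // 5
</code></pre>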
<h3>Composition</h3>
<p>Another concept that will seem very similar is composing functions together with the <em>forward composition operator</em> <code>>></code>. This operator allows you to take a function whose output matches the input of another function and compose the two together to form a new function.</p>
<pre><code class="language-fsharp">// int -> int
let inc x = x + 1
// int -> string
let intToString (x:int) = x |> string
// int -> string
let incrementedString = inc >> intToString
1 |> incrementedString // val it : string = "2"
</code></pre>
<p>So if we applied this to our previous workflow we could summarize the middle step:</p>
<pre><code class="language-fsharp">let prepareString = toUpper >> trim
Console.ReadLine()
|> prepareString
|> justWrite "/to/some/file.txt"
</code></pre>
<h3>Interop with .NET OO style</h3>
<p>You may have noticed a few function signatures that looked a little different. When using the .NET library, code can look a little different from functional-first code. This is because the .NET BCL is an object-oriented (OO) code base. F# can talk to it fine, but it is a different paradigm. For example, calling <code>File.WriteAllText(path, content)</code> looks a lot like it would in C#. Another thing you may have noticed is that when defining functions that work with <code>string</code>s I am usually explicit about the type in the signature, e.g. <code>let trim (s:string) = s.Trim()</code>. This is because F# can need help inferring the type when dealing with objects whose types come from the OO side of .NET; <code>string</code> seems to be the most common offender here. It is something to keep in mind. When dealing with <code>string</code> or other types from the .NET BCL, it is often worth writing little functional wrappers around them like you see with <code>trim</code>.</p>
<h2>Conclusion</h2>
<p>To close off this post I wanted to mention something important to consider when writing your own functions, and that is the idea of <em>purity</em>. A <strong>pure</strong> function is one whose output depends only on its input and that has no side effects. As an example, our <code>trim</code> function <code>let trim (s:string) = s.Trim()</code> will give the same output for the same input every single time. Compare this to <code>File.ReadAllText("/path/to/file.txt")</code>. With <code>ReadAllText</code> the output could change at any time if the underlying file contents change, even though the same path was used as input. This is NOT a pure function.</p>
<p>Pure functions are easier to reason about and easier to test and so should be favoured. In the example above we pushed our impure functions to the beginning and end of the workflow and had our pure functions in the middle. This is generally a good pattern to follow.</p>
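<p>That "impure edges, pure middle" pattern can be sketched like this (hypothetical functions, assuming the console for I/O):</p>
<pre><code class="language-fsharp">// impure: output depends on the outside world
let readInput () = System.Console.ReadLine()
// pure: same input always gives the same output
let shout (s: string) = s.Trim().ToUpper() + "!"
// impure: writes to the outside world
let display s = printfn "%s" s

// the workflow: impure at the edges, pure in the middle
let run () = readInput () |> shout |> display
</code></pre>
<p>Only <code>shout</code> needs unit testing here; the impure edges stay thin.</p>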
<p>So we covered quite a lot in this post and there is plenty more that could be said about functions but I think you have enough now to start working with them yourself. Didn't I tell you it would be <code>fun</code>? As always I appreciate any suggestions or questions, and please share this series with anyone you think might get value from it.</p>
<p>In the next article we look at <a href="/how-to-fsharp-pt-4">Control Flow</a>.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/symbol-and-operator-reference/">Symbol reference</a></li>
</ol>
<h2>Credits</h2>
<ol>
<li>Social image by <a href="https://unsplash.com/@markusspiske">Markus Spiske</a></li>
</ol>https://devonburriss.me/how-to-fsharp-pt-2/How to F# - Part 22018-10-23T00:00:00+00:00Devon Burrisshttps://devonburriss.me/how-to-fsharp-pt-2/<p>In the <a href="/how-to-fsharp-pt-1">previous post</a> we looked at assigning values and the different types that those values could be. In this second installment we will be looking at functional programming's namesake: <em>functions</em>.</p>
<h2>Introduction</h2>
<p>Functional programming as a paradigm is quite a hard thing to pin down, just like other paradigms. In object-oriented programming, though, the one thing that really isn't up for debate is the general idea that we have objects (whatever that may mean to you) and we represent data and behavior in these objects. In functional programming, then, it will come as no surprise that <em>functions</em> are first-class citizens and that we accomplish our goals by transforming data using these functions.</p>
<p>What is a function though?</p>
<h2>A brief reminder about mathematics</h2>
<p>Do not worry, I will not be going into deep mathematical theory here. First, I want to remind you of some mathematics you probably touched on in school, just to show you that this isn't necessarily something completely new to you. Second, it will show that functional programming has roots that go far deeper than computer programming. Do not worry if you didn't like this stuff at school; I promise this is way more <code>fun</code>.</p>
<p>In mathematical terms a function is a process that associates each element <em>x</em> of a set <strong>X</strong> to another value <em>y</em> which is of set <strong>Y</strong>. Let us call this process <em>f</em>. Then we have an expression:</p>
<pre><code class="language-fsharp">y = f(x)
</code></pre>
<p>This is usually read as "let <em>y</em> equal <em>f</em> of <em>x</em>".</p>
<p>The set <strong>X</strong> of possible values of <em>x</em> is known as the <strong>domain</strong>. The set <strong>Y</strong> of possible outputs <em>y</em> is known as the <strong>codomain</strong>. To label the parts of the expression: <em>x</em> is the <strong>argument</strong> and <em>y</em>, the value of the function, is the output.</p>
<p>So how would we accurately define a specific function?</p>
<blockquote>
<p>let <em>f</em>: R → R be the function defined by the equation <em>f(x) = x<sup>2</sup></em>, valid for all real values of x</p>
</blockquote>
<p>Notice how we have the <code>Domain -> Codomain</code> defined using <code>-></code>. We will come back to this a little later.</p>
<p>One last thing. Remember common functions like <code>sin</code> and <code>cos</code>? It is common to write them as <code>cos x</code>, without the brackets, as long as this does not lead to any ambiguity. So now that we have had a little mathematics refresher, let us see if it brings any insight into F# functions.</p>
<p>Mathematics done!</p>
<h2>Defining functions in F#</h2>
<p>In the <a href="/how-to-fsharp-pt-1">previous post we looked at assigning values</a>. F# being a functional-first language means that we can treat functions like any other value.</p>
<pre><code class="language-fsharp">// int -> int
let f x = x*x
let y = f 3 //val y : int = 9
</code></pre>
<p>So above we define function <em>f</em> that takes argument <em>x</em>. Then we pass <code>3</code> as the argument and assign that to value <code>y</code>.</p>
<p>Another way to define functions in F# is to use the <code>fun</code> keyword. Let us define the same function again, this time as function <em>g</em>:</p>
<pre><code class="language-fsharp">// int -> int
let g = fun x -> x*x
let z = g 3 //val z : int = 9
</code></pre>
<p>This makes it more explicit that <code>g</code> is simply another value being assigned. Note that defining functions with <code>fun</code> is common when using a function once off, inline, say for filtering a collection. We will see this in more detail in a later post when we dive deeper into collections.</p>
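<p>As a quick sketch of that inline usage (collections come later in the series):</p>
<pre><code class="language-fsharp">// an anonymous function passed straight to List.filter
let evens = List.filter (fun x -> x % 2 = 0) [1..10]
printfn "%A" evens // [2; 4; 6; 8; 10]
</code></pre>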
<h2>Understand functions</h2>
<h3>Signatures</h3>
<p>In the above code I put the signature of the function in a comment above it. The signature describes what types a function takes in and what it returns. So for our function above we have <code>int -> int</code>. This means our function takes a single <code>int</code> as an argument and then returns an <code>int</code>.</p>
<p>A function always has an input and an output. In F# (and all programming languages I know of) a function can have multiple arguments. Say for example we had a <code>writeToFile</code> function that took a <code>bool</code> for whether to overwrite the file if it exists, and a <code>string</code> with the content of the file. The signature for <code>writeToFile</code> would then be <code>bool -> string -> unit</code>. Now what is this <code>unit</code> type? It was mentioned in the previous post as the type that represents nothing. As already mentioned, functions must always have an input and an output, so if a function has no meaningful value to return, we return <code>unit</code>.</p>
<p>Do you see a similarity here? Types define the values that are possible. So for a signature <code>int -> int</code>, our <strong>domain</strong> is all values allowed by the type <code>int</code>, and our <strong>codomain</strong> is also all possible <code>int</code> values. Pretty cool right?</p>
<h3>Inference</h3>
<p>In the previous function definitions you may have noticed we defined no types but the F# compiler inferred that the type of <em>x</em> was <code>int</code>. This is because we used the multiplication <code>(*)</code> operator on it. Most of the time the compiler does a pretty good job of working out the type. This keeps our code clean from boilerplate cruft. To be sure though, if you are used to a language like Java or C#, this will take a bit of getting used to. My tip is to pay close attention to the signatures. Any IDE will display this all the time or at least on mouse over.</p>
<p>If you prefer to be explicit or in those cases where the compiler needs some help to determine the type, you can easily define the types explicitly.</p>
<pre><code class="language-fsharp">let f (x:int) = x*x // define argument type
let f x : int = x*x // define only return type
let f (x:int) : int = x*x // define argument type and return type
</code></pre>
<p>The argument type is specified with <code>(x:int)</code>. The parenthesis are needed to disambiguate the argument type from the return type.</p>
<p>Just a quick style note. Mostly in F# code the types are left off unless needed.</p>
<p>I wanted to highlight another way of defining functions, and that is by defining a type signature.</p>
<pre><code class="language-fsharp">type Unary = int -> int
let increment : Unary = fun x -> x + 1
let decrement : Unary = fun x -> x - 1
</code></pre>
<blockquote>
<p>A unary function is one that takes only one argument</p>
</blockquote>
<p>So we define a <strong>Unary</strong> function as one that takes in a single number and returns a number, and then we have multiple implementations of that type.</p>
<h2>Conclusion</h2>
<p>In this post we introduced some of the core ideas behind functions. We learned how to define them and how to read a functions signature. We also touched on what the compiler can do for you by inferring the types, and how you can be explicit about the types.</p>
<p>In the next post we will dive deeper into <a href="/how-to-fsharp-pt-3">Working with Functions and getting them to work for you</a>.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://en.wikipedia.org/wiki/Function_(mathematics)">Mathematics functions</a></li>
<li><a href="https://en.wikipedia.org/wiki/Programming_paradigm">Programming paradigms</a></li>
<li><a href="https://fsharpforfunandprofit.com/posts/function-signatures/">Function signature</a></li>
</ol>https://devonburriss.me/how-to-fsharp-pt-1/How to F# - Part 12018-10-19T00:00:00+00:00Devon Burrisshttps://devonburriss.me/how-to-fsharp-pt-1/<p>Over the last few weeks I have been showing various people with different levels of programming experience how to use F#. This post is the first in a series on the basics of programming with F#. In this one we cover assigning values and the different types those values can take on.</p>
<h2>Introduction</h2>
<p>F# is a functional first language that allows for interoperation with the rest of the .NET ecosystem. This means you can use it mixed in a solution with other .NET languages like C# and VB, you can use all available Nuget packages, and you can reuse your knowledge of the existing base class library (BCL) if you are already a .NET developer. To achieve this interoperation F# allows you to program in an object oriented paradigm if you would like but it will often feel a bit clunky compared to the functional paradigm. Hence the "functional first".</p>
<h2>Warning</h2>
<p>I will be covering a lot of ground in as concise a way as possible. In a lot of ways this is "just enough to be dangerous". That being said, learning a programming language is much like learning a spoken language. The best thing you can do is use it, even if you feel stupid doing so.</p>
<h2>Syntax</h2>
<p>Depending on your background, one of the first things that stands out with F# is its lack of curly braces. F# uses whitespace indentation to determine the scope of something. We will see this in a future article when we deal with functions.</p>
<h3>Assigning value</h3>
<p>We can assign values in F# using the <code>let</code> keyword. Since F# is a functional first language even functions can be assigned with the <code>let</code> keyword. Do not worry too much about what the examples below mean, some of it will be covered later.</p>
<pre><code class="language-fsharp">// the number 1 assigned to i
let i = 1
// a string assigned to `lowerTxt`
let lowerTxt = "i like to shout"
// assign a function that takes a string and changes it to uppercase
let toUpper (s:string) = s.ToUpper()
// assign value from the result of a function
let upperTxt = toUpper lowerTxt
</code></pre>
<p>In other languages these values would often be called <em>variables</em>. In F# values are immutable (they cannot be changed), so they are not variable. You have to mark values that you want to be mutable with the <code>mutable</code> keyword, but I recommend not doing this unless you are performance tuning. Usually it is a sign you are not doing things functionally.</p>
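<p>Should you ever need it, mutation looks like this (note the distinct <code><-</code> assignment operator):</p>
<pre><code class="language-fsharp">let mutable counter = 0
counter <- counter + 1 // (<-) is mutation; (=) is comparison in F#
printfn "%i" counter // 1
</code></pre>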
<h3>Commonly used types</h3>
<p>First, let's touch on some common data types. Types define what kind of data we can store in a value.</p>
<h4>Simple</h4>
<p><code>string</code> represents text in a program. In a program a <code>string</code> is defined with quotation marks: <code>"Some text in my program"</code>. We saw usage of it above. It is often prudent to wrap OO .NET string methods in your own functions, since the F# compiler struggles to infer the types involved in OO-style method calls, whether from the .NET library or elsewhere.</p>
<p><code>int</code> defines whole numbers.</p>
<p><code>decimal</code> is a good choice for money when you need cents.</p>
<p><code>DateTime</code> and <code>DateTimeOffset</code> represent both a date and time component. The latter incorporates a timezone offset.</p>
<p><code>unit</code> is a special type that represents nothing. Later we will see that a function that performs an action, like printing some values, may not need to return anything meaningful. In that case a value of type <code>unit</code> is returned.</p>
<h4>Complex</h4>
<p>Sometimes you want to capture data together in a way that pulls together simple types to represent a single coherent idea. In F# we can use classes as in C# but sticking with simplicity and immutability, a better option is F#'s record type.</p>
<pre><code class="language-fsharp">// define a type
type Person = {
    Name:string
    Birth:DateTime
}
// create an instance of that type
let devon = {
    Name = "Devon"
    Birth = DateTime.Parse("2121/01/01")
}
</code></pre>
<p>Above we defined a new type called <code>Person</code> and then created an instance of that type assigned to a value called <code>devon</code>.
Even though a value is immutable, F# does provide an easy way to create a new value from an old value while updating the fields.</p>
<pre><code class="language-fsharp">let devonBurriss = { devon with Name = "Devon Burriss" }
</code></pre>
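<p>Importantly, the original value is untouched by this copy-and-update. A standalone sketch with a hypothetical <code>Point</code> record:</p>
<pre><code class="language-fsharp">type Point = { X: int; Y: int }
let p1 = { X = 1; Y = 2 }
let p2 = { p1 with Y = 5 } // p2 is a brand new value
printfn "%i %i" p1.Y p2.Y  // 2 5 -- p1 is unchanged
</code></pre>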
<h4>Tuple</h4>
<p>Another common type in functional programming is the <strong>Tuple</strong>. A simple tuple is represented as <code>Tuple<T1,T2></code>, meaning it holds 2 values of types <code>T1</code> and <code>T2</code>. So tuples are kind of like record types without the named fields. In F# we write this tuple type as <code>bool * int</code>, and we would create an instance of that type like so: <code>let myTuple = (true,99)</code>. Tuples are often useful as intermediary values between functions.</p>
<pre><code class="language-fsharp">//create a tuple of type bool * int
let myTuple = (true,99)
// use the fst function to get the first value in the tuple
let b1 = fst myTuple
// use the snd function to get the second value in the tuple, with pipe forward operator
let n1 = myTuple |> snd
// use pattern matching to get the values
let (b,n) = myTuple
//val b : bool = true
//val n : int = 99
</code></pre>
<h4>Collections</h4>
<p>Dealing with a collection of elements of the same type is a common occurrence in programming. Whether a sequence of numbers or a list of people, you need a way to work with them. Although you can of course use the .NET collection types in F#, F# has some built-in types that make it easier to interact with collections in a more functional way. These types are <code>List</code>, <code>Array</code>, and <code>Seq</code>. Most of the functions for dealing with these types are shared across all of them.</p>
<p>Examples:</p>
<pre><code class="language-fsharp">let lstFst = List.head [1;2;3] // 1
let arrFst = Array.head [|1;2;3|] // 1
let seqFst = Seq.head (seq { yield 1; yield 2; yield 3 }) // 1
</code></pre>
<p>As you can see, the same function is available for getting the first element of the collection on each of the relevant modules.</p>
<p>So why are there 3 different collection types that seem so similar?</p>
<p><code>list</code> is the go-to collection for me when working with in-memory data. It is an immutable collection, so it encourages functional best practices. This data structure is optimized for iterating through it and accessing the first element (under the hood it is a linked list). Being a native F# data structure, it allows superior pattern matching compared to the other collection types. It is quite common in functional programming to interact with a list as <code>head::tail</code> (pattern matching), where head is the first element of the list and tail is the rest of the list.</p>
<p><code>array</code> is a good choice if you need random access to elements in the collection. It is an alias for the <em>BCL</em> <code>Array</code>.</p>
<pre><code class="language-fsharp">let j = Array.get [|1;2;3|] 1 // val j : int = 2
</code></pre>
<p><code>seq</code> is a lazily evaluated collection and so can represent an infinite list. This can be memory-saving as each element is evaluated only as needed. It is an alias for the <em>BCL</em> <code>IEnumerable&lt;'T&gt;</code>.</p>
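<p>As a quick illustration of that laziness (a hypothetical example), <code>Seq.initInfinite</code> builds a conceptually infinite sequence, and only the elements you actually consume are ever evaluated:</p>
<pre><code class="language-fsharp">// an infinite sequence of even numbers
let evens = Seq.initInfinite (fun i -> i * 2)
// taking 5 elements forces evaluation of only those 5
let firstFive = evens |> Seq.take 5 |> Seq.toList // [0; 2; 4; 6; 8]
</code></pre>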
<p>Two other F# data structures worth mentioning now are <code>map</code> and <code>ResizeArray</code>. <code>map</code> gives us an immutable key-value dictionary that is often quite useful as a lookup:</p>
<pre><code class="language-fsharp">let funcFirstLangs = Map.ofList [("csharp",false);("fsharp",true)]
let isFuncFirst = Map.find "fsharp" funcFirstLangs // val: isFuncFirst = true
</code></pre>
<p><code>ResizeArray</code> is usually of interest when working with C#, as it is an alias for the mutable <em>BCL</em> <code>System.Collections.Generic.List&lt;'T&gt;</code>.</p>
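<p>A small sketch to show the difference (my own example): unlike the immutable F# <code>list</code>, a <code>ResizeArray</code> is mutated in place, which is exactly why it maps onto the collection C# code expects:</p>
<pre><code class="language-fsharp">// ResizeArray is mutable, unlike the immutable F# list
let names = ResizeArray()
names.Add "Devon"
names.Add "Burriss"
let count = names.Count // val count : int = 2
</code></pre>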
<h4>Discriminated Unions</h4>
<p>The last type I want to touch on is the Discriminated Union (DU, also known as a sum type). DUs allow you to define a type whose value is exactly one of a fixed set of cases. Let me try to explain by example.</p>
<pre><code class="language-fsharp">type Rating =
| Skipped
| RemindLater of DateTime
| JustVote of int
| VoteWithComment of int*string
let vote = VoteWithComment (5,"This is the best application ever!!!! Worth every cent!")
</code></pre>
<p>Here we are defining a <strong>DU</strong> type <code>Rating</code> that represents a rating of a mobile application. Although each of the 4 cases contains different information, any case will be of type <code>Rating</code>. We will explore this more in a later post when we tackle pattern matching.</p>
<h2>Conclusion</h2>
<p>So that is the end of the first entry in how to use F#. We covered how to assign values and took a whirlwind tour of some of the different types that those values could be. In future installments we will dive into some more advanced topics of working with these values, as well as explore the idea of functional programming. I hope you found this interesting and are excited for the next installment. If anything was unclear, I would really appreciate your feedback so I can improve this for the next reader who comes along.</p>
<p><a href="/how-to-fsharp-pt-2">Next How to F# - Part 2 - Understanding and working with Functions</a></p>
<h2>Resources</h2>
<ol>
<li><a href="https://fsharp.org/learn.html">Learn F# resources</a></li>
<li><a href="http://dungpa.github.io/fsharp-cheatsheet/">Cheatsheet</a></li>
<li><a href="https://fsharpforfunandprofit.com/posts/list-module-functions/">fsharp for fun and profit</a></li>
<li><a href="https://en.wikipedia.org/wiki/Algebraic_data_type">Algebraic data types</a></li>
</ol>https://devonburriss.me/fsharp-scripting/F# Scripts2018-10-17T00:00:00+00:00Devon Burrisshttps://devonburriss.me/fsharp-scripting/<p>Using F# scripts is something I only started doing after dabbling in F# for quite a while. This is unfortunate because they are a really fast and easy way of throwing some code together and thus a really good way to learn F#. This post is for anyone getting started with F# scripting.</p>
<!--more-->
<h2>Installation</h2>
<p>Check out the <a href="https://docs.microsoft.com/en-us/dotnet/fsharp/get-started/install-fsharp?tabs=windows">documentation for installing F#</a>. The easiest way is to install F# as part of Visual Studio. You may still need to add FSI.exe to your <strong>PATH</strong>.</p>
<ol>
<li><a href="https://fsharp.org/use/windows/">Instructions for Windows</a></li>
<li><a href="https://fsharp.org/use/mac/">Instructions for MacOS</a></li>
<li><a href="https://fsharp.org/use/linux/">Instructions for Linux</a></li>
</ol>
<h2>F# Interactive</h2>
<p>FSI allows you to execute F# code in an interactive console. Just type <code>fsi.exe</code> on Windows or <code>fsharpi</code> on Linux/Mac.</p>
<pre><code class="language-fsharp">> let x = 1;;
val x : int = 1
</code></pre>
<blockquote>
<p>Note: Each expression needs to end with <code>;;</code> in the interactive window.</p>
</blockquote>
<p>To get help: <code>#help;;</code><br />
To quit: <code>#quit;;</code></p>
<h2>Scripting</h2>
<p>Entering code directly into FSI is fine for trying simple things out, but what about more complex code? That is where <code>.fsx</code> files come in.</p>
<p>Imagine we have a file called <em>print-name.fsx</em> with the following content:</p>
<pre><code class="language-fsharp">let name = "Devon"
printfn "Name: %s" name
</code></pre>
<p>Executing it we would see the following:</p>
<blockquote>
<p>> fsi .\samples\print-name.fsx<br />
> Name: Devon</p>
</blockquote>
<h3>Including other fsx files</h3>
<p>You can load other fsx files into a script. If we have <code>Strings.fsx</code> containing the following code:</p>
<pre><code class="language-fsharp">let toUpper (s:string) = s.ToUpper()
let toLower (s:string) = s.ToLower()
let replace (oldValue:string) (newValue:string) (s:string) = s.Replace(oldValue,newValue)
module StringBuilder =
    open System.Text
    let init() = new StringBuilder()
    let initWith (s:string) = new StringBuilder(s)
    let append (s:string) (sb:StringBuilder) = sb.Append(s)
</code></pre>
<p>We can now use it in our script file like so:</p>
<pre><code class="language-fsharp">#load "Strings.fsx"
open Strings
let name =
    "Devon"
    |> StringBuilder.initWith
    |> StringBuilder.append " Burriss"
    |> string
    |> toUpper
printfn "Name: %s" name
</code></pre>
<p>We use <code>#load "path/to/script.fsx"</code> to make it available and then <code>open NameOfFileWithoutExtension</code> to import it. So each script file is then treated as a <code>module</code>.</p>
<p>Executing it we would see the following:</p>
<blockquote>
<p>> fsi .\samples\print-name.fsx<br />
> Name: DEVON BURRISS</p>
</blockquote>
<h3>Taking Arguments</h3>
<p>It is possible to pass arguments into a script file. They are available in a field <code>fsi.CommandLineArgs</code>. Let's change our script one more time to demonstrate the usage. The arguments come through as an array so we pattern match on the number of elements to decide what to print.</p>
<blockquote>
<p>Note: The first element of the array is always the name of the script the arguments are passed into.</p>
</blockquote>
<pre><code class="language-fsharp">#load "Strings.fsx"
open Strings
let stringWithSpace x = x |> string |> sprintf " %s"
let name first = first |> toUpper
let nameAndLastName first last =
    first |> StringBuilder.initWith |> StringBuilder.append last |> stringWithSpace |> toUpper
let nameAndLastNameWithOccupation first last occ =
    first
    |> StringBuilder.initWith
    |> StringBuilder.append " "
    |> StringBuilder.append last
    |> StringBuilder.append (sprintf " (%s)" occ)
    |> string
    |> toUpper
match fsi.CommandLineArgs with
| [|scriptName|] -> failwith (sprintf "At least a name required for %s" scriptName)
| [|_; firstName|] -> name firstName |> printfn "Name: %s"
| [|_; firstName; lastName|] -> nameAndLastName firstName lastName |> printfn "Name: %s"
| [|_; firstName; lastName; occ|] -> nameAndLastNameWithOccupation firstName lastName occ |> printfn "Name: %s"
| _ -> failwith (sprintf "Too many arguments %A" (fsi.CommandLineArgs |> Array.tail))
</code></pre>
<p>Executing it we would see the following:</p>
<blockquote>
<p>> fsi .\samples\print-name.fsx devon burriss developer<br />
> Name: DEVON BURRISS (DEVELOPER)</p>
</blockquote>
<p>You could of course also access the arguments as a zero based array <code>fsi.CommandLineArgs.[0]</code> or loop through them.</p>
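<p>For example (a hypothetical snippet; like the script above it relies on the <code>fsi</code> object, so it only runs inside an <code>.fsx</code> file under FSI):</p>
<pre><code class="language-fsharp">// index into the arguments directly...
let scriptName = fsi.CommandLineArgs.[0]
// ...or loop over everything after the script name
for arg in fsi.CommandLineArgs |> Array.tail do
    printfn "Arg: %s" arg
</code></pre>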
<h3>Nuget packages</h3>
<blockquote>
<p>Edit: Since .NET 5 it is possible to reference nuget packages directly in your fsx script.</p>
</blockquote>
<p>You can reference nuget packages in your script files by using the <code>#r "nuget: Package.Name"</code> syntax.</p>
<p>As an example:</p>
<pre><code class="language-fsharp">#r "nuget: FSharp.Data"
open FSharp.Data
let jVal = JsonValue.Parse """{ "lang": "fsharp" }"""
</code></pre>
<h3>Referencing DLLs</h3>
<p>You can reference DLLs using <code>#r "path/to/file.dll"</code>. If you want to pull down DLLs from Nuget, check out <a href="/up-and-running-with-paket">my article on using Paket dependency manager</a>.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/tutorials/fsharp-interactive/">FSI Documentation</a></li>
<li><a href="http://brandewinder.com/2016/02/06/10-fsharp-scripting-tips/">10 Tips for Productive F# Scripting</a></li>
</ol>
<h2>Credits</h2>
<ol>
<li>Social Image by <a href="https://unsplash.com/@0628fromchina">yifei chen</a></li>
</ol>https://devonburriss.me/up-and-running-with-paket/Up and running with PAKET2018-10-16T00:00:00+00:00Devon Burrisshttps://devonburriss.me/up-and-running-with-paket/<p><a href="https://fsprojects.github.io/Paket/">Paket</a> is an awesome dependency manager for .NET. Comparing it to Nuget is both the easiest way to explain the basics of it and also a massive disservice to Paket. In this post I want to share some tips to make working with Paket even more awesome.</p>
<!--more-->
<p>The biggest problem with working with Paket has nothing to do with Paket itself, or even its differences from Nuget. The biggest issue is that the ecosystem is geared to make using Nuget really easy: the tooling embeds Nuget in your project whether you like it or not. This ease isn't a bad thing; it just means the barrier to entry for something better seems high. So let's see how we can make working with Paket as smooth as possible.</p>
<h2>TL;DR</h2>
<p>If you just want the commands to have Paket up and running in a folder fast:</p>
<h3>.NET Core 2.1 SDK and later versions</h3>
<p>You can install it in a specific directory.</p>
<p><code>dotnet tool install --tool-path ".paket" Paket --add-source https://api.nuget.org/v3/index.json</code></p>
<h3>Without .NET Core SDK</h3>
<pre><code class="language-powershell"># Download Paket exe into .paket folder
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "iex (Invoke-WebRequest 'https://gist.githubusercontent.com/dburriss/b4075863873b5871d34e32ab1ae42baa/raw/b09c0b3735ef2392dcb3b1be5df0ca109b70d24e/Install-Paket.ps1')"
# Most NB this creates 'paket.dependencies' file
.\.paket\paket.exe init
# At this point add some lines to 'paket.dependencies'. Downloads dependencies.
.\.paket\paket.exe install
</code></pre>
<h2>Install fast</h2>
<blockquote>
<p>This section goes over the details of the Powershell script. If you are using .NET Core SDK, feel free to skip this section.</p>
</blockquote>
<p>So as I mentioned, Nuget is there by default. Paket is not. You can <a href="https://fsprojects.github.io/Paket/getting-started.html#Downloading-Paket-s-Bootstrapper">install Paket manually</a> but I wanted to provide another option. Let's create a Powershell script to install Paket with a single line.</p>
<h3>One liner</h3>
<p>To work with Paket you need the binary available. Usually this is in a folder named <em>.paket</em> in the root of your solution. I have created <a href="https://gist.github.com/dburriss/b4075863873b5871d34e32ab1ae42baa">a Gist file</a> that you can download and execute with a single line that will do just that.</p>
<pre><code class="language-powershell">PowerShell -NoProfile -ExecutionPolicy Bypass -Command "iex (Invoke-WebRequest 'https://gist.githubusercontent.com/dburriss/b4075863873b5871d34e32ab1ae42baa/raw/b09c0b3735ef2392dcb3b1be5df0ca109b70d24e/Install-Paket.ps1')"
</code></pre>
<h3>Part of the family</h3>
<p>If you find yourself needing to setup Paket often as can happen if you are using F# fsx scripting files often, you may want to create an easier to remember command. The easiest way to do this is to add a function call to your Powershell profile.</p>
<p>Edit <em>C:\Users\&lt;your username&gt;\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1</em> on Windows or <em>~/.config/powershell/profile.ps1</em> on Mac and add the following function:</p>
<pre><code class="language-powershell">function New-Paket {
New-Item -ItemType directory -Path ".paket"
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$tag = (Invoke-WebRequest -Uri https://api.github.com/repos/fsprojects/Paket/releases | ConvertFrom-Json)[0].tag_name
$uri = "https://github.com/fsprojects/Paket/releases/download/" + $tag + "/paket.bootstrapper.exe"
Invoke-WebRequest $uri -OutFile .paket/paket.exe
}
</code></pre>
<blockquote>
<p>You can reload your profile to make the command available in an already open console with <code>& $profile</code></p>
</blockquote>
<p>This will allow you to install paket with a simple <code>New-Paket</code> Powershell command.</p>
<h2>Adding dependencies</h2>
<p>Once you have Paket binary installed you can initialize by typing <code>.\.paket\paket.exe init</code>.</p>
<p>This creates a <em>paket.dependencies</em> file. This is where you place all the dependencies your solution uses. As an example:</p>
<pre><code class="language-yaml">source https://www.nuget.org/api/v2
nuget NETStandard.Library
nuget canopy
</code></pre>
<p>To download the referenced packages execute <code>.\.paket\paket.exe install</code>.</p>
<blockquote>
<p>Note: a <em>paket.lock</em> file is generated to ensure you get the same version every time. This should be committed to source control.</p>
</blockquote>
<p>At this point you have enough to work with Paket when using it with FSX script files.</p>
<p>You can reference them like so in your fsx files:</p>
<pre><code class="language-fsharp">#r "packages/Selenium.WebDriver/lib/netstandard2.0/WebDriver.dll"
#r "packages/canopy/lib/netstandard2.0/canopy.dll"
</code></pre>
<p>Check out the <a href="https://fsprojects.github.io/Paket/reference-from-repl.html">Paket FSI documentation</a> for an alternative way to get going in a script file.</p>
<h2>Going further</h2>
<p>When using Paket with projects (csproj/fsproj) there are a few more things to know. Most important is that each project folder needs a <em>paket.references</em> file. This describes which dependencies from the <em>paket.dependencies</em> file are used in that particular project.</p>
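<p>A <em>paket.references</em> file is just a plain list of package names, one per line. Hypothetical contents for a project using the dependencies from the earlier example:</p>
<pre><code>NETStandard.Library
canopy
</code></pre>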
<p>Something important to note here is that the <em>csproj/fsproj</em> files need to reference <em>.paket/Paket.Restore.targets</em>. This usually looks something like this:</p>
<pre><code class="language-xml">&lt;Import Project="..\..\.paket\Paket.Restore.targets" /&gt;
</code></pre>
<p>And the project file now no longer needs to reference nuget packages.</p>
<p>If you have an existing project you want to convert from Nuget to Paket there is a handy command for just that <code>.\.paket\paket.exe convert-from-nuget</code>.</p>
<p>If you want more details on how Paket works I recommend <a href="https://cockneycoder.wordpress.com/2017/08/07/getting-started-with-paket-part-1/">Isaac's introduction to Paket</a> and of course the <a href="https://fsprojects.github.io/Paket/">Paket documentation</a>.</p>
<blockquote>
<p>Paket can do more than pull in Nuget packages. It can pull files from disk, git, and entire repositories.</p>
</blockquote>
<h2>Conclusion</h2>
<p>Paket is an awesome replacement for Nuget and in this article we looked at how you can get up and running fast as well as make sure it is as easy as possible to get Paket quickly every time you need it.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://docs.microsoft.com/en-us/powershell/scripting/setup/installing-powershell-core-on-windows?view=powershell-6">Installing Powershell on Windows</a></li>
<li><a href="https://docs.microsoft.com/en-us/powershell/scripting/setup/installing-powershell-core-on-linux?view=powershell-6">Installing Powershell on Linux</a></li>
<li><a href="https://docs.microsoft.com/en-us/powershell/scripting/setup/installing-powershell-core-on-macos?view=powershell-6">Installing Powershell on MacOS</a></li>
</ol>
<h2>Credits</h2>
<ol>
<li>Header image <a href="https://unsplash.com/@vtrsnts">Vitor Santos</a></li>
<li>Social image <a href="https://unsplash.com/@moonshinechild">Kira auf der Heide</a></li>
</ol>https://devonburriss.me/functional-modeling/Functional modeling2018-10-13T00:00:00+00:00Devon Burrisshttps://devonburriss.me/functional-modeling/<p>In my <a href="/functional-structural-impedance-mismatch">previous post</a> I introduced the idea of a structural model in the code that closely matches what a use-case should do functionally. Just as an ubiquitous language helps us tie concepts in our code, so a functional model helps us capture the functioning of a use-case. In this post I will go into this idea in a little more detail, giving some tips on how to get started.</p>
<!--more-->
<blockquote>
<p>I am not explicitly talking about functional programming in this article, although anyone familiar with it will see its influence. Even if you do not embrace FP, the concepts from it that I mention here can be applied to the benefit of your codebase.</p>
</blockquote>
<p>As an example we are looking at a real-life project where we are allocating monetary amounts to sales or purchases based on agreements we have with suppliers.
Let's start with a deeper look at the example that was used in the previous post:</p>
<p><img src="/img/posts/2018/functional-structure.jpg" alt="Allocation functional structure" /></p>
<p>This was a very simplified view of the components involved for calculating the amounts to be allocated to an agreement due to sales or inbound orders. It also still shows the structural components involved. As an exercise I mapped out the calls that are made while completing a use-case. This style is borrowed from Simon Brown's <a href="https://c4model.com">C4 Model</a> but with a focus on function rather than structure.</p>
<p><img src="/img/posts/2018/functional-model.jpg" alt="Allocation functional structure" /></p>
<p>And here is the entry point for this use-case.</p>
<pre><code class="language-csharp">return await TryGetAgreement(agreementId)
.Bind(agreement =>
_agreementSupportedValidator.IsSupported(agreement).ToAsync())
.Bind(supportedAgreement =>
TryCreateAgreementWithHistory(supportedAgreement).ToAsync())
.Bind(agreementWithHistory =>
AllocationPathfinder(agreementWithHistory).ToAsync())
.Map(allocationResult =>
AllocationsFilter.Filter(allocationResult))
.Bind(newAllocationResult =>
TryStoreAllocations(newAllocationResult).ToAsync())
.Try();
</code></pre>
<p>Although I am the first to admit that this style is not too pretty in C#, once you get used to the <a href="https://github.com/louthy/language-ext">Functional Language Extensions</a> like <code>Bind</code>, <code>Map</code>, and <code>Try</code>, it really reads like what it does at this level of abstraction.</p>
<p>So why would we want to write code like this?</p>
<h2>High level description of process</h2>
<p>When exploring a codebase it is always nice to find the entry point to a feature that describes what happens at a single abstraction level. Too often each step is wrapped in some infected factory or manager that conveys very little intent, and these quickly become a class quagmire.</p>
<h2>Maps well to Event Storming</h2>
<p><a href="https://www.eventstorming.com/">Event Storming</a> is becoming increasingly popular as a means of learning a domain. By its very nature Event Storming produces a time-based rather than a state-based model, and it can be quite difficult to map that onto a purely state-based structure.</p>
<p><img src="/img/posts/2018/es-legend.jpg" alt="Event Storming legend" /></p>
<h2>Focus on doing</h2>
<p>Following on from the previous point, but true at every level of the codebase, bringing the functional process forward into plain view is simpler when focusing on what is being done rather than on the doer. We now move from modeling stateful doers to modeling the state between transformations over time. This is far more in line with how the business thinks in terms of getting work done.</p>
<h2>Testability (unit tests)</h2>
<p>If you are able to constrain external IO to the beginning and end of your flows you will have simple input/output functions in between. This sort of code is a lot more testable than those that have many dependencies. You can now concentrate on just testing the output from a certain input without worrying about injecting mocked dependencies.</p>
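<p>To sketch the idea (a made-up example, written in F# for brevity rather than the project's C#): a pure transformation step needs no mocked dependencies at all; a test is simply an assertion on values in and values out.</p>
<pre><code class="language-fsharp">// a pure step: no IO, no injected dependencies
let applyDiscount (rate: decimal) (total: decimal) = total - (total * rate)

// testing is just input -> expected output
let result = applyDiscount 0.1m 100m // result is 90M
</code></pre>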
<h2>Composability</h2>
<p>Often in business we have branching flows. Too often this results in bad abstractions that try to handle every branch, even those not yet added by the business. A far more maintainable way to handle these is to reuse that which is common and compose it with specific implementations when things branch. This usually results in cleaner code that is far more future proof than using inheritance.</p>
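<p>As a rough sketch of composing common steps with branch-specific ones (the names here are invented for illustration):</p>
<pre><code class="language-fsharp">// common step shared by every branch
let validate (order: string) = order.Trim()

// branch-specific steps composed onto the common one
let processRetail = validate >> sprintf "retail:%s"
let processWholesale = validate >> sprintf "wholesale:%s"

let r = processRetail " order-1 " // "retail:order-1"
</code></pre>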
<h2>Conclusion</h2>
<p>In this post we went into a little more detail of what code may look like if we started modeling the flow of events through time even within small the use-cases. We looked briefly at what this could look like and reasons it might be worth trying. The keys to implementing this well is to;</p>
<ol>
<li>Chain steps at a single abstraction level that make sense, allowing developers to dive only to the depth needed to understand what is needed</li>
<li>Instead of trying to come up with an abstraction that captures every state, model the states between transitions</li>
<li>Capture the domain language in both the states and the functions that transition from state to state</li>
<li>Push dependencies to the outside to increase testability and how easy it is to reason about the system</li>
</ol>
<p>If you are interested in really drilling into this and learn functional programming a highly recommend <a href="https://pragprog.com/book/swdddf/domain-modeling-made-functional">Scott Wlaschin's Domain Modeling made Functional</a>.</p>
<p>What do you think? If you have any ideas please leave a comment or reach out on Twitter <a href="https://twitter.com/DevonBurriss">@DevonBurriss</a>. You also may be interested in my <a href="/functional-structural-impedance-mismatch">previous article on the differences between structural and functional modeling</a> and my <a href="/managing-code-complexity">tips for managing code complexity</a>.</p>
<h2>Resources</h2>
<ol>
<li><a href="https://en.wikipedia.org/wiki/Function_model">Function model</a></li>
</ol>https://devonburriss.me/functional-structural-impedance-mismatch/The functional-structural impedance mismatch2018-09-08T00:00:00+00:00Devon Burrisshttps://devonburriss.me/functional-structural-impedance-mismatch/<p>When modeling software we often focus on modeling state. What if instead we modeled functionality through time? This way we can more closely match our structural model to our behavioral model. I believe this increases the ease of maintaining a system, for to change a system you must first understand it.</p>
<!--more-->
<p>In this post I want to explore the way we think about, document, and design systems; taking a brief tour through history on this topic and pose a question about whether we are doing it in a way that makes sense. This question brings up something I have started calling <em>the functional-structural impedance mismatch</em>. I will go through some experiences trying to minimize this mismatch and hopefully convince you to try it yourself.</p>
<h2>A quick look at history</h2>
<p>In the 1950s well defined function model diagrams started being used in <em>systems engineering</em>, evolving from business process diagrams developed and used in the previous century. In the 1960s these were used by NASA to visualize the time sequence of space missions, and from there they developed into various usages in software development. See the resources at the end if you are interested in the details of this progression but I will move forward quickly here. By the 1990s object-oriented programming started gathering more widespread popularity, exploding when Java arrived on the scene. With it grew the popularity of UML and in particular the structural diagrams that describe how we build our OO systems. I remember many weeks drawing both behavioral and structural diagrams for my university projects in the early 90s and since then class diagrams have served as the staple for most diagrams I see for how software is built.</p>
<p>Why is this important? I think it is influential in why almost any diagram on how to build, or how a system is built, is a structural diagram describing state. The type of diagram is not a problem in and of itself. Diagramming the structure of an application is important. I myself am a big fan of the low ceremony, high contextual information of <a href="https://c4model.com/">Simon Brown's C4 model</a>. State with behavior is how OOP developers think about software, and so is how we document.</p>
<h2>Describing behavior</h2>
<p>Regardless of what discipline you come from, most people can gain a fair amount of information from a well drawn flow chart. Flow charts are pretty great behavioral diagrams that tell you how a system accomplishes something, regardless of whether that system is physical or digital. What is great about these kinds of diagrams is that they give you an indication of how a system accomplishes something <em>through time</em>. They are very intuitive for us to understand how a system behaves. And if you think about it, understanding how a system behaves (or should behave) is one of the most important things we as developers need to know to maintain and enhance a software system.</p>
<p><img src="/img/posts/2018/functional-process.jpg" alt="Allocation flow chart" /></p>
<p>Above is an excerpt from a simple flow chart describing the process of calculating the value of an agreement based on sales depending on the type of the agreement.</p>
<h2>A state based structure</h2>
<p>So let us take a look at what this flow diagram typically translates to when built by object-oriented programmers that are accustomed to modeling state.</p>
<blockquote>
<p>This is a simplified diagram of a real application I worked on. This is by no means me shitting on how something was built. There are always opportunities to learn how to improve things. This one was unique as we took the time to refactor it (as we will see in a bit).</p>
</blockquote>
<!-- ![Allocation flow chart](/img/posts/2018/functional-process.jpg)
![Allocation object structure](/img/posts/2018/object-structure.jpg) -->
<img src="../img/posts/2018/functional-process.jpg" alt="Fire" class="img-rounded pull-left" width="510" style="margin-right: 1em;">
<img src="../img/posts/2018/object-structure.jpg" alt="Fire" class="img-rounded pull-left" width="510" style="margin-right: 1em;">
<div class='clearfix'></div>
I think the resulting structure of the important classes is quite standard. I also do not think it is crystal clear how and where each component relates to the process. It is not too hard to guess, because this has been simplified and it is a pretty small system. When digging into the real system though, it was already difficult to reason about where things were done.
<h2>A functional structure</h2>
<p>So the team agreed that we needed to try to improve the structure of the existing code. Over the next couple of weeks the system was refactored structurally to look like this:</p>
<img src="../img/posts/2018/functional-process.jpg" alt="Fire" class="img-rounded pull-left" width="510" style="margin-right: 1em;">
<img src="../img/posts/2018/functional-structure.jpg" alt="Fire" class="img-rounded pull-left" width="510" style="margin-right: 1em;">
<div class='clearfix'></div>
As you can see, visually this is far more in line with the functional flow diagram. This really did improve the team's ability to reason about the code, especially for new team members joining after development had progressed quite far.
<h2>Discussion</h2>
<p>Here is what some of the new joiners to the team had to say about the refactored code:</p>
<blockquote>
<p>...my first impression looking at the code was that it had a flow that I could easily follow without having to know the state of objects. I could figure out, looking at the code, that one function result led to another function in a particular flow until you reach an end result... - Bruno Lamarao</p>
</blockquote>
<blockquote>
<p>When I just joined the team we did some mob programming on the project. Being able to sit down without having opened the project before and start adding features, just shows that the behavior/flow of the program was very easy to reason about. - Thomas Bouman</p>
</blockquote>
<p>This stood out to me. It is not often I had that feeling about code and it was not something I heard often from other developers. It confirmed the suspicion I had that this may be a better way of modeling software. I am wary of silver bullets so it is possible that some systems are just better as a pure state based model. A very simple CRUD based application probably falls into this camp. As soon as we have more complex functionality though, I think it is worth modeling it to match our mental model of how it functions.</p>
<h3>Why do I think this is better?</h3>
<h4>A single model</h4>
<p>As I have already mentioned, the functional model now matches the structural model. The importance of this cannot be overstated. To know the structure of the software you only need to know what it does functionally and vice versa, to look at the structure is to look at what the system functionally does.</p>
<h4>Entry point tells a story</h4>
<p>I mentioned this in my <a href="/managing-code-complexity/">tips for managing code complexity</a> but having an entry point into your code that describes the functioning of a feature is a giant win. Each step should be at the same abstraction level, giving developers a great way to understand where they need to dive into the code to make changes.</p>
<p><img src="/img/posts/2018/use-case.jpg" alt="entry point" /></p>
<h4>Solving the trouble with Liskov Substitution Principle</h4>
<p>Good abstractions are hard to discover and even harder to maintain. As a system evolves, a previously good abstraction can start to become awkward. When you have a model that is used liberally throughout an application, a good abstraction is almost impossible to discover.</p>
<p><img src="/img/posts/2018/big-model.jpg" alt="monolithic model" /></p>
<p>And herein lies the key. By constraining a model to be used within a certain application flow, or even within a step in that flow, we limit the dependencies on it.
Where before we had a model that was monolithic and used throughout the software application, now we constrain our model and any resulting abstractions to servicing only a single step in our feature flow.</p>
<p><img src="/img/posts/2018/small-model-steps.jpg" alt="Small steps small models" /></p>
<p>This means we only need to concern ourselves with a model that satisfies a small subsection of the functionality instead of all functionality within an application. This is far, far easier to reason about.</p>
<blockquote>
<p>WARNING: This does come with one overhead. Class explosion! In a high ceremony language like C# or Java, this can be quite a high initial cost indeed. I recommend not optimizing away the initial extra cost of creating a few more files.</p>
</blockquote>
<h2>Conclusion</h2>
<p>So far I have avoided talking about functional programming. The functional used in this article is with regard to behavior rather than functions in the usual FP sense. They are indeed related though as this style of designing applications is what you naturally tend toward following an FP approach. I avoided mentioning FP till this point though because I think that an OO paradigm programmer can benefit from applying this style of design without buying into FP. Who knows? It may be your gateway drug :)</p>
<p>In future articles I hope to demonstrate some more practical examples of developing applications this way, so keep a lookout for those (<a href="https://twitter.com/DevonBurriss">follow on Twitter</a>).</p>
<p>In the meantime if you are interested I suggest you check out Scott Wlaschin's excellent book <a href="https://pragprog.com/book/swdddf/domain-modeling-made-functional">Domain modeling made functional</a> where he demonstrates a lot of these concepts in a FP way with F#.</p>
<p>What do you think? Are you already writing your applications this way? Will you try it? Does <em>functional-structural impedance mismatch</em> as an idea make sense?</p>
<h2>Resources</h2>
<ol>
<li><a href="https://en.wikipedia.org/wiki/Function_model">Function model</a></li>
</ol>
<h2>Credit</h2>
<ol>
<li>Social image by <a href="https://unsplash.com/@sharonp">Sharon Pittaway</a></li>
<li>Refactoring marathon <a href="https://twitter.com/Viper128">Duncan Roosma</a></li>
</ol>https://devonburriss.me/anatomy-of-automated-testing/Anatomy of an automated test suite2018-08-13T00:00:00+00:00Devon Burrisshttps://devonburriss.me/anatomy-of-automated-testing/<p>Unit, integration, end-to-end, acceptance, UI tests, and more. With so many types of automated tests, is it any wonder that we so often disagree on whether something is an acceptance test or an integration test? Or maybe an end-to-end test? What if, instead of thinking about the structure of the test and what it tested, we considered the question that the test is answering...</p>
<!--more-->
<h2>A quick note on the test pyramid</h2>
<p>For some reason, the test pyramid comes up whenever we talk about what tests to write. The test pyramid gives us an indication of the relative return on investment of writing certain types of tests. The cost of writing and maintaining UI tests is usually quite high, diminishing their value as verification of correctness. Unit tests, on the other hand, should be quick to write and easy to maintain, and so give more value than UI tests. Therefore we should have a relatively large number of unit tests compared to UI tests.</p>
<p>In this image, I use <em>Service</em> to encompass integration, end-to-end, acceptance tests, etc.</p>
<p><img src="/img/posts/2018/test-pyramid.jpg" alt="Test Pyramid" /></p>
<p>If we had a way that made <a href="/page-module-model/">UI tests easy to write and maintain</a>, they would switch places in the test pyramid. They might then provide more value relative to the cost of creation and maintenance.</p>
<h2>Practices around testing</h2>
<p>Let us take a look at some of the testing practices around and what they focus on. This will give us a good indication of what questions we can ask of our tests.</p>
<p>First up we have <strong>Test-Driven Development</strong>. A lot can be written about TDD, and half of it would be disagreed with by half its practitioners half of the time. I will try to stay away from questions of what to mock and the granularity of the tests. I have <a href="/maintainable-unit-tests/">written about my thoughts on maintainable unit tests</a> already though. The practice of writing tests first, then making them pass, and then refactoring gives fast and incremental feedback on both progress and the design of your code. While you make functional progress, a test suite is built up that proves that what you have implemented works as you, the developer, expect.</p>
<p><strong>Behavior-Driven Design</strong> builds on top of the idea of TDD but with a focus on capturing requirements in an automated way that fosters domain understanding and collaboration with stakeholders.</p>
<p>It really isn't clear to me that the two need to be separate practices. BDD is just TDD practiced by developers with a <a href="/acceptance-tests/">focus on domain knowledge and stakeholder collaboration</a>. On the other side, TDD has become what developers do when they are not focusing on stakeholder collaboration. This was not its original intent.</p>
<p>Honestly though, I do find the distinction a little useful in thinking about the kind of tests I am writing, because it allows me to ask questions about both the quality of what was built and its functional correctness.</p>
<p><img src="/img/posts/2018/test-quadrant.jpg" alt="Test Quadrant" /></p>
<p>So for the sake of comparison, we will make the distinction that unit tests are an artifact of TDD and acceptance tests are an artifact of BDD. Don't get too attached to this idea, it is just useful for the upcoming discussion.</p>
<h2>Asking the right questions</h2>
<blockquote>
<p>UPDATE: I have noticed a lot of confusion when talking about Integration tests. In the context of this post I mean a narrow test that tests the connection handshake and contract with software outside of the process boundary. Typically I limit the scope of these but do not mock the external system, since that is the point of the test. If on the other hand there was a reliable way to verify the contract with fewer dependencies, I would be happy to drop this and name it a Contract Test.</p>
</blockquote>
<p>I promised some questions to be asked to give a different perspective on the types of tests. What if instead of thinking about tests in terms of how they were written (xUnit and C# vs Gherkin) we thought about them in terms of questions directed at the test?</p>
<p><em>Do I understand the problem?<br />
Is my feature ready to ship?<br />
Does it behave as expected?</em><br />
Check the <strong>Acceptance tests</strong>. Did they pass? Ship the feature.</p>
<p><em>Am I confident I built it well?<br />
Does my code handle exceptions correctly?<br />
Is my code's API intuitive to use?</em><br />
Check your <strong>Unit tests</strong>. They pass. I am confident in the code. I can refactor with confidence.</p>
<p><em>Does my data access work against a real database?<br />
Do my API calls work as expected?<br />
Are my message queues configured correctly?</em><br />
Check the <strong>Integration tests</strong>. They pass. I am confident that I won't have surprises when the system runs, and I will find integration problems quickly.</p>
<p>There are other types of tests like <strong>Consumer-driven contracts</strong> and <strong>UI tests</strong> that might be useful to you and I am sure you can come up with the questions if they matter to you. The point is that dividing your tests based on how they are implemented is less useful than distinguishing what answers each group of tests is good at giving.</p>
<h2>Summary</h2>
<p>In this post, I suggested that instead of looking at tests based on what they test or how they are implemented, it is more useful to ask what questions they can answer. For example:<br />
<strong>Acceptance tests</strong> answer <em>Did I build the right thing?</em> and <em>Can I ship it?</em>.<br />
<strong>Unit tests</strong> give me confidence on <em>Did I build it right?</em>.<br />
<strong>Integration tests</strong> tell me <em>Can these components communicate?</em> I especially like checking across process boundaries here.</p>
<p>Hopefully, by this point, I have convinced you to think about your tests in terms of the questions they answer and the actions you will take from those questions.</p>
<p>One last thing. Much of the gain in TDD is that unit tests give you rapid feedback. As long as you have good, trustworthy acceptance tests, deleting unit tests that are causing issues should be completely acceptable. They have already given a large amount of their benefit in the design and verification process.</p>
<p>I hope you found this useful. If so I would love to hear your thoughts on the different types of testing.</p>
<h2>Credits</h2>
<ul>
<li>Background photo by <a href="https://unsplash.com/@stijntestrake">Stijn te Strake</a></li>
<li>Social photo by <a href="https://unsplash.com/@tentides">Jeremy Bishop</a></li>
</ul>https://devonburriss.me/page-module-model/The Page Module Model with F# and Canopy2018-08-12T00:00:00+00:00Devon Burrisshttps://devonburriss.me/page-module-model/<p>In the past I have done some UI testing with Selenium. I quickly adopted the Page Object Model (POM) for this kind of testing to ease readability, maintenance, and re-use across tests. Recently I needed to look into doing some UI testing and I decided to use <a href="https://lefthandedgoat.github.io/canopy/">Canopy</a> to abstract away working with Selenium. Although Canopy has some great helpers around Selenium I still found myself wanting to abstract away elements on each page and the pages themselves. Enter the Page Module Model (PMM)...</p>
<!--more-->
<p>So, full disclosure... I doubt PMM is a thing. I didn't even try searching for it until writing the previous sentence. It isn't. Yet... It is similar to the POM, except using <code>module</code>s because I am using F#.</p>
<h2>What is the page object model?</h2>
<p>The POM is simple. We encapsulate interactions with pages and elements on the site with objects. Here is an example of an <a href="https://github.com/dburriss/UiMatic">old POM framework I wrote years ago</a>.</p>
<pre><code class="language-csharp">[Url(key: "home")]
public class GoogleHomePage : Page
{
    [Selector(name: "q")]
    public IInput SearchBox { get; set; }

    public GoogleHomePage(IDriver driver) : base(driver)
    {}
}
</code></pre>
<p>We can then use this class to instantiate an object that we interact with instead of interacting with Selenium directly.</p>
<pre><code class="language-csharp">[Theory]
[InlineData(TestTarget.Chrome)]
public void Title_OnGoogleHomePageUsingConfig_IsGoogle(TestTarget target)
{
    using (IDriver driver = GetDriver(target, config))
    {
        //create page model for test
        var homePage = Page.Create<GoogleHomePage>(driver);
        //tell browser to navigate to it
        homePage.Go<GoogleHomePage>();
        //fill a value into the text box
        homePage.SearchBox.Value = "TEST";
        //an example of interacting with the config if needed. This gets the expected title from config.
        var expectedTitle = config.GetPageSetting("home").Title;
        //check the titles match
        Assert.Equal(expectedTitle, homePage.Title);
    }
}
</code></pre>
<p>If you have ever written tests against Selenium directly I am sure you can agree that is cleaner.</p>
<h2>Writing tests in F# and Canopy with Page Module Model</h2>
<blockquote>
<p>You can find the <a href="https://github.com/dburriss/PageModuleModelExample">source code for this example on Github</a></p>
</blockquote>
<p>So what would the Page Object Model look like with static functions on a <code>module</code>? Pretty cool actually...</p>
<pre><code class="language-fsharp">"No laptops are free" &&& fun _ ->
    HomePage.searchFor "Laptops"
    let results = SearchResultsPage.results()
    test <@ results |> List.forall (fun x -> x.Price > 0m) @>
</code></pre>
<p>We can keep the tests really concise and describe what we want to happen. Here we search for "Laptops", get the search results, and then check that the price is not 0 on any items. We will dive a little deeper into how this is done in the next section.</p>
<p>This style also allows us to easily define simple smoke tests to run before getting into the more functional tests. A smoke test is a quick test of something basic. The idea is that <em>"where there is smoke there is fire"</em>, so if a smoke test fails, it is not worth proceeding with the more feature-rich tests.</p>
<pre><code class="language-fsharp">context "Smoke tests"
skipAllTestsOnFailure <- true
"home page loads" &&& fun _ -> displayed HomePage.homePageBanner
"search box available" &&& fun _ -> displayed Header.searchBox
"cart is available" &&& fun _ -> displayed Header.basketButton
</code></pre>
<p>We use <code>skipAllTestsOnFailure <- true</code> to make sure we skip all other tests if any smoke test fails.</p>
<h2>The building blocks for composition</h2>
<p>I usually build a page that I need and then start extracting the reusable functions out into modules from there. Most sites will have some kind of header/navigation. Here is what I needed in a header for the tests I wrote for this post.</p>
<pre><code class="language-fsharp">module Header =
    //selectors
    let searchBox = "#search_query"
    let basketButton = "a[href=\"/winkelmandje\"]"
    //actions
    let searchFor term =
        searchBox << term
        press enter
</code></pre>
<p>Here we define some selectors and a simple function that allows us to use the search functionality.</p>
<p>If it is possible to modify the HTML, I recommend putting <code>data-test-xyz</code> style attributes on your elements so you can easily query them. Unfortunately I did not have that luxury, even if the front-end developers would have let a back-end developer like me near it. Probably wise :)</p>
<p>Let's look at something a bit more complex. The following module represents search results on a page.</p>
<pre><code class="language-fsharp">module SearchResults =
    open OpenQA.Selenium

    type SearchResultElement = {
        ProductId: string
        El: IWebElement
        Name: string
        Price: decimal
        IsAvailable: bool
    }

    let private toPrice (s: string) = s.Split(",").[0] |> decimal

    let private getOrderButton itemEl = itemEl |> elementWithin @".product__order-button"

    let private isOrderButton (orderBtnEl: IWebElement) =
        orderBtnEl
        |> getAttrValue "class"
        |> fun s -> s.Split(" ")
        |> Array.contains @"action--order"

    let items () =
        let rowEls =
            element (sData "component" "products")
            |> elementsWithin ".card"
        let getId itemEl = itemEl |> elementWithin "a" |> getDataAttrValue "productid"
        let getTitle itemEl = itemEl |> elementWithin "a" |> getAttrValue "title"
        let getPrice itemEl = itemEl |> elementWithin @".product__sales-price" |> read |> toPrice
        rowEls
        |> List.map (fun itemEl ->
            {
                ProductId = itemEl |> getId
                El = itemEl
                Name = itemEl |> getTitle
                Price = itemEl |> getPrice
                IsAvailable = itemEl |> getOrderButton |> isOrderButton
            })
</code></pre>
<p>This is a bit complex because of the poor selector options available to me in the HTML but still not too bad. I want to draw attention to the <code>SearchResultElement</code> record. I parse the HTML to a record rather than constantly interacting with <code>IWebElement</code>. You saw this in the test for a 0 price where I was able to easily check the <code>Price</code> field.</p>
<p>Note: I make use of some helpers here, like <code>getDataAttrValue</code>, that live in the <code>Selectors.fs</code> module, which you can check out in the source if you like.</p>
<h2>The Page Module</h2>
<p>With these building blocks the actual page <code>module</code> can end up being quite simple.</p>
<pre><code class="language-fsharp">module SearchResultsPage =
    open Elements
    open canopy.classic

    let uri = "https://www.coolblue.nl/zoeken" //should use settings or relative urls
    let verifyOn() = on uri
    let searchFor term = Header.searchFor term
    let results() = SearchResults.items()
</code></pre>
<p>With the page we can now group functionality on a module that makes semantic sense and compose our functions from the building blocks we have already defined.</p>
<h2>Summary</h2>
<p><img src="/img/posts/2018/ui-testing.jpg" alt="UI testing with Canopy" /></p>
<p>In this post we saw how the Page Object Model can be modelled in a more functional way, using building blocks to construct pages. We also saw how we can transform interesting elements of the page into records that give us type safety and intellisense.</p>
<p>Lastly, we saw how concise the combination of F# and Canopy can make our UI tests.</p>
<h2>Credits</h2>
<p>Social image by <a href="https://unsplash.com/@reinhartjulian">Reinhart Julian</a></p>https://devonburriss.me/acceptance-tests/Writing readable Acceptance tests2018-08-11T00:00:00+00:00Devon Burrisshttps://devonburriss.me/acceptance-tests/<p>Acceptance tests can be a great way of making sure you are building the right thing. When expressed in natural language, they also serve as a collaboration tool with stakeholders to define what should be built before it is built. This can save a great deal of development time by making sure you don't build the wrong thing, and it has the added benefit of growing a developer's domain knowledge as he or she collaborates with a stakeholder in fleshing out and verifying the acceptance tests. Recently a team here at work invested a fair amount of time iterating on the style of our acceptance tests. We figured that if the goal is to allow developers and stakeholders to collaborate, then making sure the tests make sense to both parties is important. In this post I will share some of the experiences I have gained over the years, more specifically showing how we applied them to improving our acceptance tests in my current domain. As always, this was a collaborative effort within the team.</p>
<h2>A brief introduction</h2>
<blockquote>
<p>You probably want to skip to the next section if you already have experience with the Gherkin language.</p>
</blockquote>
<p>Acceptance or behavior tests come in many different forms but probably the most common is those described in the <a href="https://docs.cucumber.io/gherkin/reference/">Gherkin</a> language which is a domain specific language for writing easily readable specifications that can be executed. The most common keywords used are:</p>
<blockquote>
<p><code>Feature</code>: provide a high-level description of a software feature, and to group related scenarios<br />
<code>Scenario</code>: a concrete example that illustrates a business rule. Consists of one or more steps (Given, When, Then, Examples)<br />
<code>Given</code>: describe the initial state of a system<br />
<code>When</code>: describe events or actions that occur in or against the system<br />
<code>Then</code>: describe the expected outcome of the <code>When</code> actions against the system</p>
</blockquote>
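<p>As a minimal illustration of how these keywords compose (the feature itself is invented for this example):</p>
<pre><code class="language-gherkin">Feature: Shopping basket
  Scenario: Adding a product to an empty basket
    Given an empty basket
    When the customer adds 2 units of product "P1"
    Then the basket contains 2 units of product "P1"
</code></pre>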
<p>The Gherkin language has many different runners, such as <a href="https://docs.cucumber.io/">Cucumber</a>, <a href="https://specflow.org/">Specflow</a>, and <a href="http://behat.org/">Behat</a>, whatever your programming language of choice. Using Gherkin is not the only way of writing behavior-oriented tests. Many developers just use standard testing frameworks, or lower-level ones oriented toward behavior testing. Personally, I think that if you are committed to working on the tests collaboratively with stakeholders, it is difficult to overestimate the benefits of a format that is readable to non-developers.</p>
<h3>Acceptance tests vs Behavior Driven Development (BDD)</h3>
<p>Although this is not the focus of this post, I did want to mention the difference as I see it. BDD is the practice of defining specifications of how a system should behave and automating the execution of those specifications. Defining the specification of what needs to be built requires deliberate discovery of requirements, which requires collaboration between stakeholders and developers. By discovering the unknowns upfront, development is more productive, with fewer surprises and less rework throughout the development life-cycle.</p>
<p>Acceptance tests can be an integral artifact from the process of BDD. In my mind Acceptance tests are simply the tests that answer these simple questions: "What must the feature do?", "Is it done?", and "Can I deploy it?". In a perfect world with perfect confidence in your acceptance tests, they are the gate for continuous delivery of features. Once they are passing the feature is in production.</p>
<h2>Lost in the woods</h2>
<p>Once you sit down to write an Acceptance test you start to realise there are many ways you can write them. What classifies as a feature? What level of abstraction do I write against? How specific do I make my scenarios? Black-box tests or not?</p>
<p>I will attempt to answer these quickly before showing you the evolution of our acceptance tests, although I suspect some of my answers will fall short considering how different teams' stories can be.</p>
<p><em>What classifies as a feature?</em> This is a single piece of functionality that can be shipped independently of others. This is often difficult to determine because, even though a feature is independently shippable, it doesn't always make sense to ship it on its own. In the examples to follow we experienced this: although different <em>types of Purchase Agreements</em> have different behavior and can be shipped independently, it didn't make sense for us to release until we covered a certain subset of all types. A helpful question here might be: <em>Could X be broken while Y is still considered correct?</em> If so, it quite possibly is a feature.</p>
<p><em>What level of abstraction do I write against?</em> In a way, this one is easy. The very highest. The one the business operates and talks at. <a href="http://devonburriss.me/managing-code-complexity/">Hopefully your code is written at this level of abstraction at the entry point as well</a>. Your acceptance tests should not be mentioning things in your code or implementation details that are not going to make sense to business stakeholders. The easiest way to check this is to ask a business stakeholder to read your test. Or better yet, co-write them.</p>
<p><em>How specific do I make my scenarios?</em> My advice here would be to make them pretty damn specific. What you are aiming for is an example that has the makings of a real-life scenario that a stakeholder would be tackling. You are looking for a couple of scenarios that collectively catch most permutations in the system. I don't think it is necessary to capture EVERY permutation through your scenarios. Other lower-cost forms of testing can catch these if necessary. <code>Examples</code> can also go a long way in covering permutations, if you feel you need them, in a way that doesn't get too verbose. Use these judiciously though. If the test is no longer going to make sense to a stakeholder, prefer a lower-cost test like a unit test to check permutations.</p>
<p><em>Black-box tests or not?</em> I use the term <strong>Black-box</strong> to describe a test that doesn't know anything about the internals of your code. A black-box acceptance test would exercise the code through a UI, REST API, or command line and then observe the results in a database, message queue, logs, or console output. This has some pros and cons. Firstly, you are really exercising your system like any other client would, so you can have a lot of confidence that the system works as a whole. The downside is that measuring the effects can be quite challenging, and the tests can often take quite a long time to run, as well as be complex to set up. Whether you want to do this depends on the cost-to-benefit ratio. In the past, where the core of a system was orchestrating between many systems, I thought it important to verify that these interactions happened correctly; in that case a black-box test makes sense. For the examples I am going to show later in this post, the major complexity was in the numerical calculation of the value of the agreement. Here we chose to execute against elements in the code, without a running application, because what we cared about was documenting and verifying the workings of these calculations. The most value was in being able to write and execute them in a shorter feedback loop. It did mean we missed some complexity related to persisting the stream of calculations, and that needed to be covered by other tests.</p>
<h2>Waxing lyrical like Goldilocks</h2>
<p>As mentioned in the introduction we really wanted to make sure that our acceptance tests were understandable by stakeholders and developers alike. We also really wanted these acceptance tests to serve as documentation in the future for how these calculations worked as we discovered in requirements gathering that this knowledge didn't reside in any one person's head.</p>
<p>So scenarios needed to be descriptive enough to really demonstrate how a calculation is done without each scenario being too dense with information. As it turned out this took some refinement.</p>
<p>As a quick introduction to the domain: in the contract management team we handle agreements with suppliers for an e-commerce company. Based on purchases or sales, we might get money off the price of certain products purchased for stock or sold on the website.
If an agreement is a fixed amount per product sold with a factor of 10, then selling 3 units is worth 3 units x 10 EUR = 30 EUR.
Simple, right?</p>
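<p>As a sketch of that arithmetic in F# (the type and function names are my own inventions, not from our codebase):</p>
<pre><code class="language-fsharp">// Hypothetical sketch of the fixed-amount-per-product-sold calculation
type FixedAmountAgreement = { FactorInEur: decimal }

let agreementValue (agreement: FixedAmountAgreement) (unitsSold: int) =
    decimal unitsSold * agreement.FactorInEur

// 3 units sold with a factor of 10 -> 3 x 10 EUR = 30 EUR
let example = agreementValue { FactorInEur = 10m } 3
</code></pre>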
<h3>Too simple</h3>
<p>The first iteration was optimized for ease of duplication for the developer. A lot of the details of the agreement are hidden. What I particularly dislike about this style is how hard it is to pick out the details that matter. There is some magic around it being <code>agreement1</code> possibly? See what you think of the first iteration...</p>
<pre><code class="language-gherkin">Feature: FixedAmountAgreement
Scenario: Purchase agreement limited to 2 product limitations is finalized (factor is 10, agreement runs for 5 days -> 2 euros per day -> 1 euro per target)
Given Purchase agreement with id agreement1, starting yesterday and ending 3 days in the future, of type fixed amount, with status approved, with factor 10, and limitations
| Type | Name | Id |
| Product | Samsung Galaxy S8 Zwart | P1 |
| Product | Samsung Galaxy S8 Zilver | P2 |
When the allocation process runs for the Purchase agreement
Then the total allocated value for each day per product is 1
</code></pre>
<p>Is it easy for you to reason about what that scenario is? It is more about what data is used than what the actual scenario is.</p>
<h3>Too complex</h3>
<p>Another trap that is easy to fall into is trying to test too much in a single scenario. This is similar to doing TDD with data-driven tests, i.e. <code>[Theory]</code> with <code>[InlineData]</code> when using xUnit in .NET. Here we really lose any meaning in the scenario.</p>
<pre><code class="language-gherkin">Feature: SellInAgreement
Scenario Outline: Purchase agreement limited to 2 product limitations is finalized
Given Purchase agreement with id agreement1, starting yesterday and ending tomorrow, of type <Type>, with status <Status>, with factor 2, and limitations
| Type | Name | Id |
| Product | Samsung Galaxy S8 Zwart | P1 |
| Product | Samsung Galaxy S8 Zilver | P2 |
Given a purchase delivery verified yesterday with products
| PurchaseDeliveryLineId | ProductId | Quantity | Price |
| PD1 | P1 | 15 | 300 |
| PD2 | P2 | 10 | 280 |
When the allocation process runs for the Purchase agreement
Then the total allocated value on delivery line 1 is <DeliveryLine1Value>
And the total allocated value on delivery line 2 is <DeliveryLine2Value>
Examples:
| Status | Type | DeliveryLine1Value | DeliveryLine2Value |
| approved | percentage of purchased amount | 90 | 56 |
| invoiced | percentage of purchased amount | 90 | 56 |
| waiting for credit note | percentage of purchased amount | 90 | 56 |
| pending invoice | percentage of purchased amount | 90 | 56 |
| pending approval | percentage of purchased amount | 0 | 0 |
| rejected | percentage of purchased amount | 0 | 0 |
| deleted | percentage of purchased amount | 0 | 0 |
| approved | fixed amount per product purchased | 30 | 20 |
| invoiced | fixed amount per product purchased | 30 | 20 |
| waiting for credit note | fixed amount per product purchased | 30 | 20 |
| pending invoice | fixed amount per product purchased | 30 | 20 |
| pending approval | fixed amount per product purchased | 0 | 0 |
| rejected | fixed amount per product purchased | 0 | 0 |
| deleted | fixed amount per product purchased | 0 | 0 |
</code></pre>
<p>This one gives me little information on a scenario because it is really many scenarios. It is great for test coverage with a single test, but it fails to document the behavior of the system in a way that makes it easy to reason about the system's characteristics.</p>
<h3>Just right</h3>
<p>The problem with both structures so far is that they do not represent how a user of the system would reason about calculating the value of the agreement. Let's step through it and then try to write a test with that mental model.</p>
<p>A user will have an agreement that they want to calculate. At any given time that agreement will apply to some deliveries of products defined in the agreement. When something happens to an agreement, it will affect the calculation in a specific way. For example, if the start date of an agreement moves so that the agreement runs for longer, then it is likely that more deliveries will fall within the running period of that agreement.</p>
<p>Ok, so with this mental model of how a user would approach calculating the value of an agreement, can we write a test that mimics it...</p>
<pre><code class="language-gherkin">Feature: Fixed Amount Sell-in Purchase Agreement
Background:
Given a fixed amount sell-in Purchase agreement
| Name | Value |
| Starting | 2017-01-05 |
| Ending | 2017-02-25 |
| Type | FixedAmountPerProductPurchased |
| Status | Approved |
| Factor | 10 |
| Product | P1 |
| Product | P2 |
| Product | P3 |
Scenario: Agreement start date is moved backwards so more purchase delivery lines are allocated against
Given the following purchase delivery lines exist
| Purchase Delivery Line Id | Product | Quantity | Price | Verification date |
| PD1 | P1 | 3 | 100 | 01-01-2017 |
| PD2 | P2 | 6 | 110 | 05-01-2017 |
| PD3 | P3 | 10 | 210 | 05-01-2017 |
And existing allocations for the agreement
| Purchase Delivery Line Id | Product | AllocatedValue |
| PD2 | P2 | 60 |
| PD3 | P3 | 100 |
When the Purchase agreement start date changes to 2017-01-01
And allocations are calculated for the Purchase agreement
Then the following purchase delivery lines are allocated against
| Purchase Delivery Line Id | Product | AllocatedValue |
| PD1 | P1 | 30 |
And the total allocated value for the Purchase agreement is 190
</code></pre>
<p>So that <em>agreement has some terms that affect its value</em>. These terms <em>mostly won't change across scenarios</em>. If they do, <em>we want to highlight only the changes</em>. In the setup we want to <em>show only what matters for the scenario</em>. We also want to <em>highlight behavior</em> and the <em>end result</em>.</p>
<h4>Breakdown of the recipe</h4>
<p>So we use <code>Background</code> to define the status quo across scenarios. It doesn't mean these values won't change, but we only mention what does change. This background can then be held constant across multiple scenarios. This allows us to be explicit about the status of the agreement without being verbose about it in EVERY scenario. It allows the reader to reuse the information across scenarios. It also means we only need to mention CHANGES.</p>
<p>Our <code>Scenario</code> can now be quite explicit about what will change. This allows us to document behavior way more explicitly than the previous tests while still having explicit information available to the reader if needed in the <code>Background</code>.</p>
<p>The <code>Given</code> steps allow us to define setup that is relevant to each <code>Scenario</code> only.</p>
<p><code>When</code> steps will now typically define the actions that make a <code>Scenario</code> unique. This could of course be in the <code>Given</code> setup or a combination of both but typically it is the <code>When</code> that makes the scenario interesting.</p>
<p>Finally the <code>Then</code> steps allow us to define what happened in the system and what the final result is.</p>
<p>Do you see the focus on the actual scenario here? Did this convey more of what the business actually considers? I think so.</p>
<h2>Summary</h2>
<p>So our first takeaway was that Acceptance tests and BDD in particular are a means of driving and documenting the expected behavior of the system while engaging with stakeholders.</p>
<p>Then, in writing behavior tests, we want to focus on capturing scenarios that are meaningful to stakeholders and accurately capture the mental model they have of the system. By structuring the tests in such a way, we not only make it easier for our stakeholders to understand but also make it much more likely that we grow our understanding of the system. Any technique that allows developers to gain insight into the user's perspective is worth more than just test coverage. Software development at its core is about learning a problem space. Writing code is the easy part.</p>
<p>I hope you found this useful. If you have any thoughts on Acceptance testing, BDD, and/or writing good tests, I would love to hear from you in the comments below.</p>
<h2>Credits</h2>
<ol>
<li>Background image by <a href="https://unsplash.com/@nepumuk">Peter Kleinau</a></li>
<li>Social image by <a href="https://unsplash.com/@annapostovaya">Hanna Postova</a></li>
</ol>https://devonburriss.me/the-torch-bearer/The torch bearer2018-05-29T00:00:00+00:00Devon Burrisshttps://devonburriss.me/the-torch-bearer/<p>Software development can be a complicated process as the complexity of systems grows and the number of people involved increases, especially when these things happen quickly. This is when clear direction is important. Equally important, if not more so, is the experience and maturity of the teams building the software. Their ability to learn and adapt to the challenges that arise from the growing complexity will depend on mindset and the ability to work together. If a whole team can grow to understand the driving forces behind delivering the right software in the right way, it can be epic. So... STORY TIME!</p>
<!--more-->
<h2>A walk in the dark</h2>
<p>Imagine trying to walk to the top of a densely forested hill in the dark with no path to follow. Even if you know where you should be going, finding your way would be nearly impossible.</p>
<p>Your goal is simple, get to the top of the hill. Easier said than done. As you stumble around in the dark, trying to head up the hill, you constantly run into obstacles. You trip over roots that you cannot see and have to push your way through brush and trees that seem to have a will of their own to hold you back. Why is there no path up the hill? Has no one gone this way before?</p>
<p>As you persist you trip and fall, sliding downward, losing elevation in seconds that you worked what seemed like hours to gain.</p>
<p>Could this be easier? Of course, but how?</p>
<p>What if there was a light at the top of the hill? This would help some. For one, we would know for sure whether we were heading in the right direction. We would have a feeling for whether we were getting closer and how far we had to go. We could use it to judge how thick the undergrowth is by the amount of light that shines through, helping us decide on a direction of approach.</p>
<p>Does this light help us in actually walking? In doing the actual work of moving forward? Not really. It is still dark in the forest, there is still no path, and you are still tripping over those damn roots!</p>
<p>What if a friend appears with a torch? He has been up the hill before and he has a good idea of the best route with the least resistance up the hill. Does this help? Immensely! The going is still slow but you can now see where you are going. You have someone to warn you of roots. You can even see the roots. And you can catch each other when you stumble. You can even carry the torch for a while...</p>
<h2>So what the heck does this have to do with software development?</h2>
<p>For those of you who have been on some tough software development projects, the analogy is probably quite obvious.</p>
<h3>Goal: Get to the top of the hill</h3>
<p>In business, having a clear goal is important, but a goal tells you what you want to achieve, not necessarily how you are going to achieve it. Blindly following the gradient upward probably means you are moving toward your goal, but it does little to inform how you will achieve it.</p>
<h3>Vision: Follow the light</h3>
<p>Having a clear vision of how a goal is going to be achieved is important. It sets a light on top of the hill to be followed. Again, it does little in terms of the <em>how</em>, but it gives anyone working toward it a clear sense of moving in the right direction.</p>
<h3>Experience: A friend to help find the way</h3>
<p>The most innovative goals. The clearest vision. None of these are worth anything if execution fails. From this you may think my analogy is about Team Leads or Architects who tell development teams how to execute. It is not. The torch bearer shows you the way by walking with you. Guiding, teaching, catching, and learning too. If you want to change the software that teams build, you need to change the culture of the teams that build the software.</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">If you remake awful software from scratch without changing the culture that created it: you'll remake awful software</p>— Romeu Moura (@malk_zameth) <a href="https://twitter.com/malk_zameth/status/654710109214371841?ref_src=twsrc%5Etfw">October 15, 2015</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<h2>Conclusion</h2>
<p>Goals, visions, speeches, manifestos, and tech radars. These things are all useful. They give a direction. They light a fire on top of a hill. They do not however really help people get to the top of the hill. That is done by building people up. Making sure teams have the skills, the collaborative environment, the mentors and experience available to find the path in the dark.</p>
<p>Although I like to think software development is like engineering, mostly it is like a long hike in the wilderness.</p>
<blockquote>
<p>Sometimes it is boring...</p>
</blockquote>
<p><img src="/img/posts/2018/bored.jpg" alt="bored me" /></p>
<blockquote>
<p>sometimes you see amazing things...</p>
</blockquote>
<p><img src="/img/posts/2018/reflection.jpg" alt="reflecting pond" /></p>
<blockquote>
<p>sometimes you get lost along the way...</p>
</blockquote>
<p><img src="/img/posts/2018/wilderness.jpg" alt="view of mountains" /></p>
<blockquote>
<p>so it's good to have someone with experience...</p>
</blockquote>
<p><img src="/img/posts/2018/old-man-daly.jpg" alt="view of mountains" /></p>
<blockquote>
<p>and it's easier if you bring some friends.</p>
</blockquote>
<p><img src="/img/posts/2018/friends.jpg" alt="view of mountains" /></p>
<p>I hope your path is challenging and fun, that you meet good people along the way, and that it is well lit.</p>
<h2>Credits</h2>
<ul>
<li>Social photo by <a href="https://unsplash.com/@viniciusamano">Vinicius Amano</a></li>
<li>Header photo by <a href="https://unsplash.com/@worldsbetweenlines">Patrick Hendry</a></li>
<li><a href="https://twitter.com/malk_zameth">Romeu Moura</a></li>
</ul>https://devonburriss.me/first-mob-programming/Mobbing a story2018-05-09T00:00:00+00:00Devon Burrisshttps://devonburriss.me/first-mob-programming/<p>Mob programming can be a great way of sharing knowledge and building ownership, as well as a way of getting a story done with everyone checking it. Although this can be slower because everyone has an opinion, I strongly believe it results in a higher quality implementation with a greater chance of being functionally correct and bug-free. I thought it would be helpful to share our learnings from completing a fairly complex story using mob programming.</p>
<!--more-->
<h2>Mob programming TL;DR</h2>
<p>So, a short TL;DR in case you don't know what mob programming is. Basically, it is pair programming on steroids: multiple developers work on a single problem using a single machine. This works well if there is a large screen or projector. All developers can contribute ideas and concerns while one person drives at the keyboard.
One common concern is the efficiency of having a whole team working on a single problem. If it is a difficult problem, throwing more brain-power at it is a good idea. It also increases understanding and ownership of the code, which increases the productivity of the team. Lastly, it is an opportunity for team members to learn from each other, which again increases productivity over the long run.</p>
<h2>Learnings</h2>
<p>We would regularly stop and review how things had gone and what might work better. This is important to build into all team-based activities. Doing the wrong thing as a single developer is one thing; doing it with more people just multiplies the inefficiency. This brings me to the first learning...</p>
<h3>Time-box the drive time</h3>
<p>Set a timer for 25 minutes (or whatever time you think works). Once the timer runs out, use the moment to review what has been done in that time. Ask questions like "Are we happy with the current direction?" and "Do we want to continue on this path?". This breaks you out of the flow of developing and engages all the brains involved to evaluate early and often. It also provides a good moment to swap drivers so someone else gets a chance at the keyboard. The previous driver then gets a chance to contribute without multitasking.</p>
<p>Another thing to check every few sessions is energy levels. If people run out of steam, engagement will drop and the benefits of mob programming dwindle.</p>
<h3>Park when needed</h3>
<p>One thing we noticed very early on was that we would often go off on tangents that had very little to do with the story we were implementing. As an example, we touched on the <a href="/maintainable-unit-tests">style chosen to write the unit tests</a>. This is a worthwhile discussion to have and it is important that the whole team understands and is on board. On the other hand, if we engaged with every topic, we would never complete the story. We decided that any topic not directly related to the story that could not be resolved in a few sentences should be parked. We wrote the topic on a sticky note to discuss later and moved on.</p>
<h3>Have a roadmap</h3>
<p>This was a fairly complicated problem in an existing codebase that not everybody was familiar with. At times we would lose track of what the current task was. On reflection, we decided it was useful to have a clear goal of what we were currently trying to achieve. We did this by drawing out the tasks that needed doing and their dependencies, and ticking off what had been done. The blue magnet is the task currently being worked on.</p>
<p><img src="/img/posts/2018/mob-todo.jpg" alt="mob todo list" /></p>
<h3>Avoid backseat driving</h3>
<p>We found it nonconstructive to have everyone shouting instructions at the driver. Instead we would discuss a problem and decide on a direction. The driver would then implement what was decided on with the team helping out as necessary.</p>
<h3>Be courteous to other drivers</h3>
<p>Criticizing the developer driving does not lead to a constructive environment to mob program in. Remember at some stage you should drive too.</p>
<h3>Pit-stop early and often</h3>
<p>Be sure to commit early and often. Whenever a test passes, a new direction is chosen, a refactor is done. Commit it. We learned the hard way what happens if you do a refactoring and then want to back out of it.</p>
<h2>Conclusion</h2>
<p>The team did comment on certain parts of the activity being more engaging than others. Some activities, like creating types with lots of properties, can be quite tedious to watch. Some learnings did come out of this too, like which parts of the codebase are repetitive, which might be a code smell of over-engineering.</p>
<p>Mob programming is a great activity for working more as a team, and for those who have not pair-programmed before, participating without driving might make them more open to pair programming. It is also awesome for sharing knowledge throughout the team. The benefits will pay for the momentary drop in productivity from not parallelizing work. If you approach it in an agile way with continual feedback, you can find ways to make it work for you. Just be sure to be accepting toward one another. Have you tried mob programming? If not, give it a go in your team. If you do it regularly, please share your experiences!</p>
<h2>Credits</h2>
<ol>
<li>Background image by <a href="https://unsplash.com/@hudsonhintze">Hudson Hintze</a></li>
<li>Social image by <a href="https://unsplash.com/@timmarshall">Tim Marshall</a></li>
</ol>https://devonburriss.me/simple-trick-to-be-a-better-leader/This one trick will make you a better leader2018-05-04T00:00:00+00:00Devon Burrisshttps://devonburriss.me/simple-trick-to-be-a-better-leader/<p>If you are a leader, many things are expected of you. Skill. Vision. Charisma. There are other characteristics that don't always come to mind when thinking of leadership. Loyalty inspires loyalty. Calmness under pressure inspires calm. Trust inspires trust. Many of these characteristics are quite hard to quantify and difficult to learn. Some people just seem to be born with them while others grow into them, shaped by their experiences.</p>
<!--more-->
<blockquote>
<p>Full disclosure. If you have gotten to this point. The title was click bait. There is no simple trick. Sorry!</p>
</blockquote>
<p>I wanted to write about <strong>compassion</strong>. I will make a case for why it is important for leaders and point you toward how you can cultivate it.</p>
<h2>Compassion: The missing skill</h2>
<p>Skill, you may ask? It may sound weird to describe compassion as a skill. Call it what you will, but it can be learned, and it must be used skillfully. That sounds like a skill, doesn't it?</p>
<p>How can developing such a touchy-feely thing as compassion help you be a better leader? Let me list the ways developing compassion has helped me.</p>
<ul>
<li>I am calmer when difficult situations arise with colleagues</li>
<li>Colleagues are more open to my feedback and arguments</li>
<li>People are more trusting of me</li>
<li>Difficult situations that were previously awkward no longer are, because of a genuine concern for the other person's well-being</li>
<li>I derive more joy out of working with people I have more compassion for</li>
<li>Compassion often compels me to help others</li>
<li>Compassion gives me a desire to invest in others' growth and well-being</li>
</ul>
<p>Looking at that list, it may not be immediately evident how these make you a better leader. It depends on what type of leader you want to be, but I can attest that as I have cultivated compassion, loyalty, calmness, and trust have developed in me. As these things have grown in me, so has the measure in which others have shown those qualities back toward me.</p>
<h2>Skillful compassion</h2>
<p>I wanted to draw attention to using the skill skillfully. Compassion does not mean empathy. For me compassion means that I care for a person's well-being and am motivated to increase or protect that well-being. Empathy on the other hand is feeling the emotions that another is feeling. As a leader, feeling the emotions of everyone you lead would be extremely draining.</p>
<p>It seems to me that equanimity is important when in a situation where compassion is in play. You may be in a situation where you have to do something that is going to hurt the other person. Empathy would motivate you to not want to hurt the person because you would feel that pain as well. Compassion on the other hand motivates you to do it in a way that protects the person and lends emotional support but is still responsible.</p>
<h2>Cultivating compassion</h2>
<p>You may think that compassion is not something that can be learned, but a few years ago I discovered it is. In Buddhist tradition there is a technique called <em>Loving kindness</em>. It is a meditation exercise, but you do not need to be a practicing meditator to practice it. In fact, religions around the world have been practicing it for millennia. In many religions it is common to pray for the goodwill of those around you, especially your loved ones. If you are religious, this act will probably seem quite familiar. The difference is in how it really becomes a practice rather than something inserted in a prayer.</p>
<p>I am sure there are many variations on how to practice this but this is how I do it. Don't be afraid to experiment with what works for you.</p>
<ol>
<li>To get started <em>find a quiet place to sit</em> where you will not be interrupted.</li>
<li>Visualize the most cherished person in your life. This could be a partner, a child, or maybe a parent.</li>
<li>In your mind say their name followed by "... I wish you peace, happiness, and freedom from suffering".</li>
<li>While doing the previous step, try to direct feelings of love and goodwill toward the person</li>
<li>Move on to your next most cherished person and repeat the steps. Saying the phrase and projecting those feelings.</li>
</ol>
<p>You will go from family, to friends, then colleagues, and acquaintances, until you are vaguely just visualizing a specter of a stranger and directing those feelings toward your fellow humanity. Doing this daily for just a few minutes a day will start to cultivate a sense of compassion for everyone you come across. Not only will this build relationships with colleagues at work but it will also enrich your personal life.</p>
<h2>Conclusion</h2>
<p>I hope I have convinced you that compassion is both an important aspect of leadership and something that can be improved.</p>
<p>One disclaimer: I am not sure what the results of this are without a corresponding practice in mindfulness. I mentioned exercising equanimity in difficult situations and mindfulness meditation is how I cultivate that equanimity.</p>
<p>One last thing. Despite the title of this post, this is not a cheap trick to apply. It will take time and discipline to cultivate. The effects will be gradual and only be noticeable when you look back months or years on how you viewed the people around you.</p>
<p>I really hope you will give this a chance. Over time the benefits can be great and meaningful.</p>https://devonburriss.me/employees-are-like-cars-not-petrol/Employees are like cars not petrol2018-05-03T00:00:00+00:00Devon Burrisshttps://devonburriss.me/employees-are-like-cars-not-petrol/<p>You may have had the experience before that the company you work for sees you as a resource. A number on a spreadsheet. Did you feel motivated to work there? Did you stay there for a long time? Did you feel loyalty? Probably not...</p>
<p>I want to float a really bad analogy but hopefully it will make my point. Petrol (or diesel/electricity) is a resource; employees are more like cars.</p>
<!--more-->
<h2>The Analogy</h2>
<p>Imagine when you start a new job you are like a car they drive for work. The company hires you with a full tank. This is usually true in my experience, as you are excited about the new opportunity and eager to do your best. One more piece of the analogy: job satisfaction (emotional reward, compensation, etc.) determines the car's mileage.</p>
<p>As you work you slowly use up the petrol in the tank. What are you burning? New things to learn, new people to meet, energy on things that don't engage, belief in direction, etc. Name the things that make a job emotionally rewarding.</p>
<h2>Driver: The broke student</h2>
<p>Were you ever a broke student with a car that was likely to break down if it didn't run out of petrol first? I was. I would fill up the tank with just enough petrol to get me to my next destination.</p>
<p><img src="/img/posts/2018/student-tank.jpg" alt="Student tank trajectory" /></p>
<p>In this model, moments come along that refill the tank a little, but they are blips in the downward trend. The consumption of the resource is still greater than the sum of the top-ups. Completion of big projects, moves to a new challenge, and promotions all add some fuel, but these big refuels are not sustainable. There are only so many projects, moves, and promotions available, and what if there are a few failures? These could easily wipe out any gains.</p>
<h2>Driver: The responsible car owner</h2>
<p>My dad takes care of his cars. My brother takes care of his cars. I don't own a car anymore as here in The Netherlands I don't find it necessary. When I got my first car though my dad used to tell me to fill it up often so that the fuel tank would not rust. Easier said than done as a poor student but the lesson stuck at least.</p>
<p><img src="/img/posts/2018/adult-tank.jpg" alt="Adult tank trajectory" /></p>
<p>The lesson here, and the whole point of this flaky analogy, is that if you want to keep employees, they need to be topped up daily. Coming to work needs to be what tops up the fuel tank. There needs to be creative freedom. The people need to be smart and fun. The environment needs to foster learning and growth. The problems need to be challenging but solvable.</p>
<h2>Conclusion</h2>
<p>People apparently quit managers not companies. While I think this is true I don't think it is always personal. The bottom line is management (from CEO down) is custodian of the environment and culture. Although everyone is responsible to a certain extent, it is management's job to ensure it is heading in the right direction and to step in and take action if it is not.</p>
<p>DON'T let frustration linger with no solution
DO empower people to solve their own problems</p>
<p>DON'T think of employees as resources that just churn out value with no input
DO set aside time and encourage learning and personal growth</p>
<p>DON'T tell people how to do their job
DO give a vision for what the business wants to achieve strategically</p>
<p>Perks, events, money. These are not what keep people happy. If people are complaining about these things, chances are the culture is so bad that they are the only things your employees have to hold on to. They are symptoms, so only ignore them if you are addressing the more pressing cultural and environmental issues.</p>
<p>At the same time, find out what employees value already. It is usually easier to do more of something beneficial than stop the things that kill a healthy culture. It doesn't mean the bad doesn't need to be addressed but it will make the culture more robust and build trust.</p>
<p>Do you have your own DOs or DON'Ts for building or tearing down company culture? I would love to hear them.</p>
<h2>Credits</h2>
<ol>
<li>Social image by <a href="https://unsplash.com/@alexread">Alex Reed</a></li>
<li>Header by <a href="https://unsplash.com/@igorovsyannykov">Igor Ovsyannykov</a></li>
</ol>https://devonburriss.me/leader-archetypes/Leader Archetypes2018-04-26T00:00:00+00:00Devon Burrisshttps://devonburriss.me/leader-archetypes/<p>Why do we follow people? Why only some people? Character, situation, and value-alignment seem to jump out as obvious factors but what configuration of these allows us to suspend our own self-absorption to work toward a common goal? Are there archetypes that represent what leaders do to get people to follow them?</p>
<!--more-->
<p>Leadership is by no means something I have devoted a lot of time to compared to more technical learnings. Even when I have read up on leadership it has been either very specific to development teams or in a general life sense like "Seven Habits of Highly Effective People".</p>
<blockquote>
<p>So take the following with a pinch of salt. They are observations from Life not Leadership.</p>
</blockquote>
<p>These are three archetypes of leaders I have formulated for myself. This is based purely on introspection and observation of peers so I am sure someone more knowledgeable in leadership could point out all the depth of knowledge that I have missed.</p>
<h2>Napoleon</h2>
<p><img src="../img/posts/2018/napoleon-bonaparte-400.jpg" alt="Napoleon Bonaparte" class="img-rounded pull-left" width="300" style="margin-right: 1em;"> Skilled. Napoleon Bonaparte was a skilled military leader. It is what carried him to the opportunity to become emperor and it is what makes him memorable. People believed in him because he had repeatedly shown his ability to win wars. He used his skill to conquer most of Europe.</p>
<p>It seems people will follow if you have shown demonstrable skill in a field that they care about. Said skill provides authority in the field of expertise, but that authority does seem to bleed out into other areas. This is not entirely illogical, as competence in one area means at least the capacity for competence in other areas. As humans, though, we overestimate that competence, which is known as the <a href="https://en.wikipedia.org/wiki/Authority_bias">Authority bias</a>. The quintessential example here is weighting the opinion of a doctor more heavily than someone else's in a field that is not medicine.</p>
<p>In tech this is a known issue with competence in software development often leading to more management type roles. Regardless of how equipped the individual is to lead people, their competence as a developer will influence how willing people are to follow their lead.</p>
<h2>Robin Hood</h2>
<p><img src="../img/posts/2018/robin-hood-400.jpg" alt="Robin Hood" class="img-rounded pull-right" width="300" style="margin-left: 1em;">Protect. In legend, Robin Hood fought injustice on behalf of the people. As a leadership style this can be effective in growing influence: Robin fought for the people, but the people also loved and protected him.</p>
<p>As a leader if you are seen as serving your team by fighting for their happiness, freedom to operate independently, and freedom from hardships, you will earn respect. That influence in turn can be used to resolve conflicts, multiply productivity, and lead in directions you see as beneficial.</p>
<p>There are of course consequences of taking this style too far. As a leader it will often put you at odds with others in an organization, including your boss if you have one. It can lead to <a href="https://en.wikipedia.org/wiki/In-group_favoritism">In-group favoritism</a> within the team. Related to this it can have a tribal effect where the team is seen as an outsider (outlaws if you wish), this is exacerbated by the In-group favoritism. Finally, if done very poorly it could result in coddling of the team. This is easy enough to mitigate by favouring teaching members of the team to do things themselves rather than doing it for them.</p>
<h2>Martin Luther King Jr</h2>
<p><img src="../img/posts/2018/martin-luther-king-jr-400.jpg" alt="Martin Luther King Jr" class="img-rounded pull-left" width="300" style="margin-right: 1em;"> Inspire. Martin Luther King Jr inspired people to mobilize for a cause. He was charismatic. He had a vision of what he wanted. He used that charisma to mobilize people into action. He inspired people to believe vision could become reality.</p>
<p>There are many charismatic leaders to pick from but Martin was my first choice because it wasn't just about speaking in a way that inspired others. His vision was shared by those that had to live with the inequality and those who saw the inequality and wanted it to change. His message and methods were moral (non-violent protest). And then of course his speeches were inspiring. Inspiration is more than just charisma. It is about a vision that is clear and shared by others. Martin didn't use his charisma to convince people of his vision. It was their vision too and he inspired them to make it a reality.</p>
<p>In tech charisma is a rare thing but Martin didn't just give speeches. He gave speeches at protests he had helped organize. He created an environment where people could come together to work toward their shared vision. Is charisma really the most important element in making meaningful change then?</p>
<h2>Conclusion</h2>
<p>One final point. I don't think fitting into just one of these archetypes would make an effective leader. These are just archetypes I have noticed over the years that, when in play, cause people to follow. An effective leader would probably fit mainly into one of these but have strong elements of the others. That mix is what allows counteracting the ill effects of some styles.
And one final time: this is not my area of expertise... but I have run these archetypes past a few people and they seem to find them useful. Probably because they think me a competent software developer ;)</p>
<h2>Credits</h2>
<ol>
<li>Social image <a href="https://unsplash.com/@jwimmerli">Jean Wimmerlin</a></li>
<li>Napoleon and MLK Jr photo from <a href="https://pixabay.com/">Pixabay</a></li>
<li>Bow and Arrow photo by <a href="https://unsplash.com/@zoltantasi">Zoltan Tasi</a></li>
</ol>https://devonburriss.me/maintainable-unit-tests/3 tips for more maintainable unit tests2018-04-07T00:00:00+00:00Devon Burrisshttps://devonburriss.me/maintainable-unit-tests/<p>Although having a good collection of unit tests makes you feel safe and free to refactor, a bad collection of tests can make you scared to refactor. How so? A single change to application code can cause a cascade of failing tests. Here are some tips for avoiding (or fighting back) from that situation.</p>
<!--more-->
<blockquote>
<p>Important! This post contains example code. Don't copy/paste into production code.</p>
</blockquote>
<h2>Tip 1: Test behavior not structure</h2>
<p>The behavior of the system is what the business cares about and it is what you should care about as well from a verification point of view. If requirements change drastically, then changes to the system are expected, including the tests. The promise of good unit test coverage is that you can refactor with confidence that your tests will catch any regressions in behavior. However, if you are testing the structure of your application rather than the behavior, refactoring will be difficult, since you want to change the structure of your code but your tests are asserting that structure! Worse, your test suite might not even test the behavior, yet you have confidence in it because of the sheer volume of structural tests.</p>
<p>If you test the behavior of the system from the outside, you are free to change the implementation and your tests remain valid. I am not necessarily talking about integration-style tests but actual unit tests whose entry point is a natural boundary. At work we have use-case classes that form this natural entry point into any functionality.</p>
<p>So let's look at an example of structural testing, and see what happens when we try to make a change to the implementation details. As an example, we have a test against a <code>CreatePerson</code> use-case that creates a <code>Person</code> and persists it if it is a valid person object. The initial design takes in an <code>IPersonValidator</code> to determine whether the person is valid.</p>
<pre><code class="language-csharp">// tests
// test for invalid name omitted...
[Fact]
public void CreatingPerson_WithValidPerson_CallsIsValid()
{
    var name = "Bob";
    var people = Substitute.For<IPersonRepository>();
    var validator = Substitute.For<IPersonValidator>();
    var createPerson = new CreatePerson(people, validator);

    createPerson.With(name);

    validator.ReceivedWithAnyArgs(1).IsValid(Arg.Any<Person>());
}

// anemic domain entity
public class Person
{
    public Person(Guid id, string name)
    {
        Id = id;
        Name = name;
    }

    public Guid Id { get; set; }
    public string Name { get; set; }
}

// use-case
public class CreatePerson
{
    private readonly IPersonRepository personRepository;
    private readonly IPersonValidator personValidator;

    public CreatePerson(IPersonRepository personRepository, IPersonValidator personValidator)
    {
        this.personRepository = personRepository;
        this.personValidator = personValidator;
    }

    public void With(string name)
    {
        var person = new Person(Guid.NewGuid(), name);
        if (personValidator.IsValid(person))
        {
            personRepository.Create(person);
        }
        else
        {
            throw new ArgumentException(nameof(name));
        }
    }
}
</code></pre>
<p>Notice how we are asserting against a dependency (<code>IPersonValidator</code>) of the use-case (<code>CreatePerson</code>). Our test has structural knowledge of how <code>CreatePerson</code> is implemented. Let's see what happens when we want to refactor this code...</p>
<p>Your team has been trying to bring in some new practices like Domain-Driven Design. The team discussed it and the <code>Person</code> class represents an easy place to start learning. You have been tasked with pulling behavior into the <code>Person</code> entity and making it less anemic.</p>
<p>As a first try you move the validation logic into the <code>Person</code> class.</p>
<pre><code class="language-csharp">public class Person
{
    public Person(Guid id, string name)
    {
        Id = id;
        Name = name;
    }

    public bool IsValid()
    {
        if (Id == Guid.Empty) return false;
        if (string.IsNullOrEmpty(Name)) return false;
        return true;
    }

    public Guid Id { get; }
    public string Name { get; }
}
</code></pre>
<p>Looking at the use-case, we no longer need to inject <code>IPersonValidator</code>. Not only does what we assert have to change, the test has to change completely because we no longer have a validator to inject as a mock. These are the first signs of our tests being fragile.</p>
<p>Let's try to make our test focus on the behavior we expect instead of relying on the structure of our code.</p>
<pre><code class="language-csharp">// test for invalid name omitted...
[Fact]
public void CreatePerson_WithValidName_PersistsPerson()
{
    var name = "Bob";
    InMemoryPersonRepository people = Given.People;
    var createPerson = new CreatePerson(people);
    createPerson.With(name);
    Assert.Equal(name, people.All().First().Name);
}
</code></pre>
<p>Don't worry too much about <code>InMemoryPersonRepository people = Given.People;</code> for now; we will come back to it. All you need to know is that <code>InMemoryPersonRepository</code> implements <code>IPersonRepository</code>.</p>
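<p>As an aside, the <code>IPersonRepository</code> interface itself is never shown in this post. Inferred from how <code>CreatePerson</code> uses it, a minimal sketch might look like the following (an assumption on my part; note that <code>All()</code> may live only on the in-memory implementation, since the test declares the concrete type):</p>

```csharp
using System;

// Sketch only: inferred from usage, not taken from the original codebase.
public interface IPersonRepository
{
    // The only member CreatePerson needs.
    void Create(Person person);
}

// The Person entity from earlier, repeated so this sketch compiles on its own.
public class Person
{
    public Person(Guid id, string name)
    {
        Id = id;
        Name = name;
    }
    public Guid Id { get; }
    public string Name { get; }
}
```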
<p>Since we no longer need <code>IPersonValidator</code> and its implementation, we delete those. We also get to delete the test <code>CreatingPerson_WithValidPerson_CallsIsValid</code>, as we now have a better test, <code>CreatePerson_WithValidName_PersistsPerson</code>, that asserts the behavior we care about: the use-case creating and persisting a new person. Yay, less test code, better coverage!</p>
<p>At this point you might be saying "Wait! Unit tests are supposed to test one method, on one class". No! A unit is whatever you need it to be. I am by no means saying write no tests for your small implementation details, just make sure you are comfortable deleting them if things change. With our focus on behavior tests we can delete those detailed tests freely and still be covered. In fact, I often delete such tests after I am done developing the component, as I used TDD only for a fast feedback loop on the design and implementation. Remember that test code is still code that needs maintenance, so the more coverage for less code, the better.</p>
<p>So back to the code. What does our use-case look like now?</p>
<pre><code class="language-csharp">public class CreatePerson
{
    private readonly IPersonRepository personRepository;

    public CreatePerson(IPersonRepository personRepository)
    {
        this.personRepository = personRepository;
    }

    public void With(string name)
    {
        var person = new Person(Guid.NewGuid(), name);
        if (person.IsValid())
        {
            personRepository.Create(person);
        }
        else
        {
            throw new ArgumentException(nameof(name));
        }
    }
}
</code></pre>
<p>That's OK. We got rid of a dependency and moved some logic to our <code>Person</code> entity, but we can do better. On reviewing your pull request, someone in the team pointed out something important: you should be aiming to make invalid states unrepresentable. The business doesn't allow saving a person without a name, so let's make it impossible to create an invalid <code>Person</code>.</p>
<pre><code class="language-csharp">// person entity
public class Person
{
    public Person(Guid id, string name)
    {
        if (id == Guid.Empty) throw new ArgumentException(nameof(id));
        if (string.IsNullOrEmpty(name)) throw new ArgumentException(nameof(name));
        Id = id;
        Name = name;
    }

    public Guid Id { get; }
    public string Name { get; }
}

// use-case
public class CreatePerson
{
    private readonly IPersonRepository personRepository;

    public CreatePerson(IPersonRepository personRepository)
    {
        this.personRepository = personRepository;
    }

    public void With(string name)
    {
        var person = new Person(Guid.NewGuid(), name);
        personRepository.Create(person);
    }
}
</code></pre>
<p>Look at that! We refactored the implementation without having to update our test. It still passes without any changes.</p>
<p>This was a contrived example to illustrate the point, but I hope this tip helps you write more maintainable tests.</p>
<h2>Tip 2: Use in-memory dependencies</h2>
<p>You have already seen <code>InMemoryPersonRepository</code>, so this tip needs less explanation. The claim is simply that you can increase the maintainability of your tests by using in-memory versions of your dependencies a little more and mocking frameworks a little less.</p>
<p>I find in-memory versions of something like a repository that speaks to a database preferable to mocking frameworks for a few reasons:</p>
<ol>
<li>They tend to be easier to update than a mocking framework, especially if creation of the mocks is done in every test or fixture</li>
<li>Coupled with some tooling (see next tip) they lead to far easier setup and readability</li>
<li>They are simple to understand</li>
<li>They are a great debugging tool</li>
</ol>
<p>On the downside, they do take a little time to create.</p>
<p>Let's take a quick look at what one looks like for our code so far:</p>
<pre><code class="language-csharp">public class InMemoryPersonRepository : IPersonRepository
{
    private readonly IDictionary<Guid, Person> data;

    public InMemoryPersonRepository(IDictionary<Guid, Person> data)
    {
        this.data = data;
    }

    public IReadOnlyCollection<Person> All()
    {
        return new List<Person>(data.Values);
    }

    public void Create(Person person)
    {
        data.Add(person.Id, person);
    }
}
</code></pre>
<p>Super simple! Put in the work and give it a try. It may not be as sexy as a mocking framework, but it really will help make your test suite more manageable.</p>
<h2>Tip 3: Build up test tooling</h2>
<p>Test tooling in this context means utility classes that improve the readability and maintainability of the tests. A big part of this is making your tests clear about their setup while still keeping them concise.</p>
<p>Let's discuss a few helpers you should have in any project...</p>
<h3>In-memory dependencies</h3>
<p>This was already discussed above. I can't stress enough how much this improves maintenance and simplifies reasoning about tests.</p>
<h3>Builders</h3>
<p>Builders can be used as an easy way to set up test data. They are a great way to avoid dozens of near-identical setup methods while making the actual setup of each test clear, without diving into some setup method that looks like all the others.</p>
<pre><code class="language-csharp">public class InMemoryPersonRepositoryBuilder
{
    IDictionary<Guid, Person> data = new Dictionary<Guid, Person>();

    public InMemoryPersonRepositoryBuilder With(params PersonBuilder[] people)
    {
        foreach (Person p in people)
        {
            data.Add(p.Id, p);
        }
        return this;
    }

    public InMemoryPersonRepository Build()
    {
        return new InMemoryPersonRepository(data);
    }

    public static implicit operator InMemoryPersonRepository(InMemoryPersonRepositoryBuilder builder)
        => builder.Build();
}
</code></pre>
<p>A little trick is to put an <code>implicit</code> conversion to the class you are building up. Also take a look at <a href="https://github.com/nrjohnstone/Fluency">Fluency</a> for helping with the creation of builders.</p>
<p>A final note on this point. Just because I use builders a lot does not mean I completely throw mocking frameworks out the window. I just tend to use mocking frameworks for things I really don't care about and that really aren't likely to change. I also tend to use them within other builders rather than directly in tests. This gives way more control over the grammar that you use to set up your tests.</p>
<h3>Accessors</h3>
<p>I am not sure what else to call these, but it is useful to have a static class that makes access to builders and other types you use in setup simple. Typically I have <code>Given</code> and <code>A</code>.</p>
<pre><code class="language-csharp">/// <summary>
/// Handles creation of instances useful to testing like entities, value objects, settings, etc.
/// </summary>
public static class A
{
    public static PersonBuilder Person => new PersonBuilder();
}

/// <summary>
/// Handles the creation of builders that build external services for testing
/// </summary>
public static class Given
{
    public static InMemoryPersonRepositoryBuilder People => new InMemoryPersonRepositoryBuilder();
}
</code></pre>
<p>This allows me to write some very concise setup code. For example, if I needed to populate my person repository with 3 random people, I could do so like this:</p>
<pre><code class="language-csharp">InMemoryPersonRepository people = Given.People.With(A.Person, A.Person, A.Person);
// if I wanted another with a specific name
people.Create(A.Person.With(name: "Bob"));
</code></pre>
<p>For completeness, the <code>PersonBuilder</code> implementation:</p>
<pre><code class="language-csharp">public class PersonBuilder
{
    private Guid id;
    private string name;

    public PersonBuilder()
    {
        id = Guid.NewGuid();
        name = $"name {Guid.NewGuid()}";
    }

    public PersonBuilder With(Guid id)
    {
        this.id = id;
        return this;
    }

    public PersonBuilder With(string name)
    {
        this.name = name;
        return this;
    }

    public Person Build()
    {
        return new Person(id, name);
    }

    public static implicit operator Person(PersonBuilder builder) => builder.Build();
}
</code></pre>
<h2>Wrapping up</h2>
<p>So those are my 3 tips for making your tests more maintainable. I encourage you to give them a try. Without investment in their maintainability, tests can quickly become a burden rather than a boon. I have seen the practices above improve things in my own teams, and colleagues have converged on similar practices with the same positive results. Let me know if you find this helpful, or even if there are any points you strongly disagree with; I would love to discuss in the comments. Happy coding!</p>https://devonburriss.me/managing-code-complexity/Managing Code Complexity2018-04-06T00:00:00+00:00Devon Burrisshttps://devonburriss.me/managing-code-complexity/<p>When we write code it is often easy to get caught up in the implementation details. Communicating intent is imperative to making code understandable, and keeping code understandable is important for handling complexity.</p>
<!--more-->
<p>Even if you don't practice DDD (or the problem space does not warrant it) or functional programming, there are a few lessons to be learned from these disciplines that can be brought into any codebase.</p>
<h2>Tip 1: Describe the workflow at your entry point</h2>
<p>We have all heard the phrase "code is read many more times than it is written". What else is read a lot more than it is written? A book. Code is information dense, and any information-dense book has a Table of Contents.
At your entry point for executing a use-case against your system, it is important to have a high-level workflow that gives an overview of the complete use-case. This gives a developer reading from the entry point a "Table of Contents" to drill down into whatever step they need to.</p>
<p>In this context a workflow is the steps needed to do the work of the use-case. The entry point could be a controller or a program main. A pattern we use at work is to create a use-case specific class with a <code>Do()</code> or <code>Execute()</code> method on it. Play around with the naming though. I like the class to describe the use-case, while the method that triggers execution says something about the command coming in as a parameter, e.g. <code>new CalculateSomething().For(command.SomeNumber)</code>.</p>
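<p>To make the idea concrete, here is what such an entry point might look like. The class name, steps, and numbers are all invented for illustration; the point is that <code>For</code> reads like a table of contents of named steps:</p>

```csharp
using System;

// Hypothetical use-case class: the public method lists the workflow steps
// at one level of abstraction; each step is a named method you can drill into.
public class CalculateShippingCost
{
    public decimal For(int orderId)
    {
        var order = FetchOrder(orderId);   // step 1: gather the data
        var weight = TotalWeight(order);   // step 2: pure calculation
        return CostForWeight(weight);      // step 3: pure calculation
    }

    // Stubbed for the sketch; in a real system this would hit a data store.
    private (int Id, decimal[] ItemWeights) FetchOrder(int orderId)
        => (orderId, new[] { 1.2m, 0.8m });

    private decimal TotalWeight((int Id, decimal[] ItemWeights) order)
    {
        decimal total = 0;
        foreach (var w in order.ItemWeights) total += w;
        return total;
    }

    // Made-up flat rate per unit of weight.
    private decimal CostForWeight(decimal weight) => weight * 2.5m;
}
```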
<p><img src="/img/posts/2018/use-case.jpg" alt="use-case" /></p>
<blockquote>
<p>An easily understood use-case makes a great entry point for exploring a codebase</p>
</blockquote>
<p>Inside the method on your use-case you should strive to lay out the code as the steps needed to complete that use-case. Try to keep these steps at the same high level of abstraction, but not too high. What do I mean by too high? Describe actual, meaningful steps; avoid steps that do multiple things and can only be named something vague like <code>ProcessX</code>. If you find yourself naming a step like that, it is probably worth breaking it into smaller, more meaningful steps within the use-case.</p>
<p>What you really want to avoid here is scattering the steps needed to complete a use-case throughout an object hierarchy.</p>
<p><img src="/img/posts/2018/logic-stack.jpg" alt="scattered logic" /></p>
<blockquote>
<p>Sprinkling important application logic throughout a hierarchy makes it difficult to reason about</p>
</blockquote>
<p>By spreading the workflow through the hierarchy it is really difficult to see at a glance what the workflow does and then drill down from there into how. It also makes it difficult to compose in new functionality. If it is within the hierarchy you will often find yourself putting code for new features in weird places because that is where the data is available in the call chain.</p>
<h2>Tip 2: Prefer a longer workflow to a deep dependency chain</h2>
<p>This one builds on the previous tip, but where the previous tip focused on describing a workflow at the entry point, this one is more about cognitive load. Each step allows you to step into it and see the details. Each of these steps might itself have a few dependencies, as well as mini-workflows captured in each of those dependencies. This is just a rule-of-thumb, but if the depth of a single step's dependency hierarchy exceeds the width of the steps in a workflow, at least ask whether that should be 2 steps.</p>
<p>Why is this important? You do not want to have to dive very deep to understand what happens in a single step. Remember that the entry point gives a complete overview of all high-level steps. If a hierarchy is too deep it might become hard to reason about. This is of course just a rule-of-thumb and any single step could warrant a deep hierarchy to implement it well.</p>
<p><img src="/img/posts/2018/etl-workflow.jpg" alt="scattered logic" /></p>
<blockquote>
<p>As a rule-of-thumb; keep your workflow longer than it is deep</p>
</blockquote>
<h2>Tip 3: Make your external dependencies visible</h2>
<p>External dependencies like databases, files, and webservices make things difficult to reason about when they are nested deep in the dependency hierarchy, where it is often unclear that they are being called. Not only that, but it forces excessive use of abstractions purely for testing, which causes test-induced damage to the code.</p>
<p><img src="/img/posts/2018/deeply-nested-dep.jpg" alt="deeply nested external dependencies" /></p>
<blockquote>
<p>Deeply nested external dependencies make code more difficult to reason about and test</p>
</blockquote>
<p>By making your external dependencies part of the high-level workflow you communicate the dependencies clearly. This makes it clear what is required for the system as a whole but also what data is needed to complete the use-case. This might mean thinking a little differently about the problem. Instead of querying for something the moment you need it, you might fetch it at the start. You might say that seems wasteful, as some validation might fail. That argument could be turned around though: there is no point in validating input if the external dependencies needed to complete the use-case are not available.</p>
<p><img src="/img/posts/2018/highlight-dependencies.jpg" alt="highlight dependencies" /></p>
<blockquote>
<p>Make your external dependency calls clear in your high-level workflow</p>
</blockquote>
<h2>Tip 4: Push your external dependencies to the boundary</h2>
<p>Obviously every use-case is different, but if at all possible push your external dependencies to the beginning and the end of your workflow. This takes a page out of functional programming, where purity matters. What is meant by purity? Basically, we strive to have a function's result be determined only by the value of the arguments passed in. This makes functions easy to reason about as well as easy to test.</p>
<p><img src="/img/posts/2018/dependencies-on-boundary.jpg" alt="dependencies on the boundary" /></p>
<blockquote>
<p>Calls to databases, files, and webservices should be pushed to the boundary of the workflow</p>
</blockquote>
<p>I highly recommend watching <a href="https://www.youtube.com/watch?v=cxs7oLGrxQ4">From Dependency injection to dependency rejection by Mark Seemann</a> to see a detailed discussion on the topic.</p>
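<p>The impure-pure-impure "sandwich" described above can be sketched as follows. The workflow, names, and discount rule here are invented; the external calls are passed in as delegates to keep the sketch small:</p>

```csharp
using System;

// Read at the start, decide purely in the middle, write at the end.
public class ApplyDiscountWorkflow
{
    private readonly Func<int, decimal> loadOrderTotal;   // impure: e.g. database read
    private readonly Action<int, decimal> saveOrderTotal; // impure: e.g. database write

    public ApplyDiscountWorkflow(Func<int, decimal> load, Action<int, decimal> save)
    {
        loadOrderTotal = load;
        saveOrderTotal = save;
    }

    public void Run(int orderId, decimal discountPercent)
    {
        var total = loadOrderTotal(orderId);               // boundary: fetch input
        var discounted = Discount(total, discountPercent); // pure core
        saveOrderTotal(orderId, discounted);               // boundary: persist result
    }

    // Pure: the result depends only on the arguments, so it is trivial to test.
    public static decimal Discount(decimal total, decimal percent)
        => total - (total * percent / 100m);
}
```

<p>The pure middle can be unit tested with no mocks at all, while the boundary calls are visible at a glance in <code>Run</code>.</p>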
<h2>Tip 5: Bring business concepts up, push technical implementations down</h2>
<p>Keep checking that the code expressing important business logic sits as close to the root of the object hierarchy as possible. The business logic is what developers should see first, while the implementation details sit deep in the hierarchy, or at least on the boundary of the workflow.</p>
<p><img src="/img/posts/2018/business-concepts-up.jpg" alt="business concepts up implementation detail down" /></p>
<blockquote>
<p>Favour business concepts further up the dependency hierarchy and implementation details lower down</p>
</blockquote>
<h2>Tip 6: Use abstraction judiciously</h2>
<p>Abstractions are something you want at the seams of your application modules/components. Obviously you can use them elsewhere; certain design patterns call for them. The important thing is to use them where needed and not by default.</p>
<p>From a clean architecture point of view you would use them to implement Ports and Adapters, a nice way of keeping your domain logic clean of implementation details. Abstractions are part of your domain; implementations live in infrastructure dedicated to them.</p>
<p><img src="/img/posts/2018/abstractions.jpg" alt="abstractions" /></p>
<blockquote>
<p>Place abstractions at the seams</p>
</blockquote>
<h2>Tip 7: Use honest rather than simple types</h2>
<p>Create types to represent things like entity identity. <a href="http://devonburriss.me/honest-arguments/">There is a whole series on this</a>, but if you do nothing else, don't let your codebase be littered with <code>Guid</code>, <code>int</code>, <code>long</code>, <code>string</code>, or whatever else you use as entity identity or reference. When your code relies on <code>invoiceId</code>, <code>invoiceLineId</code>, and so on, it becomes too easy to swap 2 integers. Not only does it help prevent silly bugs, but using types a little more liberally can really help convey intent. Finally, it makes finding all references to where a type is used simple.</p>
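<p>A minimal "honest" identity type might look like this (a sketch; <code>invoiceId</code> and <code>invoiceLineId</code> come from the example above, the implementation is my own):</p>

```csharp
using System;

// Wrapping the raw Guid means the compiler stops you from passing an
// invoice id where an invoice line id is expected.
public readonly struct InvoiceId : IEquatable<InvoiceId>
{
    public Guid Value { get; }
    public InvoiceId(Guid value) => Value = value;
    public bool Equals(InvoiceId other) => Value.Equals(other.Value);
    public override bool Equals(object obj) => obj is InvoiceId other && Equals(other);
    public override int GetHashCode() => Value.GetHashCode();
    public override string ToString() => Value.ToString();
}

public readonly struct InvoiceLineId
{
    public Guid Value { get; }
    public InvoiceLineId(Guid value) => Value = value;
}

// A method like void Reorder(InvoiceId invoiceId, InvoiceLineId lineId)
// now fails to compile if the two arguments are swapped.
```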
<h2>And we are done</h2>
<p>I hope you find some of these tips useful. If you did, I would love to hear about it. If you have questions, feel free to leave a comment. If you think I am 100% wrong, I would love to hear your reasons. Above all, let's keep learning together and happy coding!</p>https://devonburriss.me/why-i-got-hooked-on-fsharp/Why I got hooked on F#2017-12-28T00:00:00+00:00Devon Burrisshttps://devonburriss.me/why-i-got-hooked-on-fsharp/<p>I have been asked a few times how I got started with F#, as more than a few people have found it difficult. I myself had a few false starts with it. It looked weird, I didn't know where or how to start, it was too different from the OO C-style languages I knew, and the tooling was just not as slick. I honestly think a better question is "Why did I start using F#?"</p>
<!--more-->
<h2>The WHY of it</h2>
<p>As I have matured as a developer I have come to appreciate coding practices that constrain my options in a way that minimizes potential errors. An infinitely flexible design is also one that allows all possible errors, known and unknown. The value of constraining future developers to "make illegal states unrepresentable" cannot be overstated as a design goal. I sometimes say "code like the future developer on this is an idiot, because current you is an idiot and future you will be too". To be clear, I say that to myself, about myself.</p>
<p>In OO we do this with constructors or factories (and hidden constructors), with encapsulation and smart APIs. This is a big part of the guidelines around aggregates in Domain-Driven Design (DDD) and keeping the aggregate consistent. We have a lot of patterns and practices in OO that help with this. A LOT! In fact it is quite difficult for new developers to get up to speed with them all. And since they are often struggling with the technical implementation of features, they are not worrying too much about the intricacies of the design and whether it leads future developers into the pit of success. We coach, and hopefully with good coaching they learn these things faster than we did through trial and error. I cannot help but wonder: is there a simpler way to get to well-designed software than absorbing all these patterns and practices? Note I said simple, not easy.</p>
<p>Functional programming (FP) with its mathematical basis makes some claims about correctness. Correctness is hard to be certain of when global state is constantly in flux, as it is in an OO centric application. FP revolves around functions, with inputs and outputs, with the same input always yielding the same output (for pure functions).</p>
<p>So basically the WHY can be broken down into 2 points:</p>
<ul>
<li>Correctness of the program</li>
<li>Fewer concepts need to be known to develop maintainable software</li>
</ul>
<p>I remember reading <a href="http://blog.ploeh.dk/2015/04/13/less-is-more-language-features/">this article of Mark Seemann's</a> and thinking this seems like a problem I have but I cannot quite relate to his conclusion. As we will see in the next section, it took me 2 years to get to a place where I could read that article and nod my head instead of scratch it.</p>
<h2>The HOW of it</h2>
<p>I was not keeping notes so these are the highlights I remember and that I think are important.</p>
<p>Since about 2013 I had been trying to learn and apply many of the technical approaches highlighted in DDD. This led to much more focus on types whose instance state can only be changed in a very controlled way. Not only that, but the types are descriptive of the domain and do not try to be too reusable, rather representing very specific use cases.</p>
<p>By the time 2016 rolled around I had heard of the promises FP made and had even "file new project"'ed an F# console application but with very little success. I resolved to give it a better try and started reading through <a href="https://fsharpforfunandprofit.com/books/#downloadable-ebook-of-this-site">fsharpforfunandprofit</a> and looking at a few <a href="https://www.pluralsight.com/search?q=F%23&categories=course">Pluralsight</a> videos.</p>
<p>Then I was contacted by <a href="https://www.manning.com/">Manning</a> to give feedback on an early draft of <a href="https://www.manning.com/books/functional-programming-in-c-sharp">Functional Programming in C#</a>. In it Enrico Buonanno gives a really deep introduction to functional concepts and patterns, showing both the implementation and usage of FP in C#. For me this was quite nice as I could absorb concepts without getting hung up on the syntax of some new programming language. These inspired a series of posts on Honest Types, namely <a href="/honest-arguments/">Honest Arguments</a>, <a href="/honest-return-types/">Honest Return Types</a>, and <a href="/better-error-handling/">Better Error Handling</a>.</p>
<p>At work my code started taking on a more functional style in C# and a few of our projects started making use of <a href="https://github.com/louthy/language-ext">Language Extensions</a>. I have a repository demonstrating some use cases <a href="https://github.com/dburriss/ElevatedExamples">here</a>.</p>
<p>By early 2017 I was writing small console apps in F# that would crunch some CSV files, or merge some PDF documents. These were not great and I realized that although I was getting used to F# syntax I was missing something key in how to structure my applications. The penny only dropped when watching a video from Mark Seemann on <a href="https://www.youtube.com/watch?v=US8QG9I1XW0">Functional architecture - The pits of success</a>. Another good one released later is <a href="https://www.youtube.com/watch?v=cxs7oLGrxQ4">From Dependency injection to dependency rejection</a>. Both of these talk about purity and composing applications so the code with dependencies on IO are on the outside. If this sounds like Clean/Onion/Hexagonal Architecture, you are absolutely right.</p>
<p>Now here we are at the end of 2017 and I have just finished <a href="https://fsharpforfunandprofit.com/books/#domain-modeling-made-functional-ebook-and-paper">Domain Modelling Made Functional</a> by Scott Wlaschin of <a href="https://fsharpforfunandprofit.com/">fsharpforfunandprofit</a> fame. It brings together so many deep topics in such an approachable way that it is difficult to compare to any book I have read before. It doesn't assume any knowledge, and yet I learned some F#, some FP, and some DDD, even though I have read multiple books dedicated to each of these topics. Scott develops a feature from beginning to end in a practical way that distills and teaches the core concepts of these advanced topics without getting bogged down in theory. I realize I am sounding like a fanboy here, but I would honestly recommend this book to teach FP and F# OR DDD. It teaches both brilliantly.</p>
<p>This December I posted <a href="/argument-for-fp/">my first F# themed blog post</a> as part of the <a href="https://sergeytihon.com/2017/10/22/f-advent-calendar-in-english-2017/">FsAdvent Calendar 2017</a>. I submitted <a href="https://github.com/giraffe-fsharp/giraffe-template/pull/4">my first PR to an F# open source project</a> and now I am winding down on my 2nd FP related blog post. I am looking forward to what the next year brings and all I have to learn.</p>
<h2>Further Reading (posts)</h2>
<ol>
<li>Mark Seemann has a brilliant post on how a <a href="http://blog.ploeh.dk/2015/04/13/less-is-more-language-features/">language can reduce the potential for errors</a></li>
<li>Scott Wlaschin on <a href="https://fsharpforfunandprofit.com/learning-fsharp/">learning F#</a></li>
</ol>
<h2>Further watching (videos)</h2>
<ol>
<li>Mark has an excellent talk on <a href="https://www.youtube.com/watch?v=US8QG9I1XW0">falling into the pit of success</a> and another on <a href="https://www.youtube.com/watch?v=cxs7oLGrxQ4">Dependency Rejection</a></li>
<li><a href="https://vimeo.com/162209391">Designing with Capabilities</a></li>
<li><a href="https://vimeo.com/113707214">Railway oriented programming</a></li>
</ol>
<h2>Recommended books</h2>
<ol>
<li><a href="https://fsharpforfunandprofit.com/books/#domain-modeling-made-functional-ebook-and-paper">Domain Modelling Made Functional</a></li>
<li><a href="https://fsharpforfunandprofit.com/books/#downloadable-ebook-of-this-site">fsharpforfunandprofit</a></li>
<li><a href="https://www.manning.com/books/functional-programming-in-c-sharp">Functional Programming in C#</a></li>
<li><a href="https://www.manning.com/books/real-world-functional-programming">Real-World Functional Programming</a></li>
</ol>
<h2>Credits</h2>
<ol>
<li>Header photo by <a href="https://unsplash.com/@johnmarkarnold">John Mark Arnold</a></li>
</ol>https://devonburriss.me/argument-for-fp/An argument for functional programming2017-12-08T00:00:00+00:00Devon Burrisshttps://devonburriss.me/argument-for-fp/<p>Have you ever thought you have the perfect tool for the job at work but it is not on the allowed list of languages or frameworks? At this stage you have a decision to make. Are you going to just move on and pick something that will meet less resistance or are you going to do the work to drive some change? In this post I make my case for functional programming in enterprise development, specifically <strong>F#</strong> if your current team expertise is .NET. The same arguments could be leveled for JVM based languages like Scala if your experience is in Java.</p>
<blockquote>
<p>This post is part of <a href="https://sergeytihon.com/2017/10/22/f-advent-calendar-in-english-2017/">FsAdvent Calendar 2017</a></p>
</blockquote>
<!--more-->
<h2>TL;DR</h2>
<p>In this post I drill down through the different reasons why a business (this applies to individual developers too) should consider broadening its language range in a carefully considered way. First I argue that being open to multiple languages can benefit your company's hiring as well as its pool of experience. Secondly I argue that functional programming opens up new perspectives while increasing the correctness of your applications in less time. As a bonus, functional programming filters even better for top developers in the hiring process. Lastly I make the case that if you already have .NET experience, F# is a natural choice for a functional language.</p>
<p>If this is all you are going to read, I want to leave you with an excerpt from a study of 728 projects on GitHub. I link to the full article at the end of the post.</p>
<blockquote>
<p>"The data indicates that functional languages are better than procedural languages; it suggests that disallowing implicit type conversion is better than allowing it; that static typing is better than dynamic; and that managed memory usage is better than unmanaged." - A Large-Scale Study of Programming Languages and Code Quality in GitHub</p>
</blockquote>
<h2>An argument for language diversity</h2>
<img src="../img/posts/2017/scrolls.jpg" alt="Scrolls" class="img-rounded pull-left" width="290" style="margin-right: 1em;">
Firstly I would like to make the case for why you should consider using different languages in your environment. Even if you don't buy that, I will make a case for at the very least hiring outside of the language expertise you need on the job.
<h3>Slim pickings</h3>
<p>Good developers are in short supply and the market is competitive. By opening up your hiring to other languages, or actually using multiple languages, you <strong>expand the pool of developers by a multiple of the number of languages you are willing to consider</strong>. This can be a huge advantage in the number of applicants you receive. Obviously sheer number of applicants is not the only concern and I will address this in a later point. The important point to buy in to here though is that a good developer in any language is a better pick than a poor or average developer in your language of choice. Language specific skills can be ramped up fairly quickly. Experience and professionalism on the other hand is hard earned and hard to come by. In my opinion the quality of a developer always trumps the language they use.</p>
<h3>Swag</h3>
<p>Let's face it. Your reputation as a company influences who you attract. For professional, open-minded developers that are not fan boys of a specific language, <strong>a company that is focused on hiring on quality and principles is far more appealing than a company that religiously hires on technical stack</strong>. <strong>Polyglot</strong> (fluent in multiple languages) is one of those <strong>buzzwords</strong> that started doing the rounds a while back in the programming space (in this case specific to programming languages). <strong>Being able to use it honestly in your recruitment is a real bonus</strong>.</p>
<h3>Skin the cat</h3>
<p>Different experience and different language features allow for different ways of solving problems. Often just having <strong>someone with a different background look at a problem allows them to come up with a solution in a new (for the team) and elegant way</strong>. This can have huge benefits to the team and company as a whole.</p>
<h3>Mindset is key</h3>
<p>At the rate that information-based industries change, it is impossible to know everything. More important is that you can acquire new skills efficiently and effectively. Selecting for people who pick up new languages is <strong>selecting for people who actively pursue skill acquisition</strong>. This is often the number one differentiator I see in hiring between average developers and awesome developers. When those languages span different programming paradigms, like imperative and functional, then you have someone who is really pushing their comfort zone to find better solutions. That mindset is hard to teach and one you really want on your team. At the very least it is someone who is willing to pick up what needs to be done on the job.</p>
<h2>An argument for functional programming</h2>
<img src="../img/posts/2017/eye.jpg" alt="Eye" class="img-rounded pull-left" width="280" style="margin-right: 1em;">
When I was new to software development I was always looking for new and shiny ways to do things. Waiting for that new feature. Over the years I have come to appreciate a more minimal and opinionated approach. Some tools are great for edge cases but are often not worth the hassle they cause when used liberally where they should not be. Minimizing the language features that allow you to make mistakes increases productivity and helps you fall into the pit of success. My path to functional programming was paved in development pain and failure. How so? When something proved painful I would look for ways to close off that path in general development, so that neither I nor any future developer could make the same mistake again. Functional programming increases the constraints in a good way.
<h3>Choice of 2, take it or leave it</h3>
<p>Most of the mainstream enterprise languages out there have the concept of 'null'. This has been described as the <a href="https://en.wikipedia.org/wiki/Tony_Hoare#Apologies_and_retractions">billion dollar mistake</a>. Functional programming has more <strong>elegant ways of representing the absence of data</strong> that encourage you to make illegal states unrepresentable. This is of course not the sole domain of the functional paradigm (I have <a href="/honest-return-types">written about it in the past for C#</a>) but null reference exceptions are rare in functional languages, and where found are usually due to interop concerns. Minimizing the chance of null removes a whole class of exceptions that could otherwise occur.</p>
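<p>As a minimal F# sketch (the names and data here are purely illustrative), an <code>option</code> forces the caller to handle absence at compile time rather than discovering a null at runtime:</p>

```fsharp
// A lookup that may fail returns an option rather than null
let tryFindUser (users: Map<int, string>) (id: int) : string option =
    users |> Map.tryFind id

let greet users id =
    match tryFindUser users id with
    | Some name -> sprintf "Hello, %s" name
    | None -> "User not found"   // the compiler forces us to handle this case
```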
<h3>Who moved my cheese?</h3>
<p>Another point where I experienced pain was with erratic or incorrect programs due to unintended state changes. Functional programming on the other hand pushes you toward immutability. A function has an input and an output, and that output does not hold a reference to the input. This makes code far more predictable. <strong>Immutability removes a whole class of errors that can occur due to unintended side-effects</strong>, which are often hard to find and fix.</p>
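<p>A small illustrative F# sketch (the record type is made up): records are immutable by default, so "updating" one produces a new value instead of mutating the original:</p>

```fsharp
// Records are immutable by default; "with" creates a new value
type Order = { Id: int; Quantity: int }

let order = { Id = 1; Quantity = 2 }
let updated = { order with Quantity = 3 }
// order.Quantity is still 2; no caller holding a reference sees a change
```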
<h3>The I is an illusion</h3>
<p>In the age of cloud computing, auto-scaling, and concurrency, <strong>not having shared mutable state means writing concurrent code becomes almost as simple as writing sequential code</strong> since there is no state to lock around. This makes functional programming great for scale as it keeps things simple for the developer. As a developer you don't need to be an expert in concurrency to get it right. Again, a whole host of concurrency bugs are simply not representable (in state).</p>
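<p>A tiny sketch of that simplicity: because a pure function touches no shared state, parallelising it in F# needs no locks at all (the workload here is trivial, purely for illustration):</p>

```fsharp
// A pure function: no shared state, so parallelism is safe by construction
let square x = x * x

let results =
    [| 1 .. 1000 |]
    |> Array.Parallel.map square   // no locks, no race conditions
```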
<h3>Purity matters</h3>
<p>Functional programming values something called purity. This is basically the characteristic that you pass something into a function and get something out, and no state has been mutated inside. So for each input value you will always get the same output value. Valuing purity means code that is not pure is pushed to the boundaries of the application, which is good. <strong>Purity ensures that the bulk of your codebase is easily testable</strong>.</p>
<h3>The new goto</h3>
<p>Since functional programming encourages purity, throwing exceptions is not something you regularly do. It only happens in truly exceptional cases. Functional languages make this less <a href="/better-error-handling">clunky than doing it in an OO-first language like C#</a>. What this means for code is <strong>there are no breaks in control flow, so it is easier to reason about</strong>. Easier to reason about means easier to maintain and fewer bugs.</p>
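<p>In F# this is commonly expressed with the built-in <code>Result</code> type, which keeps failure in the normal control flow; a minimal sketch (the error message is illustrative):</p>

```fsharp
// Errors as values: no exception interrupts the control flow
let divide x y =
    if y = 0 then Error "division by zero"
    else Ok (x / y)

match divide 10 2 with
| Ok value -> printfn "Result: %d" value
| Error msg -> printfn "Failed: %s" msg
```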
<h3>Signature move</h3>
<p>I have written before about <a href="/honest-arguments">honest arguments</a> and <a href="/honest-return-types">honest return types</a> and it is something I have witnessed make a difference in code. <strong>Not only is the code more descriptive but correctness is reinforced by the compiler</strong>. Functional programming brings the signatures of functions front and center. Once again, more possible errors negated.</p>
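<p>One common way to get honest signatures in F# is single-case unions; a small sketch with made-up domain types:</p>

```fsharp
// Single-case unions make arguments honest: you cannot swap them by accident
type EmailAddress = EmailAddress of string
type CustomerId = CustomerId of int

let sendWelcome (CustomerId id) (EmailAddress email) =
    printfn "Welcoming customer %d at %s" id email

// sendWelcome (EmailAddress "a@b.c") (CustomerId 1) would not compile
```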
<h3>Expanding horizons</h3>
<p>I touched on this in the section on language diversity but encouraging developers to learn <strong>a new paradigm equips them with more tools in the toolbox</strong>. I am not talking about a new framework or pattern but a new way of looking at a problem. A new perspective may yield a better solution.</p>
<h3>Short and sweet</h3>
<p><strong>Functional languages usually allow you to do more with less code</strong>. This is because they are declarative rather than imperative. Your code reads like a sentence telling you what it does, rather than a list of commands telling the machine each and every task to perform.</p>
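<p>A small illustrative example of the declarative style (the data shape, a list of name/score pairs, is assumed purely for the sketch); it reads as <em>what</em> is done, not <em>how</em>:</p>

```fsharp
// Declarative pipeline: filter, sort, project, as a readable sentence
let topNames people =
    people
    |> List.filter (fun (_, score) -> score > 80)
    |> List.sortByDescending snd
    |> List.map fst
```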
<h2>An argument for F#</h2>
<img src="../img/posts/2017/fsharp512.png" alt="fsharp" class="img-rounded pull-left" width="280" style="margin-right: 1em;">
So hopefully by this point I have convinced you (or you have convinced your boss) that having multiple languages is good. Not only that but choosing a functional first language makes good sense. My final step will be to convince you that F# should be that language.
<h3>No cold turkey necessary</h3>
<p>Although F# is a functional-first language, it is actually multi-paradigm. <strong>F# supports both functional and object-oriented paradigms. It has to, since it interops easily with C#</strong>. So technically developers could code in an OOP style while they learned the F# language. This is absolutely an option and a pretty low-risk way of introducing F#. The downside is that you might not reap the majority of the benefits I have mentioned thus far.</p>
<h3>Protect the ecosystem</h3>
<p>Part of what makes C# and .NET in general great is the tooling and libraries built up around it. <strong>Runtimes, IDEs, BCL, and library packages, they are all still available to you in F#</strong> since it is a .NET based language.</p>
<h3>Protect the investment</h3>
<p><strong>Your existing investment in libraries and business logic can be re-used as is without a re-write</strong>. You might want to write a small functional wrapper around them to make them fit the functional paradigm, but that is a nice-to-have. This means your current code is re-usable, and future code can still be written in whatever language a team is comfortable with and still interop in the same solution.</p>
<h3>Leading the pack</h3>
<p>F# has been ahead of the curve in the .NET ecosystem in a lot of ways. Many of the great language features added since C#'s initial Java-clone days have been inspired by F#. <strong>Features like generics, <code>async</code>/<code>await</code>, auto-property initializers, exception filters, expression-bodied function members, and pattern matching were all in F# first</strong> (or <a href="https://blogs.msdn.microsoft.com/dsyme/2011/03/15/netc-generics-history-some-photos-from-feb-1999/">worked on by the creator of F#</a>).</p>
<h3>Shoulders of giants</h3>
<p>Although F# has been leading the charge with open source for longer than probably any other Microsoft endeavour, it still has the backing of Microsoft as well as an active OSS community. F# was released by Microsoft Research in 2005 and has been on GitHub since 2010. It is led by the <a href="http://foundation.fsharp.org/">F# Software Foundation</a>, which is dedicated to advancing the language.</p>
<p>Then there is the actual OSS community. There are too many to name individually but some that you will either use or stand out because of their ambition are:</p>
<ol>
<li><a href="http://ionide.io/">Ionide</a> - An IDE plugin for Visual Studio Code and Atom that has been ahead of Visual Studio in supporting F# features, especially with the new <code>netstandard</code> stuff</li>
<li><a href="http://fsharp.github.io/FSharp.Data/">F# Data</a> - is a useful library for working with data from varied sources</li>
<li><a href="https://suave.io/">Suave</a> - An ambitious and full-featured web library and server that provides a functional-first programming model for web development</li>
<li><a href="https://github.com/dustinmoris/Giraffe">Giraffe</a> - a micro web framework that wraps the Asp.Net Core functionality for a more functional-first programming model</li>
<li><a href="https://mbrace.io">MBrace</a> - provides a simple programming model that opens up cloud computing in a way that initially seems like magic</li>
</ol>
<p>This is far from an exhaustive list. The point is there are mature and well supported projects out there because the F# community is dedicated and enthusiastic. The FsAdvent Calendar initiative is a great example of this.</p>
<h2>Caution</h2>
<p>It would be remiss of me not to leave you with a few cautionary points.</p>
<h3>Learning curve</h3>
<p>Functional programming, especially with non-C-like languages, can be pretty mind-bending when you first start. I wish I could find the quote but I think it was one of the JVM functional language designers (Scala or Clojure) who said something like "sacrificing future power and expressiveness for beginner ease of use is one of the worst traps language designers can fall into". I like the sentiment, but in terms of language popularity it seems to have some unfortunate downsides. However, those who stick with it and become fluent are usually die-hard converts because they have realized the usefulness of the paradigm. On the other hand, if most give up, the pool of developers will mostly consist of the smartest or most determined.</p>
<h3>Maturity of the team</h3>
<p>Language diversity requires a high level of maturity in your development team. A team lacking in maturity is more likely to pick something based on what they feel like using rather than assessing the fitness of the tool for the solution. Hiring in at least one or two experienced people to lead would probably be a good idea.</p>
<h3>Ramp up</h3>
<p>Ramping up slowly and allowing more people in the organization to get experience on low-risk projects is a gentle way of introducing F#. <a href="https://youtu.be/qPlYbHKvk4g?t=376">A developer could learn the syntax this way without taking the productivity hit of learning a new paradigm</a>. Mark Seemann has talked about how he initially just did OOP with F# and slowly incorporated functional ideas. In Mark's case I think he was leaning toward functional concepts anyway. Without a push to do so, a developer could remain a 100% OO programmer while using F#. Even worse, a developer doing this might then decide that F# provides no benefits. So a slow ramp-up comes with its own risks.</p>
<h3>Maturity of deployment</h3>
<p>With a new language you might need new deployment pipelines, so make sure you have this sorted on a technology you are familiar with before going crazy with choices.</p>
<h3>Pick smart</h3>
<p>Although I argue for a polyglot environment I am not making the case for ALL languages being allowed. These projects still need to be supported by the organization. Pick a small set of languages after considering a few aspects of them:</p>
<ol>
<li>Maturity of the language, ecosystem, and the community</li>
<li>Popularity of the language (no point jumping on a sinking ship)</li>
<li>Availability of developers</li>
<li>Expected salaries (you need to be competitive)</li>
</ol>
<h2>Conclusion</h2>
<p>So I covered reasons why you should consider more languages, why one of those should be functional, and hopefully convinced you to <a href="http://fsharp.org/">give F# a try</a>. This actually isn't an exhaustive list. Personally, I have found other reasons why learning F# has been great. Learning F# made it easier for me to jump into even more languages. Elm for instance was super low resistance. Also F# has a bunch of really cool features like Type Providers, Computation Expressions, and more that blow your mind when you come across them.</p>
<h2>Further Reading</h2>
<ol>
<li><a href="https://cacm.acm.org/magazines/2017/10/221326-a-large-scale-study-of-programming-languages-and-code-quality-in-github/fulltext">A Large-Scale Study of Programming Languages and Code Quality in GitHub</a></li>
<li><a href="http://evelinag.com/blog/2014/06-09-comparing-dependency-networks/">Comparing F# and C# with dependency networks</a></li>
<li>Mark Seemann has a brilliant post on how a <a href="http://blog.ploeh.dk/2015/04/13/less-is-more-language-features/">language can reduce the potential for errors</a></li>
<li>Mark has an excellent talk on <a href="https://www.youtube.com/watch?v=US8QG9I1XW0">falling into the pit of success</a> and another on <a href="https://www.youtube.com/watch?v=cxs7oLGrxQ4">Dependency Rejection</a></li>
<li>Scott Wlaschin has an excellent <a href="https://fsharpforfunandprofit.com/posts/low-risk-ways-to-use-fsharp-at-work/">series on low risk ways to start using F# at work</a></li>
</ol>
<h2>Credits</h2>
<ol>
<li>Header photo by <a href="https://unsplash.com/@nhoizey">Nicolas Hoizey</a></li>
<li>Social photo by <a href="https://unsplash.com/@groosheck">Michał Grosicki</a></li>
<li>Scrolls photo by <a href="https://unsplash.com/@sindreaalberg">Sindre Aalberg</a></li>
<li>Eye photo by <a href="https://unsplash.com/@amandadalbjorn">Amanda Dalbjörn</a></li>
</ol>
<p><a href="https://devonburriss.me/touched-by-god/">Touched by God</a> (2017-11-02, Devon Burriss)</p>
<p>In science and life, sometimes things happen that we cannot explain. Just a few hundred years ago most of what could not be explained was attributed to the supernatural. Thankfully a lot of that mystery has been peeled back, opening us up to bigger and more fundamental questions about the universe. Compared to areas like cosmology and particle physics, things like the human mind and consciousness remain relatively unexplored by science. This leaves some questions unanswered about our place in this expanding universe. In this post I explore and contrast some of my early religious spiritual experiences to my recent self-observations in mindfulness.</p>
<!--more-->
<h2>A brief history...</h2>
<p>I grew up in a Christian household, going to church for as far back as I can remember. As a teenager I started attending a more charismatic church with some friends, which I will go into in a bit. After school I studied computer science and physics with a few other things thrown in. After university I also spent a year studying theology part-time. It was while studying theology that I realized I believed what I was raised to believe and I should look at both the religious and scientific alternatives to my world view. <em>The Truth</em> would of course hold up to any scrutiny.</p>
<h2>Touched by God</h2>
<img src="../img/posts/2017/touched-by-god-flame.jpg" alt="Fire" class="img-rounded pull-left" width="280" style="margin-right: 1em;">
For those who have never been involved in this sort of thing it is hard to explain. A meeting at a charismatic Christian event usually goes something like this. The minister will open with a prayer and a reading from the Bible. We would then launch into around 40 minutes of "Praise and Worship". This would usually start as vibrant upbeat music and end off with more emotive music. These times would often correspond with feelings of joy and awe as it felt like the Holy Spirit was among us. They could often get quite weird for the uninitiated as people would laugh uncontrollably, jump around, and speak out in "tongues" (odd sounds that only angels and those gifted with the ability could understand). The service would then continue with a short sermon and then we would go into a period of "ministry". This usually entailed people coming to the front and being prayed for while the band softly played in the background. The weirdness of the "praise and worship" is usually overshadowed in this "ministry" time for those who are not accustomed to it. People would laugh and cry uncontrollably. People would gather and pray in "tongues" for each other. People would prophesy about the future of the people standing there, and, probably most extraordinary to the uninitiated, there would be those "slain in the spirit". This is a phenomenon where people fall over backward and then either just lie there basking in the magnificence of God, maybe laughing at the wonder of it, crying as they are overwhelmed, or convulsing as demons flee before the Holy Spirit.
<p>I wanted to give you a brief picture of what it is like to be in that setting. These few words do not bring the full reality of what it feels like to be involved in this. To those who have experienced it, the absolute reality of it is difficult to explain with anything other than the supernatural. There are of course psychological explanations for these effects but I instead wanted to look at them in light of a more recent experience...</p>
<h2>Within myself</h2>
<p><img src="../img/posts/2017/sand-hand.jpg" alt="Sand hand" class="img-rounded pull-left" width="280" style="margin-right: 1em;margin-bottom: 1em;">A few years back I started practicing mindfulness. At the time it was infinitely valuable in coping with a new job in a new city and a far more skeptical view of the metaphysical. This skepticism had a profound effect on my view of mortality, which all of a sudden became very real to me, causing a lot of anxiety. Mindfulness helped me come to terms with this using <a href="https://en.wikipedia.org/wiki/Maranasati">Maranasati</a>. This is a practice where you contemplate and visualize the reality of dying, death, and all that comes after. To be clear, I did not have any dead bodies to look at. Only imagination.</p>
<p>I keep notes on sessions that stand out. Or just capture how I was feeling before and after my meditation. I also <a href="https://www.heartmath.com/science/">measure</a> my heart rate variability (HRV) while meditating.
I would like to compare the "spiritual" experience recounted earlier against more recent "no-metaphysics-here" experiences. Here is the quote from the notes after the experience I want to recount.</p>
<blockquote>
<p>Started off stressed. Was a good session. At the end I started a weird feedback loop where I was aware of being a consciousness in my head and started an elation feedback loop that I could imagine getting quite... spiritual...</p>
</blockquote>
<p>This was a strange experience to say the least. All of a sudden I really felt like I was observing my thoughts arise and disappear as I chose not to follow them. Not only that but after a while I felt like I had hooks into my mental state with levers attached that I could pull on. So I pulled. I cranked up my feeling of peace and euphoria and lo and behold I felt those things acutely. With absolute clarity of mind I was sitting cross-legged on my bed laughing wholeheartedly at the elation I was feeling. When I realized the weirdness of the situation I flipped the lever and turned it off. Let's take a look at the graph of my meditation at the time of this. Typically the graph drops down into the red by the 10 minute mark because after settling in for a few minutes I start a "loving kindness meditation" exercise. By the end of that I have moved into wishing happiness onto people who can sometimes aggravate me so things have gone downhill by this point.</p>
<p><img src="/img/posts/2017/meditation-results1.png" alt="Meditation Results 1" /></p>
<p>One more thing to mention about the graphs. These graphs in themselves don't measure anything directly related to the mind. They are just a helpful indicator of calm and focus. For me personally a 7 is an extremely high level of coherence.</p>
<p>As you can see this feeling of peace and euphoria lasts less than a minute before I am shocked out of it. At this point I contemplate what has just happened. What was that? That reminds me a lot of metaphysical experiences I had at charismatic religious events. Can I control it? Although there are many similarities to the previous church experiences the obvious presence of mind and control was in stark contrast to those experiences. At this point my interest was piqued as to how much control I had of this experience, so I dove back in. Here is the graph continued...</p>
<p><img src="/img/posts/2017/meditation-results2.png" alt="Meditation Results 2" /></p>
<p>I could reproduce it! As you can see this time I was not as easily scared off by the experience and allowed myself to linger in that state for a little while.</p>
<p><em>So what does this mean?</em> I have been able to reproduce this on most of the subsequent times I have tried since then. This has not been very many times, as after the initial novelty wore off I didn't see too much value other than the insights that the experience gave me over the control I can exercise over my own emotions and state of mind. The euphoria generated from this is much the same as the previous religious experiences and the significance only slightly more. Much like the religious experience the transcendent nature of the experience quickly fades and leaves little impact after the feeling has passed other than renewed assuredness of the "reality" of the spiritual to affect the natural world. The meditative experience however did teach me something about the degree to which I can exercise control over my own emotions and give me a glimpse at what is possible at the extremes of emotion in a controlled and contemplative state. Of course maybe this is just another delusion but at least I am willing to entertain that possibility now.</p>
<h2>Conclusion</h2>
<p>I recently read Lawrence Krauss' <strong><a href="http://www.simonandschuster.co.uk/books/The-Greatest-Story-Ever-Told-So-Far/Lawrence-Krauss/9781471158377">The Greatest Story Ever Told...So Far</a></strong> and he does an awesome job of capturing the incremental nature of how science has built its current edifice of knowledge over hundreds of years. This is in the realm of physics alone. Questions about the soul, consciousness, free will, and <a href="/moral-behavior-is-rewarded">morality</a> tend to be ignored by science and fall to philosophers, mystics, religious leaders, and sometimes the odd psychologist. Neuroscientists and psychologists seem to be gaining more interest in answering these questions but the field is still young and hesitant to tread in the domain of the religious. That does not mean we need to wait for science to tell us how our mind works. We can start right now to explore it in a subjective way that can still possibly yield objective facts. Each mind is unique in many ways and although we all fall prey to the same biases to one degree or another, we can start investigating the nature of those similarities and differences right now. Not only that, but we can exercise the "muscles" of our mind and thus learn to exercise some measure of control over our thoughts.</p>
<p>If your interest was piqued by this post I highly recommend you read <a href="https://www.samharris.org/waking-up">Waking Up</a> by Sam Harris or if reading is not your thing (well done on getting this far) he has an awesome free <a href="https://www.samharris.org/podcast">podcast</a>.</p>
<h2>Credits</h2>
<ul>
<li>Header photo by <a href="https://unsplash.com/@grakozy">Greg Rakozy</a></li>
<li>Social photo by <a href="https://unsplash.com/@viniciusamano">Vinicius Amano</a></li>
<li>Content photo <a href="https://unsplash.com/@kunjparekh">Kunj Parekh</a></li>
</ul>
<p><a href="https://devonburriss.me/moral-behavior-is-rewarded/">Hypothesis: Moral behavior is rewarded</a> (2017-10-19, Devon Burriss)</p>
<p>I don't consider myself an immoral person. Unless you count those days when I thought there was an all seeing being in the sky watching that I didn't break the archaic rules laid out by men thousands of years ago who also thought it was ok to own slaves and commit genocide. Since then I have not given too much thought to morality other than my general rule of "Don't be a dick". A few weeks back I made a commitment to hold myself to a higher moral standard. Not only that but I laid out some experimental guidelines of rules that I would follow and a hypothesis of what I expect. Finally, I would tell people about it so they could hold me to my commitments.</p>
<!--more-->
<h2>Why?</h2>
<p>A few months back I read <a href="https://www.samharris.org/books/the-moral-landscape">The Moral Landscape</a> by Sam Harris and was really challenged by the idea of using the wellbeing of sentient beings as a measure of morality. I found it not only compelling but also something concrete to measure myself against that didn't resort to mysticism. It is a brilliant read and I highly recommend it. My own life was silhouetted against the moral landscape. It is up to me to decide just how bright to make it. I figured I would map out my path in the hope others might find it useful for their own wellbeing.</p>
<h2>Hypothesis</h2>
<blockquote>
<p>Moral integrity will increase my wellbeing and the wellbeing of those around me.</p>
</blockquote>
<p>Is there any reason to think this is reasonable? Most people want wellbeing but we are poorly wired for it. That doesn't mean we don't want it for ourselves, our loved ones, and the human race in general. Of course there are just some people who behave like dicks and don't have the self-awareness to care that they hurt others. Then of course there are sociopaths and psychopaths. This is a dysfunction compared to normal human behavior and experience, so let's discount that.</p>
<p>Human civilization has become more and more... civilized over time. We treat each other better and care more about people's wellbeing. I don't think it is unreasonable, then, to attribute that to a move away from violence toward discourse. A move toward attributing respect and equality to others. I come from South Africa and now live in The Netherlands. The differences in wellbeing I see are like night and day. And I attribute this largely to the respect and equality given to each citizen in a civil society.</p>
<h3>Expectations</h3>
<p>I had listed a few expectations when I decided to do this:</p>
<ul>
<li>It would lead to some awkwardness</li>
<li>I would be happier with myself</li>
<li>People would like me more because I am trustworthy</li>
</ul>
<h3>Guidelines</h3>
<p>The guidelines lay out some clear action plans that I can follow when situations arise. This ends up being quite an important point as it is easy to convince yourself to try to take an easy way out of a difficult situation.</p>
<ol>
<li>I will not lie in any circumstance other than a life threatening situation.</li>
<li>I will not engage in gossip.</li>
<li>I will try my utmost to treat everyone respectfully at all times.</li>
<li>I will not steal.</li>
<li>I will try to maintain my equanimity at all times.</li>
</ol>
<h4>Lying</h4>
<p>This isn't something that I felt I engaged in a lot. Mostly it was in the context of <em>gossip</em>. That would lead to having to pretend you don't know something you do. Other things like being non-committal on things to avoid uncomfortable discussions. Half-truths to save face. Platitudes to avoid hurting people's feelings. This last one might have you wondering "Surely that is ok?!". I maintain not, as it is a slippery slope. This is not to say that you should be hurtful. You can still speak the truth while being compassionate and respectful of the other person's feelings. I honestly believe feedback is good for most people (if someone is in a really unhealthy state you might want to pull back even more). Even in the case of someone in a fragile state of mind, rather than "You really could have done better at that." you could just say "Let's book some time when you are feeling better. It isn't important now."</p>
<h4>Gossip</h4>
<p>This was the main point that really started to make me uncomfortable about my conduct. I work in a large organization and someone is always frustrated by someone else, myself included. This would often lead to complaining, mostly of the non-constructive sort. At first I thought it cathartic but the more I reflected on it, it actually seemed a toxic part of my life.</p>
<p>Sometimes in work or personal life talk about others is inevitable. My commitment here has been to not say or agree with anything that I have not already said to someone's face, or that I will not schedule time to say to their face afterwards. Knowing you will have to say things to someone's face is a great way of moderating yourself and a constant prompt of whether you actually want to be engaging in a conversation.</p>
<h4>Respect</h4>
<p>If the goal is wellbeing for all, treating people well is paramount. I don't have too much more to say on this. My original guideline of "Don't be a dick" works well enough for me. The only extra thing I find useful to meditate on as often as possible here is that everyone has a story and is generally just trying to do the best they can just like you. This means people are doing things for reasons that are important to themselves even if they are difficult for you (or sometimes even themselves) to articulate. Remember: you are nothing special, just like me ;)</p>
<h4>Stealing</h4>
<p>This is a subtle one. Obviously I am not out there robbing banks. If I was I <a href="https://www.youtube.com/watch?v=Do3PQR6Tvss">definitely would not be blogging about it</a>! There are other ways this could be interpreted, such as taking credit for something someone else has done.<br />
Not only that but in this digital age it is really easy to share or download media that you do not own. This is a tough one for some. Myself included.</p>
<h4>Equanimity</h4>
<p>I had actually found Sam Harris as an author through his neuroscience and mediation interests rather than his challenges against religious ideology that he is (in)famous for. I am not going to get into discussions of <em>self</em> here but being self aware forms a big part on following through on all the other points mentioned here not to mention it does wonders for your own wellbeing. Just observing your thoughts, learning techniques for focusing your attention, and <em>deciding</em> to act with intent can result in dramatic improvements in wellbeing.</p>
<h2>Results so far</h2>
<p>So one thing I can say for sure is this has led to some hard conversations. I will also say it gets easier. I still mess up on these points often but I definitely believe it is having a curbing effect on my behavior in these areas.</p>
<p>Subjective observations:</p>
<ul>
<li>I am much more cognizant of what I say which I think has curtailed the amount of things I say that I later regret.</li>
<li>I am more aware in difficult conversations that there is more than just my side in an argument. I think this has actually made me more effective at convincing people of the merits of my own points.</li>
<li>I believe my feedback, even in difficult conversations, has been appreciated. I have actually received this feedback directly.</li>
<li>I seem to be perceived as trustworthy. This was said to me today which I really appreciated.</li>
<li>I am starting to find the small blunt responses to questions that lead to awkwardness easier to just say and not worry about.</li>
</ul>
<p>I will see if I can get some data on this from friends and coworkers. It would be interesting to actually be able to plot perception. I will also try to post my own subjective experience of this further down the road.</p>
<h3>Credit</h3>
<ul>
<li><a href="https://www.samharris.org/">Sam Harris</a></li>
<li>Photo by Jens Lelie on <a href="https://unsplash.com/photos/u0vgcIOQG08">Unsplash</a></li>
</ul>
<p><a href="https://devonburriss.me/stop-comparing-eq-and-iq/">Stop comparing EQ and IQ</a> (2017-10-17, Devon Burriss)</p>
<p>I see this comparison come up online and at work a lot. The implication being that if we want success we look for people with good EQ skills and if they have weak technical skills we can teach them. Sure. These are both skills but if everyone is good at communicating but rubbish at the technical stuff, guess what the quality is like...</p>
<!--more-->
<p><em>These ideas are my own and do not represent the views of my employer.</em></p>
<p>I realise that this post has the potential to annoy or offend. Sadly I also don't expect to change too many minds. I guess I am hoping for this to be cathartic and that it will allow me to move on without being triggered in the future (I should meditate more). The idea that general intelligence is fixed and is decided for us in a genetic lottery does not sit well with us. Including me. Although I would like to be smarter I think (with my limited intellect) that the closer you align your reality with actual reality, the less suffering you will inflict on yourself and others.</p>
<h2>TL;DR</h2>
<p>IQ vs EQ is a nonsensical comparison. EQ is dependent on mental faculties correlated to IQ such as verbal comprehension, working memory, perceptual organization, and processing speed. EQ is also very dependent on skills learned, while IQ is correlated with the speed and proficiency of skill acquisition. In this post I build on this to try to show that IQ should be a fairly good indicator of potential EQ. It should not be surprising that both IQ and EQ are good indicators of success. They are likely BOTH found in successful individuals.</p>
<h2>Setting the groundwork</h2>
<p>First let me make a few assertions. These are assertions that are either consensus in the SCIENTIFIC community or seem to be the view of the majority of publishing EXPERTS in the related fields. I contrast this to the general public, where there is plenty of myth and confusion. Things like "the 8 intelligences", "street smart", and "EQ" resonate with our desire to be able to work hard and be better. I do not wish to undermine this, and I am by no means asserting IQ is the sole determining trait for success. I really mean this: if I thought about it often, I would seldom assume myself the smartest person in any given room. Which could be depressing. Instead, learning is important. That aside; when confronted with facts versus what I wish was true, on a good day I try to choose the facts (or as close as we have them from science when studying the brain).</p>
<h3>Assertion 1: Everything about us has a physical explanation</h3>
<p>No metaphysics apply. There is no soul that makes us think a certain way. All our thinking happens in our brain due to biological processes that are possibly mysterious to us but are due to physical systems within our body.</p>
<h3>Assertion 2: Intelligence is explained by genetics</h3>
<p>IQ is a fairly good normalized measure of general intelligence. Following on from assertion 1, it is a trait about us that is coded into our DNA. To make this practical: for the smartest people of our time, it was clear that they were special by as early as 2 years of age. Kim Ung-yong, for example, with an IQ of 210, was fluent in four languages by age 2. There is little chance that child rearing was the only factor in this.</p>
<h3>Assertion 3: IQ is a good indicator of ability for skill acquisition</h3>
<p>Studies show a very positive correlation between IQ and skill acquisition. This is in both physical and mental skills. Again we are not dealing in absolutes here but the studies do show positive correlations.</p>
<h3>Assertion 4: Skill acquisition is a major contributor to success in life</h3>
<p>I don't have a study to back this one up. There do seem to be studies directly linking IQ to success, and I am hypothesizing this is due to the skills that a high IQ would allow you to quickly learn and master.</p>
<h3>Assertion 5: EQ is dependent on IQ (or at least correlated)</h3>
<p>So if EQ is made up of problem solving, perception, verbal communication and comprehension, and many other things along these lines, it shouldn't be hard to accept that EQ correlates to general intelligence, which is what the science shows.</p>
<h2>Conclusion</h2>
<p>It seems reasonable then that there is a causal (or not so causal) relationship between IQ and EQ.</p>
<blockquote>
<p>It therefore follows that IQ is a good indicator of either having high EQ or being able to quickly improve EQ.</p>
</blockquote>
<p>Life is messy and for all our advances we are still in the dark on a lot of the processes that operate in our brain. So this is not supposed to be a post saying that the smart are destined to succeed and the rest are just here to witness it. Far from it! Determination, creativity, compassion, and many other traits make us who we are and allow us to achieve great things.</p>
<p>As a software developer though, stop telling me I don't need to work with smart people, just good communicators. It takes a lot to convince me that someone with poor people skills is smart. So stop making this a zero sum game. It is not. Or maybe it is and I am just not smart enough to realise it, and I am so poor at communicating I can't convince anyone otherwise.</p>
<h2>Resources</h2>
<ul>
<li><a href="http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html">psych.utoronto.ca</a></li>
<li><a href="http://theconversation.com/what-chess-players-can-teach-us-about-intelligence-and-expertise-72898">theconversation.com</a></li>
<li><a href="https://www.researchgate.net/publication/307874653_The_relationship_between_cognitive_ability_and_chess_skill_A_comprehensive_meta-analysis">researchgate.net</a></li>
<li><a href="http://www.sciencedirect.com/science/article/pii/S1877042813017096">sciencedirect.com</a></li>
<li><a href="http://www.memory-key.com/research/news/correlation-between-emotional-intelligence-and-iq">memory-key.com</a></li>
</ul>https://devonburriss.me/yoda-wants-you-to-be-a-functional-programmer/Yoda wants you to be a functional programmer2017-06-10T00:00:00+00:00Devon Burrisshttps://devonburriss.me/yoda-wants-you-to-be-a-functional-programmer/<p>This one is just for laughs but technical writing doesn't always have to be serious.<br />
I was double checking a Yoda quote for my previous post and it got me thinking about how many Yoda quotes could be applied to the functional programming (FP) paradigm.<br />
Star Wars and programming are meant to go together.</p>
<!--more-->
<blockquote>
<h2>“Size matters not. Look at me. Judge me by my size, do you? Hmm? Hmm. And well you should not.” – Yoda</h2>
</blockquote>
<p>Functional programming involves small building blocks of functions that you compose to make more specific functions and so on. The functions and the types tend to be small and stay small.</p>
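<p>As an illustrative sketch (in TypeScript rather than a .NET language, and not from the original post), composing small single-purpose functions into a slightly more specific one might look like this:</p>

```typescript
// Small, single-purpose functions...
const trim = (s: string): string => s.trim();
const lower = (s: string): string => s.toLowerCase();

// ...composed into a slightly more specific function.
const normalize = (s: string): string => lower(trim(s));

const clean = normalize("  Hello "); // "hello"
```

<p>Each building block stays small, and the composed function is itself just another small block ready for further composition.</p>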
<blockquote>
<h2>“Do. Or do not. There is no try.” – Yoda</h2>
</blockquote>
<p>Functions always return a value. No <code>void</code> here.</p>
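<p>To illustrate the contrast (a hypothetical sketch in TypeScript, purely for demonstration): a <code>void</code> procedure does something but hands nothing back, while an expression-style function always evaluates to a value the caller can use.</p>

```typescript
// "Do not": a void-returning procedure gives the caller nothing to work with.
function logDiscount(price: number): void {
  console.log(price * 0.9);
}

// "Do": an expression-style function always evaluates to a value.
const applyDiscount = (price: number): number => price * 0.9;

const discounted = applyDiscount(100); // 90
```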
<blockquote>
<h2>“Much to learn you still have…my old padawan. This is just the beginning!” – Yoda</h2>
</blockquote>
<p>FP is a new paradigm. Learning a new paradigm is way harder than learning a new language. It is a very useful tool to have in your toolbox though.</p>
<blockquote>
<h2>“Truly wonderful, the mind of a child is.” – Yoda</h2>
</blockquote>
<p>When learning FP, don't bring your object-oriented baggage. Embrace that this is something different.</p>
<blockquote>
<h2>“Always pass on what you have learned.” – Yoda</h2>
</blockquote>
<p>Another reference (pun intended wink wink) to functions in FP always returning something.</p>
<blockquote>
<h2>“Once you start down the dark path, forever will it dominate your destiny, consume you it will.” – Yoda</h2>
</blockquote>
<p>This one was tough. Do I use this to represent that once you grok a paradigm and see its merits, you can't unlearn that? I think instead this should be a warning against letting yourself think any one paradigm is the best or only one that matters (I am looking at you, OOP).</p>
<blockquote>
<h2>“Mind what you have learned. Save you it can.” – Yoda</h2>
</blockquote>
<p>FP is a new paradigm and will make you a better developer.</p>
<blockquote>
<h2>“You will find only what you bring in.” – Yoda</h2>
</blockquote>
<p>In FP you don't usually store state. You pass along what you need in arguments.</p>
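<p>A small illustrative sketch (TypeScript, hypothetical names, not from the original post): instead of hiding a running total inside an object, the functional style threads the state through arguments and return values.</p>

```typescript
// Object-oriented style: the running total hides inside the object as mutable state.
class Counter {
  private total = 0;
  add(n: number): void { this.total += n; }
  get(): number { return this.total; }
}

// Functional style: the "state" travels through the arguments and the return value.
const add = (total: number, n: number): number => total + n;

const sum = [1, 2, 3].reduce(add, 0); // 6
```

<p>Because the pure <code>add</code> depends only on its inputs, it is trivial to test and to compose, with no setup of hidden state required.</p>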
<blockquote>
<h2>“Attachment leads to jealously. The shadow of greed, that is.” – Yoda</h2>
</blockquote>
<p>I could make a point here about storing state but I think it is more important to reiterate the warning about the dark side of being too attached to just one paradigm. Find balance in the force.</p>
<h2>Conclusion</h2>
<p>So that's it for Yoda on functional programming. Hope it made you think and I hope it made you smile.</p>https://devonburriss.me/productivity-tips-1/Productivity Tips2017-06-09T00:00:00+00:00Devon Burrisshttps://devonburriss.me/productivity-tips-1/<blockquote>
<p>“Do. Or do not. There is no try.” - Yoda</p>
</blockquote>
<p>Time has become a precious commodity for me lately. Between management meetings, team meetings, and then actually trying to improve small things in process and code, it is easy to lose track of things. Even worse is that it is easy to lose track of what is important. So I am finding myself going back to some old habits that died off during different shifts in my career and applying many of them again.</p>
<!--more-->
<h1>Productivity tips</h1>
<p>Most of these tips revolve around focus. When things get busy it is easy to lose focus, and that is when productivity drops.</p>
<h2>All the things</h2>
<p>Ubiquitous capture. Write down anything that you aren't going to do now as soon as you become aware of it. An email comes in that you need to action. Capture it on your TODO list. Once it is down on paper, you are much less likely to worry about it and you can't forget about it.</p>
<h3>How you capture</h3>
<p>The list. How you capture is less important than being consistent. When trying out things, I find todo lists can actually make things worse for a while before they become better. The reason for this is that I am not capturing in a single place. Minimizing the number of mediums you use to capture tasks is important. I tried and liked the idea of a notebook, but I just didn't carry it around enough. I have settled on <a href="https://todoist.com">Todoist</a> because I can have it open on my laptop and my phone. It also has integrations with tools I use like Slack. I will discuss this a bit more in the next section. Start off simple. Don't have too many categories/projects etc. Just capture everything that comes in.</p>
<h3>Review and rate</h3>
<p>Go over your list often and prioritize it. Make sure you are doing the most important things first. Creating a habit of going through your list every morning will make sure it is current, as well as keep only the most important things in mind.</p>
<h2>Remind me</h2>
<p>Set reminders if you need to. I use bots in Slack to remind me to do things at specific times. I also use it to remind others. Just be careful of information overload. If you use it too prolifically people will start to ignore the reminders, especially if it is for things they don't find too important.</p>
<h2>Pomodoro</h2>
<p>Many people find the <a href="https://en.wikipedia.org/wiki/Pomodoro_Technique">Pomodoro Technique</a> really useful. It is especially useful when you have lots of little things that can distract you. Switching focus often can kill productivity so committing to spend at least a little time focused on one thing can make a huge difference. I use an app called <a href="http://tide.moreless.io/en/">Tide</a> that does the job for me. It has a timer and can play music or white noise. This is perfect for when working in a noisy environment. As I write this I am in a noisy cafe but am listening to birds chirping, which I find less distracting than multiple conversations, moving chairs, and clinking cups.</p>
<h2>Clear your mind</h2>
<p>Meditation has a bad reputation among many people as the pastime of hippies and mystics but it is a useful skill to develop for those who value focus and clarity of thought. There are many practices you can use to achieve different things. I will briefly touch on a few that I use. There are many others and I encourage you to explore the options. I will mention some resources at the end of this section where you can start. Also note that I will only talk from my experience so what I write might not be 100% what you might find out in the wild, or even what you may experience yourself. Meditation is about as personal as it gets as it is your consciousness observing itself.</p>
<h3>Focused attention</h3>
<p>Here you focus on something in an effort to still your monkey mind. This isn't an evolutionary reference but more a comment on how our mind works. Just observe your inner monologue and attention as you read this. "What is this guy on about? Meditation! Really?", "Maybe I should try this?", "Can I move things with my mind?", "It seems really boring... I could do other things... I need to go to the shops... do I have milk in the fridge...". And so our mind goes on ceaselessly. We are usually about as in control of our thoughts as a leaf in a river.</p>
<p>So in focused attention I focus on my breath. First I scan through my body and try to release any tension felt with each out breath. As the mind goes off I bring it back to the breath and just focus on the up and down. The sensation around my nostrils. Sometimes I only hold my attention for a few seconds before it goes off again for a few minutes on some train of thought. When you realise this, you bring it back to the breath and try again. This isn't a fight you are trying to win. You are just slowly training the brain to focus on what you want it to focus on. Not only that, your brain and your body will appreciate the moments of peace where you are not lost in thought.</p>
<p><em>Benefits: Increase mental focus, relax the body, decrease stress</em></p>
<h3>Loving kindness</h3>
<p>Loving kindness is a technique for developing compassion for yourself and the people around you. This can have a profound impact on how you treat yourself and others.</p>
<p>As I near the end of my meditation I spend a few minutes cycling through the people in my life. I start with those most beloved to me and move out to colleagues and acquaintances, and eventually just general humanity. I visualize the person, or people (hard for all of humanity), and try to generate feelings of compassion toward them while repeating the phrase "I am grateful for person X. I wish them peace, happiness, and freedom from suffering". That is it!</p>
<p><em>Benefits: Increase compassion for others, increase personal well-being, mend and tend relationships</em></p>
<h3>Appreciation</h3>
<p>I tend to lump this one in with my loving kindness but it is a distinct practice. After being appreciative of the people in my life I also make a point of reminding myself of other things I have to be appreciative of such as things, opportunities, and health.</p>
<p><em>Benefits: Peace and happiness</em></p>
<h3>Resources for meditation</h3>
<ul>
<li><a href="https://www.samharris.org/podcast/item/mindfulness-meditation">Sam Harris has some guided meditation recordings</a></li>
<li><a href="https://www.headspace.com/register">Headspace is a subscription service to teach meditation but has a 10 day trial</a></li>
<li><a href="https://www.audible.com/pd/Self-Development/Practicing-Mindfulness-An-Introduction-to-Meditation-Audiobook/B00DDVQQLA/">Practicing Mindfulness audible book from The Great Courses</a></li>
<li><a href="https://www.audible.com/pd/Self-Development/The-Science-of-Mindfulness-Audiobook/B00MEQRUG0/">The Science of Mindfulness audible book from The Great Courses</a></li>
</ul>
<h2>Calendar blocks</h2>
<p>This is a real simple one but it can be very helpful to block time in your calendar to do specific important tasks. This is useful if your calendar can quickly fill up with meeting requests. I block time to just be available for my team as well as for specific tasks.</p>
<p>Another little tip is to not accept meetings until you have been furnished with an agenda. This allows you to determine whether you really are the best person to be at that meeting, or if invitees are missing.</p>
<h2>Conclusion</h2>
<p>Although I stated productivity comes down to focus, we explored how to increase it on multiple fronts: techniques and tips, training, and tools. Use what works for you, but please give all of them an honest try. I would love to hear what you use to keep focused. Please let me know in the comments below.</p>
<p><a href="https://pragdave.me/blog/2014/03/04/time-to-kill-agile.html">Agile is dead</a></p>
</blockquote>
<p>I see more and more posts and talks claiming that Agile is dead. Broad statements like this are obviously just for effect but even if just click-bait, the sentiment is coming from somewhere. In this post I dig into reasons to say this and why we can still have hope.</p>
<!--more-->
<h1>Agile is dead</h1>
<p>Let's take a look at the ways it dies and why it can never truly die.</p>
<h2>Still-born</h2>
<p>So sometimes agile was never alive at a company anyway. I have walked into companies where they declare to me "We tried Agile and it doesn't work", or just "Agile sucks!". When you drill into what they actually did though, agility was never there. They had no process, and no way of improving the process. They knew they needed something. So they slapped the word "Scrum" on what they did. Occasionally they had a standup where people would stand up and look at their toes, and then go about their business as usual. Agile never drew its first breath here...</p>
<h2>Lemmings</h2>
<p>Some young team hears about this new shiny thing that all the cool companies are doing, grabs a bunch of processes off a website, and they start applying them. No matter that they don't know why. No matter that there are only 2 developers. They slog through for months but eventually it fades out because they don't see any value. They tried the processes without understanding the spirit of it. Hell, it's not even the spirit of the word... it is the actual word... Agile.</p>
<p>Another flavour here is teams that actually implement SCRUM by the book. They experience moderate success and so they double down on the processes. Soon the processes become an institution unto themselves. Recipes to be applied rather than a gifted chef tasting and experimenting with a dish.</p>
<p>Agile has become a label now to be stuck on things like tags at an estate sale. The Sprint has changed from a fluffy cushion that protects the developers to the Great Wall of China, keeping the stakeholders out. Demos are immovable institutions that represent the success or failure of a team. And cancelling a sprint because requirements have changed becomes anathema to the Agile adherents as they follow their rituals off the cliff... or is that a waterfall?</p>
<h2>Darwin award</h2>
<p>Large companies struggle to overcome the inertia required to change to agile processes. Even if a development department manages to adopt them, if the whole company doesn't evolve to the new way of working, the initiative is doomed to die. Not only will it die, but all those involved will develop a distrust for the agile initiative. Buy-in from all levels of stakeholders (decision makers, middle management, developers, etc.) is important before you even start.</p>
<h2>Immortal</h2>
<p>So why do I believe agile will never die? At its core, agility is a characteristic of the team developing a product, not the processes they adopt to do that. And at the core of software development is what got us to the point that software development is even a thing. Adaptability. Evolution. It is why we went from hiding in a cave from predators stronger and faster than us to being the dominant animal on the planet. More than any other animal on the planet, we can look at our situation and improve it. Then we look at it and improve it again. Sometimes we fail. Sometimes badly. Over the long game though, we have trended toward improvement. Although we borrow much from other production disciplines, ours is but an infant. And unless we fail morally and bring forth Skynet... we are going to grow up and get better. It is determined. It is in our genes...</p>
<h2>Recommended Reading</h2>
<ul>
<li><a href="http://agilemanifesto.org/">Agile Manifesto</a></li>
<li><a href="http://www.sciencedirect.com/science/article/pii/S0164121216300826">Challenges and success factors for large-scale agile transformations: A systematic literature review</a></li>
</ul>https://devonburriss.me/aspect-rating/Aspect Rating2017-03-21T00:00:00+00:00Devon Burrisshttps://devonburriss.me/aspect-rating/<p>I recently ran a retrospective with a team of 11 (including myself). With that many people getting focused feedback is important or meetings can drag out. I found this exercise quite useful and the rest of the team seemed to as well. See <a href="/check-in-check-out">this post for the Check-in/Check-out</a> I ran before and after.</p>
<!--more-->
<h1>The first step is measure</h1>
<p>The idea is simple.</p>
<ol>
<li>Put some aspects of team interaction along the top of the board or wall. I used the following and I suggest this order (see Analysis for why):
<ul>
<li>Direction</li>
<li>Progress</li>
<li>Process</li>
<li>Team work</li>
<li>Learning</li>
<li>Enthusiasm</li>
</ul>
</li>
<li>Draw an arrow up, labelling the bottom 1 and the top 5 (see image below)</li>
<li>Ask the team to put their name on 6 post-its</li>
<li>Explain that they need to put 1 post-it under each aspect rating that aspect of the team</li>
<li>Discuss</li>
</ol>
<img src="/img/posts/2017/aspect-rating.jpg" alt="Aspect Rating Example" class="img-thumbnail">
<p><em>Aspect Rating board: Note the order is different to my recommendations</em></p>
<p>What you want to focus on next is up to you. Be sure to celebrate the good but depending on the state of the team you may not want to spend too much time on it. We had 1 hour and we needed 5 to 10 minutes for the <a href="/check-in-check-out">Check-in/Check-out</a>. We then spent some time celebrating the good by allowing people to explain why they voted for those items.</p>
<p>My suggestion at this point is to focus on the lowest one from here and work your way up, time permitting. Unless something is systemically wrong with the team there should be quick wins to raise things that are a 1.</p>
<p>If things degenerate into technical discussions, interject and ask to take it offline.</p>
<h2>Aspects</h2>
<p>Let's walk through what each aspect is in case this isn't clear from the label.</p>
<h3>Direction</h3>
<p>This is the direction of the team. Do they know what they are building? Do they know why they are building it? Do they know how they are going to build it?</p>
<h3>Progress</h3>
<p>How does the team rate its progress in building what it should be building? This comes after direction because if you don't know what you are building you are unlikely to feel like you are progressing toward it. This project in particular had a rocky start due to a dependence on an external party, so direction was low. Feelings of progress varied based on whether a team member was focusing on infrastructure or feature implementation.</p>
<h3>Process</h3>
<p>Here you are trying to find out the team's buy-in to a process, or feelings that the process is lacking. Again, the direction contributed, but with a new team I was introducing processes as the team requested them. This is usually not a good idea unless you have an experienced agile team who are capable of raising issues proactively and self-organising. Although a newly formed team, it is made up of experienced members, so this was low risk.</p>
<h3>Team work</h3>
<p>How well does the team feel it is collaborating? Are they pair-programming? Are they stepping on each other's toes? Are they aware of what each team member is doing? It was mentioned to me by <a href="https://www.erikheemskerk.nl/">a very astute team member</a> that teamwork is very difficult, if not impossible, to get right if the team does not have clear direction. See my <a href="/big-agile-teams">post on big agile teams</a> for some ideas on facilitating team communication.</p>
<h3>Learning</h3>
<p>Is the team challenged? Are they learning new things? This is important for cultivating an autonomous, self organising team as well as for enthusiasm.</p>
<h3>Enthusiasm</h3>
<p>Are team members excited to come to work? Excited to work on the project/product? Happy to work together? This forms a symbiotic relationship with all the others: it will go down if any of the others stay down, and when it does go down, all the others will drop even faster from the feedback effect. It is the canary, so watch it well.</p>
<h2>Analysis</h2>
<p>This was from the team's first retrospective and as mentioned the direction was shaky from the start, so this was actually better than expected from a young team (with experienced members). If anything surprises you be sure to spend a large amount of time drilling into what is going on there. From this retrospective we implemented a few more process items that showed immediate benefit. I cannot stress the significance of this enough.</p>
<p>The team identified a problem, and an agile process (think demo, refinement, etc.) was introduced because the pain was felt and a balm was applied. How many things do you do and care about in your life that have no benefit to you or anyone you care about? Why should development be any different? Only solve problems that exist. Before Agile was a label, <a href="agile-is-a-characteristic">agile was a characteristic</a> of a team.</p>
<img src="/img/posts/2017/aspect-rating-2017-03-17.jpg" alt="Aspect Rating Chart" class="img-thumbnail">
<p><em>Aspect Rating analysis</em></p>
<h3>Pros</h3>
<ul>
<li>Great snapshot of the team's perception of itself</li>
<li>Identify things to celebrate</li>
<li>Identify problem areas and provide a forum to start discussing</li>
<li>Seemed to eliminate personal rants and every team member repeating the same thing that often seems to happen with some other formats</li>
</ul>
<p>The bottom line is that it really focuses the discussion into narrow, helpful, actionable bands.</p>
<h3>Cons</h3>
<ul>
<li>It really focuses the discussion into narrow, helpful, actionable bands. Sometimes you want to generate more free form discussion or drill into technical details or inter-personal conflicts within the team. I can't say for sure but this does not seem suited.</li>
<li>The team really needs to get involved in discussing the aspects or this is going to be a very short meeting</li>
</ul>
<h2>Conclusion</h2>
<p>I think I will be using this regularly to document the teams progression on these aspects. I won't use it every retrospective but maybe every 2nd or 3rd. As this is something new I am experimenting with so please note that this is early stage beta so take it with a pinch of salt. I will try report back with more data once I have more. Did you find this useful? Do you have your own methods that you use regularly for retrospectives that gives you measurable insight? Let me know in the comments below.</p>https://devonburriss.me/big-agile-teams/Big Agile Teams2017-03-20T00:00:00+00:00Devon Burrisshttps://devonburriss.me/big-agile-teams/<p>As a team grows it becomes more difficult to apply some agile practices effectively. SCRUM meetings like standup and retrospectives become drawn out, the number of stories becomes hard to manage, and the communication within the team can easily break down.<br />
Currently I have a team of 10 and I am experimenting with ways of tackling these issues. Hopefully this will turn into a loose series of posts surrounding my experiences with a larger team. I won't go into the why; suffice to say it is a project rather than a product, but we don't want to go waterfall.</p>
<h1>Big team tactics</h1>
<p>Most of these tactics focus on communication of what the team is working on but there are a few process items. Some of these tactics are taken from previous smaller teams and are by no means only for large teams. By the time you have 9+ people social bonding, communication, and working memory are all suffering so these need to be focused on.</p>
<h2>Cells</h2>
<p>The team is broken up into 2s. These 2 developers are responsible for keeping each other abreast of their own progress. This is more than just an informal pairing. If possible they work in related areas. They are preferred for peer review and pair-programming. Most notably they are responsible for reporting progress for each other at the standup. See the next point for details...</p>
<h2>Developers are chickens too</h2>
<blockquote>
<p>A Pig and a Chicken are walking down the road.<br />
The Chicken says: "Hey Pig, I was thinking we should open a restaurant!"<br />
Pig replies: "Hm, maybe, what would we call it?"<br />
The Chicken responds: "How about 'ham-n-eggs'?"<br />
The Pig thinks for a moment and says: "No thanks. I'd be committed, but you'd only be involved."</p>
</blockquote>
<p>Cell members alternate between being <a href="https://en.wikipedia.org/wiki/The_Chicken_and_the_Pig">chickens and pigs</a>. At a standup only the pigs report on progress but will do it for the fellow cell chicken. This keeps everyone informed but the number of active participants smaller. Not only that but the nominated pig needs to at least understand what the chicken did well enough to explain it. Hat-tip to the Feynman Technique ;)</p>
<h2>Present the plan</h2>
<p>Developers are encouraged to discuss how they will implement a given story before implementation is underway (or very far). This gives others a chance to weigh in on the implementation details and bubble up any hidden knowledge or pitfalls. I suggest a regular prompt for this, possibly straight after standup.</p>
<h2>Technical demos</h2>
<p>This is not a stakeholder demo. Plan regular (2 weeks seems good) demos where the developers can deep-dive on what they have been working on with the others in a bit more of a formal way. One or two slides, some live demos of code and functionality, and a Q&A afterward.</p>
<h2>Dedicated learning time</h2>
<p>It is easy for people to get lost in the group and fall behind, and as a team lead it is difficult to spend time with everyone. Dedicating a regular afternoon to discussing new technologies or methodologies is good for morale as well as raising the skills of the team.</p>
<h2>Socialize</h2>
<p>Getting the team to bond is even more important when it is bigger. Lunching together, non-work activities, or even retrospectives can help bring the team together. A focus on sharing feelings at points in the retrospective can help team members understand how those they are not close to are feeling.</p>
<h2>Conclusion</h2>
<p>I hope some of these suggestions are helpful and if you have any of your own please let me know in the comments below. These are all a work in progress and I will hopefully report back in a later post.</p>https://devonburriss.me/check-in-check-out/Check-in and Check-out2017-03-20T00:00:00+00:00Devon Burrisshttps://devonburriss.me/check-in-check-out/<p>Someone must have thought of this before but I have not read this anywhere so I thought I would jot it down. I recently ran a retrospective that I thought went really well, and apparently so did everyone else...</p>
<h1>Measure with Check-in and Check-out</h1>
<p>If you don't measure something, how can you know if it is improving? Measurement is a staple of development, so why shouldn't we apply it to our meetings as well? It is really easy.</p>
<ol>
<li>At the start of the retrospective ask everyone to write down a single word (or phrase) that sums up their feeling about how things are going</li>
<li>Ask if anyone would like to share what they wrote down (can be more than one or even everybody)</li>
<li>Do your retrospective</li>
<li>Repeat step 1 at the end of the retrospective and see if anything changed</li>
</ol>
<p>Easy!</p>
<img src="/img/posts/2017/check-in-out.jpg" alt="Check-in-Check-out" class="img-thumbnail">https://devonburriss.me/better-error-handling/Better error handling2017-03-19T00:00:00+00:00Devon Burrisshttps://devonburriss.me/better-error-handling/<p>In my <a href="/honest-return-types">previous post</a> I discussed handling <code>null</code> and <code>Exception</code> in the return type. In this post I will discuss returning logic errors.</p>
<h1>Handling errors</h1>
<p>There are times when errors are valid outcomes rather than exceptional ones. Validation is a common example of this, and a validation result is often the go-to type for it. Wouldn't it be nice if we could apply the same pattern as with exceptions?</p>
<h2>Either: Errors or no errors</h2>
<p>Functional languages define a type with the following form: <code>Either<Left, Right></code>. <code>Left</code> and <code>Right</code> can be anything but in the case of error handling <code>Left</code> is the unhappy path and <code>Right</code> is the happy path. Let's assume we have an <code>Error</code> type for representing errors that occurred, then using <code>Either</code> to represent error handling could look something like this: <code>Either<IEnumerable<Error>, T></code>. <code>Error</code> has an implicit conversion to <code>string</code> so let's work with <code>string</code> for demonstration purposes below.</p>
<pre><code class="language-csharp">Func<int, int, Either<IEnumerable<string>, int>> divide =
    (i, d) =>
    {
        if (d == 0)
            return List("Cannot divide by zero.");
        return (i / d);
    };

Either<IEnumerable<string>, int> divideByZeroResult = divide(1, 0);
divideByZeroResult.Match(
    Left: errors => errors.ToList().ForEach(x => Console.WriteLine(x)),
    Right: i => Console.WriteLine($"Answer is {i}")
);
//Cannot divide by zero.

Either<IEnumerable<string>, int> twoResult = divide(4, 2);
twoResult.Match(
    Left: errors => errors.ToList().ForEach(x => Console.WriteLine(x)),
    Right: i => Console.WriteLine($"Answer is {i}")
);
//Answer is 2
</code></pre>
<p>This works great but <code>Either<IEnumerable<string>, int></code> is quite a verbose return type definition. If we know we are always going to use <code>IEnumerable<string></code> as <code>Left</code> why not specify that in the type? Before we do that, we are going to take a quick dive into some functional programming ideas.</p>
<h2>Functional side-bar</h2>
<p>Let's go through a couple of concepts that will come up. Hopefully you read the previous post that introduced <em>Elevated types</em>. Here I will quickly run through working with elevated types.</p>
<h3>Return: To the world of elevated types</h3>
<p><em>Return</em> raises a value to the world of elevated types. You have already seen examples of <em>return</em> in this post. <code>Some</code> and <code>None</code> for <code>Option<T></code> and <code>Left</code> and <code>Right</code> for <code>Either<L, R></code> are just some <em>return</em> operations.</p>
<pre><code class="language-csharp">//return - elevate an int to Option<int>
Option<int> optInt = Option<int>.Some(1);
// Some(1)
</code></pre>
<h3>Apply: just this part</h3>
<p><em>Apply</em> takes an elevated function, applies the next argument, and returns an elevated function representing the partially applied result.</p>
<pre><code class="language-csharp">//apply
Func<int, int, int> add = (a, b) => a + b;//function
var addOpt = Some(add);//elevate function: Option<Func<int, int, int>>
var increment = addOpt.Apply(1);//apply: b => 1 + b
increment.Apply(5);
// Some(6)
</code></pre>
<h3>Map: ol' switch-a-roo</h3>
<p><em>Map</em> applies the function to the value contained in the elevated value and returns the elevated result. In C# terms, <em>Map</em> is like LINQ's <code>Select</code>.</p>
<pre><code class="language-csharp">Func<int, string> intToString = (i) => i.ToString();
Option<int> optInt = Option<int>.Some(1);
//map - apply function to inner value
Option<string> optString = optInt.Map(intToString);
// Some("1")
</code></pre>
<h3>Bind: functions in the darkness</h3>
<blockquote>
<p>"... and in the darkness bind them"</p>
</blockquote>
<p>Sorry, that was a Lord of the Rings reference. My 2nd name is legally Aragorn (from birth), I didn't stand a chance...<br />
<em>Bind</em> allows you to compose (bind) functions in an elevated world. It is analogous to <code>SelectMany</code> of LINQ fame.</p>
<pre><code class="language-csharp">Func<string, Option<int>> ifEvenInt = (s) =>
    {
        if (int.TryParse(s, out int i))
        {
            return (i % 2 == 0) ? Some(i) : None;
        }
        else
        {
            return None;
        }
    };
Func<int, Option<int>> doubleIt = (i) => Some(i * 2);
Func<int, Option<int>> exp = (i) => Some(i * i);

Option<string> optString = Some("2");
//bind - passes inner value to a function that returns an elevated result
Option<int> evenResult = optString.Bind(ifEvenInt);
// used to compose elevated functions
var worked = evenResult
    .Bind(doubleIt)
    .Bind(exp);
// Some(16)
</code></pre>
<p>If we changed "2" to "1" the output would be <code>None</code> since <code>ifEvenInt</code> would return <code>None</code> which would short-circuit all the <code>Bind</code> calls.</p>
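<p>To see the short-circuiting in action, here is a quick sketch reusing the functions defined above (assuming the same LanguageExt-style <code>Option</code> helpers):</p>
<pre><code class="language-csharp">Option<string> oddInput = Some("1");
Option<int> shortCircuited = oddInput
    .Bind(ifEvenInt)   // "1" parses but is odd, so this returns None
    .Bind(doubleIt)    // skipped - there is no inner value to pass along
    .Bind(exp);        // skipped as well
// None
</code></pre>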
<h2>Match: what goes up must come down</h2>
<p><em>Match</em> is the yin to <em>Return</em>'s yang. Where <em>Return</em> operations elevate values to the elevated world, <em>Match</em> drops an elevated value back to the real world.</p>
<pre><code class="language-csharp">//match
Option<int> optInt = Option<int>.Some(1);
optInt.Match(
    Some: x => Console.WriteLine(x),
    None: () => Console.WriteLine("Nothing")
);
// 1
</code></pre>
<p>Now that we can get to the elevated world, do what we need to do and then return back through the cupboard, let us get back to the business at hand. Validation!</p>
<h2>Validation: Your result (might have errors)</h2>
<blockquote>
<p>You can find the <code>Validation</code> type in <a href="https://github.com/dburriss/HonestTypes#return-types">HonestTypes.Returns</a> package</p>
</blockquote>
<p>So why not define a type <code>Validation<T></code> that is <code>Either<IEnumerable<Error>, T></code>? That would remove some of the verbosity of the return type and give the type name a clearer semantic.</p>
<pre><code class="language-csharp">using static F;

public Validation<Person> Validate(Person person)
{
    if (person == null)
        return Error("Person is null");
    //short circuit on error
    return Valid(person)
        .Bind(ValidateFirstNames)
        .Bind(ValidateLastName)
        .Bind(ValidateEmail);
}

private Validation<Person> ValidateFirstNames(Person person)
{
    if (string.IsNullOrWhiteSpace(person.FirstNames))
        return Invalid(Error($"{nameof(person.FirstNames)} cannot be empty"));
    return person;
}

private Validation<Person> ValidateLastName(Person person)
{
    if (string.IsNullOrWhiteSpace(person.LastName))
        return Invalid(Error($"{nameof(person.LastName)} cannot be empty"));
    return person;
}

private Validation<Person> ValidateEmail(Person person)
{
    if (string.IsNullOrWhiteSpace((string)person.Email))
        return Invalid(Error($"{nameof(person.Email)} cannot be empty"));
    return person;
}

//usage
var validatedPerson = service.Validate(person);
validatedPerson.Match(
    Valid: p => Console.WriteLine($"{p.LastName}, {p.FirstNames} <{p.Email}>"),
    Invalid: err => err.ToList().ForEach(x => Console.WriteLine(x.Message))
);
</code></pre>
<p>The code above uses <code>Bind</code> and short-circuits on the first error. This might not be the desired behaviour. What if we want to check all validations? Here is a version that does that...</p>
<pre><code class="language-csharp">public Validation<Person> Validate(Person person)
{
    if (person == null)
        return Error("Person is null");
    //collect all errors
    return Valid(Person.Create)
        .Apply(ValidateFirstNames(person.FirstNames))
        .Apply(ValidateLastName(person.LastName))
        .Apply(ValidateEmail(person.Email));
}

Func<FirstNames, Validation<FirstNames>> ValidateFirstNames => firstNames =>
{
    if (string.IsNullOrWhiteSpace(firstNames))
        return Invalid(Error($"{nameof(firstNames)} cannot be empty"));
    return firstNames;
};

Func<LastName, Validation<LastName>> ValidateLastName => lastName =>
{
    if (string.IsNullOrWhiteSpace(lastName))
        return Invalid(Error($"{nameof(lastName)} cannot be empty"));
    return lastName;
};

Func<Email, Validation<Email>> ValidateEmail => email =>
{
    if (string.IsNullOrWhiteSpace((string)email))
        return Invalid(Error($"{nameof(email)} cannot be empty"));
    return email;
};
</code></pre>
<p>The above code uses <code>Apply</code> and is applicative, so all errors are returned. Notice how each validator is now a property returning a <code>Func</code> that performs the validation.</p>
<p>If you don't like the <code>Func</code> style, you can keep the <code>Bind</code> syntax and still get the applicative behaviour by using the <code>Validation</code> type's <code>Join</code> method...</p>
<pre><code class="language-csharp">//collect all errors
return Valid(person)
    .Join(ValidateFirstNames(person))
    .Join(ValidateLastName(person))
    .Join(ValidateEmail(person));
</code></pre>
<h2>Conclusion</h2>
<p>And there you have some neat validation logic. If you have any comments or suggestions please leave them below. If you found this useful, please share it with someone who you think might also find it useful.</p>
<h2>Recommended Reading</h2>
<ol>
<li><a href="https://fsharpforfunandprofit.com/posts/elevated-world/">Elevated world</a></li>
<li><a href="https://fsharpforfunandprofit.com/rop/">Railway oriented programming</a></li>
</ol>https://devonburriss.me/honest-return-types/Honest Return Types2017-03-14T00:00:00+00:00Devon Burrisshttps://devonburriss.me/honest-return-types/<p>In <a href="/honest-arguments">Part 1</a> we looked at ways of making your code more descriptive by using custom types instead of simple types like <code>string</code>. In this article we will look at what your return type can tell you about a method.</p>
<blockquote>
<p>Updated: 19 March 2017</p>
</blockquote>
<!--more-->
<h1>Honest Return Types</h1>
<p>For most of this post let us build on the example of a <code>Person</code> repository. We are not going to dive into implementation but instead focus on the descriptiveness of the return type. Our starting point is this:</p>
<pre><code class="language-csharp">public interface IQueryPerson
{
    Person Get(Email email);
}
</code></pre>
<p>The return type should be honest about what can happen when you call a method. Does this repository method return <code>null</code> if no record is found? Does it throw an exception? Does it return a <a href="https://martinfowler.com/eaaCatalog/specialCase.html">special case</a> subtype? Wouldn't it be nice if your return type could tell you this instead of you having to dig into the implementation to find out?</p>
<p>My 2 criteria are:</p>
<ol>
<li>A return type should be really descriptive of what the possible outcomes are</li>
<li>The interface for interacting with a type should make it difficult for developers to do the wrong thing</li>
</ol>
<h2>Result: A first try</h2>
<p>One solution is a <code>Result<T></code> or some such flavour. It might look something like this:</p>
<pre><code class="language-csharp">public class Result<T>
{
    public T Value { get; set; }
    public bool IsSuccess { get; set; }
    public IEnumerable<string> Errors { get; set; }

    public Result()
    {
        Errors = new List<string>();
    }

    public Result(T value)
    {
        if(value == null)
        {
            IsSuccess = false;
        }
        else
        {
            IsSuccess = true;
            Value = value;
        }
    }
}
</code></pre>
<p>This could be written in slightly different ways, with error codes instead of string for Errors, or even <code>Exception</code>. Let's discuss the pros and cons of this.</p>
<h3>Pros</h3>
<ul>
<li>It does acknowledge that something could go wrong</li>
<li>Can return some error and state information without throwing an exception (which is, in effect, an implicit <code>goto</code> statement)</li>
</ul>
<h3>Cons</h3>
<ul>
<li>It is not descriptive about what represents a failure</li>
<li>Value can be accessed without checking for success</li>
<li>The type doesn't convey whether <code>null</code> could still be a valid value</li>
</ul>
<p>So it is something but doesn't really fulfill either of my criteria very well. We are going to have to take a quick sidebar and talk about representing <code>null</code>. <code>Result<T></code> doesn't tell us whether we should expect <code>T</code> to be <code>null</code> and whether that is valid.</p>
<h2>Functional side-bar</h2>
<p>In functional terms an elevated type is like a wrapper. It is a higher level of abstraction that allows us to work with the type in a predictable way. <code>IEnumerable<T></code>, <code>Option<T></code>, <code>Exceptional<T></code>, <code>Either<L, R></code>, and <code>Validation<T></code> are all examples of elevated types.</p>
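<p>To make the idea concrete, here is a minimal sketch of an elevated type as a wrapper. The <code>Boxed<T></code> type below is purely illustrative and not from any library:</p>
<pre><code class="language-csharp">// Illustrative only: a bare-bones elevated type
public class Boxed<T>
{
    public T Value { get; }
    public Boxed(T value) { Value = value; }
    // Map: transform the inner value while staying in the elevated world
    public Boxed<R> Map<R>(Func<T, R> f) => new Boxed<R>(f(Value));
}
//usage
var boxed = new Boxed<int>(1);
Boxed<string> boxedString = boxed.Map(i => i.ToString());
</code></pre>
<p>Real elevated types like <code>Option<T></code> offer much more than this, of course; the point is only that you work with the wrapper rather than the raw value.</p>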
<h2>Option: <code>null</code> is None</h2>
<p>"It depends" is something you hear a lot in development, and wouldn't it be great if a type conveyed this? <code>Option</code> or <code>Maybe</code> are types often found in more functional languages that highlight the fact that a value could not be present. It allows you to say that there is <code>Some</code> value, or the value is <code>None</code>. This is probably easier to demonstrate...</p>
<blockquote>
<p>I am using <a href="https://github.com/louthy/language-ext">LanguageExt</a> to get some more functional types. This one is mature and fully featured but pick whatever works for you.</p>
</blockquote>
<pre><code class="language-csharp">public Option<Person> Get(Email email)
{
    //person could be null if no matching email found in the datasource
    Person person = QueryByEmail(email);
    return person;
}

//usage example
var person1 = personRepository.Get(email);
//print out last name if person was found otherwise print "Nobody"
person1.Match(
    Some: p => Console.WriteLine(p.LastName),
    None: () => Console.WriteLine("Nobody")
);
//return fullname or Nobody if no one was found
var person1Name = person1.Match(
    Some: p => $"{p.FirstNames} {p.LastName}",
    None: () => "Nobody"
);
</code></pre>
<p>The implementation uses <code>implicit</code> conversion to return <code>None</code> if the value is <code>null</code>; otherwise the <code>Person</code> is elevated with <code>Some</code>.<br />
In the next snippet I explicitly elevate the result to demonstrate what is happening. Let's also add some error handling, as this will reveal a problem.</p>
<pre><code class="language-csharp">using static LanguageExt.Prelude;

public Option<Person> Get(Email email)
{
    try
    {
        Person person = QueryByEmail(email);
        if(person == null)
            return None;
        return Some(person);
    }
    catch (Exception)
    {
        return None;
    }
}
</code></pre>
<p>So this is looking a little better.</p>
<h3>Pros</h3>
<ul>
<li>Return type is explicit about possibility of no value being returned</li>
<li>The API of the type encourages handling of branch between happy and unhappy path</li>
</ul>
<h3>Cons</h3>
<ul>
<li>We cannot differentiate between no value and an exception</li>
</ul>
<h2>Exception: return don't throw</h2>
<blockquote>
<p>The following <code>Exceptional<T></code> and <code>Validation<T></code> types are defined in <a href="https://github.com/dburriss/HonestTypes">HonestTypes</a>. Check the project page for installation instructions.</p>
</blockquote>
<p>So our type needs to be a bit more explicit about what can happen. Let's introduce an <code>Exceptional<T></code> type.
This is similar to <code>Option<Person></code> but instead of <strong>Some</strong> and <strong>None</strong> it has <strong>Exception</strong> and <strong>Success</strong>.<br />
For those of you familiar with functional programming it is basically <code>Either<Exception, T></code> with left set to <code>Exception</code>.</p>
<pre><code class="language-csharp">public Exceptional<Option<Person>> Get(Email email)
{
    try
    {
        Person person = QueryByEmail(email);
        Option<Person> result = person;
        return result;
    }
    catch (DbException ex)//only catch expected exceptions
    {
        return ex;
    }
}

//usage
var person1 = personRepository.Get(email);
person1.Match(
    Exception: ex => Console.WriteLine($"Exception: {ex.Message}"),
    Success: opt => opt.Match(
        None: () => Console.WriteLine("Person: Nobody"),
        Some: p => Console.WriteLine($"Person: {p.FirstNames} {p.LastName}")
    )
);
</code></pre>
<p>One important point in the repository implementation is that you need to assign the result to <code>Option<Person></code> before returning it, which implicitly converts it to <code>Exceptional<Option<Person>></code>.
Unfortunately, you can't go directly from <code>Person</code> to <code>Exceptional<Option<Person>></code>.</p>
<p>The difference in this implementation is in the exception handling. See how we just return the exception? The exception has an implicit conversion to the elevated type of <code>Exceptional<T></code>.</p>
<h3>Pros</h3>
<ul>
<li>Return type is very explicit about both errors and no value</li>
<li>API of return type encourages good handling of code paths</li>
</ul>
<h3>Cons</h3>
<ul>
<li>With the nested generics the type declaration is quite verbose</li>
</ul>
<h2>Conclusion</h2>
<p>So by borrowing a bit from functional programming and adding some verbosity to our method signature, we managed to move from an admittedly simple signature to one that is brutally honest about the possible outcomes.</p>
<pre><code class="language-csharp">Person Get(Email email);
Result<Person> Get(Email email);
Option<Person> Get(Email email);
Exceptional<Option<Person>> Get(Email email);
</code></pre>
<p>I hope you found something useful in this and if you did I cannot recommend enough the brilliant <a href="https://www.manning.com/books/functional-programming-in-c-sharp">Functional Programming in C#</a> from Manning. I must warn that some of the chapters in this book are heavy going. Not because they are badly written but because as a C# and Java developer the concepts are so foreign that they take a while to sink in. Like most things worthwhile it takes effort and determination but you will be a better developer for it.</p>
<p>In my following post I will discuss <a href="/better-error-handling">error handling</a> and how logic/validation errors can be represented as return types following the same criteria as in this post.</p>
<h2>Recommended Reading</h2>
<ol>
<li><a href="https://fsharpforfunandprofit.com/posts/elevated-world/">Elevated world</a></li>
<li><a href="https://fsharpforfunandprofit.com/rop/">Railway oriented programming</a></li>
</ol>https://devonburriss.me/honest-arguments/Honest Arguments2017-03-10T00:00:00+00:00Devon Burrisshttps://devonburriss.me/honest-arguments/<p>One of the benefits of statically typed languages is that we can rely on more than the method and parameter names for information on what is expected and what is returned. A well designed method should be about more than naming. Too often we give up on this type safety and expressiveness for the ease of instantiating primitives and <code>string</code>.</p>
<!--more-->
<h1>Expressively typed parameters</h1>
<p>Consider the following two method signatures. To be fair, I chose less-than-expressive names to demonstrate that even if a developer doesn't pick the best names (which they should of course try to do, and which should be fixed), the types of the arguments provide all the intent needed. The parameter names could be 'l', 'f', and 'e' and a developer could still infer the usage from the types.</p>
<p><img src="/img/posts/2017/primitive-typed-method.jpg" alt="primitive parameters" />
<em>Figure 1: Using simple type parameters</em></p>
<p><img src="/img/posts/2017/expressively-typed-method.jpg" alt="expressive parameters" />
<em>Figure 2: Using expressive type parameters</em></p>
<p>So how would we represent something like a name as a type instead of a <code>string</code>, but still have it play nicely when captured in a client or stored in a database?
The trick is the <code>implicit</code> and <code>explicit</code> keywords.</p>
<h2>Lose the primitives (but play nice)</h2>
<p>For types that are always a direct conversion with no chance of failing, use the <code>implicit</code> keyword.</p>
<pre><code class="language-csharp">public class FirstNames
{
    string Value { get; }
    public FirstNames(string value) { Value = value; }
    public static implicit operator string(FirstNames c)
        => c.Value;
    public static implicit operator FirstNames(string s)
        => new FirstNames(s);
    public override string ToString() => Value;
}

//usage
FirstNames name = "Devon Aragorn";
string nameAsString = name;
</code></pre>
<p>On the other hand when you start adding a bit of behaviour into your class, there is a chance that the conversion can fail. Take for instance an <code>Email</code> type that has some validation of the email address.</p>
<pre><code class="language-csharp">public class Email
{
    private const string regexPattern = @"\A(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?)\Z";
    private string Value { get; }

    public Email(string value)
    {
        if(!Regex.IsMatch(value, regexPattern, RegexOptions.IgnoreCase))
        {
            throw new ArgumentException($"{value} is not a valid email address.", nameof(value));
        }
        Value = value;
    }

    public static explicit operator string(Email c)
        => c.Value;
    public static explicit operator Email(string s)
        => new Email(s);
    public override string ToString() => Value;
}

//usage
Email email = (Email)"test@test.com";
string emailAsString = (string)email;
</code></pre>
<p>Here we are using the <code>explicit</code> keyword because the constructor can throw an exception if the string is not a valid email address.</p>
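<p>A quick sketch of what a caller sees when the conversion fails; the invalid address below is made up for illustration:</p>
<pre><code class="language-csharp">Email good = (Email)"test@test.com"; // explicit cast succeeds

try
{
    Email bad = (Email)"not-an-email"; // constructor throws
}
catch (ArgumentException ex)
{
    Console.WriteLine(ex.Message); // prints the validation error message
}
</code></pre>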
<h3>Pros</h3>
<p>Let's list some reasons why you would want to do this with simpler types.</p>
<ul>
<li>Using <strong>expressive types reveal intent</strong> to consumers (other developers and future you)</li>
<li><strong>Finding usage</strong> of particular concepts can be done by type rather than searching text</li>
<li>If doing domain modelling you can now <strong>group behavior and data</strong> to have a descriptive model</li>
<li>Once assigned to an expressive type they <strong>provide type safety</strong></li>
<li>Creation of more <strong>targeted extension methods</strong></li>
</ul>
<h3>Cons</h3>
<p>As with most things in programming, #ItDepends. There are some down sides to using types this way...</p>
<ul>
<li><strong>More code</strong> to write and maintain</li>
<li><strong>Serialization</strong> requires a bit more work to do</li>
<li><strong>ORM mapping</strong> could be more complicated</li>
<li>Implicit conversion means you lose some type safety</li>
</ul>
<p>Let me quickly discuss a few of these cons and how they can be mitigated.</p>
<h4>More Code</h4>
<p>Not much to do about the maintainability part. I will say that these are relatively simple and are unlikely to change or have far reaching effects due to dependencies. To address the effort of actually creating these see <a href="/visual-studio-implicit-snippet">Visual Studio Implicit Snippet</a>.</p>
<h4>Serialization</h4>
<p>For some help easily serializing these types check out the <a href="https://github.com/dburriss/HonestTypes">Honest Types repository</a>. That package provides a Json.NET Converter like <code>new SimpleJsonConverter<LastName, string>()</code> that can be supplied to the settings when serializing and deserializing.</p>
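<p>As a sketch of how that might look with Json.NET, assuming the <code>SimpleJsonConverter</code> mentioned above and a simple <code>LastName</code> wrapper type:</p>
<pre><code class="language-csharp">var settings = new JsonSerializerSettings();
settings.Converters.Add(new SimpleJsonConverter<LastName, string>());

LastName lastName = "Burriss";
// serialize the wrapper as a plain JSON string rather than an object
string json = JsonConvert.SerializeObject(lastName, settings);
// round-trip back to the expressive type
LastName roundTripped = JsonConvert.DeserializeObject<LastName>(json, settings);
</code></pre>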
<h4>ORM Mapping</h4>
<p>If you are modelling your domain (like with DDD) which is likely the case if you are using types this way, then you shouldn't be using your domain models for persistence. This tends to tie your domain models to the underlying data model and you will find the schema requirements will start leaking into your domain model. So create models for your data layer and map from them to your domain models in the repository.</p>
<h2>Recommended Reading</h2>
<p><a href="http://enterprisecraftsmanship.com/2015/03/07/functional-c-primitive-obsession/">Functional C#: Primitive obsession</a></p>https://devonburriss.me/visual-studio-implicit-snippet/Visual Studio Implicit Snippet2017-03-08T00:00:00+00:00Devon Burrisshttps://devonburriss.me/visual-studio-implicit-snippet/<p>Sometimes you want to create a <a href="/honest-arguments">descriptive type</a> to better represent a concept such as an email (rather than a <code>string</code>) but what stops you is the effort in creating this type. Here is a quick snippet to allow you to quickly generate these types reliably.</p>
<!--more-->
<h1>What will we be generating?</h1>
<p>What we are trying to generate is a class that ends up looking something like this.</p>
<pre><code class="language-csharp">public class LastName
{
    string Value { get; }
    public LastName(string value) { Value = value; }
    public static implicit operator string(LastName c)
        => c.Value;
    public static implicit operator LastName(string s)
        => new LastName(s);
    public override string ToString() => Value;
    public override int GetHashCode() => Value.GetHashCode();
    public override bool Equals(object obj)
    {
        if (Value == null || obj == null)
            return false;
        if (obj.GetType() == typeof(string))
        {
            var otherString = obj as string;
            return string.Equals(Value, otherString, StringComparison.Ordinal);
        }
        if (obj.GetType() == this.GetType())
        {
            string otherString = string.Format("{0}", obj);
            return string.Equals(Value, otherString, StringComparison.Ordinal);
        }
        return false;
    }
}
</code></pre>
<p>This class will implicitly convert between <code>LastName</code> and <code>string</code> and compares like a value type. So two different instances of the same last name will be equivalent.</p>
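<p>A short usage sketch of the value semantics of the class above:</p>
<pre><code class="language-csharp">LastName name1 = "Burriss";            // implicit conversion from string
LastName name2 = "Burriss";            // a second, separate instance
string asString = name1;               // implicit conversion back to string

Console.WriteLine(name1.Equals(name2));     // True: compared by value, not reference
Console.WriteLine(name1.Equals("Burriss")); // True: also equal to the raw string
</code></pre>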
<h2>Visual Studio Snippet</h2>
<p>If you are using <a href="https://www.jetbrains.com/resharper/features/code_templates.html">Resharper</a> or another development productivity extension, creating snippets is fairly easy. In Visual Studio without a productivity extension it takes a little more effort but not much.</p>
<p>First you will need to create the snippet. Open up your favourite editor (<a href="https://code.visualstudio.com/">I use Visual Studio Code</a>) and create a file called <em>impl.snippet</em> and save it somewhere. You will be importing it into Visual Studio later, so remember where you put it. Also be aware that it will actually be copied to <em>C:\Users\{user}\Documents\Visual Studio 2017\Code Snippets\Visual C#\My Code Snippets</em> when you import it, and the one you saved is not the one that Visual Studio uses. So if you make changes to the original you will need to re-import it, and if you edit the imported one Visual Studio seems to need a restart.</p>
<pre><code class="language-xml"><?xml version="1.0" encoding="utf-8"?>
<CodeSnippets
  xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Class with implicit string operator</Title>
      <Author>Devon Burriss</Author>
      <Description>Creates a class that can implicitly convert to and from string.</Description>
      <Shortcut>impl</Shortcut>
    </Header>
    <Imports>
      <Import>
        <Namespace>System</Namespace>
      </Import>
    </Imports>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>name</ID>
          <ToolTip>Name of the class.</ToolTip>
          <Default>MyImplicitType</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[
        public class $name$
        {
            string Value { get; }
            public $name$(string value) { Value = value; }
            public static implicit operator string($name$ c)
                => c.Value;
            public static implicit operator $name$(string s)
                => new $name$(s);
            public override string ToString() => Value;
            public override int GetHashCode() => Value.GetHashCode();
            public override bool Equals(object obj)
            {
                if (Value == null || obj == null)
                    return false;
                if (obj.GetType() == typeof(string))
                {
                    var otherString = obj as string;
                    return string.Equals(Value, otherString, StringComparison.Ordinal);
                }
                if (obj.GetType() == this.GetType())
                {
                    string otherString = string.Format("{0}", obj);
                    return string.Equals(Value, otherString, StringComparison.Ordinal);
                }
                return false;
            }
        }
        ]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
</code></pre>
<p>Xml file: <em>impl.snippet</em></p>
<p>The <code><Header></code> element defines some generic information about the snippet and is all self-explanatory. I do want to point out the <code><Shortcut></code> element though. This is what you will edit if you want anything other than typing <strong>impl</strong> and then hitting the <strong>Tab</strong> button to activate the snippet.</p>
<p>The interesting bit is the <code><Literal></code> element. It has an <code><ID></code> element which is used in the snippet template to be the replacement variable. So when you hit <strong>Tab</strong> you can type a name for the class and it will be inserted into all the relevant places.</p>
<h2>Import into Visual Studio</h2>
<p>Once you have created your snippet and saved it somewhere, go to Visual Studio (if that isn't what you used to create the snippet).</p>
<ul>
<li>Navigate to <em>Tool > Code Snippets Manager...</em> (or press Ctrl+K, Ctrl+B).</li>
<li>Click <em>Import...</em> (you can choose C# language to be safe but it seems to pick it up from the snippet)</li>
<li>Browse to the <em>impl.snippet</em> file you created earlier and click <em>Open</em></li>
<li>Make sure <strong>My Code Snippets</strong> is selected and click <em>Finish</em></li>
</ul>
<p>And you are done. Now to create the class you can type <code>impl</code> in any .cs file, hit <strong>Tab</strong>, and it will generate the class.</p>
<h2>Conclusion</h2>
<p>If you find yourself creating repetitive classes, or avoiding creating classes because they are repetitive, consider automating the work to a degree by using a snippet.</p>
<h2>Further reading</h2>
<ul>
<li><a href="https://msdn.microsoft.com/en-us/library/ms165396.aspx">How to</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/ms242312.aspx">Snippet functions</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/ms171418.aspx">Schema Reference</a></li>
</ul>https://devonburriss.me/cake-build/Building a Cake Script2017-03-04T00:00:00+00:00Devon Burrisshttps://devonburriss.me/cake-build/<p><a href="http://cakebuild.net/">CAKE</a> is a great automation DSL that uses C#. Not only is it comfortable for C# developers to script automation tasks in, it has a stack of built in functionality and a great ecosystem of addins that give you a great jumpstart for just about anything you would like to automate.</p>
<p>This is a quick tip on how to create a Visual Studio Code task that will build your Cake script. This is a great way of verifying your scripts without actually running Cake tasks.
Also make sure you have the Visual Studio Code extension for Cake installed to give you syntax highlighting.</p>
<!--more-->
<h2>Creating a tasks.json file</h2>
<p>Press <strong>Ctrl+Shift+P</strong>, type <strong>Tasks:C</strong>, and hit enter, or click 'Tasks: Configure Task Runner'. If the file does not exist it will be created. If there is an existing build task, be sure to replace it. Note that this is building the Cake script, not building whatever project your Cake script is probably meant to build. That being said, if you are using Cake to build something, the task described here should probably be a custom task, not the build task.</p>
<h2>Adding our Cake build task</h2>
<p>Now add the following task to the json tasks array.</p>
<pre><code class="language-json">{
    "taskName": "Build",
    "command": "powershell",
    "isShellCommand": true,
    "args": [".\\build.ps1 -Whatif"],
    "showOutput": "always",
    "isBuildCommand": true
}
</code></pre>
<p>Cake works by running a PowerShell script (default is <em>build.ps1</em>) that uses Roslyn to compile the Cake file. Our task executes that build script, which triggers a compile without actually running any tasks, not even the Default one. This is done by adding the <code>-Whatif</code> argument flag.<br />
In the example above <code>isBuildCommand</code> is set to <strong>true</strong> so that <strong>Ctrl+Shift+B</strong> can be used to build the <em>build.cake</em> file.</p>
<h2>Conclusion</h2>
<p>Automating your builds, testing and deployment is important but don't stop there. Making sure your workspace feedback cycle is fast can also be a great way to increase productivity and decrease frustration. Hope this quick tip helps someone. Leave a comment if you have any of your own Cake tips.</p>https://devonburriss.me/ddd-glossary/Domain-Driven Design Glossary2017-02-14T00:00:00+00:00Devon Burrisshttps://devonburriss.me/ddd-glossary/<div class="row">
<div class="col-xs-6 col-md-3">
<a href="https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215/ref=as_sl_pc_tf_mfw?&linkCode=wey&tag=wwwnervstucoz-20" class="thumbnail">
<img src="/img/posts/2017/blue-book.jpg"/>
</a>
</div>
DDD cannot be summarized in a few paragraphs. In fact, it would take a few books to cover it thoroughly.
Even then, like anything worthwhile, it requires much practice and many mistakes to become proficient at it.
This is how it is with most skills that add a lot of value.
<p>A good start would be reading Eric Evans' <a href="https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215/ref=as_sl_pc_tf_mfw?&linkCode=wey&tag=wwwnervstucoz-20">Domain-Driven Design: Tackling Complexity in the Heart of Software</a>.</p>
<p>It is worthwhile being familiar with some of the common terms thrown around in DDD.</p>
</div>
<!--more-->
<h2>What is DDD not?</h2>
<p>DDD is not:</p>
<ul>
<li>Calling your area of work a Domain</li>
<li>Modelling the state of objects required into a bunch of <a href="http://www.martinfowler.com/bliki/AnemicDomainModel.html">anemic models</a></li>
<li>Services containing logic that act on the anemic models</li>
<li>A giant ball of interconnected objects where every class in your project has a reference somehow to every other</li>
</ul>
<h2>What is DDD?</h2>
<p>DDD is about modelling, and more. It encompasses common language, techniques, patterns, and architecture. It puts the focus on the business and on modelling the problems you are solving, all the while giving developers techniques for minimising complexity and driving collaboration.
It is about taking requirements and really mapping the business processes to the model, using in your code the same language the business uses.
It also gives us a common technical language for the different categories of classes we create while modelling our problem space.</p>
<h2>Glossary of terms</h2>
<h3>Ubiquitous language</h3>
<p>The term <em>Ubiquitous language</em> is thrown out occasionally in DDD discussions but, ironically, is itself often not discussed. It is also the part most often left out on the development side, which means the heart of DDD is not being followed and instead only some of its technical approaches are used (often incorrectly).<br />
It is the practice of <strong>using the terms used throughout the business within the codebase</strong>, and working new terms from the modelling back into the business. Language often evolves, and the codebase should evolve with it. The essence of DDD is that your code models the processes within the business, and if you are not starting with the same language, how descriptive can it really be? If a product owner looks at the application code, he should recognise the classes, methods, and variables as the models, workflows, and actions that actually occur.</p>
<p>It is not a one-way street, however. Often the business has overloaded terms, or a multiplicity of terms for the same thing. Work with them to define a glossary of terms that is used everywhere (ubiquitously).</p>
<h3>Bounded context</h3>
<p>The <em>Bounded context</em> is the context in which the <em>Ubiquitous language</em> and the corresponding models are valid. As a developer, it is a common trap to try to reuse code and concepts across contexts. This is a recipe for disaster, since the terms and verbs used to describe a model in one context will likely be similar but not the same. The result is a blurred model that caters for both contexts, adding confusion and inviting changes with unintended consequences. This is especially true when a model is shared across more than one team (strongly consider whether it really is one context).</p>
<h4>Example</h4>
<p>Imagine a Product class in the Logistics domain. For tracking around the warehouse you need a barcode; for shipping you need the packaged dimensions and weight. Now think of a product for display on an e-commerce website. You need photos, a description, and other specs like its actual dimensions unpacked.<br />
Why would a developer need all of this at one time? Why confuse matters? Why would the clients of the code, like a scanner in the warehouse or a customer on the website, need both? If all that is shared is perhaps a name and a SKU, is sharing 2 properties worth coupling different parts of the system? Different teams together?<br />
There are many reasons to keep these models separated based on their context and few reasons to combine them. Yet it is a very common occurrence in development. Why? Code re-use.<br />
Only re-use models if they are indeed the same model.</p>
<h3>Entities</h3>
<p>Entities are the classes that model the domain concepts and have identity. This usually means there is a unique primary key associated with the entity. Remember that modelling in DDD takes us back to the OOP we learned in the textbooks: behavior and data together. This is the antithesis of the usual <a href="https://martinfowler.com/bliki/AnemicDomainModel.html">anemic models</a> found in most software.</p>
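<p>A minimal sketch of an entity (names are illustrative): identity, data, and the behavior that changes that data live together, and equality is based on identity alone.</p>

```csharp
using System;

public class BlogPost
{
    public Guid Id { get; }                  // identity
    public string Title { get; private set; }
    public bool Published { get; private set; }

    public BlogPost(Guid id, string title)
    {
        Id = id;
        Title = title;
    }

    // Behavior lives on the model, not in an external service
    public void Publish()
    {
        if (string.IsNullOrWhiteSpace(Title))
            throw new InvalidOperationException("Cannot publish a post without a title");
        Published = true;
    }

    // Two entities are the same entity if their identity is the same,
    // regardless of the rest of their state
    public override bool Equals(object obj) => obj is BlogPost other && other.Id == Id;
    public override int GetHashCode() => Id.GetHashCode();
}
```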
<h3>Value objects</h3>
<p>Value objects are much like entities except they do not have identity. Money is the quintessential example of a model that shows intent, contains rules, but does not have identity. The important part here is using types to convey meaning as well as place logic along with the data in a very obvious way.</p>
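<p>A Money value object might be sketched like this (an illustrative example, not a production implementation): immutable, equal by value, and carrying its own rules.</p>

```csharp
using System;

public sealed class Money : IEquatable<Money>
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        if (string.IsNullOrWhiteSpace(currency))
            throw new ArgumentException("Currency is required", nameof(currency));
        Amount = amount;
        Currency = currency;
    }

    // A rule that belongs with the data: you cannot add mismatched currencies
    public Money Add(Money other)
    {
        if (other.Currency != Currency)
            throw new InvalidOperationException("Cannot add different currencies");
        return new Money(Amount + other.Amount, Currency);
    }

    // Equality by value, not identity: two Money instances with the same
    // amount and currency are interchangeable
    public bool Equals(Money other) =>
        other != null && Amount == other.Amount && Currency == other.Currency;
    public override bool Equals(object obj) => Equals(obj as Money);
    public override int GetHashCode() => Amount.GetHashCode() ^ Currency.GetHashCode();
}
```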
<h3>Aggregate</h3>
<p>An Aggregate is a hierarchy of objects (Entities and Value objects) that make up a consistency boundary.<br />
Why would we want to set a boundary rather than just reference any object needed?</p>
<p>Minimising associations helps to prevent a reference web. A web is problematic when fetching and reconstituting a hierarchy of objects into memory: lazy loading can quickly get out of hand, or alternatively null references abound and continually need to be checked.</p>
<p>Let us turn the question around. What if the relationships of our object model clearly showed us the effects of change? For example, if the aggregate were the scope of the transaction...</p>
<h4>Aggregate root</h4>
<p>The Aggregate Root is an Entity that all other Entities and Value Objects in the hierarchy hang off. For example, if you have an Order with Order Lines and a Supplier, the <code>OrderRepository</code> will return an Order with all <code>OrderLines</code> and the <code>OrderSupplier</code> populated. It would not be possible to fetch an <code>OrderLine</code> separately, nor an <code>OrderSupplier</code>. If needed, though, you would provide methods on your <code>OrderRepository</code> to fetch an Order by Order Line Id or by Supplier Reference, for example.</p>
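<p>As an illustrative sketch (the types and the naive in-memory implementation are hypothetical), the repository's surface speaks only in terms of the Aggregate Root: every query returns a whole Order, never a bare line or supplier.</p>

```csharp
using System;
using System.Collections.Generic;

public class OrderLine { public Guid Id { get; set; } public string Description { get; set; } }
public class OrderSupplier { public string Reference { get; set; } }

public class Order
{
    public Guid Id { get; set; }
    public List<OrderLine> Lines { get; } = new List<OrderLine>();
    public OrderSupplier Supplier { get; set; }
}

// Every method returns the whole aggregate via its root
public interface IOrderRepository
{
    Order GetById(Guid orderId);
    Order GetByOrderLineId(Guid orderLineId);
    Order GetBySupplierReference(string supplierReference);
    void Save(Order order);
}

// A trivial in-memory implementation, just enough to show the contract
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<Guid, Order> _orders = new Dictionary<Guid, Order>();

    public void Save(Order order) => _orders[order.Id] = order;
    public Order GetById(Guid orderId) => _orders[orderId];

    public Order GetByOrderLineId(Guid orderLineId)
    {
        foreach (var order in _orders.Values)
            foreach (var line in order.Lines)
                if (line.Id == orderLineId) return order;
        return null;
    }

    public Order GetBySupplierReference(string supplierReference)
    {
        foreach (var order in _orders.Values)
            if (order.Supplier != null && order.Supplier.Reference == supplierReference) return order;
        return null;
    }
}
```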
<h4>Points to keep in mind</h4>
<ul>
<li>Technical difficulties implementing an aggregate (like transaction issues persisting it) are usually indicative of a poorly chosen model. Put more effort into refining the model rather than trying to fix a modelling problem with a technical implementation.</li>
<li>Access to objects from outside the aggregate must occur through the Aggregate Root.</li>
<li>Aggregates are always constructed in a consistent state.</li>
<li>The logic to disallow inconsistent state, or at least check consistency, usually lives within the aggregate.</li>
<li>It is better to encapsulate changes to state through method calls rather than directly mutating properties. This shows intent and adds an extra layer of indirection, allowing implementation changes without changing the API.</li>
</ul>
<h3>Factories</h3>
<p>Since an aggregate should always be in a consistent state, it is important that it is handed to the user already consistent when constructed. Factories provide a way to <strong>ensure that new instances of an aggregate always start in a consistent state</strong>.</p>
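<p>One common way to sketch this (illustrative names, not a prescription) is a static factory method paired with a private constructor, so an instance cannot exist without passing the invariant checks.</p>

```csharp
using System;

public class Order
{
    public Guid Id { get; }
    public string CustomerEmail { get; }

    // Private constructor: the factory method is the only way in
    private Order(Guid id, string customerEmail)
    {
        Id = id;
        CustomerEmail = customerEmail;
    }

    // The factory enforces the invariants up front, so every Order
    // that exists is valid
    public static Order Create(string customerEmail)
    {
        if (string.IsNullOrWhiteSpace(customerEmail) || !customerEmail.Contains("@"))
            throw new ArgumentException("A valid customer email is required", nameof(customerEmail));
        return new Order(Guid.NewGuid(), customerEmail);
    }
}
```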
<h3>Repositories</h3>
<p>Repositories protect us from taking a data-centric view of our code. They allow us to <strong>persist and retrieve aggregates</strong> without dealing directly with the underlying persistence. It is, however, important for developers to at least be aware of the underlying implementation so as not to abuse the repository in terms of performance or scoping.</p>
<p>The abstraction of the repository is contained within the domain. This abstraction knows about the domain models within that context. More specifically, it knows about the aggregate that it is returning. A repository returns an Entity (or collection of Entities) and the aggregate for which that Entity is the Aggregate Root.</p>
<p>The implementation of the repository abstraction does not reside in the domain. It is an infrastructural concern and can change. What is important is that the repository handles mapping from however the data is persisted into a fully hydrated and consistent aggregate.</p>
<p>The developer is free to add multiple query methods to the repository but the return results are always in terms of the Aggregate Root.</p>
<h4>Points to keep in mind</h4>
<ul>
<li>The repository abstraction is part of the domain</li>
<li>The repository implementation is NOT part of the domain</li>
<li>The repository exposes data in terms of that repository's Aggregate Root</li>
<li>Query methods should use the domain language</li>
<li>If queries become complex, look to encapsulate them in query objects using the <a href="https://www.martinfowler.com/apsupp/spec.pdf">Specification</a> pattern</li>
<li>Transactions should be controlled by the client code</li>
</ul>
<h3>Domain Service</h3>
<blockquote>
<p>Sometimes, it just isn't a thing.</p>
</blockquote>
<p>When modelling, sometimes an operation or workflow doesn't fit into the current model. Usually this just means you are not accurately capturing the model you need to represent the business problem, but every now and again it is valid to place the operation in a domain service. If placing a workflow in your model objects conflates them, maybe a service is the way to go. Services are represented by verbs rather than nouns and speak to what they DO. An important distinction from model objects is that they are completely stateless. A service takes various other domain objects and executes some action, possibly returning a result.</p>
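<p>A classic illustration, sketched with hypothetical types: transferring funds spans two Account entities and belongs to neither, so it goes in a stateless, verb-named service.</p>

```csharp
using System;

public class Account
{
    public Guid Id { get; }
    public decimal Balance { get; private set; }

    public Account(Guid id, decimal openingBalance)
    {
        Id = id;
        Balance = openingBalance;
    }

    public void Withdraw(decimal amount)
    {
        if (amount > Balance) throw new InvalidOperationException("Insufficient funds");
        Balance -= amount;
    }

    public void Deposit(decimal amount) => Balance += amount;
}

// The operation spans two entities and belongs to neither, so it lives
// in a stateless service named for the activity it performs
public class FundsTransferService
{
    public void Transfer(Account from, Account to, decimal amount)
    {
        from.Withdraw(amount);
        to.Deposit(amount);
    }
}
```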
<h4>Points to keep in mind</h4>
<ul>
<li>Don't give up too quickly trying to fit an operation into the model (consider a new concept that encapsulates entities and value objects... maybe this is the actual aggregate root?)</li>
<li>The Service is named after an activity (verb not noun)</li>
<li>Services are stateless</li>
<li>Services still use the Ubiquitous Language</li>
</ul>
<h3>Application Service</h3>
<p>The application service is the entry point for a use-case. It calls off to the domain for execution, calls any other services (like notifications), and returns. This could be something like a WebApi controller in .NET, or you could choose to explicitly create an application service.</p>
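<p>A sketch of what such a thin layer can look like (the collaborator interfaces and the fakes are hypothetical): one method per use-case, coordinating the domain, persistence, and an infrastructure service, with no business rules of its own.</p>

```csharp
using System;

public class Order { public Guid Id { get; } = Guid.NewGuid(); }

// Illustrative collaborators owned by the domain and infrastructure
public interface IOrderRepository { void Save(Order order); }
public interface INotificationService { void OrderPlaced(Guid orderId); }

public class PlaceOrderService
{
    private readonly IOrderRepository _orders;
    private readonly INotificationService _notifications;

    public PlaceOrderService(IOrderRepository orders, INotificationService notifications)
    {
        _orders = orders;
        _notifications = notifications;
    }

    // One method per use-case: receive the request, let the domain and
    // infrastructure do the real work, return a result
    public Guid PlaceOrder()
    {
        var order = new Order();
        _orders.Save(order);
        _notifications.OrderPlaced(order.Id);
        return order.Id;
    }
}

// Trivial fakes, just enough to exercise the service
public class InMemoryOrderRepository : IOrderRepository
{
    public Order LastSaved;
    public void Save(Order order) => LastSaved = order;
}

public class FakeNotifications : INotificationService
{
    public Guid? Notified;
    public void OrderPlaced(Guid orderId) => Notified = orderId;
}
```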
<h4>Points to keep in mind</h4>
<ul>
<li>A thin layer that receives a request and passes it to the domain to process</li>
<li>Think use-case</li>
<li>A good place to handle transactions</li>
<li>Can call out to Infrastructure Services</li>
</ul>
<h3>Infrastructure Service</h3>
<p>This is a technical implementation of something that performs a task such as sending notifications (IM, email, etc.), putting messages on a bus, or retrieving data from another system.</p>
<h3>Anti-corruption layer (ACL)</h3>
<p>An ACL is at the very least a thin translation layer between two bounded contexts, even if both contexts are well defined and share similar models. The models in one context should not influence the models in another, and without a layer in between to translate between the two, corruption will creep in. If the external system a bounded context is talking to is a legacy system with a very poor model, it is even more likely to corrupt unless the ACL acts as a strong buffer.</p>
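<p>A minimal sketch of an ACL as a translator (the legacy record shape is invented for illustration): the awkward external model is converted at the boundary and never leaks further into our context.</p>

```csharp
// Shape dictated by a hypothetical legacy system
public class LegacyCustomerRecord
{
    public string CUST_NM { get; set; }
    public int STATUS_CD { get; set; }
}

// Our context's model, free of the legacy naming and encoding
public class Customer
{
    public string Name { get; }
    public bool IsActive { get; }

    public Customer(string name, bool isActive)
    {
        Name = name;
        IsActive = isActive;
    }
}

// The translation lives in one place; the legacy shape stops here
public static class CustomerTranslator
{
    public static Customer ToCustomer(LegacyCustomerRecord record) =>
        new Customer(record.CUST_NM.Trim(), record.STATUS_CD == 1);
}
```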
<h3>Modules</h3>
<p>Modules are simply packages or assemblies: whatever your technology's means of bundling built code is.</p>
<h3>Shared Kernel</h3>
<p>Sometimes a model needs to be shared across multiple Bounded Contexts. If so, a Shared Kernel can be created, but in a lot of cases the coupling created between the contexts and the teams is not worth it.</p>
<h3>Clients</h3>
<p>This is not really a term from the <em>Blue Book</em> (that I remember) but I find it useful when talking about DDD and Clean Architecture. Clients are the callers of the application layer. These could be another automated application or service, or an application being driven by a user. Regardless, clients execute the use-cases defined in the application layer.</p>
<h3>Further reading</h3>
<ol>
<li><a href="https://lostechies.com/jimmybogard/2010/02/04/strengthening-your-domain-a-primer/">Strengthening your domain</a></li>
<li><a href="https://martinfowler.com/tags/domain%20driven%20design.html">Domain-Driven Design</a></li>
<li><a href="http://gorodinski.com/blog/2012/04/14/services-in-domain-driven-design-ddd/">Services in Domain-Driven Design</a></li>
<li><a href="https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215/ref=as_sl_pc_tf_mfw?&linkCode=wey&tag=wwwnervstucoz-20">Domain-Driven Design: Tackling Complexity in the Heart of Software</a></li>
<li><a href="https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=0321834577&pd_rd_r=P6PNCC27GC5B7Q513JJ4&pd_rd_w=6neVY&pd_rd_wg=Rn8gy&psc=1&refRID=P6PNCC27GC5B7Q513JJ4">Implementing Domain-Driven Design</a></li>
<li><a href="https://www.amazon.com/Applying-Domain-Driven-Design-Patterns-Examples/dp/0321268202/ref=as_sl_pc_tf_mfw?&linkCode=wey&tag=wwwnervstucoz-20">Applying Domain-Driven Design Patterns Examples</a></li>
</ol>https://devonburriss.me/vscode-tasks/Visual Studio Code Tasks2017-02-11T00:00:00+00:00Devon Burrisshttps://devonburriss.me/vscode-tasks/<p>I tend to use <a href="https://code.visualstudio.com/">Visual Studio Code</a> for tasks and languages I don't currently use on a day-to-day basis. Over the last few weeks that has included Java and Delphi. Then today I was trying to launch my blog from VS Code and ran into an issue because Pretzel listens for a console key. The only fix I could find for this was to launch a new PowerShell window. I thought this was as good a time as any to post a few of these tasks.</p>
<!--more-->
<h1>Tasks</h1>
<p>Tasks in VS Code allow you to run commands that execute and usually feedback some status. Tasks are configured in the file <em>/.vscode/tasks.json</em> from the workspace root. Hit <strong>Ctrl+Shift+P</strong> and type <strong>Tasks:C</strong> and hit enter or click 'Tasks: Configure Task Runner'. If the file does not exist it will be created.</p>
<h2>Compiling a Java application</h2>
<p>This command uses <code>javac</code> to compile the Java application and will report on compile errors. Note that this uses a single task (others in the post have multiple tasks in the file). It assumes <code>javac</code> is on your PATH. I also have the <strong>Language Support for Java</strong> extension from Red Hat installed in VS Code.</p>
<pre><code class="language-json">{
"version": "0.1.0",
"command": "javac",
"showOutput": "silent",
"isShellCommand": true,
"args": ["-d","${workspaceRoot}\\bin","${workspaceRoot}\\src\\*.java"],
"problemMatcher": {
"owner": "external",
"fileLocation": ["absolute"],
"pattern": [
{
"regexp": "^(.+\\.java):(\\d):(?:\\s+(error)):(?:\\s+(.*))$",
"file": 1,
"location": 2,
"severity": 3,
"message": 4
}]
}
}
</code></pre>
<h2>Control Maven for a Java project</h2>
<p>These control different Maven phases. Note that on the <code>exec</code> task you need to change the <code>me.devonburriss.App</code> to the entrypoint of your application. It assumes <code>mvn</code> is on your PATH. Not needed for this but note that I have the <strong>Language Support for Java</strong> extension from Red Hat installed.</p>
<pre><code class="language-json">{
"version": "0.1.0",
"command": "mvn",
"isShellCommand": true,
"showOutput": "always",
"suppressTaskName": true,
"echoCommand": true,
"tasks": [
{
"taskName": "verify",
"args": ["-B", "verify"],
"isBuildCommand": true
},
{
"taskName": "test",
"args": ["-B", "test"],
"isTestCommand": true
},
{
"taskName": "clean install",
"args": ["clean install -U"]
},
{
"taskName": "exec",
"args": ["-B", "exec:java", "-D", "exec.mainClass=\"me.devonburriss.App\""]
}
]
}
</code></pre>
<h2>Delphi (Free Pascal) Build</h2>
<p>This is using the Free Pascal compiler to compile Delphi code. It assumes that <code>fpc</code> is on your PATH. You can get it <a href="http://www.freepascal.org/download.var">here</a>.<br />
This only compiles a single unit, not a complete project. Not needed for this to work but for syntax highlighting I have the OmniPascal extension installed.</p>
<pre><code class="language-json">{
"version": "0.1.0",
"command": "fpc",
"isShellCommand": true,
"showOutput": "always",
"suppressTaskName": true,
"echoCommand": true,
"tasks": [
{
"taskName": "Compile Unit",
"args": ["-Sd", "${file}"],
"isBuildCommand": true
}
]
}
</code></pre>
<h2>Powershell, Cake, Pretzel blog Build</h2>
<p>This is one I use to call PowerShell, which executes my Cake build and runs this blog locally. The targets for that are Bake and Taste (from Pretzel). See <a href="http://devonburriss.me/pretezel-blog-appveyor-deployment/">this post</a> for details on that.</p>
<p>I use a <em>run.ps1</em> file because I needed to launch a new Powershell window so Pretzel can wait and watch for changes.</p>
<pre><code class="language-json">{
"version": "0.1.0",
"tasks": [
{
"taskName": "Build",
"command": "powershell",
"isShellCommand": true,
"args": [".\\pretzel.ps1"],
"showOutput": "always",
"isBuildCommand": true
},
{
"taskName": "Run",
"command": "powershell",
"isShellCommand": false,
"args": [".\\run.ps1"],
"showOutput": "always",
"isTestCommand": true
}
]
}
</code></pre>
<p>Just a note that I have the Powershell extension from Microsoft for VS Code installed. Not needed for the task to run but it gives nice support for ps1 files.</p>
<h2>Extra: F5 Launch of Pretzel Blog</h2>
<p>If you want to use <strong>F5</strong> to run the blog you can press <strong>Ctrl+Shift+P</strong> and type <strong>launch</strong>. If it doesn't exist a <em>launch.json</em> file will be created.</p>
<pre><code class="language-json">{
"version": "0.2.0",
"configurations": [
{
"type": "PowerShell",
"request": "launch",
"name": "PowerShell Launch (Script)",
"script": "${workspaceRoot}/run.ps1",
"args": [],
"cwd": "${workspaceRoot}"
}
]
}
</code></pre>
<p>Where my <em>run.ps1</em> looks like this:</p>
<pre><code class="language-powershell">Start-Process powershell ".\pretzel.ps1 -target Taste -Wait"
</code></pre>
<h1>Conclusion</h1>
<p>Visual Studio Code is a great editor and has plenty of extension points. If you have any great tips I would love to hear about them in the comments.</p>https://devonburriss.me/pretezel-blog-appveyor-deployment/Deploying a Pretzel generated static site to Github Pages using Appveyor2017-01-31T00:00:00+00:00Devon Burrisshttps://devonburriss.me/pretezel-blog-appveyor-deployment/<h1>Background</h1>
<p>I was using <a href="https://pages.github.com/">Github Pages</a> and <a href="https://jekyllrb.com/">Jekyll</a> to build and host this blog up until a few days ago.
Getting Jekyll running on Windows (more specifically Ruby) is a gamble, and running it in a Docker container just led me into Ruby gem issues with my theme.<br />
Finally I decided to stick with the statically generated site but move away from Jekyll. Enter Pretzel...</p>
<!--more-->
<h2>Github Pages</h2>
<p>Github Pages allows you to host static websites and comes in 2 flavours. It natively supports building Jekyll source into a static site and deploying it.</p>
<h3>Organisation/User site</h3>
<p>This one runs off a separate repository with the special convention-based name of <code>&lt;username&gt;.github.io</code> and hosts any static content (or Jekyll) that is committed to the <strong>master</strong> branch.</p>
<h3>Repository site</h3>
<p>These allow a website to be hosted per repository. Think documentation and marketing site for the product being built in that repository. These are built from a special orphaned branch named <strong>gh-pages</strong> usually but can be set to <strong>master</strong> or a <code>/docs</code> folder.</p>
<h2>Pretzel</h2>
<p><a href="https://github.com/Code52/pretzel">Pretzel</a> is a .NET based tool for generating a static, blog aware site. If you have used Jekyll, it is that without all the gem hell.<br />
Installing it locally is as easy as: <code>choco install pretzel</code></p>
<blockquote>
<p>Note that I used a plugin called <a href="https://github.com/k94ll13nn3/Pretzel.Categories">Pretzel.Categories</a> to provide tag and category pages. You may need to explicitly add the dll to your repository, as your global .gitignore may specify *.dll: <code>git add _plugins/Pretzel.Categories.dll -f</code></p>
</blockquote>
<h1>Approach</h1>
<p>Since I am no longer using Jekyll, Github pages can no longer build my site so I need to do that outside. I wanted to keep the same workflow of just being able to commit my changes and the content on the site is updated.</p>
<p>The solution needed to satisfy the following:</p>
<ol>
<li>develop locally and view my changes before pushing the commit</li>
<li>only 1 repository that represented my blog</li>
<li>a commit should trigger a build and deployment of the updated content</li>
</ol>
<h1>Solution</h1>
<p>Let's tackle each of these requirements one at a time. First off, create a branch called <strong>source</strong>; <strong>master</strong> will be reserved for our auto-generated content (we will get to this at the end of the post).<br />
<code>git checkout --orphan source</code></p>
<h2>Local development</h2>
<p>For local development I have a task set up in a <a href="http://cakebuild.net/">Cake build</a> for building and running the Pretzel tool. This wouldn't give too much benefit over just running the 2 commands needed from the command line.
Which commands? Well, Pretzel gives us a few. The 2 important ones for us are:
<code>pretzel.exe bake</code> - this builds our static website and, since we provided no output folder, puts it in a folder <em>_sites/</em>. This is important to remember later.<br />
<code>pretzel.exe taste --port 5001</code> - this serves up the site and launches it in the browser so you can admire your work</p>
<p>Why do I put these 2 simple commands in a build script? Well, I have a transformation against the <em>_config.yml</em> that swaps between my domain name and <em>localhost:5001</em> depending on whether I am building for Debug or Release. It always uses localhost when I am tasting, since I don't use Pretzel to serve the files.</p>
<p>If you are following along converting your own blog and have not used Cake, don't worry, it is super simple.</p>
<ol>
<li>Install the Powershell build script: <code>Invoke-WebRequest http://cakebuild.net/download/bootstrapper/windows -OutFile pretzel.ps1</code></li>
<li>This creates the bootstrapper ps1 file, usually named <em>build.ps1</em>, but we specified <em>pretzel.ps1</em>, so on line 43 change <em>build.cake</em> to <em>pretzel.cake</em></li>
<li>Create a file called <em>pretzel.cake</em> that looks like this:</li>
</ol>
<blockquote>
<p>Updated: 2017-03-19 with new <em>Pretzel.exe</em> install path</p>
</blockquote>
<script src="https://gist.github.com/dburriss/c7871549c2788c0dca507a2d24c683ed.js"></script>
<p>With this setup we can build using <code>.\pretzel.ps1</code> and preview locally with <code>.\pretzel.ps1 -target Taste</code></p>
<blockquote>
<p>If you want to check-in what you have so far delete the <em>_sites/</em> folder before adding the file to source control on the branch <em>source</em>.</p>
</blockquote>
<h2>Single repository</h2>
<p>This one was a bit of a head-scratcher for me, but then I remembered Git submodules. These allow you to map a folder in your repository to another repository. What I thought I would try was to create an orphaned branch in my blog repository containing the Pretzel source, and link the <em>_sites/</em> folder to the <strong>master</strong> branch, which is where Github Pages expects the static content if you are not using Jekyll.</p>
<h3>Some quick housekeeping</h3>
<p>If you have run the Pretzel build but have not added anything to the Github repository (even locally) then just delete the <em>_sites/</em> folder before continuing.<br />
If you have checked in the <em>_sites/</em> folder run the following git command to remove it.</p>
<p><code>git rm -r _sites</code><br />
<code>git commit -m "Remove _sites (preparing for submodule)"</code></p>
<blockquote>
<p>you might need to remove from the index as well with <code>git rm -r --cached _sites</code></p>
</blockquote>
<h3>Creating the submodule</h3>
<p>Next we are going to create the submodule that links back to the <strong>master</strong> branch where the static content is expected.</p>
<blockquote>
<p>Note that the following command uses https and not git protocol. This is important and you will get an error later in the CD process if you use git protocol.</p>
</blockquote>
<p><code>git submodule add -b master https://github.com/dburriss/dburriss.github.io.git _site</code><br />
<code>git commit -m "_sites submodule"</code></p>
<h2>Continuous Delivery</h2>
<p>I use AppVeyor to pick up changes to the <strong>source</strong> branch. It uses Chocolatey to install Pretzel. It then uses Pretzel to generate the static site into the <em>_sites/</em> folder.<br />
The <em>_sites/</em> folder, you will remember, is actually a submodule linked back to the <strong>master</strong> branch of the same repository. We will push the generated changes to <strong>master</strong>, thus updating the blog with the latest content.</p>
<p>Place the following <em>appveyor.yml</em> file in the root of your <strong>source</strong> branch.<br />
The only thing you will need to change in the <em>appveyor.yml</em> is the url for your repository and the access token.</p>
<p>You can get an access token in Github by:</p>
<h3>Github token</h3>
<ol>
<li>Profile pic dropdown top right</li>
<li>Settings</li>
<li><em>Personal access tokens</em> at the bottom of the left menu</li>
</ol>
<p>See <a href="https://help.github.com/articles/creating-an-access-token-for-command-line-use/">here</a> for detailed instructions.</p>
<h3>Encrypt the token</h3>
<ol>
<li>Next in AppVeyor click on the dropdown on your username on the top right</li>
<li>Click Encrypt data</li>
<li>Paste the Github token in and press Encrypt</li>
<li>Copy the result into the <em>appveyor.yml</em> on line 7</li>
</ol>
<script src="https://gist.github.com/dburriss/66b4809c5e534481bdc4426c1d430765.js"></script>
<h2>Conclusion</h2>
<p>And there we have it! We can commit to <strong>source</strong> and the generated changes are committed to <strong>master</strong>.<br />
Feel free to copy my blog at https://github.com/dburriss/dburriss.github.io</p>
<p>Please leave a comment if you found this useful or have any improvements.</p>https://devonburriss.me/asp-net-5-tips-urlhelper/ASP.NET 5 Tips: UrlHelper2016-01-18T00:00:00+00:00Devon Burrisshttps://devonburriss.me/asp-net-5-tips-urlhelper/<blockquote>
<p>Note that this is specific to the upcoming RC 2 using the dotnet CLI. Currently in RC 1 this is not an issue.</p>
</blockquote>
<p>So I was messing around with <a href="https://github.com/davidfowl/dotnetcli-aspnet5">David Fowl's repository</a> that makes use of the new RC 2 bits that run on the new <a href="https://github.com/dotnet/cli">dotnet CLI</a>.</p>
<p>Everything was fine until I tried to create a TagHelper that makes use of <em>IUrlHelper</em>.
In RC 1 <em>IUrlHelper</em> is registered automatically with the DI system but apparently not in RC 2. After much searching I found the following <a href="https://github.com/aspnet/Mvc/commit/9fc3a800562c866850d7c795cf24db7fa0354af6">commit</a> which explained the change.</p>
<!--more-->
<p>So what follows is how I got an <em>IUrlHelper</em> into my TagHelper.</p>
<p>It seems we should instead make use of <em>IUrlHelperFactory</em> to get an instance of <em>IUrlHelper</em>.</p>
<p>In <strong>Startup.cs</strong> service configuration I register <em>IActionContextAccessor</em> and <em>IUrlHelperFactory</em>:</p>
<pre><code class="language-csharp">
public void ConfigureServices(IServiceCollection services)
{
services.AddSingleton<IActionContextAccessor, ActionContextAccessor>();
services.AddSingleton<IUrlHelperFactory, UrlHelperFactory>();
services.AddMvc();
}
</code></pre>
<p>Then I inject <em>IUrlHelperFactory</em> into the TagHelper constructor and use the factory to create a new instance of a <em>IUrlHelper</em>:</p>
<pre><code class="language-csharp">
public class EmailTagHelper : TagHelper
{
private readonly IUrlHelper _urlHelper;
public EmailTagHelper(IUrlHelperFactory urlHelperFactory, IActionContextAccessor actionContextAccessor)
{
_urlHelper = urlHelperFactory.GetUrlHelper(actionContextAccessor.ActionContext);
}
//process override here
}
</code></pre>
<p>I am guessing that this article will only be useful next month when RC 2 hits but it was great to see what is coming. I am quite liking the new CLI and with a bit of digging I have managed to get most things working, so the team seems to be making great progress toward RC 2.
Please let me know below if you found this useful... or if things change :)</p>https://devonburriss.me/asp-net-5-tips-tempdata/ASP.NET 5 Tips: TempData2016-01-17T00:00:00+00:00Devon Burrisshttps://devonburriss.me/asp-net-5-tips-tempdata/<blockquote>
<p>NOTE: Handling TempData and Session is made easy with extension methods in the <a href="https://www.nuget.org/packages/BetterSession.AspNet.Mvc/">BetterSession</a> Nuget package.</p>
</blockquote>
<p>ASP.NET 5 is designed to be configurable. It starts out with almost nothing and you choose what you need. In previous versions of MVC we got TempData out of the box. Not so with the new iteration.</p>
<p><img src="/img/posts/2016/footprint-resized.jpg" alt="bridge cables" /></p>
<!--more-->
<p>So to enable TempData for MVC you need sessions.
In <strong>project.json</strong> add the following lines to <em>dependencies</em></p>
<pre><code class="language-csharp">
"Microsoft.AspNet.Session": "1.0.0-*",
"Microsoft.Extensions.Caching.Memory": "1.0.0-*"
</code></pre>
<p>In <strong>Startup.cs</strong> the configuration of your services will need the following:</p>
<pre><code class="language-csharp">
public void ConfigureServices(IServiceCollection services)
{
services.AddCaching();
//this is the NB line for this post
services.AddSession(o =>
{
o.IdleTimeout = TimeSpan.FromSeconds(3600);
});
services.AddMvc();
}
</code></pre>
<p>While the app builder configuration will be something like so:</p>
<pre><code class="language-csharp">
public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
{
loggerFactory.AddConsole(Configuration.GetSection("Logging"));
loggerFactory.AddDebug();
//this is the NB line for this post
app.UseSession();
app.UseIISPlatformHandler();
app.UseStaticFiles();
app.UseMvc();
}
</code></pre>
<p>Then accessing TempData is done through the dependency injection/service locator:</p>
<pre><code class="language-csharp">
public class TempController : Controller
{
private const string key = "name";
private readonly ITempDataDictionary _tempData;
public TempController(ITempDataDictionary tempData)
{
this._tempData = tempData;
}
public IActionResult Index()
{
_tempData[key] = "Devon";
return RedirectToAction("Carry");
}
public IActionResult Carry()
{
return View("Index", _tempData[key]);
}
}
</code></pre>
<p>OR</p>
<pre><code class="language-csharp">
var tempData = HttpContext.RequestServices.GetRequiredService<ITempDataDictionary>();
</code></pre>
<blockquote>
<p>NOTE 1: When using ITempDataDictionary in a custom <strong>ActionResult</strong> I needed to mark the class with <strong>IKeepTempDataResult</strong> for it to work.</p>
</blockquote>
<blockquote>
<p>NOTE 2: I am not sure if this is going to change, but currently the implementation of ITempDataDictionary only accepts primitive values (and string). I got around this by serializing to and from JSON. If you want to do the same, you might find these extension methods useful.</p>
</blockquote>
<pre><code class="language-csharp">
public static void SetAsJson<T>(this ITempDataDictionary tempData, string key, T data)
{
    var sData = JsonConvert.SerializeObject(data);
    tempData[key] = sData;
}

public static T GetFromJson<T>(this ITempDataDictionary tempData, string key)
{
    if (tempData.ContainsKey(key))
    {
        var v = tempData[key];
        if (v is T)
        {
            return (T)v;
        }
        if (v is string && typeof(T) != typeof(string))
        {
            var obj = JsonConvert.DeserializeObject<T>((string)v);
            return obj;
        }
    }
    return default(T);
}
</code></pre>
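<p>To make the round-trip concrete, here is a minimal, self-contained sketch of the same pattern. Note the assumptions: it uses a plain <code>Dictionary&lt;string, object&gt;</code> as a stand-in for <code>ITempDataDictionary</code>, and <code>System.Text.Json</code> rather than the Newtonsoft.Json <code>JsonConvert</code> used above (so it runs on modern .NET without extra packages); the <code>Order</code> record is purely illustrative.</p>

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

public static class TempDataJsonDemo
{
    // Illustrative payload type; complex values must round-trip through JSON.
    public record Order(string Id, int Quantity);

    // Stand-in for the SetAsJson extension method above.
    public static void SetAsJson<T>(IDictionary<string, object> tempData, string key, T data)
    {
        tempData[key] = JsonSerializer.Serialize(data);
    }

    // Stand-in for the GetFromJson extension method above.
    public static T GetFromJson<T>(IDictionary<string, object> tempData, string key)
    {
        if (tempData.TryGetValue(key, out var v))
        {
            if (v is T typed) return typed;
            if (v is string s && typeof(T) != typeof(string))
                return JsonSerializer.Deserialize<T>(s);
        }
        return default;
    }

    public static void Main()
    {
        var tempData = new Dictionary<string, object>();
        SetAsJson(tempData, "order", new Order("A-1", 3));
        Order order = GetFromJson<Order>(tempData, "order");
        Console.WriteLine(order.Id + " x" + order.Quantity); // prints "A-1 x3"
    }
}
```

<p>The stored value is always a JSON string, so the type-check in <code>GetFromJson</code> only deserializes when the caller asks for something other than <code>string</code>.</p>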
<p>I hope you (and future me) find this post useful. I am going to try to blog little things like this as I work more with ASP.NET 5. Please let me know in the comments below if you found it useful or if I missed anything, and let me know if there are other topics you would like me to cover.</p>https://devonburriss.me/aspnet-vsonline-ci/ASP.NET 5 CI from Git to Azure without Visual Studio2015-09-10T00:00:00+00:00Devon Burrisshttps://devonburriss.me/aspnet-vsonline-ci/<blockquote>
<p>Using Visual Studio Online Build Services for a MSBuild/xproj free deployment.</p>
</blockquote>
<p>So my laptop was in for repairs and I decided to dust off my old MacBook Pro. I upgraded to Yosemite, downloaded <a href="https://code.visualstudio.com/">VSCode</a> and ran through the <a href="http://docs.asp.net/en/latest/getting-started/installing-on-mac.html">setup for DNX</a> on Mac. Very quickly I started to wonder about deploying to <a href="http://azure.microsoft.com/en-us/get-started/">Azure</a>.</p>
<!--more-->
<p>I had previously used the steps described <a href="https://msdn.microsoft.com/Library/vs/alm/Build/azure/deploy-aspnet5">here</a> to deploy a Visual Studio 2015 ASP.NET 5 project from Git but that relied on an xproj file for publishing.</p>
<p>The other option is publishing to Azure via source control as described <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-publish-source-control/">here</a>.</p>
<p>I wanted something similar to the first option, but for a solution created with VSCode and the aspnet <a href="http://yeoman.io/">Yeoman</a> <a href="https://www.npmjs.com/package/generator-aspnet">generator</a>, so what follows is what I have come up with so far.</p>
<p><em>NOTE: The project structure could use some work but the scripts work.</em></p>
<p><img src="/img/posts/2015/guy-on-mac_800.jpg" alt="guy on mac" /></p>
<h3>Step 1: Project Setup</h3>
<p>The publish script uses the <code>global.json</code> file to determine the version and runtime. The root also contains the <code>Publish.ps1</code> and <code>Upload.ps1</code> PowerShell scripts.
<a href="https://github.com/dburriss/vsfree-azure-deploy/tree/master/example">Example</a></p>
<h4>Global</h4>
<script src="https://gist.github.com/dburriss/155c693de8f534bd1536.js"></script>
<p>Set up the <code>global.json</code> file with the properties needed for the publish.</p>
<h4>Publish script</h4>
<script src="https://gist.github.com/dburriss/ea01dad652e00b480a7a.js"></script>
<p>This script does a couple of things on the way to publishing a folder for deployment.</p>
<ol>
<li>Bootstraps DNVM into the Powershell session</li>
<li>Installs DNX on the build host</li>
<li>Restores the packages for the project using <code>dnu restore</code></li>
<li>Packages the project using <code>dnu package</code></li>
<li>Copies the runtime folder into the package (I think <code>dnu restore</code> is supposed to do this, but at the time of writing it did not)</li>
<li>Sets the <strong>web.config</strong> DNX version and runtime</li>
</ol>
<h4>Upload Script</h4>
<p>This is a script found at <a href="https://gist.github.com/davideicardi/a8247230515177901e57">davideicardi/kuduSiteUpload.ps1</a>, which worked like a charm.
<strong>UPDATE:</strong> <em>I changed this script to stop the website before upload and start it again afterwards, as deployment was failing regularly with a 500 server error. My guess is locked files.</em></p>
<script src="https://gist.github.com/dburriss/af2e1593543b36b1ee23.js"></script>
<h4>VSOnline Build Setup</h4>
<h5>Step 1: Publish</h5>
<p><img src="/img/posts/2015/Build1.png" alt="Build step 1 - Publish" />
Firstly we add a PowerShell script step and point it at our publish script:</p>
<ul>
<li>Script filename: site/Publish.ps1</li>
<li>Arguments: -sourceDir $(Build.SourcesDirectory)\pub</li>
</ul>
<h5>Step 2: Upload</h5>
<p><img src="/img/posts/2015/Build2.png" alt="Build step 1 - Upload" />
Next we set up the upload script by creating an <strong>Azure PowerShell</strong> step:</p>
<ul>
<li>Azure Subscription: If you do not have one set up, click Manage to do so</li>
<li>Script Path: site/Upload.ps1</li>
<li>Arguments: -websiteName <em>MyWebSite</em> -sourceDir $(Build.SourcesDirectory)\pub -destinationPath /site</li>
</ul>
<p>Where <em>MyWebSite</em> is the name of the website in Azure.</p>
<p>Hit <strong>Save</strong> to save the build configuration.</p>
<h4>Step 3: Setup CI (optional)</h4>
<p>If you want CI you can go to the <strong>Triggers</strong> tab and set a build to trigger on commit to a branch.</p>
<ul>
<li>Select <strong>CI</strong>.</li>
<li>Select <strong>Batch changes</strong></li>
<li>I filtered on <strong>master</strong> branch. Choose whatever is applicable.</li>
</ul>
<p>Hit the <strong>Save</strong> button.</p>
<h4>Step 4: Test your Build</h4>
<p>Now you can either hit <strong>Queue build...</strong> or, if you set up CI, push to the trigger-enabled branch. Note that a triggered build can sometimes take a few minutes to be queued, and it takes almost 5 minutes to build and deploy even a small test site.</p>
<h3>Conclusion</h3>
<p>That's it for deploying to Azure with a solution developed on OS X (or Linux). Just two scripts, really.
I hope this helps someone. Please leave a comment below if you have any questions or suggestions, or if you just want to say it helped :)</p>https://devonburriss.me/installing-docker-on-hyper-v/Installing Docker on Hyper-V2015-03-07T00:00:00+00:00Devon Burrisshttps://devonburriss.me/installing-docker-on-hyper-v/<p>To be clear, Docker containers do not currently run on Windows. Microsoft is working with Docker to release something with feature parity, but we will be lucky if we see that in 2015 (<a href="http://weblogs.asp.net/scottgu/docker-and-microsoft-integrating-docker-with-windows-server-and-microsoft-azure">blogged by Scott Gu</a>). So although there is a Windows client for managing Docker containers, we will need an Ubuntu install: <a href="http://devonburriss.me/installing-ubuntu-on-hyper-v/">Installing Ubuntu on Hyper-V</a></p>
<p><img src="/img/posts/2015/large_h.png" alt="Docker logo" /></p>
<!--more-->
<h1>Installing Docker</h1>
<p>Most of this is straight from the <a href="https://docs.docker.com/installation/ubuntulinux/">Docker documentation</a>, but I ran into a few problems that I think may be due to running on Hyper-V. I also wanted a quick reference for the future.</p>
<p>First let's update our package repositories:
<code>sudo apt-get update</code></p>
<p>Currently the Docker docs mention pulling from their private repos to get the latest version, but that was for Ubuntu 14.04. I noticed the Ubuntu 14.10 repos contain Docker 1.2, which at the time of writing is good enough for me.</p>
<p>So let's install Docker:
<code>sudo apt-get install docker.io</code></p>
<p>Then, to get bash completion, we can type:
<code>source /etc/bash_completion.d/docker.io</code>
No <strong>sudo</strong> needed. Alternatively, just reboot with:
<code>sudo reboot</code></p>
<p>Let's test our Docker install:
<code>sudo docker version</code>
<code>sudo docker info</code></p>
<p>This displays version number of the components and some basic info on the install respectively.</p>
<p>The info will contain the line <strong>WARNING: No swap limit support</strong>, so let's fix that.
<code>sudo nano /etc/default/grub</code></p>
<p>Find the line <strong>GRUB_CMDLINE_LINUX</strong> and edit it:
<code>GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"</code> then save and exit nano.</p>
<p>We need to update Grub and reboot.
<code>sudo update-grub</code></p>
<p><code>sudo reboot</code></p>
<p>Now running <code>sudo docker info</code> you will see the warning is gone.</p>
<p>If we try to download and run a Docker image we are still not quite there yet, but let's try:
<code>sudo docker run -i -t ubuntu /bin/bash</code></p>
<h3>Troubleshooting</h3>
<h4>Unexpected EOF</h4>
<p>This actually happens every now and again with Docker (I think when latency is bad), so just try running the command again and it will likely work.</p>
<h4>dial tcp: lookup registry-1.docker.io: no such host</h4>
<p>The documentation explains how to add a DNS server to the Docker options in <strong>/etc/default/docker</strong>, but this didn't work for me on Hyper-V. I had to edit <strong>/etc/resolv.conf</strong> and add the Google nameserver there (it doesn't have to be Google).
<code>sudo nano /etc/resolv.conf</code>
Then add <strong>nameserver 8.8.8.8</strong> on a new line. Save and exit.
You might need to <code>sudo reboot</code>.</p>
<h3>Finally let's run something</h3>
<p>So now we should be ready to go. Run
<code>sudo docker run -i -t ubuntu /bin/bash</code> again.
This should now pull down the ubuntu image and start up a container running Ubuntu (yes, we are running Ubuntu in a kernel process on another Ubuntu - inception, right?).
The <code>-t</code> flag assigns a terminal and <code>-i</code> makes the connection interactive.
Once it is running, a terminal prompt will be available. Type <code>echo 'Hi'</code>. The Ubuntu container will say hi back :)</p>
<p>So that's it. You have Docker running on a Hyper-V guest.</p>https://devonburriss.me/installing-ubuntu-on-hyper-v/Installing Ubuntu on Hyper-V2015-03-06T00:00:00+00:00Devon Burrisshttps://devonburriss.me/installing-ubuntu-on-hyper-v/<p>The reason for this post is just to remind me of a few little things you need to do when you select Generation 2 while creating an Ubuntu virtual machine in Hyper-V.</p>
<!--more-->
<h2>Create a Virtual Switch 1st</h2>
<p><img src="/img/posts/2015/Switch1.png" alt="Navigating to Virtual Switch Manager" />
I have had good mileage with creating an "External Network" and setting it to use my wireless adapter.
<img src="/images/posts/2015/Switch2-1.png" alt="Virtual Switches" />
For one wireless network at a coffee shop it didn't work, and I had to switch to a private network, which is a bit more work to create. The blog post linked below describes that setup. One caveat: I had to disable my LAN adapter to get the private setup to work, but your mileage may vary.
See: <a href="http://www.hurryupandwait.io/blog/running-an-ubuntu-guest-on-hyper-v-assigned-an-ip-via-dhcp-over-a-wifi-connection">http://www.hurryupandwait.io/blog/running-an-ubuntu-guest-on-hyper-v-assigned-an-ip-via-dhcp-over-a-wifi-connection</a></p>
<h2>Create the Virtual Machine</h2>
<p>Go ahead now and click <strong>New > Virtual Machine</strong> and follow the wizard. Remember to pick <strong>Generation 2</strong>. Choose the virtual switch you set up previously. In <strong>Installation Options</strong>, choose the Ubuntu image you downloaded from their website. Remember that for a Generation 2 machine it must be the 64-bit version.
<img src="/img/posts/2015/Generation2.png" alt="Pick Generation 2" />
Here is a full walkthrough of the process if you need it: <a href="http://www.servethehome.com/run-ubuntu-windows-8-hyper-v-quickly/">Step by step install of Ubuntu on Hyper-V</a></p>
<h2>Before starting it up</h2>
<p>The final thing to remember to do before starting up the newly created virtual machine is to go into its settings.
<img src="/img/posts/2015/Settings1.png" alt="Navigating to settings" />
Make sure you uncheck Secure Boot
<img src="/img/posts/2015/Settings2.png" alt="Uncheck Secure Boot" /></p>
<h2>That's it</h2>
<p>You can now boot up your new virtual machine and Ubuntu will take you through the setup process. Hope this helped you, and I am sure it will help future me when I bump up against some of these issues the next time I create a new Linux Hyper-V VM.</p>
<h3>Setting the resolution</h3>
<p>One thing you may want to do is change the resolution that Ubuntu runs at. If you go into display settings you will find that you cannot change the resolution there.
It is fairly straightforward but does require some editing of config files.</p>
<ul>
<li>Open up Terminal</li>
<li>Type <code>sudo nano /etc/default/grub</code> and enter (or you can use vi if you prefer)</li>
<li>Find the setting <strong>GRUB_CMDLINE_LINUX_DEFAULT</strong> and add to it so it includes the resolution you want.
<code>GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=hyperv_fb:1280x720"</code></li>
<li>Save and exit nano</li>
<li>Type <code>sudo update-grub</code> and enter (I ran into a problem here)</li>
<li>Restart the VM</li>
</ul>
<p>I believe that 1920 x 1080 is the max that Hyper-V supports.</p>
<blockquote>
<p><strong>sudo update-grub</strong> was freezing/hanging whenever I tried to run it. I suspect this was because I had an external drive plugged in when I created the VM and grub was searching for it.
I managed to get past this by adding the following line at the bottom of /etc/default/grub:
<code>GRUB_DISABLE_OS_PROBER=true</code></p>
</blockquote>
<h4>Setup:</h4>
<blockquote>
<p>This setup is valid as of Windows 8.1 running Hyper-V and installing Ubuntu 14.10 as the guest OS.</p>
</blockquote>https://devonburriss.me/testing-the-untestable/Testing the Untestable2015-01-27T00:00:00+00:00Devon Burrisshttps://devonburriss.me/testing-the-untestable/<p>If you have ever tried writing unit tests for existing code, you know it can be quite challenging. Not only is finding what to test difficult, the code usually just won't be testable. If it is code that you have written and you are at liberty to make some sweeping changes, then you can refactor toward testability. If not, I still go through a technique at the end of this article for producing testable classes.</p>
<p><img src="/img/posts/2015/bridge-cables-resize.jpg" alt="bridge cables" /></p>
<!--more-->
<p>Let's first try to refactor toward testability.
Our checklist is as follows:</p>
<ul>
<li>Create integration tests</li>
<li>Apply <a href="http://devonburriss.me/single-respon/">Single Responsibility Principle</a> (SRP)</li>
<li>Apply <a href="http://martinfowler.com/bliki/RoleInterface.html">Role Interfaces</a></li>
<li>Apply Inversion of Control</li>
<li>Last stand - <a href="http://amzn.to/1EN0Ymg">Extract and override</a></li>
</ul>
<blockquote>
<p>NOTE: In the rest of this article I talk about abstractions and usually use an interface as the example. A base class is often just as valid as an interface (unless it has multiple roles, since the languages I use only allow one inheritance parent). The important part is that the rest of your application is coded against the abstraction and knows nothing about the implementation class.</p>
</blockquote>
<h2>Safety net</h2>
<p>Your first step should be to create some high level integration tests. This will at least give you some indication that you have broken something when you do.</p>
<h2>What is in a name?</h2>
<p>A good measure of whether a class adheres to SRP is its name. If the name doesn't exactly describe what the class does, or if it contains words like 'manager', 'provider', 'logic', or 'handler', it probably does more than one thing. A name should tell you exactly what a class does, and a class can only have one name...
See the SRP link for an example of splitting a class into its various responsibilities.</p>
<h2>Role abstraction</h2>
<p>A good practice that can be used in conjunction with SRP is adding role interfaces to a class. Hopefully you can keep refactoring until the class only contains the members in the abstraction, but the role interfaces are a start. Don't be afraid of classes with a minimal number of properties and/or methods; it means they have a very well-defined role.
Even if you do not break a class into multiple classes immediately, if you refer to it by its role interfaces you will have far fewer breaks in your code later when you do split it.</p>
<h3>Example</h3>
<pre><code class="language-csharp">
public class CustomerManager
{
    public IEnumerable<Customer> GetAll()
    {
        ...
    }

    public string GetOrderEmailTemplate()
    {
        ...
    }

    public void SendEmail(string template, Customer customer, Order order)
    {
        ...
    }
}
</code></pre>
<p>Depending on what you prefer, you could split this into two or three interfaces. Definitely a store for retrieving customers and one for email. Better yet would be a third, so that for testability you can separate retrieving email templates from sending email.</p>
<pre><code class="language-csharp">
public interface CustomerRepository
{
    IEnumerable<Customer> GetAll();
}

public interface EmailStore
{
    string GetOrderEmailTemplate();
}

public interface EmailService
{
    void SendEmail(string template, Customer customer, Order order);
}
</code></pre>
<blockquote>
<p>NOTE: This is just an example. I would usually try to refactor this so that sending the email is completely unaware of the domain model.</p>
</blockquote>
<p>Really, if you have managed to refactor this far, you need only split the classes along the abstractions and apply Dependency Injection to invert the dependencies, and by then you will likely have some easily testable classes.</p>
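<p>As a sketch of where that refactoring lands, the class doing the work receives its collaborators through its constructor, so a test can inject hand-rolled fakes. Note the assumptions: the <code>Customer</code> and <code>Order</code> records, the <code>OrderMailer</code> class, and the fakes are all invented for illustration; only the three interfaces come from the example above.</p>

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Customer(string Name);
public record Order(string Id);

public interface CustomerRepository { IEnumerable<Customer> GetAll(); }
public interface EmailStore { string GetOrderEmailTemplate(); }
public interface EmailService { void SendEmail(string template, Customer customer, Order order); }

// Depends only on the role interfaces, so it is trivially testable.
public class OrderMailer
{
    private readonly CustomerRepository customers;
    private readonly EmailStore templates;
    private readonly EmailService sender;

    public OrderMailer(CustomerRepository customers, EmailStore templates, EmailService sender)
    {
        this.customers = customers;
        this.templates = templates;
        this.sender = sender;
    }

    public int SendOrderMailToAll(Order order)
    {
        var template = templates.GetOrderEmailTemplate();
        var all = customers.GetAll().ToList();
        foreach (var customer in all)
            sender.SendEmail(template, customer, order);
        return all.Count;
    }
}

// Hand-rolled fakes standing in for a mocking framework.
public class FakeRepo : CustomerRepository
{
    public IEnumerable<Customer> GetAll() => new[] { new Customer("Ann"), new Customer("Bob") };
}

public class FakeStore : EmailStore
{
    public string GetOrderEmailTemplate() => "Hi {name}";
}

public class FakeSender : EmailService
{
    public List<string> Sent = new List<string>();
    public void SendEmail(string template, Customer customer, Order order) => Sent.Add(customer.Name);
}

public static class DiDemo
{
    public static void Main()
    {
        var sender = new FakeSender();
        var mailer = new OrderMailer(new FakeRepo(), new FakeStore(), sender);
        Console.WriteLine(mailer.SendOrderMailToAll(new Order("A-1"))); // prints 2
    }
}
```

<p>No static dependencies remain, so the test controls every collaborator.</p>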
<h2>Untestable I tell you!</h2>
<p>Ok, so you have looked at the above, but to no avail. You have some dependencies in your class that cannot be injected. A very common reason is that your class depends on a static class that just cannot be refactored into an instance right now. Another is that you simply cannot make changes to the public API of the class you are testing.</p>
<blockquote>
<p>WARNING: Think long and hard before using static classes. The ease of use they offer upfront comes at the dear price of testability.</p>
</blockquote>
<p>So the trick to testing a class that seems untestable is <a href="http://amzn.to/1EN0Ymg">Extract and Override</a>. For the untestable Monster class, the technique is as follows:</p>
<ol>
<li>Create a class <strong>TestableMonster</strong> that inherits from <strong>Monster</strong>.</li>
<li>Move the untestable dependencies within <strong>Monster</strong> into protected virtual methods.</li>
<li>Now you can override any part of <strong>Monster</strong> you need to in order to test it.</li>
<li>In your unit tests you will test against <strong>TestableMonster</strong>: call into the base class for the behaviour you want to exercise, and provide faked implementations for the other parts so that <strong>Monster</strong> is tested in isolation.</li>
</ol>
<p>Ok, so we have gone over the technique in theory; let's take a look at an example.</p>
<h3>Example</h3>
<p>Here is the untestable Monster class.</p>
<pre><code class="language-csharp">
public class Monster
{
    public void ScareAllTheChildren()
    {
        var now = DateTime.UtcNow;
        IEnumerable<Child> children = DataRepository.GetAllChildrenFrom(now);
        foreach (var child in children)
        {
            ScareService.Scare(child);
        }
    }
}
</code></pre>
<p>Although the actual example code is unlikely, the structure is tragically common. In less than 10 lines of code we have three static references. We will come back to the testable class; let's start by extracting out the parts that make this class hard to test.</p>
<pre><code class="language-csharp">
public class Monster
{
    public void ScareAllTheChildren()
    {
        DateTime now = GetCurrentUtcDateTime();
        IEnumerable<Child> children = GetChildrenWithBedtimeAfter(now);
        foreach (var child in children)
        {
            ScareChild(child);
        }
    }

    protected virtual void ScareChild(Child child)
    {
        ScareService.Scare(child);
    }

    protected virtual IEnumerable<Child> GetChildrenWithBedtimeAfter(DateTime now)
    {
        return DataRepository.GetAllChildrenFrom(now);
    }

    protected virtual DateTime GetCurrentUtcDateTime()
    {
        return DateTime.UtcNow;
    }
}
</code></pre>
<p>As you can see, we have made no changes to the external API of the class. The internal changes were made by wrapping the statics in method calls. Not much there is likely to break our production code.
So how would we use this?</p>
<pre><code class="language-csharp">
public class TestableMonster : Monster
{
    public DateTime TestDateTime { get; set; }
    public List<Child> ScaredChildren { get; set; }

    protected override DateTime GetCurrentUtcDateTime()
    {
        return TestDateTime;
    }

    protected override void ScareChild(Child child)
    {
        ScaredChildren.Add(child);
        base.ScareChild(child);
    }
}
</code></pre>
<p>The above example just shows a way to have a settable date in your test. You could of course override the other methods to return a known list of children.
The following test is more an integration test than a unit test, as the data is not faked (unless you return a fake db from the method), but it demonstrates the usage.</p>
<pre><code class="language-csharp">
[TestMethod]
public void Scare_With2OutOf3ChildrenAsleep_ScareCalledOn2Children()
{
    // Arrange
    var db = InitializeNewDatabase();
    db.Children.Add(new Child { Name = "Sam", LastWentToSleep = DateTime.Parse("2014-01-31 20:00") });
    db.Children.Add(new Child { Name = "Sam", LastWentToSleep = DateTime.Parse("2014-01-31 20:30") });
    db.Children.Add(new Child { Name = "Sam", LastWentToSleep = DateTime.Parse("2014-01-31 21:30") });
    var sut = new TestableMonster();
    sut.TestDateTime = DateTime.Parse("2014-01-31 20:45");

    // Act
    sut.ScareAllTheChildren();

    // Assert
    Assert.AreEqual(2, sut.ScaredChildren.Count);
}
</code></pre>
<h2>Summary</h2>
<p>So we went through some steps you can take to make your classes more testable. If you find you are testing a lot of static code, you might want to look at the paid-for versions of <a href="http://www.telerik.com/products/mocking.aspx">JustMock</a> or <a href="http://typemock.com/">TypeMock</a>, which are the only two frameworks I know of that allow mocking of statics.</p>
<blockquote>
<p>NOTE: A quick note on DateTime. It is a very sneaky static that often leaks into code. Try to make it team policy not to use DateTime directly, and instead use something like the approach suggested by <a href="http://ayende.com/blog/3408/dealing-with-time-in-tests">Ayende Rahien</a></p>
</blockquote>
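<p>The pattern Ayende suggests boils down to an ambient, swappable clock: production code reads the time from a replaceable delegate instead of calling <code>DateTime.UtcNow</code> directly. A minimal sketch (the <code>BedtimeChecker</code> class is invented purely for illustration):</p>

```csharp
using System;

// An ambient clock: production code calls SystemTime.UtcNow() instead of
// DateTime.UtcNow, so a test can swap in a fixed instant.
public static class SystemTime
{
    public static Func<DateTime> UtcNow = () => DateTime.UtcNow;
}

// Illustrative consumer of the clock.
public class BedtimeChecker
{
    public bool IsPastBedtime(DateTime bedtime) => SystemTime.UtcNow() > bedtime;
}

public static class ClockDemo
{
    public static void Main()
    {
        // Pin the clock to a known instant, exactly as a test would.
        SystemTime.UtcNow = () => new DateTime(2014, 1, 31, 20, 45, 0);
        var checker = new BedtimeChecker();
        Console.WriteLine(checker.IsPastBedtime(new DateTime(2014, 1, 31, 20, 30, 0))); // prints True
        Console.WriteLine(checker.IsPastBedtime(new DateTime(2014, 1, 31, 21, 30, 0))); // prints False
    }
}
```

<p>The trade-off is shared mutable state: a test that swaps the clock should restore it afterwards, especially when tests run in parallel.</p>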
<p>Happy testing!</p>https://devonburriss.me/developer-quest-variables/Developer Quest II - Variables2014-10-12T00:00:00+00:00Devon Burrisshttps://devonburriss.me/developer-quest-variables/<blockquote>
<p>Hold this for me.</p>
</blockquote>
<h2>The story so far</h2>
<p>Let's go over what we have so far from <a href="http://devonburriss.me/developer-quest-getting-started/">Part 1</a> and touch on some terminology. We have a <strong>namespace</strong> called DeveloperQuest1. Namespaces are a way of grouping an application or parts of it. Specifically, they are used to group the Types that make up an application.
Then we have a <strong>class</strong> called <strong>Program</strong>. <strong>class</strong> is the keyword used to define a Reference Type in C#. We will explore it in more detail later in this tutorial. Then we have the first <em>member</em> of Program: <em>Main</em> is the <strong>method</strong> that runs when a console application starts. Methods are a way of grouping behaviour in a program so that it can be executed.</p>
<p><img src="/img/posts/2014/gfs_36744_2_2.jpg" alt="hero enters town" /></p>
<!--more-->
<h2>Variables</h2>
<p>Writing things to the screen is great, but to make programming useful we need to be able to take input from somewhere, store it, manipulate it, and possibly then show or save it.
You can think of variables as the buckets we store values in while we are using them in the program.
There are two main categories of variables: <strong>Value Types</strong> and <strong>Reference Types</strong>. Every variable has a <strong>Type</strong>, and that Type falls into one of these two categories.</p>
<h3>Value Types</h3>
<p>Value types fall into two main sub-categories:</p>
<ul>
<li>struct</li>
<li>enumeration</li>
</ul>
<p>Structs in turn fall into further categories of:</p>
<ul>
<li>Numeric</li>
<li>boolean</li>
<li>user-defined</li>
</ul>
<p>I just mention this so you are aware of it when we go through examples. If it doesn't make much sense right now, don't worry about it.
So let's see an example of using a numeric value type.</p>
<pre><code class="language-csharp">
int myNumber = 1;
</code></pre>
<p>This assigns the number <em>1</em> to the 'bucket' named <em>myNumber</em>. The default for an <em>int</em> is zero.
There are numerous numeric value types that vary in the size of the number they can hold as well as their precision.
Next are boolean values. The valid options here are either true or false, the default being <em>false</em>.</p>
<pre><code class="language-csharp">
bool isHero = true;
</code></pre>
<p>For the full list see here: http://msdn.microsoft.com/en-us/library/bfft1t3c.aspx</p>
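<p>You can check the defaults mentioned above for yourself with the <code>default</code> keyword:</p>

```csharp
using System;

class DefaultsDemo
{
    static void Main()
    {
        int myNumber = default(int);   // 0 for numeric types
        bool isHero = default(bool);   // false for bool
        Console.WriteLine(myNumber);   // prints 0
        Console.WriteLine(isHero);     // prints False
    }
}
```

<p>Every value type has such a default, which is what a field holds before you assign it anything.</p>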
<p>Finally, a <strong>struct</strong>. Structs are complex values that can be used to store groups of values together logically. You will see that they look a lot like reference types but differ in how they are handled by the program.</p>
<ul>
<li>In the Solution Explorer <strong>Right-click</strong> on the C# Console Project DeveloperQuest1</li>
<li>Expand <strong>Add</strong></li>
<li>Click <strong>Class...</strong></li>
<li>Name the class <strong>Hero</strong></li>
<li>Click <strong>Ok</strong>
<img src="/images/posts/2014/code-change2.jpg" alt="new class image" /></li>
</ul>
<p>This will create a new <strong>class</strong> (will discuss later).</p>
<ul>
<li>Change the <strong>class</strong> keyword to <strong>struct</strong> and add the following two <em>fields</em>.</li>
<li>Save the changes</li>
</ul>
<p>It should look like this now:</p>
<pre><code class="language-csharp">
public struct Hero
{
    public int Health;
    public string Name;
}
</code></pre>
<blockquote>
<p><strong>string</strong> is used to store text. It is a reference type but is handled in a special way.</p>
</blockquote>
<p><img src="/img/posts/2014/so-far-1.jpg" alt="structure of application" /></p>
<p>You will see shortly, when we explore reference types, how similar they look to a <strong>struct</strong>.
The key characteristic to understand about value types is that each variable always points to its own 'bucket'.
This can be demonstrated with the following example.
Change your Main <em>method</em> to match the code below.
Notice the <strong>using</strong> statement at the top. It imports the <em>System</em> namespace and allows us to remove <em>System</em> from in front of <strong>Console</strong>, because <strong>Console</strong> is a <strong>class</strong> in the <em>System</em> namespace. This makes your code simpler to work with.</p>
<pre><code class="language-csharp">
using System;

namespace DeveloperQuest1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("So you want to be a C# developer?");

            Hero hero1 = new Hero()
            {
                Health = 10,
                Name = "Bob"
            };
            Hero hero2 = hero1;
            hero2.Name = "Ted";

            Type heroType = hero1.GetType();
            Console.WriteLine("Hero 1 is " + hero1.Name);
            Console.WriteLine("Hero 2 is " + hero2.Name);
            Console.WriteLine("Type is " + heroType.Name);
            Console.WriteLine("Is value type: " + heroType.IsValueType);
            Console.ReadKey();
        }
    }
}
</code></pre>
<p>Run the application by hitting <strong>F5</strong>.</p>
<h4>Output should be:</h4>
<pre><code> Hero 1 is Bob
Hero 2 is Ted
Type is Hero
Is value type: True
</code></pre>
<p>So <em>hero1</em> and <em>hero2</em> represent two unique values. Changing one does not affect the other.</p>
<h3>Reference Types</h3>
<p>Reference types, as the name suggests, can reference the same 'bucket'.
Rather than the <em>struct</em> keyword, a reference Type uses <em>class</em>. Usually you will create a <strong>class</strong> whose <em>members</em> are made up of value and reference types. <strong>Members</strong> can be <em>fields</em>, <em>properties</em>, or <em>methods</em> on a Type. <em>Name</em> and <em>Health</em> on <strong>Hero</strong> above are examples of <em>fields</em>.</p>
<p>Let's change the Hero Type from a <em>value</em> type to a <em>reference</em> type.</p>
<ul>
<li>Open the <em>Hero.cs</em> by double-clicking it in the Solution Explorer, or click on the tab if it is still open from when you created it.</li>
<li>Change <strong>struct</strong> back to <strong>class</strong></li>
<li>Save</li>
<li>Hit <strong>F5</strong> to run the application</li>
</ul>
<h4>Output should be:</h4>
<pre><code> Hero 1 is Ted
Hero 2 is Ted
Type is Hero
Is value type: False
</code></pre>
<p>So <em>hero1</em> and <em>hero2</em> both point to the same 'bucket' now. Changing one will change the other: because <em>hero2</em> points at the same object as <em>hero1</em>, when we changed hero2, hero1 was also changed, since they are actually the same object. This is the essential difference between a reference type and a value type. Hopefully the names make sense now?</p>
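<p>The two runs above can be condensed into one program by defining both a struct and a class version of the hero. The <code>HeroStruct</code> and <code>HeroClass</code> names are invented here so both can exist side by side:</p>

```csharp
using System;

struct HeroStruct { public string Name; }
class HeroClass { public string Name; }

class CopySemanticsDemo
{
    static void Main()
    {
        var s1 = new HeroStruct { Name = "Bob" };
        var s2 = s1;          // copies the value: a second, independent 'bucket'
        s2.Name = "Ted";
        Console.WriteLine(s1.Name); // prints "Bob"

        var c1 = new HeroClass { Name = "Bob" };
        var c2 = c1;          // copies the reference: both point at the same 'bucket'
        c2.Name = "Ted";
        Console.WriteLine(c1.Name); // prints "Ted"
    }
}
```

<p>Assignment of a struct copies the whole value, while assignment of a class copies only the reference.</p>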
<h3>Using our new found knowledge</h3>
<p>We have a reference type that represents our hero. Let's add functionality to the program so we can give our hero a name.
Change the program to match the following.</p>
<pre><code class="language-csharp">
using System;

namespace DeveloperQuest1
{
    class Program
    {
        static void Main(string[] args)
        {
            Hero hero = new Hero();
            hero.Health = 10;
            Console.WriteLine("So you want to be a C# developer?");
            Console.WriteLine("What is your hero's name?");
            hero.Name = Console.ReadLine();
            Console.WriteLine("Your adventure begins " + hero.Name);
            // to pause the program
            Console.ReadKey();
        }
    }
}
</code></pre>
<p>So on line 1 we have the <em>using</em> statement that imports the <em>System</em> namespace so we can use it throughout our code without explicitly referencing it all the time.
Our program is in the <em>DeveloperQuest1</em> namespace.
It contains a <strong>Type</strong> called <strong>Program</strong> (which uses the <strong>class</strong> keyword and is thus a reference type).
It contains a <em>method</em> called <strong>Main</strong>, which is run by default by a console application. We will explore the arguments passed in as <strong>args</strong> in a later tutorial.
The first statement in the Main method declares a new <strong>Hero</strong> using the <strong>new</strong> keyword.
We then assign a value of 10 to the hero's <strong>Health</strong> <em>field</em>.
We then write to the Console asking for the hero's name and read it into the <strong>Name</strong> <em>field</em> on the hero. This is done using a <em>method</em> on <strong>Console</strong> called <em>ReadLine</em>, which reads everything you type until you hit <em>Enter</em>.
We then write out to the console the name we stored on the hero.
Lastly we still have the <em>ReadKey</em> call, which pauses the application. Above it I show the use of comments. These are ignored by the program but can be used to leave instructional text. Use them only when something is unclear.
Hit <strong>F5</strong> to run it.</p>
<h2>Summary</h2>
<p>In this tutorial we explored the Type categories you get in C# and how to create and use them. In the following tutorial we will dive into <em>classes</em> and the various <em>members</em> you can have on them.</p>
<h3>Further Reading and References</h3>
<ul>
<li>http://msdn.microsoft.com/en-us/library/s1ax56ch.aspx</li>
<li>http://www.albahari.com/valuevsreftypes.aspx</li>
</ul>https://devonburriss.me/developer-quest-getting-started/Developer Quest I - Getting started with C#2014-10-09T00:00:00+00:00Devon Burrisshttps://devonburriss.me/developer-quest-getting-started/<h2>Getting Started</h2>
<p>The first thing you are going to need as a developer is an Integrated Development Environment (IDE). Technically this is not necessary, since you could use a text editor and the command-line compiler, but trust me, you don't want to go that route.
Head over to http://www.visualstudio.com/downloads/download-visual-studio-vs and download Microsoft Visual Studio Express for Windows Desktop.</p>
<p><img src="/img/posts/2014/quest-for-glory-i-so-you-want-to-be-a-hero-dos-title-73699.jpg" alt="hero running from monster" /></p>
<blockquote>
<p>Update: <a href="https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx">Visual Studio Community 2015</a> is now available which is still free but much fuller featured.</p>
</blockquote>
<!--more-->
<h2>Your first application</h2>
<p>We are going to be building a Console Application initially, since this is probably the easiest to get up and running with.
A console application project is what is used to create .exe programs that you may have seen or used.
Once you have Visual Studio installed, launch it and follow these steps to create the Console Application.</p>
<ul>
<li>Click <strong>File > New Project...</strong></li>
<li>In the left-hand tree structure menu pick Visual C# and select <strong>Console Application</strong></li>
<li>In the name field enter <strong>DeveloperQuest1</strong></li>
<li>Click <strong>OK</strong></li>
</ul>
<p><img src="/img/posts/2014/new-project.jpg" alt="VS New Project Window" />
Visual Studio will now create a solution for you. A solution can hold many projects. A project can be a console app, a Windows Store app, a desktop application, website, etc. The solution file groups all these together for you in a way that lets you easily create references to related projects. Don't worry about it too much at the moment. We will come back to it in another tutorial.
You should now have a screen that looks similar to this (may differ slightly based on your setup and theme).
<img src="/img/posts/2014/ide.jpg" alt="new console application" />
The IDE shows 3 main windows above.</p>
<ul>
<li><strong>Document Editor</strong> - this is where you edit your program files. Currently it shows the Program.cs source file, which is the starting point for the console application.</li>
<li><strong>Solution Explorer</strong> - allows you to browse the contents of your solution, open files and view properties of items in the solution.</li>
<li><strong>Output Window</strong> - shows messages of what Visual Studio is doing.</li>
</ul>
<p>If you hit the <strong>F5</strong> key Visual Studio will build and run the application. Building basically means it takes your <strong>.cs</strong> files in the solution and turns them into instructions that a computer can understand.
So let's make a change to the program and run it. Add the following lines within the {} in the <strong>Main</strong> method of <strong>Program</strong> so it looks like this.</p>
<pre><code class="language-csharp">
System.Console.WriteLine("So you want to be a C# developer?");
System.Console.ReadKey();
</code></pre>
<p>Also remove all the <em>using</em> statements at the top, from lines 1 to 5.
<img src="/img/posts/2014/code-change1.jpg" alt="added console writeline charp code" />
Now hit <strong>F5</strong> again to build and run the application. The console application should ask you if this is the path for you. If it is, look out for the following tutorial in this series.</p>
<h2>What's Next?</h2>
<p>Next we will be looking at how you can capture input from the console application so you can interact with it.
If you have any questions or suggestions, please don't hesitate to leave a comment below. Happy coding!
<a href="http://devonburriss.me/developer-quest-variables/">The adventure continues here.</a></p>https://devonburriss.me/software-development-is-like-a-piece-of-string/Software development is like a piece of string2014-10-03T00:00:00+00:00https://devonburriss.me/software-development-is-like-a-piece-of-string/<p>Software like many things in life, is one of those things that the further down a path you go, the harder it is to back out. When I think of a software project I think of a piece of string. The longer the project, the longer the string.
The string is the perfect length to reach the end. Each and every time we make a poor design decision or a bad implementation we effectively add a knot in the string. One or two of these and we might still be able to stretch it to reach the end but most likely if we want to reach the end, we are going to need to unravel the knot we created.
I see it over and over again in projects, both my own knots and the knots of colleagues. We make these knots knowingly, thinking we can come back later, or thinking they can slip by but they always hold things up somewhere.
If they don't force you to come back and undo them later, they slow the velocity of the project, negating any time you may have saved in implementation time.</p>
<p><img src="/img/posts/2014/yarn-800.jpg" alt="library" /></p>
<p>Bottom line. Don't take shortcuts. I am not saying it has to always be the very best implementation but it should always be something elegant. Dirty hacks always fester eventually.</p>https://devonburriss.me/testing-your-data-repositories/Testing your data repositories2014-09-07T00:00:00+00:00Devon Burrisshttps://devonburriss.me/testing-your-data-repositories/<blockquote>
<p>Avoiding dependency on a data layer.</p>
</blockquote>
<p>My solution was to use an in-memory H2 database (http://www.h2database.com/html/main.html) which can be created and dropped on a per-test basis. To do this I used the Command Pattern (http://en.wikipedia.org/wiki/Command_pattern) to create and then drop the table for each test. In case you are not familiar with the command pattern:</p>
<p><img src="/img/posts/2014/books-800-medium.jpg" alt="library" /></p>
<h2>Command Pattern</h2>
<p>The command pattern is pretty simple. You define an interface with the method that will be called to execute some functionality.</p>
<pre><code class="language-java">
public interface Command {
void execute() throws Exception;
}
</code></pre>
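<p>To make the pattern concrete before the database-backed version below, here is a minimal, self-contained sketch. The <code>LogCommand</code> and its messages are invented purely for illustration; the point is that the caller only ever sees <code>execute()</code>:</p>

```java
import java.util.ArrayList;
import java.util.List;

public class CommandDemo {

    // Same shape as the Command interface above.
    interface Command {
        void execute() throws Exception;
    }

    // A trivial command that appends a message to a log.
    // Hypothetical, purely to show the shape of the pattern.
    static class LogCommand implements Command {
        private final List<String> log;
        private final String message;

        LogCommand(List<String> log, String message) {
            this.log = log;
            this.message = message;
        }

        @Override
        public void execute() {
            log.add(message);
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> log = new ArrayList<>();
        // The caller queues and runs commands without knowing what they do.
        Command setUp = new LogCommand(log, "create table");
        Command tearDown = new LogCommand(log, "drop table");
        setUp.execute();
        tearDown.execute();
        System.out.println(log); // [create table, drop table]
    }
}
```

<p>In the solution below, <code>CreateCommitteeTableCommand</code> and <code>DropCommitteeTableCommand</code> play the same roles as the two commands here.</p>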
<h2>The Solution</h2>
<p>So this is what the end result looks like. How you execute your commands is up to you, but in case you are looking for the details, I have included them further down in the article.</p>
<pre><code class="language-java">
public class CommitteeTableCommandTest {
private String connectionString = "jdbc:h2:~/test";
@Test
public void create_NewCommitteeRecord_PersistsToDb() throws Exception {
try(Database database = new H2DatabaseImpl(connectionString, "", "")){
Command cc = new CreateCommitteeTableCommand(database);
cc.execute();
CommitteeEntity entity = new CommitteeEntity();
entity.setName("Test");
entity.setMandate("Blah Blah");
CommitteeRepository sut = new CommitteeRepositoryImpl(database);
sut.create(entity);
Assert.assertNotNull(sut.getByName("Test").get(0));
Command cd = new DropCommitteeTableCommand(database);
cd.execute();
}
Assert.assertTrue(true);
}
}
</code></pre>
<h3>The Details</h3>
<p>For the creation and dropping of the table I created a generic abstract base class for each. I am using OrmLite (http://ormlite.com/) (the Java library, not the C# one, which is unrelated) for my Object Relational Mapper. This gives me a database-agnostic way of handling the mundane database tasks without mixing my Java and SQL. You could quite easily write SQL for this, as long as you take any differences in database providers into consideration. On to the solution…</p>
<p><em>Base create command</em></p>
<pre><code class="language-java">
public abstract class BaseCreateTableCommand<T> implements Command {
private Database database;
private Class<T> typeOfT;
@SuppressWarnings("unchecked")
public BaseCreateTableCommand(Database database){
this.database = database;
ParameterizedType genericSuperclass = (ParameterizedType) getClass().getGenericSuperclass();
Type type = genericSuperclass.getActualTypeArguments()[0];
if (type instanceof Class) {
this.typeOfT = (Class<T>) type;
} else if (type instanceof ParameterizedType) {
this.typeOfT = (Class<T>) ((ParameterizedType)type).getRawType();
}
}
protected void createTableIfNotExists() throws Exception {
ConnectionSource connectionSource = new JdbcConnectionSource(database.getConnectionUri(), database.getUsername(), database.getPassword());
TableUtils.createTableIfNotExists(connectionSource, typeOfT);
connectionSource.close();
}
public void execute() throws Exception {
this.createTableIfNotExists();
}
}
</code></pre>
<p><em>Base drop command</em></p>
<pre><code class="language-java">
public abstract class BaseDropTableCommand<T> implements Command {
private Database database;
private Class<T> typeOfT;
@SuppressWarnings("unchecked")
public BaseDropTableCommand(Database database){
this.database = database;
this.typeOfT = (Class<T>)((ParameterizedType)getClass().getGenericSuperclass()).getActualTypeArguments()[0];
}
protected void dropTable(Boolean ignoreErrors) throws Exception {
ConnectionSource connectionSource = new JdbcConnectionSource(database.getConnectionUri(), database.getUsername(), database.getPassword());
TableUtils.dropTable(connectionSource, typeOfT, ignoreErrors);
connectionSource.close();
}
@Override
public void execute() throws Exception {
this.dropTable(true);
}
}
</code></pre>
<p>Next, we inherit from these two classes to flesh out the create and drop commands.
<em>Create command implementation</em></p>
<pre><code class="language-java">
public class CreateCommitteeTableCommand extends BaseCreateTableCommand<CommitteeEntity> {
public CreateCommitteeTableCommand(Database database) {
super(database);
}
}
</code></pre>
<p><em>Drop command implementation</em></p>
<pre><code class="language-java">
public class DropCommitteeTableCommand extends BaseDropTableCommand<CommitteeEntity> {
public DropCommitteeTableCommand(Database database){
super(database);
}
}
</code></pre>
<p>The only other piece is the Database abstraction, which I have my doubts about so I would
not recommend copying :)</p>
<p><em>Database abstraction</em></p>
<pre><code class="language-java">
public abstract class Database implements AutoCloseable {
private static final int MAX_CONNECTIONS_PER_PARTITION = 2;
private static final int MIN_CONNECTIONS_PER_PARTITION = 1;
private static final int LOGIN_TIMEOUT = 10;
protected final Logger logger = LoggerFactory.getLogger(getClass());
protected String connectionUri;
protected String username;
protected String password;
protected BoneCP connectionPool = null;
public Database() {
super();
}
public Connection getConnection() throws SQLException {
logger.trace("getConnection called.");
return getPooledConnection();
}
public String getConnectionUri(){
return this.connectionUri;
}
public String getUsername(){
return this.username;
}
public String getPassword(){
return this.password;
}
public abstract String getDriver();
public void close() throws Exception {
logger.trace("close called (this is close() on the database...not a single connection).");
if(this.connectionPool != null)
this.connectionPool.shutdown();
this.connectionPool = null;
}
protected void setup(String driver, String connectionUri, String username, String password) throws ClassNotFoundException, SQLException {
logger.trace("setup called.");
try {
Class.forName(driver);
this.connectionUri = connectionUri;
this.username = username;
this.password = password;
DriverManager.setLoginTimeout(LOGIN_TIMEOUT);
} catch (ClassNotFoundException e) {
logger.error(e.getMessage(), e);
throw e;
}
}
private Connection getPooledConnection() throws SQLException {
Connection conn;
if(connectionPool == null)
setupConnectionPool(connectionUri, username, password);
conn = connectionPool.getConnection();
return conn;
}
private void setupConnectionPool(String connectionUri, String username, String password) throws SQLException {
BoneCPConfig config = new BoneCPConfig();
config.setJdbcUrl(connectionUri);
config.setUsername(username);
config.setPassword(password);
config.setMinConnectionsPerPartition(MIN_CONNECTIONS_PER_PARTITION);
config.setMaxConnectionsPerPartition(MAX_CONNECTIONS_PER_PARTITION);
config.setPartitionCount(1);
config.setLazyInit(true);
connectionPool = new BoneCP(config);
}
}
</code></pre>
<p><em>H2 implementation</em></p>
<pre><code class="language-java">
public class H2DatabaseImpl extends Database {
private final String driver = "org.h2.Driver";
public H2DatabaseImpl(String connectionUri, String username, String password) throws ClassNotFoundException, SQLException{
super();
this.setup(driver, connectionUri, username, password);
}
@Override
public String getDriver() {
return driver;
}
}
</code></pre>
<p><em>Just for kicks...</em></p>
<p>I created a command queue, which is itself a command that enumerates through and executes a list of commands. It is here just because it's useful, not because it is needed for this example. You can chain your inserts and then your drops into two commands using this.</p>
<pre><code class="language-java">
public class CommandQueue implements Command {
private List<Command> commands;
private Boolean breakOnError = true;
public CommandQueue(List<Command> commands, Boolean breakOnError){
if(commands == null)
throw new IllegalArgumentException("commands");
this.commands = commands;
if(breakOnError != null)
this.breakOnError = breakOnError;
}
@Override
public void execute() throws Exception {
for (Command command : this.commands) {
try {
command.execute();
} catch (Exception e) {
if(this.breakOnError)
throw e;
}
}
}
}
</code></pre>
<p>Let me know if you found this useful, or if you have a better way for testing your data persistence...</p>https://devonburriss.me/single-respon/Single Responsibility Principle2014-09-05T00:00:00+00:00Devon Burrisshttps://devonburriss.me/single-respon/<blockquote>
<p>The <strong>S</strong> in <strong>SOLID</strong>.</p>
</blockquote>
<p>If I had to pick one principle that had to be enforced strongly on a code base, this would be it. Most techniques for writing elegant code fall by the wayside if this principle is not followed.</p>
<p><strong>Layering your application.</strong> Good luck!</p>
<p><strong>Inversion of Control.</strong> Constructor injection overload!</p>
<p><strong>Polymorphism.</strong> I am a concrete implementation of what exactly?</p>
<p><strong>Don’t repeat yourself.</strong> Well this does something slightly different…</p>
<p>It has been a long time but I do remember a time when I was averse to lots of files in a development project. When I had god classes that contained demi-god functions. I am not sure if it is related but it may have been a side effect of programming in a dynamic language but to blame it on a language would be naïve. Besides, I learned the basics of programming in C++ and Java. I also remember a time when every little change I made in my projects broke a chain of other parts, some expected, and way too many completely unexpected. And it was exactly those circumstances that made me question how I was doing things. Enter SRP.</p>
<!--more-->
<h2>Definition</h2>
<p>Since it is a principle rather than a rule, it doesn't have one clear definition, but as far as I can tell Robert C. Martin (http://www.objectmentor.com/omTeam/martin_r.html) coined the term, so his definition will be used:</p>
<blockquote>
<p>THERE SHOULD NEVER BE MORE THAN ONE REASON FOR A CLASS TO CHANGE.</p>
</blockquote>
<p>This is a very simple statement but one that is quite hard to get right in practice. It takes discipline to think carefully about where each piece of code is placed to make sure it belongs there.</p>
<p><img src="/img/posts/2014/train-track-800-slim.jpg" alt="trainline into the distance" /></p>
<h2>Class Cohesion</h2>
<p>A discussion of SRP would not be complete without a mention of cohesion (http://en.wikipedia.org/wiki/Cohesion_(computer_science)). Cohesion is the measure of how well the members of a class group together. An easy tell when looking for classes with low cohesion is fields that are used only by separate groups of functions. If you find one field that is used in some functions, and another field that is used in the others, it is likely that you need two classes rather than one for the behaviour. We will see an example of this later.</p>
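<p>As a quick sketch of that field-usage tell (in Java here, but the idea is language-agnostic, and all the class and field names are invented for illustration), consider a class where one field is only touched by one group of methods and a second field only by another:</p>

```java
import java.util.ArrayList;
import java.util.List;

public class CohesionDemo {

    // Low cohesion: 'orders' is used only by the order-tracking methods,
    // and 'smtpHost' only by the mailing method -- two classes in one.
    static class OrderMailer {
        private final List<String> orders = new ArrayList<>();
        private final String smtpHost = "localhost";

        void addOrder(String order) { orders.add(order); }
        int orderCount() { return orders.size(); }
        String mailEndpoint() { return "smtp://" + smtpHost; }
    }

    // Splitting along the field-usage boundary yields two cohesive classes.
    static class OrderBook {
        private final List<String> orders = new ArrayList<>();
        void add(String order) { orders.add(order); }
        int count() { return orders.size(); }
    }

    static class Mailer {
        private final String smtpHost = "localhost";
        String endpoint() { return "smtp://" + smtpHost; }
    }

    public static void main(String[] args) {
        OrderBook book = new OrderBook();
        book.add("order-1");
        System.out.println(book.count() + " order(s), mail via " + new Mailer().endpoint());
    }
}
```

<p>Neither split class can break because of a change to the other's responsibility, which is exactly the property SRP is after.</p>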
<h2>Example</h2>
<p>Ok, enough talk (or writing rather…). Let's look at an example of a class that does not follow SRP and refactor it toward one that does.
The example I use is a service that processes a customer's order.</p>
<pre><code class="language-csharp">
public class OrderServiceBefore : IDisposable
{
private const string connection = @"c:\Example.mdf";
private readonly DataContext db;
private SmtpClient emailClient;
public OrderServiceBefore()
{
this.db = new DataContext(connection);
this.emailClient = new SmtpClient();
}
public void Process(Order order)
{
//validate order
if (order == null)
throw new ArgumentNullException("order");
if (order.Customer == null)
throw new ArgumentException("Customer cannot be null.");
if (order.OrderLines.Count < 1)
throw new InvalidOperationException("Cannot process an order with no lineitems.");
//save order
db.GetTable<Order>().Attach(order);
db.SubmitChanges();
//email order form
var email = string.Format("New order placed by {0}.", order.Customer);
foreach (var item in order.OrderLines)
{
email = email + "\n";
email = email + item.Product + " : " + item.Quantity;
}
emailClient.Send(new MailMessage("me@me.com", "sales@company.com", "New order", email));
}
public void Dispose()
{
if (this.db != null)
this.db.Dispose();
if (this.emailClient != null)
this.emailClient.Dispose();
}
}
</code></pre>
<p>Looking at the code you can see that the Process method does more than one thing. It checks the validity of the order, persists it to the database, and then emails sales with the order details.
Let's start refactoring this toward a cleaner implementation…</p>
<pre><code class="language-csharp">
public class OrderServiceIntermediate : IDisposable
{
private const string connection = @"c:\Example.mdf";
private readonly DataContext db;
private SmtpClient emailClient;
public OrderServiceIntermediate()
{
this.db = new DataContext(connection);
this.emailClient = new SmtpClient();
}
public void Process(Order order)
{
OrderProcessGaurd(order);
SaveOrder(order);
EmailOrderToSales(order);
}
private void EmailOrderToSales(Order order)
{
var email = string.Format("New order placed by {0}.", order.Customer);
foreach (var item in order.OrderLines)
{
email = email + "\n";
email = email + item.Product + " : " + item.Quantity;
}
emailClient.Send(new MailMessage("me@me.com", "sales@company.com", "New order", email));
}
private void SaveOrder(Order order)
{
db.GetTable<Order>().Attach(order);
db.SubmitChanges();
}
private void OrderProcessGaurd(Order order)
{
if (order == null)
throw new ArgumentNullException("order");
if (order.Customer == null)
throw new ArgumentException("Customer cannot be null.");
if (order.OrderLines.Count < 1)
throw new InvalidOperationException("Cannot process an order with no lineitems.");
}
public void Dispose()
{
if (this.db != null)
this.db.Dispose();
if (this.emailClient != null)
this.emailClient.Dispose();
}
}
</code></pre>
<p>Here all I did was extract the different activities being performed into methods. This does little else other than make the intent of the Process method clearer, which in turn highlights that this class contains implementation details outside of its responsibility.
So let's extract these methods into classes that are responsible for the needed functionality. We will put an interface on each of these so we can inject the abstraction rather than the concrete implementation.</p>
<pre><code class="language-csharp">
public class OrderRepository : IOrderRepository
{
private const string connection = @"c:\Northwnd.mdf";
private readonly DataContext db;
public OrderRepository()
{
this.db = new DataContext(connection);
}
public void SaveOrder(Order order)
{
db.GetTable<Order>().Attach(order);
db.SubmitChanges();
}
public void Dispose()
{
if (this.db != null)
this.db.Dispose();
}
}
public interface IEmailService : IDisposable
{
void SendOrderToSales(Order order);
}
public class EmailService : IEmailService {
private SmtpClient emailClient;
public EmailService() {
this.emailClient = new SmtpClient();
}
public void SendOrderToSales(Order order)
{
var email = BuildEmailContent(order);
emailClient.Send(new MailMessage("me@me.com", "sales@company.com", "New order", email));
}
private string BuildEmailContent(Order order)
{
var email = string.Format("New order placed by {0}.", order.Customer);
foreach (var item in order.OrderLines)
{
email = email + "\n";
email = email + item.Product + " : " + item.Quantity;
}
return email;
}
public void Dispose()
{
if (this.emailClient != null)
this.emailClient.Dispose();
}
}
</code></pre>
<p>With these new classes extracted we can now make use of them in our OrderService class.</p>
<pre><code class="language-csharp">
public class OrderServiceAfter : IDisposable
{
private readonly IOrderRepository orderRepository;
private readonly IEmailService emailService;
public OrderServiceAfter(IOrderRepository orderRepository, IEmailService emailService)
{
this.orderRepository = orderRepository;
this.emailService = emailService;
}
public void Process(Order order)
{
OrderProcessGaurd(order);
orderRepository.SaveOrder(order);
emailService.SendOrderToSales(order);
}
private void OrderProcessGaurd(Order order)
{
if (order == null)
throw new ArgumentNullException("order");
if (order.Customer == null)
throw new ArgumentException("Customer cannot be null.");
if (order.OrderLines.Count < 1)
throw new InvalidOperationException("Cannot process an order with no lineitems.");
}
public void Dispose()
{
if (orderRepository != null)
orderRepository.Dispose();
if (emailService != null)
emailService.Dispose();
}
}
</code></pre>
<h3>Analysis</h3>
<p>Let's take a quick look at what running code metrics on this in Visual Studio 2013 looks like (Analyze > Calculate Code Metrics for Selected Projects).
<img src="/img/posts/2014/Code-Metrics-SRP.png" alt="" /></p>
<p><strong>Maintainability Index</strong> – Here we see a nice gain just from separating out into functions, with a 1-point drop when separating out into classes. I guess Microsoft sees it as marginally less maintainable with the logic in different classes. The gains on the other criteria more than make up for the 1-point drop though. See: <a href="http://blogs.msdn.com/b/zainnab/archive/2011/05/26/code-metrics-maintainability-index.aspx">http://blogs.msdn.com/b/zainnab/archive/2011/05/26/code-metrics-maintainability-index.aspx</a></p>
<p><a href="http://en.wikipedia.org/wiki/Cyclomatic_complexity"><strong>Cyclomatic Complexity</strong></a> – This basically highlights the paths through the code. It is a good measure of how complex the code is. This dropped so marginally. Typically we can see much better gains here when applying SRP on more complex problems. See: <a href="http://blogs.msdn.com/b/zainnab/archive/2011/05/17/code-metrics-cyclomatic-complexity.aspx">http://blogs.msdn.com/b/zainnab/archive/2011/05/17/code-metrics-cyclomatic-complexity.aspx</a></p>
<p><strong>Depth of Inheritance</strong> – We are not using inheritance to solve this problem, so I am not going to touch on this. See: <a href="http://blogs.msdn.com/b/zainnab/archive/2011/05/19/code-metrics-depth-of-inheritance-dit.aspx">http://blogs.msdn.com/b/zainnab/archive/2011/05/19/code-metrics-depth-of-inheritance-dit.aspx</a></p>
<p><a href="http://en.wikipedia.org/wiki/Coupling_(computer_programming)"><strong>Class Coupling</strong></a> – We dropped the coupling to other classes quite substantially. This is a very good thing. The less dependencies you class has, the less likely that it breaks due to a change elsewhere in the codebase. See: <a href="http://blogs.msdn.com/b/zainnab/archive/2014/02/22/10168042.aspx">http://blogs.msdn.com/b/zainnab/archive/2014/02/22/10168042.aspx</a></p>
<h3>Resources</h3>
<p><a href="http://www.objectmentor.com/resources/articles/srp.pdf">http://www.objectmentor.com/resources/articles/srp.pdf</a></p>https://devonburriss.me/estimation/Estimation2014-08-07T00:00:00+00:00Devon Burrisshttps://devonburriss.me/estimation/<blockquote>
<p>Tackling the uncertainty of software estimation.</p>
</blockquote>
<p>Most developers are horrible at estimation. Period. There are numerous reasons for this. Some of the responsibility falls outside of a developer's control, but there are still steps that a developer is obligated to take.</p>
<!--more-->
<h2>Under-estimating the complexity</h2>
<p>Without actually writing the code, a developer can never know every nuance of the problem and the possible corresponding solutions, not to mention the problems spawned by the chosen solutions. This gets better with experience but is not an exact science. Even with UML diagrams and use-cases, the devil is in the details. The best course of action for a developer here is to break the problem down into such small subtasks that the possible problems start to expose themselves, though even this is not a guarantee, and it takes time in itself. It falls to management to ensure that developers have the time they need to make these estimates, as well as all the information to do so. It falls to the developers to insist on both of these. Even so, these are only estimates and should be seen as such, not taken by any stakeholders as commitments unless the developer has committed to these times under no duress.</p>
<h3>Solution: Break down tasks</h3>
<p>As mentioned, breaking down the tasks into easier-to-estimate chunks will go a long way in refining the schedule, as well as revealing hidden complexity.</p>
<h2>Over-estimating ability</h2>
<p>Often a problem seems simple and as a developer you would like to think you could implement a solution in minimal time. This often happens when problems emerge similar to ones we have solved before. Resist the urge to commit. Find out all the information. Break it down. Plan. Estimate. Do not let your ego get you into a position where you are sacrificing your health, family, and friends for a deadline you cannot realistically meet. And DO NOT sacrifice quality. There are no true shortcuts. What you gain in the short term you will lose over the length of the project with interest.</p>
<h3>Solution: Planning Poker</h3>
<p>Planning Poker (http://en.wikipedia.org/wiki/Planning_poker) is an estimation technique. The basics are as follows:</p>
<ul>
<li>Get some developers into a room.</li>
<li>Discuss a task that needs implementation.</li>
<li>All developers write down an estimate, or hold up fingers with their estimate, at the same time.</li>
<li>If there are huge discrepancies the task is discussed further. Discussion and estimation are repeated until all the developers' estimates are similar.</li>
</ul>
<p>See: <a href="http://www.mountaingoatsoftware.com/agile/planning-poker">http://www.mountaingoatsoftware.com/agile/planning-poker</a></p>
<h2>Handed down deadlines</h2>
<p>Sometimes deadlines are given to you from above. As an employee you will feel pressured to accept these deadlines. It is your choice whether you accept them. In The Clean Coder, “Uncle Bob” talks about the responsibilities of developers and managers. CEOs are trying to strategically grow a business, marketing is trying to win customers, project managers are trying to meet deadlines, and as a developer you are tasked with developing a quality product for the customer. By agreeing to unrealistic deadlines, you endanger the project. The earlier problems are identified, the more chance that catastrophe can be avoided.</p>
<h3>Solution: Team discussion of workable solution</h3>
<p>If a deadline is immovable, the team (including the customer) need to work together toward a realistic goal. Features can be cut, overtime can be worked (within reason), and additional resources can be allocated (to a point) but the end result should always be a quality solution. Cutting corners just slows down development in the long run. A project becomes a mess. Productivity grinds to a halt. It is a chore to work on and eventually developers leave the company rather than work on the project.</p>
<h2>PERT</h2>
<p><a href="http://en.wikipedia.org/wiki/Program_evaluation_and_review_technique_(PERT)">Pert</a> is an estimation technique developed by the U.S Navy for estimating projects. Combining it with planning poker should give a reasonable idea of when you can expect a task to be done. It works as follows.
A developer will give 3 estimates for a work item (use with Planning Poker).</p>
<p><strong>O:</strong> Optimistic estimate – this is the time to complete a task if the stars align and unicorns come down and help complete the code. In other words, the best case scenario.</p>
<p><strong>P:</strong> Pessimistic estimate – this is the time to complete a task when you have invoked the wrath of the programming gods. So. The worst case.</p>
<p><strong>M:</strong> Most likely estimate – this is the time that a developer usually gives.</p>
<p>Plugging these values in we can get the time estimate for a task.</p>
<p><strong>T = (O + 4M + P) ÷ 6</strong></p>
<p>Banking on this value would be dangerous though. Some buffer time is usually added to estimates. Rather than just thumb-sucking a buffer time, let's calculate the variance and add that to the estimate.</p>
<p><strong>V = (P – O) ÷ 6</strong></p>
<p><strong>Estimate = T + V</strong></p>
<h3>Example</h3>
<p>Ok. So let's say that your team is asked to add a Quick Contact widget to an existing website. You get 3 developers in the room and ask for estimates.</p>
<p>You get the following answers. 1, 3, and 4. In days.</p>
<p>The 1 came from the developer who is going to be doing the work. The 3 from the developer who did most of the existing widgets. The 4 from the team lead. Due to the large discrepancies, discussions ensue. It turns out the widget creation process is non-trivial, but some functionality is inherited from existing widgets. So another round of planning poker gives the following values: 3, 3, and 4. You decide to go with 3.
This was for the most likely time. For the best case you get 1 day and the worst case is 7 days.</p>
<p><strong>T</strong> = <em>(O + 4M + P) ÷ 6</em> = <em>(1 + 12 + 7) ÷ 6</em> = <strong>3.3</strong></p>
<p><strong>V</strong> = <em>(P – O) ÷ 6</em> = <em>(7 – 1) ÷ 6</em> = <strong>1</strong></p>
<p><strong>Estimate = T + V = 4.3</strong></p>
<p>So let's schedule this for 4.5 days.</p>
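<p>The arithmetic above is easy to get wrong on a whiteboard, so here it is as a small program (in Java; the class and method names are my own, not part of any estimation library):</p>

```java
public class Pert {

    // Expected time: T = (O + 4M + P) / 6
    static double expected(double o, double m, double p) {
        return (o + 4 * m + p) / 6.0;
    }

    // Buffer from the variance: V = (P - O) / 6
    static double buffer(double o, double p) {
        return (p - o) / 6.0;
    }

    // Final estimate: T + V
    static double estimate(double o, double m, double p) {
        return expected(o, m, p) + buffer(o, p);
    }

    public static void main(String[] args) {
        // Values from the widget example: O = 1, M = 3, P = 7 (days)
        System.out.printf("T = %.1f, V = %.1f, estimate = %.1f%n",
                expected(1, 3, 7), buffer(1, 7), estimate(1, 3, 7));
        // prints: T = 3.3, V = 1.0, estimate = 4.3
    }
}
```

<p>Rounding the 4.3 up to the nearest half day gives the 4.5 days scheduled above.</p>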
<h3>Conclusion</h3>
<p>So knowing our failings, and bearing in mind the goals of management, we can mitigate potential disaster by using the techniques outlined here. Estimation is never going to be an exact science but we can go a long way in making our estimates more accurate. Hope this helps. Good luck with your next project.</p>https://devonburriss.me/the-way-we-write-code/The way we write code…and how we talk about it2014-07-18T00:00:00+00:00Devon Burrisshttps://devonburriss.me/the-way-we-write-code/<blockquote>
<p>The true challenge in writing good software.</p>
</blockquote>
<p>Code takes on a life of its own. As developers we pour our time and intellect into solving problems, and the manifestation of those solutions are found in the lines of code we write. Too often though, the code is controlling us as much as it is controlling the hardware it runs on. We often fail to think about how we write our code, how we structure it, or how others may view or use it. We let one line run to the next, and the code leads us.</p>
<p><img src="/img/posts/2014/typewriter-800.jpg" alt="typewriter" /></p>
<!--more-->
<p>Over the years I became frustrated with the corners that the code led me into. Frustrated with the tangle it became. Frustrated with reading other people's tangles. So I started down the path of clean code. I researched standards, OOP, clean coding techniques, design patterns, TDD, Agile, DDD, etc. My code got cleaner, maintenance got easier, and development velocity didn't drop as rapidly as the complexity of a project increased. Things were good.</p>
<p>There is a problem though. All these methodologies and techniques come with their own dialect. They have terminology and language that describe a complex solution, or a particular design decision, in one succinct word. The problem is that not every developer is on this path. Many are stuck in fire-fighting mode. The overtime hours stack up, and any spare minute at work is spent on Facebook trying to find out what was missed while they were working late into the night. Learning new things after all the hours at the office is very low on their hierarchy of needs. The unfortunate thing is that it is knowledge and experience that get you out of the fire fight. Testable code, maintainable code. Prioritizing tasks. Understanding deliverables. Managing expectations. Communicating.</p>
<p>And slowly I learned that software development is primarily about communication. The larger the project, the more apparent this becomes. Developers, designers, architects, business analysts, project managers, customers. Everybody has a role, and the way they see the project is determined by the lens that each stakeholder dons. Recently I have been leaning more toward methodologies rather than technologies and patterns. These often address the more critical aspects of a project, like communication. I have found though that a lot of my hard-won lessons do not garner the immediate appreciation I have for them.
The hard-learned vocabulary of patterns and methodologies is meaningless when you are working in a team that does not know the terminology nor the benefits of the practices that go with the elitist vocabulary. The vocabulary is important as it allows the succinct identification of a complex idea. It is more important though to be understood by all stakeholders. So while I work on shedding my vocabulary for one with fewer assumptions, I will try to write about the principles that shape the code I write and the architectural decisions I make. And hopefully I will make elitist snobs out of you who have read this rambling post to the end.</p>
<h2>Elitist snob training</h2>
<p>Although the lines tend to blur, I have tried to categorise as best I can.</p>
<h3>Principles</h3>
<ul>
<li>Clean code</li>
<li>SOLID</li>
</ul>
<h3>Design Patterns</h3>
<ul>
<li>Repository</li>
<li>Factory</li>
<li>Command</li>
<li>Decorator</li>
<li>Visitor</li>
</ul>
<h3>Practices</h3>
<ul>
<li>TDD</li>
<li>DDD</li>
</ul>
<h3>Methodologies</h3>
<ul>
<li>Agile</li>
</ul>