Description
In my previous post I explained how to create a full CI/CD DevOps automated deployment cycle for your SmarterASP.NET-hosted websites by implementing the build pipeline.
Now that the build pipeline is in place, we are going to complete the cycle by creating a release pipeline.
Requirements
You will need to fulfil some requirements first:
- The build pipeline from the previous post already set up.
- A SmarterASP.NET web hosting plan.
Steps to create a release pipeline
(...)
[Continue Reading]
Description
For personal, experimental, prototyping, or small side projects, many ASP.NET developers choose a cheap web hosting provider like SmarterASP.NET. If you are one of those developers, nothing stops you from having a full CI/CD DevOps automated deployment cycle for your websites.
Traditionally, content is uploaded to this kind of hosting plan via FTP or Web Deploy. Microsoft offers a great free DevOps tool: Azure DevOps. Setting it up is quite easy; just follow the steps after clicking the [Start for free] button.
Once your Azure DevOps project is set up, you will be able to configure and run build and release pipelines for free, targeting your SmarterASP.NET web hosting plan.
Requirements
(...)
[Continue Reading]
Description
Span<T> is a ref struct introduced with the C# 7.2 specification. It is a stack-only type that allows memory operations without allocations, so, used for instance on very large arrays, it can bring a significant performance improvement.
It is only available if your code targets .NET Core 2.1 or .NET Standard 2.1. There is plenty of technical documentation about Span<T>, so this post focuses on a practical demo comparing the performance of the Slice method. Span<T> can't be used inside async methods, but you can easily work around this limitation by creating a non-async local method.
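As a quick sketch of both points (illustrative code, not taken from the original post), the snippet below slices an array without copying it, and confines the Span<T> usage inside an async method to a non-async local function:

```csharp
using System;
using System.Threading.Tasks;

class SpanDemo
{
    static async Task Main()
    {
        int[] numbers = { 1, 2, 3, 4, 5, 6, 7, 8 };

        // Slice creates a view over the middle of the array
        // without allocating a copy.
        Span<int> middle = numbers.AsSpan().Slice(2, 4); // 3, 4, 5, 6
        middle[0] = 30;                // writes through to the array
        Console.WriteLine(numbers[2]); // prints 30

        Console.WriteLine(await ChecksumAsync(new byte[] { 1, 2, 3 }));
    }

    static async Task<int> ChecksumAsync(byte[] buffer)
    {
        await Task.Delay(1); // some asynchronous work

        // A Span<T> can't live across an await, so we keep it inside
        // a non-async local function instead.
        return Sum(buffer);

        int Sum(ReadOnlySpan<byte> data)
        {
            int total = 0;
            foreach (byte b in data) total += b;
            return total;
        }
    }
}
```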
Detailed information can be found in the official Microsoft link: https://docs.microsoft.com/en-us/dotnet/api/system.span-1?view=net-5.0
(...)
[Continue Reading] 
Introduction
A few weeks ago I decided to start building an experimental, home-sized "Big Data" system based on Apache Spark. The first step is to create a distributed filesystem where Apache Spark will read and write everything.
HDFS is the Hadoop Distributed File System, which provides features such as fault detection and recovery, support for huge datasets, and keeping computation close to the data. Although it is a piece of the Hadoop ecosystem, it works nicely as the distributed filesystem for Apache Spark.
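To give a flavour of what "read and write everything" means in practice, these are typical HDFS shell commands for staging data (a sketch assuming a running HDFS cluster; the paths and file names are illustrative, not taken from the post):

```shell
# Create a working directory for Spark data in the distributed filesystem
hdfs dfs -mkdir -p /user/spark/data

# Upload a local dataset so Spark jobs can read it
hdfs dfs -put events.csv /user/spark/data/

# List the directory to verify the upload
hdfs dfs -ls /user/spark/data
```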
(...)
[Continue Reading]