Showing posts with label How-to. Show all posts

2018-03-16

Peanut Butter and Chocolate: Azure Functions CI/CD Pipeline with AWS CodeCommit (Part 6 of 6)


Part 6

In Part 5 we configured AWS CodeCommit to trigger the AWS Lambda when a commit is made to the master branch of the repository. Effectively, our CI/CD pipeline is in place. To use it properly, we first need to add a cc2af.yml configuration file. Once the configuration file is in place, we can push our first Azure Function to our AWS CodeCommit repository; our AWS Lambda will be triggered and start a deployment on the Azure Functions Web App.

We will finish out the series in this post by demonstrating an automatic deployment from AWS CodeCommit to Azure Functions and triggering our Azure Function, all from PowerShell.
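
As a taste of that last step: invoking an HTTP-triggered Azure Function from PowerShell is a one-liner with Invoke-RestMethod. Here is a minimal sketch; the app name, function name, and function key are hypothetical placeholders, not values from this series:

# A minimal sketch; app name, function name, and key are placeholders
$FunctionUrl = 'https://myfunctionapp.azurewebsites.net/api/MyFunction?code=<FunctionKey>'
Invoke-RestMethod -Uri $FunctionUrl -Method Get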


Series Table of Contents


2018-03-10

Peanut Butter and Chocolate: Azure Functions CI/CD Pipeline with AWS CodeCommit (Part 5 of 6)


Part 5

Sorry for the delay between Parts 4 and 5! I was at the Microsoft MVP Summit this past week and didn’t have time to devote to updating. This series is nearing completion, with just one more part to go.

In Part 4 we published the AWS Lambda and created the AWS KMS Key that will be used for encrypting and decrypting secrets. In Part 5 we will configure the AWS CodeCommit repository trigger to invoke the AWS Lambda and encrypt our secrets to store in the cc2af.yml file.
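
To give a rough idea of what the trigger configuration looks like in PowerShell, here is a hedged sketch using the AWSPowerShell module; the repository name and Lambda ARN are placeholders, and the post itself walks through the real values:

# A hedged sketch; repository name and Lambda ARN are placeholders
$Trigger = New-Object -TypeName Amazon.CodeCommit.Model.RepositoryTrigger
$Trigger.Name = 'InvokeLambda'
$Trigger.DestinationArn = 'arn:aws:lambda:us-east-1:123456789012:function:MyLambda'
$Trigger.Branches = @('master')
$Trigger.Events = @('updateReference')
Set-CCRepositoryTrigger -RepositoryName 'MyRepository' -Trigger $Trigger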


Series Table of Contents


2018-03-03

Peanut Butter and Chocolate: Azure Functions CI/CD Pipeline with AWS CodeCommit (Part 4 of 6)


Part 4

In Part 3 we put the first piece of glue between Azure Functions and AWS CodeCommit in place by making it possible to manually trigger the Azure Functions Web App to pull from the AWS CodeCommit repository. Obviously, a manual pull is not ideal. It is certainly not Continuous Delivery.

In Part 4 we lay the groundwork for the second piece of glue between Azure Functions and AWS CodeCommit. In order to automatically trigger Azure Functions to pull from AWS CodeCommit, we need an AWS Lambda. AWS Lambda and Azure Functions are somewhat analogous; they serve almost identical purposes in their respective clouds. We also need to create a KMS key that will be used for encrypting and decrypting secrets.
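
For a rough idea of the KMS piece, creating a key and a friendly alias with the AWSPowerShell module looks something like this sketch; the description and alias name are placeholders:

# A sketch; the description and alias name are placeholders
$Key = New-KMSKey -Description 'CI/CD pipeline secret encryption key'
New-KMSAlias -AliasName 'alias/MyPipelineKey' -TargetKeyId $Key.KeyId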


Series Table of Contents


2018-02-24

Peanut Butter and Chocolate: Azure Functions CI/CD Pipeline with AWS CodeCommit (Part 3 of 6)


Part 3

In Part 2 we created the Azure Functions Web App and the AWS CodeCommit repository. In Part 3 we will make the initial deployment from AWS CodeCommit to Azure Functions. To do that we need to create an AWS IAM User Account, grant it access to the CodeCommit repository, generate HTTPS Git Credentials for the user, and configure external git deployment on the Azure Functions Web App. By the end of this post, we will be able to manually deploy from AWS CodeCommit to Azure Functions on demand. This is a critical step in making it possible to automate the process.
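
As a preview of the AWS side, here is a hedged sketch of those IAM steps using the AWSPowerShell module; the user name is a placeholder, and the managed policy shown is one reasonable choice rather than necessarily the one used in the post:

# A hedged sketch; the user name and policy choice are placeholders
$User = New-IAMUser -UserName 'AzureFunctionsDeploy'
Register-IAMUserPolicy -UserName $User.UserName -PolicyArn 'arn:aws:iam::aws:policy/AWSCodeCommitReadOnly'
# Generate HTTPS Git Credentials for the user
$GitCredential = New-IAMServiceSpecificCredential -UserName $User.UserName -ServiceName 'codecommit.amazonaws.com'
$GitCredential.ServiceUserName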

This part will be short and sweet. I want to keep the relevant pieces together regardless of their length.


Series Table of Contents


2018-02-19

Peanut Butter and Chocolate: Azure Functions CI/CD Pipeline with AWS CodeCommit (Part 1 of 6)

[Image source: 1981 Reese's Peanut Butter Cup Advertisement]

Intro

This blog series will cover a Proof of Concept (POC) Project for creating a PowerShell-based Azure Functions CI/CD pipeline where the code is stored in AWS CodeCommit git-based version control system. The pipeline will be created and deployed using Windows PowerShell 5.1. Every step of the pipeline deployment process will be verified with Pester tests. The result of the project will be the ability to push changes to an AWS CodeCommit repository and those changes will be automatically deployed to Azure Functions.
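
To give a flavor of that verification, a Pester test for one deployed resource might look like this sketch; the resource names are hypothetical and the AzureRM cmdlet shown is just one way to check:

# A sketch; resource group and app names are hypothetical
Describe 'Azure Functions Web App' {
    It 'Exists in the resource group' {
        Get-AzureRmWebApp -ResourceGroupName 'MyResourceGroup' -Name 'MyFunctionApp' |
            Should Not BeNullOrEmpty
    }
}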

This blog series is targeted at intermediate level PowerShell users and basic PowerShell concepts will not be described in detail. Also, this series will require some basic understanding of both Azure and AWS clouds and their PowerShell based management. Git and C# .NET Core are also leveraged in this project but they will not be covered in depth as this is a PowerShell-centric blog. Readers need only be familiar with basic concepts of git and C#.


Series Table of Contents


2017-09-24

Multipart/form-data Support for Invoke-WebRequest and Invoke-RestMethod in PowerShell Core

[Image: A packet capture of a current build of PowerShell Core submitting a multipart/form-data POST request from Invoke-WebRequest.]

Intro

Over the past few months I have been donating a generous portion of my spare time to help improve the Web Cmdlets (Invoke-WebRequest and Invoke-RestMethod) in PowerShell Core. This is partly because I want and need certain functionality for both personal and work-related projects. It is also because I have had some minor gripes about these Cmdlets for some time.

One common ask I have seen repeated in just about every PowerShell forum is multipart/form-data support. It seems like a reasonable thing to ask when there are many endpoints that will only work with a multipart/form-data submission. There is an open issue (#2112) on the PowerShell GitHub echoing the same request. It was brought to my attention and I decided to give it a serious look.

The result is that PowerShell Core now has partial multipart/form-data support in both Web Cmdlets. This change didn't make the cut for 6.0.0-beta.7 but it will be available starting in 6.0.0-beta.8 and is available now if you build it manually or grab the latest nightly build.

This blog will cover some of the challenges involved in supporting multipart/form-data, how to make use of this new feature, and about future plans for additional support.

Because typing multipart/form-data is annoying, I will be shortening it to just multipart. Please don't let this be mistaken for other multipart submission methods.

Also, I will be referring collectively to Invoke-WebRequest and Invoke-RestMethod as Web Cmdlets. In this case, there is no need to call out each command as they offer the same base functionality for multipart support.
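
Before diving in, here is a minimal sketch of the new capability, assuming the approach of handing a System.Net.Http.MultipartFormDataContent object to -Body; the endpoint is just a public echo service used for illustration:

# A minimal sketch; httpbin.org is only an illustrative echo endpoint
$Multipart = [System.Net.Http.MultipartFormDataContent]::new()
$Content = [System.Net.Http.StringContent]::new('PowerShell')
$Multipart.Add($Content, 'FavoriteShell')
Invoke-WebRequest -Uri 'https://httpbin.org/post' -Method Post -Body $Multipart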

2017-03-26

Write The FAQ ‘n Manual (Part 4)

Automated Documentation in a CI/CD Pipeline for PowerShell Modules with PlatyPS, psake, AppVeyor, GitHub and ReadTheDocs



Part 4: Monitoring the Build, PowerShell Magic, Looking Forward, and Closing Thoughts


Monitor the Build Status


AppVeyor

Once the release has been pushed to GitHub, the behind-the-scenes webhook for AppVeyor is triggered and your build will be queued on AppVeyor. If you did your required reading, this part should be familiar to you. What we are looking for are the documentation build steps in the output. You can look at this build of the example project to see the full output: https://ci.appveyor.com/project/markekraus/autodocumentsexample/build/1.0.6

BuildDocs Task:

PostDeploy Task:


ReadTheDocs

ReadTheDocs will perform two builds. It will build once when you push your release to GitHub and again when AppVeyor pushes the build changes back to GitHub. The first build may only show as triggered if the AppVeyor build finishes before the first ReadTheDocs build completes. ReadTheDocs doesn't have a live feed of the build process like AppVeyor does, but you can see the results of builds. You can see the build that followed the above AppVeyor build here: https://readthedocs.org/projects/autodocumentsexample/builds/5198207/

You can get there by doing the following:
  1. Go to your dashboard https://readthedocs.org/dashboard/
  2. Select your project
  3. Go to the Builds tab
  4. Click the desired build





Pull the Build Changes to your Local Git Repo

Remember that since the build process pushes changes back to GitHub, you will need to refresh your local repo. This is done with git pull:
git pull




So, Where's the PowerShell?

This is a PowerShell blog and so far in this series not much PowerShell has been discussed. As I stated before, the magic is happening in /psake.ps1: https://github.com/markekraus/AutoDocumentsExample/blob/master/psake.ps1

Build Task

The Build task contains the code that adds /RELEASE.md to the ReleaseNotes in the module manifest, maintains /docs/ChangeLog.md, and adds the version and date to /RELEASE.md.

https://github.com/markekraus/AutoDocumentsExample/blob/master/psake.ps1#L114
    # Update release notes with Version info and set the PSD1 release notes
    $parameters = @{
        Path = $ReleaseNotes
        ErrorAction = 'SilentlyContinue'
    }
    $ReleaseText = (Get-Content @parameters | Where-Object {$_ -notmatch '^# Version '}) -join "`r`n"
    if (-not $ReleaseText) {
        "Skipping realse notes`n"
        "Consider adding a RELEASE.md to your project.`n"
        return
    }
    $Header = "# Version {0} ({1})`r`n" -f $BuildVersion, $BuildDate
    $ReleaseText = $Header + $ReleaseText
    $ReleaseText | Set-Content $ReleaseNotes
    Update-Metadata -Path $env:BHPSModuleManifest -PropertyName ReleaseNotes -Value $ReleaseText
    
    # Update the ChangeLog with the current release notes
    $releaseparameters = @{
        Path = $ReleaseNotes
        ErrorAction = 'SilentlyContinue'
    }
    $changeparameters = @{
        Path = $ChangeLog
        ErrorAction = 'SilentlyContinue'
    }
    (Get-Content @releaseparameters),"`r`n`r`n",(Get-Content @changeparameters) | Set-Content $ChangeLog


BuildDocs Task

The BuildDocs task is responsible for creating /mkdocs.yml, copying /RELEASE.md to /docs/RELEASE.md, and creating the function markdown files under /docs/functions/.

https://github.com/markekraus/AutoDocumentsExample/blob/master/psake.ps1#L174
Task BuildDocs -depends Test {
    $lines  # $lines is a separator string defined earlier in psake.ps1
    
    "Loading Module from $ENV:BHPSModuleManifest"
    Remove-Module $ENV:BHProjectName -Force -ea SilentlyContinue
    # platyPS + AppVeyor requires the module to be loaded in Global scope
    Import-Module $ENV:BHPSModuleManifest -force -Global
    
    #Build YAMLText starting with the header
    $YMLtext = (Get-Content "$ProjectRoot\header-mkdocs.yml") -join "`n"
    $YMLtext = "$YMLtext`n"
    $parameters = @{
        Path = $ReleaseNotes
        ErrorAction = 'SilentlyContinue'
    }
    $ReleaseText = (Get-Content @parameters) -join "`n"
    if ($ReleaseText) {
        $ReleaseText | Set-Content "$ProjectRoot\docs\RELEASE.md"
        $YMLText = "$YMLtext  - Realse Notes: RELEASE.md`n"
    }
    if ((Test-Path -Path $ChangeLog)) {
        $YMLText = "$YMLtext  - Change Log: ChangeLog.md`n"
    }
    $YMLText = "$YMLtext  - Functions:`n"
    # Drain the swamp
    $parameters = @{
        Recurse = $true
        Force = $true
        Path = "$ProjectRoot\docs\functions"
        ErrorAction = 'SilentlyContinue'
    }
    $null = Remove-Item @parameters
    $Params = @{
        Path = "$ProjectRoot\docs\functions"
        type = 'directory'
        ErrorAction = 'SilentlyContinue'
    }
    $null = New-Item @Params
    $Params = @{
        Module = $ENV:BHProjectName
        Force = $true
        OutputFolder = "$ProjectRoot\docs\functions"
        NoMetadata = $true
    }
    New-MarkdownHelp @Params | foreach-object {
        $Function = $_.Name -replace '\.md'
        $Part = "    - {0}: functions/{1}" -f $Function, $_.Name
        $YMLText = "{0}{1}`n" -f $YMLText, $Part
        $Part
    }
    $YMLtext | Set-Content -Path "$ProjectRoot\mkdocs.yml"
}
You'll notice that the code imports the updated module into the Global scope. Some combination of PlatyPS, AppVeyor, and psake makes this a necessity. I suspect it is a PlatyPS issue, but I haven't had time to dig through their source code.

You will also notice that this deletes all of the current function markdown files. This is so functions removed from the project no longer have lingering documentation, and because PlatyPS doesn't play nice with preexisting files (at least in my testing it did not).


PostDeploy Task

This task is slightly different. It's not really PowerShell. If someone has a good (this is the key word: good) PowerShell implementation of git, please let me know. All of the ones I have tried are just as terrible as doing what I have done here. I yearn for a PowerShell-native implementation of git. I won't post all of it here since it's not truly PowerShell, but I will explain some of it. The code begins here: https://github.com/markekraus/AutoDocumentsExample/blob/master/psake.ps1#L251

https://github.com/markekraus/AutoDocumentsExample/blob/master/psake.ps1#L258
        "git config --global credential.helper store"
        cmd /c "git config --global credential.helper store 2>&1"
The first line just "echoes" the command being run to AppVeyor. That makes it easier to trace down where something went wrong. Just be careful not to expose your GitHub access token, and probably not the email address either.

All of the git commands redirect stderr to stdout, and this is done in CMD, not PowerShell. The reason is that I want verbose output from the git commands displayed in the AppVeyor output. git.exe puts informational text into stderr, and PowerShell interprets a non-empty stderr from an evaluated command as a sign that something went wrong with the command. Now, it's debatable whether git.exe putting info in stderr is bad or PowerShell interpreting stderr content as an exception is bad, but this is the mess we have to deal with.
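
If you would rather not repeat the echo-then-cmd pattern for every git command, one option is to wrap it in a small helper like the sketch below; Invoke-Git is a hypothetical name, not something in the project:

# A hypothetical helper wrapping the echo-then-cmd pattern
function Invoke-Git {
    param([string]$Arguments)
    # Echo the command so it appears in the AppVeyor output
    "git $Arguments"
    # Run under cmd.exe with stderr merged into stdout so PowerShell
    # does not treat git's informational stderr text as an error
    cmd /c "git $Arguments 2>&1"
}
Invoke-Git 'config --global credential.helper store'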

I tried several different workarounds, but ultimately this got me where I wanted. It has some drawbacks. For example, this means there is no error checking. I realize there is a git.exe option that drops the informative text and thus only writes to stderr when there really is an error, but as I indicated, I wanted verbose output. This came up in one of my build attempts:

https://ci.appveyor.com/project/markekraus/autodocumentsexample/build/1.0.5

You can see git had a fatal error, but since I'm suppressing the errors and not implementing my own error checking, the build passed even though git failed.


Help.Tests.ps1 Pester Test

I also indicated that my Help.Tests.ps1 is slightly different from others. My looping is a little different: I loop around each function because I need to test for a HelpUri.

https://github.com/markekraus/AutoDocumentsExample/blob/master/Tests/Help.Tests.ps1#L10
    foreach($Function in $Functions){
        $help = Get-Help $Function.name
        Context $help.name {
            it "Has a HelpUri" {
                $Function.HelpUri | Should Not BeNullOrEmpty
            }

I am also testing for the existence of at least one .LINK
https://github.com/markekraus/AutoDocumentsExample/blob/master/Tests/Help.Tests.ps1#L16
            It "Has related Links" {
                $help.relatedLinks.navigationLink.uri.count | Should BeGreaterThan 0
            }




未来へ(To the Future)

There is much to be improved on. This is just a start for me. Well, more like a point just beyond the start as this is already several iterations in. There are several flaws in this process.

During the writing of this blog series it became apparent to me that prepending /RELEASE.md to /docs/ChangeLog.md on every build was probably a bad idea. It's probably better to do this part only on deployment builds. That way you could keep /RELEASE.md updated as you make minor changes to the code base without /docs/ChangeLog.md getting cluttered with a bunch of junk and repetition. This, of course, means rethinking all of the documentation build logic to accommodate.
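
One possible gate, sketched here with AppVeyor's built-in environment variables (my assumption, not code from the project), would wrap the prepend from the Build task like so:

# A sketch; only prepend RELEASE.md to the ChangeLog on deployment builds
if ($ENV:APPVEYOR_REPO_BRANCH -eq 'master' -and -not $ENV:APPVEYOR_PULL_REQUEST_NUMBER) {
    (Get-Content @releaseparameters),"`r`n`r`n",(Get-Content @changeparameters) | Set-Content $ChangeLog
}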

Another thing that needs improvement is figuring out a way to have ReadTheDocs only build after an AppVeyor commit instead of every GitHub commit. That would also mean some other build logic to handle documentation only repo updates.

I would also like to find a way to keep the default ReadTheDocs build to match the current version available on PowerShell Gallery. At least, I'd like a way to connect the published versions of the code back to the correct documentation version. I don't really see how that is possible though. Maybe further manipulation of /mkdocs.yml could achieve that. I need to research deeper.

I definitely need to get some error detection around my git code in /psake.ps1. I researched how other major projects do this; many of them run their git commands directly from /appveyor.yml. But I want to keep /appveyor.yml configuration-only and /psake.ps1 code-only. Which brings me to my final point:

I would like to move more of the configuration out of /psake.ps1 and into /appveyor.yml. Basically, anything static should be in /appveyor.yml (e.g. change log path) and anything that needs to be dynamically generated (e.g. build version) should be in /psake.ps1.



Closing Thoughts

I hope this series has been helpful and informative. I hope the amount of time and effort I put into it shows. Most of all, I really hope to see more documentation processes included in PowerShell build pipelines, even if what I have done here provides no help other than to raise the topic to the level of attention it deserves. If you have corrections, suggestions, or comments, please don't hesitate to let me know. Thanks for reading!

Go Back to Part 3

Write The FAQ ‘n Manual (Part 3)

Automated Documentation in a CI/CD Pipeline for PowerShell Modules with PlatyPS, psake, AppVeyor, GitHub and ReadTheDocs


Part 3: mkdocs & Release Preparations and Pushing the Release

Prepare /header-mkdocs.yml

As explained before, the /header-mkdocs.yml file is used to generate the /mkdocs.yml file, which is used by ReadTheDocs to create the documentation site. For the most part you can take the /header-mkdocs.yml file from the AutoDocumentsExample project and modify it for your needs. Just remember that any changes you make to /mkdocs.yml will be overwritten by the build process. Any changes you want to make should be made to /header-mkdocs.yml instead.

For full documentation on mkdocs.yml, you can read more here: http://www.mkdocs.org/user-guide/configuration/

If you just want to grab, modify, and go, here are some of the lines and what they mean:
  • site_name is used to create the name or title of the documentation site
  • repo_url contains the link to the project repository on GitHub
  • site_author is the Name of the person, persons, company, or organization responsible for the project
  • edit_uri is a relative path from the URL defined in repo_url for editing items. This will be used for constructing “edit this page” type links. This is very useful for all of your pages that are not automatically generated by PlatyPS.
  • copyright contains a line about the copyright notices for the project and documentation.
site_name: AutoDocumentsExample - Write The FAQ ‘n Manual
repo_url: https://github.com/markekraus/AutoDocumentsExample
site_author: Mark Kraus
edit_uri: edit/master/docs/
copyright: 'AutoDocumentsExample is licensed under the <a href="https://github.com/markekraus/AutoDocumentsExample/raw/master/LICENS">MIT license</a>'


Themes

Themes can be used to change the look and feel of your documentation site. ReadTheDocs comes with 2 built-in themes: mkdocs and readthedocs. You can see the themes and read more about styling at http://www.mkdocs.org/user-guide/styling-your-docs/

For simplicity, you can choose one of the default themes by modifying theme in the /header-mkdocs.yml.
theme: readthedocs

If you want to use someone else’s theme or create your own, you will need to include the theme folder in your project. Then instead of theme, you will need to use theme_dir. I recommend creating a /docs/themes/ folder and then adding the theme folder under there. For example, for a brief period I was using the PSinder theme on the PSMSGraph ReadTheDocs site. I did this by placing the PSinder files in /docs/themes/psinder/ and then setting theme_dir to docs/themes/psinder in /header-mkdocs.yml.
theme_dir: docs/themes/psinder

You may want to play around with this a bit before committing to a theme. In my experience the readthedocs theme is the best in terms of working with large numbers of pages, though I’m not exactly thrilled with the aesthetics of the theme. The mkdocs and derivative themes like Cinder and PSinder do not work well with sections that contain a large number of pages. I found that many of my function pages were not selectable in the drop-down menus because they were displayed off screen with no scrolling available. If your project has only a few functions, this might not be an issue.



Additional Pages and Sections

You are not limited to the pages and sections provided here. It is entirely possible to extend this. The idea is that the functions will be tacked on as individual pages under a Functions section. To add additional pages, create them as .md files under /docs/. You can even create more folders under /docs/ to group similar pages into a section. Then just update the pages section in /header-mkdocs.yml.

For example, I plan to add an Examples section to PSMSGraph. To do so I will create the /docs/Examples/ folder, add several files (/docs/Examples/example01.md, /docs/Examples/example02.md, etc.), and then update /header-mkdocs.yml like so:

pages:
  - Home: index.md
  - Examples:
    - Retrieving Organization Details: Examples/example01.md
    - Uploading a file to OneDrive for Business: Examples/example02.md
    - Adding a Calendar Event: Examples/example03.md



Preparing for Release


Assuming you have updated your code, updated the relevant comment based help, and have your /header-mkdocs.yml configured to your liking, you should be ready to publish a release and deploy your module. Before that, you should update your release documentation.

There are two pieces to the release documentation: /RELEASE.md and /docs/ChangeLog.md. /RELEASE.md is intended to function as the Release Notes, documenting the changes and features added in the current release. /docs/ChangeLog.md is intended to house the current release notes and the release notes for all previous releases. Before you push your release, /RELEASE.md needs to be updated. You do not need to update /docs/ChangeLog.md, as the build process will maintain it for you by prepending /RELEASE.md to it.

/RELEASE.md is a markdown file, so you can use markdown formatting or plain text. It will be included in the ReadTheDocs documentation site. It will also be added to the ReleaseNotes field in the module manifest, which ultimately means it will also display in the PowerShell Gallery if you are publishing there. Currently, the PowerShell Gallery does not format the markdown in the release notes. With these in mind, here are some recommendations for formatting /RELEASE.md:
  • Keep to simple formatting so it is still readable as plain text
  • The Version number and date are prepended to the file with a # heading. Use ## for major headers instead of # in your body
  • Use ### for subheadings
  • Consider using just URLs and not trying to create formatted links
  • Consider alternating bullet types for each indentation level:
* First Level
    - Second Level
        + Third Level


The formatting is really up to your preferences. The only hard recommendation I have is the one about heading levels. With the version made the H1 header, /docs/ChangeLog.md ends up with better sectioning: all of the relevant changes for a specific version are nested under the version header as H2 headings.



PowerShell Syntax Highlighting

Unfortunately, ReadTheDocs doesn’t really support PowerShell syntax highlighting for script blocks, but GitHub does. Also, the PowerShell Gallery does not do any formatting. It would probably be best to avoid putting script blocks in your /RELEASE.md so it has a somewhat consistent look across all three services. If you do add script blocks in any of your pages, consider using the following method:

```powershell
$Widgets = Get-Widget 
```

Using that will have the proper syntax highlighting on GitHub and on ReadTheDocs it will appear as a normal preformatted text block. If ReadTheDocs should add PowerShell syntax highlighting in the future, this should be forwards compatible.



Git: Stage, Commit, Push

At this point your code has been updated and your release has been prepped. It is time to work some Git magic. This part should be all too familiar by now. There is one thing to which I want to draw attention, and that is the commit message. Our build process can be skipped by including any of the following strings in your commit message:
  • [ci skip]
  • [skip ci]
  • [skip appveyor]
Note that the square brackets must also be included. This is good for commits which only update /README.md or stage /RELEASE.md before merging code. Using these will result in the commit and push not triggering the build pipeline on AppVeyor, so your documentation should remain unchanged. However, this will not stop the documentation build on ReadTheDocs. If you edit files in the /docs/ folder and push those changes, ReadTheDocs will build the changes to the documentation even if you include the skip tags in your commit message.
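
For example, a documentation-only commit that skips the AppVeyor build:
git add README.md
git commit -m 'Update README [ci skip]'
git push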

Anyway, assuming you are ready to stage/commit/push:
git add -A
git commit -m 'First Release!'
git push


Go back to Part 2
Continue to Part 4

Write The FAQ ‘n Manual (Part 2)

Automated Documentation in a CI/CD Pipeline for PowerShell Modules with PlatyPS, psake, AppVeyor, GitHub and ReadTheDocs



Part 2: GitHub Access Token, ReadTheDocs Account & Project, and Comment Based Help


Generate and Configure a GitHub Personal Access Token


Since our Documentation Build process is part of our build automation, we need a way for AppVeyor (where our build process is running) to write the documentation back to GitHub (where our documentation is stored). Obviously, this means we need a way to authenticate from AppVeyor to GitHub to make those changes. We could throw caution to the wind and just hard-code our GitHub username and password into /psake.ps1; however, we are security-conscious coders and would never do such a thing.

Lucky for us security-conscious coders, GitHub offers what they refer to as Personal Access Tokens. These are neat for a variety of reasons, such as providing scoped access and the ability to create one for every need. Best practice is to use one Personal Access Token for a single purpose and create more for additional requirements. That means that if you already have a Personal Access Token used for something else, you will need to create a new one for this pipeline.

To create a Personal Access Token, follow the instructions here: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/. For our purposes, the only scope that is required is public_repo. I highly recommend using a useful and descriptive name. I also highly recommend storing the token in a password manager.

Security-conscious coders also care about layered security. We do not want to put this token in plaintext in our build code. Once you have your token, you will need to log in to your AppVeyor account and create a secure string for your token (as a refresher, you can read https://www.appveyor.com/docs/build-configuration/#secure-variables ).

Once you have the secure string, update /appveyor.yml and modify the access_token environment variable.


environment:
  access_token:
    secure: +mnipwj1c7UIzB4XZzoxTTZEnsN/i6M3MyskHX/4wQUYCrCL5yQNR/1Qf1ws21bu

 

Create your ReadTheDocs Account and Project


We need a ReadTheDocs account to build our documents. I could not find a premade ReadTheDocs tutorial, so for the benefit of all, I’m making one here. The TL;DR is: create a ReadTheDocs account, link your GitHub account, import your GitHub project, and modify the documentation type to mkdocs.

For those who want pretty pictures, here you go.

Create Your Account


  1. Go To https://readthedocs.org/
  2. Click the Sign Up button
  3. On the Sign Up Page enter a username, email address, and password
  4. On the next page click Confirm to confirm your email address
  5. Check your email and follow the email confirmation instructions

 

Link Your GitHub Account


  1. On the landing page for your new account, click the Connect Your Accounts button
  2. Click the Connect to GitHub button
  3. Follow the instructions from GitHub for linking your account
  4. When you have finished you should see your GitHub account listed under Connected Services

 

Import Your GitHub Project and Modify the Documentation Type


  1. Go to your dashboard https://readthedocs.org/dashboard/
  2. Click the Import a Project button
  3. Under Import a Repository, click the Plus sign button next to your GitHub repo
  4. On the Project Details page Check the Edit advanced project options check box and click Next
  5. Locate the Documentation type dropdown box and choose Mkdocs (Markdown)
  6. Change the Programming Language to Other (☹ maybe PowerShell will be added someday)
  7. Modify the Project Home page (I use my GitHub repo as my project home page)
  8. Click the Finish button



Build your Comment Based Help


The source of all your function documentation will come from your comment based help. The build process will build your documentation directly from your comment based help. You should begin thinking of comment based help as a critical part of your code. If you want to ensure your documentation is complete, you can Pester test for it: the /Tests/Help.Tests.ps1 test included in the project will fail the build if any of the following are true (a sketch of one such check follows this list):
  • HelpUri is missing from the function definition
  • There is not at least one .LINK entry
  • There is no .DESCRIPTION
  • There is no .EXAMPLE
  • There is not a .PARAMETER for every parameter
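
A sketch of one such check, in the same style as the project's Help.Tests.ps1 (the assertion here is illustrative, not copied from the project):

# A sketch; assumes $help was populated with Get-Help for the function
It "Has a Description" {
    $help.description | Should Not BeNullOrEmpty
}
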
You should consider the following when writing your comment based help:
  • Check your spelling, punctuation and grammar
  • PlatyPS will mangle custom formatting, so keep it simple
  • PlatyPS will only create proper links when .LINK is a URL.
  • The first .LINK should be to the current functions own online documentation and should match the HelpUri
  • Include .LINK’s with the full URL to the online documentation of related functions instead of just function names.
  • If your function calls another, add a .LINK to the called function’s online documentation
  • Include .LINK’s with the full URL to related documentation (API’s, MSDN, 3rd Party documentation, etc)
  • If one function has a .LINK to a second function, ensure the second function has a .LINK to the first function.
  • Add a .LINK to the GitHub page for the source code
  • Add a .OUTPUTS that contains a list of fully qualified type names of the objects types your function emits if any
  • Add a .INPUTS that contains a list of fully qualified type names of the objects types your function ingests if any
  • Be thorough in your description
  • Include at least one .EXAMPLE per parameter set
  • If your function includes pipeline support, include a .EXAMPLE for each
  • If your function includes positional parameter support, add a .EXAMPLE in addition to and not in replacement of a .EXAMPLE with fully named parameters
Most importantly, keep your comment based help updated.
  • If functionality changes update the .SYNOPSIS and .DESCRIPTION
  • If you add, remove, or rename a parameter, do the same with the .PARAMETER’s and .EXAMPLE’s
  • If you add a new function that will feed into or from another function update the other function’s .LINK’s.
  • If you rename the function, update the name in the .EXAMPLE
  • If a link changes for a function, update the related functions .LINK’s
You may have noticed this is very .LINK heavy. One of the benefits of online documentation is easily navigating to related documents. Normally, the .LINK’s are just the names of other functions. That is acceptable from the command line for seeing related functions, but it does us no good online. If you are not creating online documentation from the comment based help, the .LINK can be ignored, but for our purposes it becomes very important.

Comment Based Help for Get-Widget:

<#
    .SYNOPSIS
        Gets a Widget from the Widget store
    
    .DESCRIPTION
        Retrieves information about a widget from the widget store based on either ID or Name
    
    .PARAMETER Id
        GUID ID of the Widget
    
    .PARAMETER Name
        The Name of the Widget to retrieve from the Widget store
    
    .EXAMPLE
        PS C:\> Get-Widget -Id b54dfddd-f721-4d3a-ae8a-a1227315a66f
    
    .EXAMPLE
        PS C:\> Get-Widget -Name 'My Widget'
    
    .OUTPUTS
        widget, widget
    
    .NOTES
        Additional information about the function.
    
    .LINK
        http://autodocumentsexample.readthedocs.io/en/latest/functions/Get-Widget.md
    
    .LINK
        http://autodocumentsexample.readthedocs.io/en/latest/functions/Set-Widget.md
    
    .LINK
        https://github.com/markekraus/AutoDocumentsExample/blob/master/AutoDocumentsExample/Public/Get-Widget.ps1
    
    .LINK
        https://store.adatum-widgets.com/
#>




URL Notes

There are a few things to consider when you are creating your comment based help. First, PlatyPS will create the file names for the documentation based on the function definition, not the name of the file in which the function is defined. Also, the URLs on ReadTheDocs are case sensitive. If you use a different casing strategy for the file names than you do for the actual function definition, this could lead to confusion. For example, if your function is in a file named Get-Widget.ps1 but the function definition has get-widget, then PlatyPS will create the file as get-widget.md.

Also, it is possible to create your .LINKs in the comment based help without creating the documentation first. The URL follows this convention:

<base ReadTheDocs domain>/en/latest/functions/<function name as defined in function definition>.md

To give you an idea of the URLs for a given function, here is an example using the Get-Widget function from the AutoDocumentsExample project:

http://autodocumentsexample.readthedocs.io/en/latest/functions/Get-Widget.md

Go back to Part 1
Continue to Part 3