Microsoft Foundry Local - Disconnected

This post shows you how to deploy Microsoft Foundry Local onto a disconnected machine with no access to the internet. You will need an internet-connected machine to stage the model and installation media, plus a way of getting those items across to the disconnected machine.

From a machine with access to the Internet:

  1. Install Foundry Local as normal.
  2. Download the required model via foundry model download qwen2.5-0.5b-instruct-generic-cpu:4, substituting the model of your choosing for qwen2.5-0.5b-instruct-generic-cpu:4.
  3. Navigate to the Foundry cache location. You can discover it by running foundry cache location; it defaults to c:\users\username\.foundry\cache\models, where username is the currently logged-in user.
  4. Copy foundry.modelinfo.json and the vendor\model folder to the transfer media (a USB key, or whatever you’re using to move files across the airgap). At this point, the transfer media should contain the json file and a folder named Microsoft with a model folder inside it (such as \Microsoft\qwen2.5-0.5b-instruct-generic-cpu-4).
  5. Download the Foundry Local offline installer from Releases · microsoft/Foundry-Local and copy that to the transfer media.
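The staging steps above can be sketched in PowerShell. Treat this as an untested outline rather than a definitive script: the model name is the example from this post, and the D:\ drive letter for the transfer media is an assumption, so substitute your own values.

```powershell
# Sketch of the staging steps, assuming Foundry Local is already installed
# and D:\ is the transfer media (both assumptions - adjust to your setup).
$model = 'qwen2.5-0.5b-instruct-generic-cpu:4'
foundry model download $model

# Discover the cache location rather than hard-coding it. Depending on the
# CLI version, the output may include a label alongside the path, so verify
# the value before using it.
$cache = foundry cache location

# Copy the model info file and the vendor\model folder to the transfer media.
Copy-Item (Join-Path $cache 'foundry.modelinfo.json') D:\
Copy-Item (Join-Path $cache 'Microsoft') D:\Microsoft -Recurse
```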

On the air-gapped machine:

  1. Install Foundry Local from the transfer media (on an x64 Windows machine, this is done within PowerShell by running Add-AppPackage -Path FoundryLocal-x64-0.8.119.msix, with the version number matching the installer you downloaded).
  2. Copy the foundry.modelinfo.json file and the vendor\model folder into a new folder on the disconnected machine such as c:\users\username\Models.
  3. Change the cache location via foundry cache cd c:\users\username\Models.
  4. You can then run the model normally from the cache using foundry model run qwen2.5-0.5b-instruct-generic-cpu:4.
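Put together, the air-gapped side might look like the following in PowerShell. Again a sketch, not a definitive script: it assumes the transfer media is D:\ and uses the installer version and model from this post, so adjust names to match what you staged.

```powershell
# Install Foundry Local from the offline installer on the transfer media
# (D:\ and the installer version are assumptions - adjust to your setup).
Add-AppPackage -Path D:\FoundryLocal-x64-0.8.119.msix

# Stage the model files in a local folder and point the cache at it.
New-Item -ItemType Directory -Path $env:USERPROFILE\Models -Force | Out-Null
Copy-Item D:\foundry.modelinfo.json $env:USERPROFILE\Models\
Copy-Item D:\Microsoft $env:USERPROFILE\Models\Microsoft -Recurse
foundry cache cd $env:USERPROFILE\Models

# Run the model from the relocated cache.
foundry model run qwen2.5-0.5b-instruct-generic-cpu:4
```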

Using Obsidian Copilot with a Local LLM

This is a quick walkthrough of configuring Obsidian Copilot with LM Studio for local models and embeddings, for anyone who wants to run the AI pieces fully locally.

  1. Install LM Studio from their website or via winget (winget install ElementLabs.LMStudio).
  2. Install the community plugin Obsidian Copilot.
  3. Launch LM Studio and click on the magnifying glass icon in the left-hand blade called “Discover.”
  4. Search for a model that will fit in your computer’s memory capacity - for the remainder of this walkthrough, I will use the model gemma-3-4b-it-qat.
  5. Click on the console tab in the left-hand blade called “Developer.” Click the button in the top middle of the page that says “Select a model to load (Ctrl+L)”. Load both the model you deployed in step 4 and the default embedding model, text-embedding-nomic-embed-text-v1.5.
  6. Click on the button in the top left that says “Settings” and enable CORS. Click the slider next to “Status: Stopped” to start the local model server. After this, it should say “Status: Running.”
  7. In Obsidian, navigate to the Obsidian Copilot settings. Click on the tab marked “Model.” Uncheck all current models (unless you intend to use them) - this will eliminate some errors that would otherwise be displayed.
  8. Under “Chat Models”, click the button marked “+ Add Custom Model.”
    1. Give the model the name it’s listed as in LM Studio - in this case, it’s “gemma-3-4b-it-qat”.
    2. For provider, select “LM Studio.”
    3. Leave “Base URL” blank.
    4. For an API key, put in a few characters (it is unimportant in this case).
    5. Enable CORS, click “Verify” to validate that the configuration is correct, then click “Add Model.”
  9. Scroll down to “Embedding Models” and click on the button marked “+ Add Custom Model.”
    1. Give the model the name it’s listed as in LM Studio - in the default case, it’s “text-embedding-nomic-embed-text-v1.5”.
    2. For provider, select “LM Studio.”
    3. Leave “Base URL” blank.
    4. For an API key, put in a few characters (it is unimportant in this case).
    5. Enable CORS, click “Verify” to validate that the configuration is correct, then click “Add Model.”
  10. Return to the “Basic” tab in the settings panel and select the models you created for “Default Chat Model” and “Embedding Model.”
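Before pointing Obsidian Copilot at the server, you can confirm it is reachable by querying LM Studio’s OpenAI-compatible API from PowerShell. This assumes LM Studio’s default port of 1234; change the URL if you customized the server settings in the Developer tab.

```powershell
# List the models the local LM Studio server currently exposes.
# The endpoint below is LM Studio's default; verify it in the Developer tab.
$resp = Invoke-RestMethod http://localhost:1234/v1/models
$resp.data | Select-Object id
```

Both the chat model and the embedding model you loaded earlier should appear in the list; if the request fails, check that the server status shows “Running.”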

It's Always DNS - Resolve-DnsName

“If you’ve done any DNS work in the past you may have leveraged the tool nslookup. While this tool does perform DNS queries, it is not representative of how Windows resolves DNS queries.

NSlookup is a self-contained executable that does not leverage the Windows DNS client resolver. Its behavior doesn’t match the OS.

If you would like to perform DNS queries from the command line, I recommend using the PowerShell cmdlet, Resolve-DnsName which does use the native Windows DNS Client resolver.” - Introduction to Network Trace Analysis 4: DNS (it’s always DNS)

This was news to me - oddly enough, it came in handy just a few days after that page was posted. I was troubleshooting an Azure VPN point-to-site (P2S) DNS issue where NRPT (Name Resolution Policy Table) rules were being used for resolution. Resolve-DnsName resolved properly; nslookup didn’t.
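For comparison, here is the kind of side-by-side check that surfaces the difference (example.com stands in for whatever name you are troubleshooting):

```powershell
# Resolve-DnsName goes through the Windows DNS Client service, so NRPT rules
# and other client-side policy apply; nslookup sends its own queries directly
# to the configured server and bypasses that policy.
Resolve-DnsName example.com -Type A
nslookup example.com

# To see which NRPT rules are in effect on the client:
Get-DnsClientNrptPolicy
```

If the two tools return different answers for the same name, client-side policy such as NRPT is a likely suspect.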