The BOSH CLI v2 greatly improves how you interact with and deploy Cloud Foundry and other BOSH releases. The new release, at the time of writing still in beta, is written in Go, which allows it to be distributed as a single binary without dependencies. Windows support is planned, though binaries are not yet distributed. There are quite a few changes to the CLI commands; check them out here. The CLI now also includes a mechanism to bootstrap the BOSH Director, which previously required bosh-init, or a Vagrantfile in the case of bosh-lite.
This post will walk you through the steps required to deploy bosh-lite on VirtualBox and then deploy Cloud Foundry on top of it. Bosh-lite is a single-VM development environment that deploys BOSH releases into Garden containers.
Prerequisites
- git
- VirtualBox
- 8GB memory recommended
- Linux or Mac
- Windows is coming
- cf cli
BOSH CLI v2
First of all, we need to download the latest BOSH CLI v2. You can find the latest version here.
After that, we make the CLI globally accessible:
chmod +x ~/Downloads/bosh-cli-*
sudo mv ~/Downloads/bosh-cli-* /usr/local/bin/bosh
BOSH Director Deployment
We need the bosh-deployment repository, which holds a collection of BOSH manifests. We also create a directory to hold some deployment-specific configuration files.
git clone https://github.com/cloudfoundry/bosh-deployment ~/workspace/bosh-deployment
mkdir -p ~/deployments/vbox
cd ~/deployments/vbox
The next step is to bootstrap the bosh-lite Director on your local VirtualBox installation. You can modify the network settings if you like.
bosh create-env ~/workspace/bosh-deployment/bosh.yml \
  --state ~/deployments/vbox/state.json \
  -o ~/workspace/bosh-deployment/virtualbox/cpi.yml \
  -o ~/workspace/bosh-deployment/virtualbox/outbound-network.yml \
  -o ~/workspace/bosh-deployment/bosh-lite.yml \
  -o ~/workspace/bosh-deployment/bosh-lite-runc.yml \
  -o ~/workspace/bosh-deployment/jumpbox-user.yml \
  --vars-store ~/deployments/vbox/creds.yml \
  -v director_name="Bosh Lite Director" \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork
Lastly, we create a BOSH environment alias and a few environment variables to reduce the amount of typing. Note that you will need to recreate these variables whenever you open a new shell.
bosh -e 192.168.50.6 --ca-cert <(bosh int ~/deployments/vbox/creds.yml --path /director_ssl/ca) alias-env vbox
export BOSH_CA_CERT=$(bosh int ~/deployments/vbox/creds.yml --path /director_ssl/ca)
export BOSH_CLIENT=admin
export BOSH_CLIENT_SECRET=$(bosh int ~/deployments/vbox/creds.yml --path /admin_password)
export BOSH_ENVIRONMENT=vbox
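Since these variables must be recreated in every new shell, one option is to persist them in a small script and source it on demand. This is just a convenience sketch; the env.sh filename is my own choice, and the paths assume the layout used throughout this post.

```shell
# Write the exports to a reusable script (env.sh is a hypothetical name).
# The quoted heredoc delimiter keeps the $(bosh int ...) calls unexpanded,
# so the passwords are read fresh each time the script is sourced.
mkdir -p ~/deployments/vbox
cat > ~/deployments/vbox/env.sh <<'EOF'
export BOSH_CA_CERT=$(bosh int ~/deployments/vbox/creds.yml --path /director_ssl/ca)
export BOSH_CLIENT=admin
export BOSH_CLIENT_SECRET=$(bosh int ~/deployments/vbox/creds.yml --path /admin_password)
export BOSH_ENVIRONMENT=vbox
EOF
```

In any new shell, run `source ~/deployments/vbox/env.sh` to restore the environment.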
Cloud Foundry Deployment
We are going to use cf-deployment for deploying Cloud Foundry. The repository is still in its early stages, but it allows us to deploy Cloud Foundry, using Diego, in one simple step. It looks like it will be the successor of cf-release and diego-release.
First we are cloning the repository:
git clone https://github.com/cloudfoundry/cf-deployment ~/workspace/cf-deployment
cd ~/workspace/cf-deployment
The next step is to upload the required bosh-lite stemcell, which is referenced at the bottom of cf-deployment.yml.
At the time of writing:
stemcells:
- alias: default
  os: ubuntu-trusty
  version: "3421.11"
Using bosh interpolate we can extract the stemcell version and upload it:
export STEMCELL_VERSION=$(bosh int ~/workspace/cf-deployment/cf-deployment.yml --path /stemcells/alias=default/version)
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent?v=$STEMCELL_VERSION
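If you are curious what `bosh int --path` extracts here, the following self-contained sketch reproduces the lookup on a throwaway copy of the stemcell block using plain awk, so you can try it without a director or even the bosh binary. The temp file path is arbitrary, and awk is only a stand-in for the real YAML-aware `bosh int`.

```shell
# Recreate the stemcell block from cf-deployment.yml in a temp file.
cat > /tmp/stemcell-snippet.yml <<'EOF'
stemcells:
- alias: default
  os: ubuntu-trusty
  version: "3421.11"
EOF

# Pull out the version value, stripping the surrounding quotes.
STEMCELL_VERSION=$(awk '/version:/ {gsub(/"/, "", $2); print $2}' /tmp/stemcell-snippet.yml)
echo "$STEMCELL_VERSION"   # 3421.11
```

The real command, `bosh int <manifest> --path /stemcells/alias=default/version`, does the same thing but navigates the YAML structure properly, selecting the list entry whose alias is `default`.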
We are using the new cloud config model, which allows the separation of IaaS-specific configuration from the deployment manifest.
bosh update-cloud-config ~/workspace/cf-deployment/iaas-support/bosh-lite/cloud-config.yml
The last step is to create and deploy our Cloud Foundry release. This may take a long time, depending mostly on your internet connection.
bosh -d cf deploy ~/workspace/cf-deployment/cf-deployment.yml \
  -o ~/workspace/cf-deployment/operations/bosh-lite.yml \
  --vars-store ~/deployments/vbox/deployment-vars.yml \
  -v system_domain=bosh-lite.com
To speed up the deployment, you can use precompiled releases by adding the following operations file:
-o ~/workspace/cf-deployment/operations/use-compiled-releases.yml
If nothing failed, you have successfully deployed Cloud Foundry. Yay!
The last step is to create a local route so that you can access the Cloud Foundry environment from your host.
Linux:
sudo route add -net 10.244.0.0/16 gw 192.168.50.6
Mac:
sudo route add -net 10.244.0.0/16 192.168.50.6
Configure Cloud Foundry
First we need to log in to Cloud Foundry. The default username is admin, and the password can be found in deployment-vars.yml, which was generated by bosh deploy in the previous steps.
The following command will log you in and read the password from the file.
cf login -a https://api.bosh-lite.com --skip-ssl-validation -u admin -p $(bosh interpolate ~/deployments/vbox/deployment-vars.yml --path /cf_admin_password)
Lastly we have to create an organization and a space:
cf create-org cloudfoundry
cf target -o cloudfoundry
cf create-space development
cf target -o cloudfoundry -s development
You are now ready to deploy your first app!
Deploy to Cloud Foundry
I’ve created a very simple Python application which can be used for testing.
git clone https://github.com/vchrisb/cf-helloworld ~/workspace/cf-helloworld
cd ~/workspace/cf-helloworld
cf push
You should now be able to access the app locally via http://cf-helloworld.bosh-lite.com.
To try out Docker support, we first need to enable it and then push a Docker image.
cf enable-feature-flag diego_docker
cf push test-app -o cloudfoundry/test-app
Suspend Environment
Please do not reboot or shut down your BOSH Director VM, as it won’t come up properly again!
To save its state, you can suspend and resume your BOSH Director with the following commands:
vboxmanage controlvm $(bosh int ~/deployments/vbox/state.json --path /current_vm_cid) savestate
vboxmanage startvm $(bosh int ~/deployments/vbox/state.json --path /current_vm_cid) --type headless
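For convenience, the two commands can be wrapped in small shell functions. The function names below are my own invention; they assume the state.json path used throughout this post, and they read the VM CID fresh on each call so they survive a recreated director.

```shell
# Look up the director VM's CID from the deployment state file.
vm_cid() { bosh int ~/deployments/vbox/state.json --path /current_vm_cid; }

# Suspend the director VM, saving its state to disk.
bosh_lite_suspend() { vboxmanage controlvm "$(vm_cid)" savestate; }

# Resume the suspended director VM in headless mode.
bosh_lite_resume() { vboxmanage startvm "$(vm_cid)" --type headless; }
```

Put these in your shell profile and run `bosh_lite_suspend` before shutting down your machine.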
Delete Environment
Your BOSH environment can be removed with the following commands:
bosh delete-env ~/workspace/bosh-deployment/bosh.yml \
  --state ~/deployments/vbox/state.json \
  -o ~/workspace/bosh-deployment/virtualbox/cpi.yml \
  -o ~/workspace/bosh-deployment/virtualbox/outbound-network.yml \
  -o ~/workspace/bosh-deployment/bosh-lite.yml \
  -o ~/workspace/bosh-deployment/bosh-lite-runc.yml \
  -o ~/workspace/bosh-deployment/jumpbox-user.yml \
  --vars-store ~/deployments/vbox/creds.yml \
  -v director_name="Bosh Lite Director" \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork
rm ~/deployments/vbox/*
SSH into BOSH Director
If you would like to get a console on the BOSH Director, you can use the following commands:
umask 077; touch ~/deployments/vbox/director_priv.key
bosh int ~/deployments/vbox/creds.yml --path /jumpbox_ssh/private_key > ~/deployments/vbox/director_priv.key
ssh jumpbox@192.168.50.6 -i ~/deployments/vbox/director_priv.key
Conclusion
Deploying Cloud Foundry locally with BOSH CLI v2 and cf-deployment is straightforward. The new CLI greatly improves usability and reduces the deployment effort.
Happy pushing!
Thanks for this very handy guide. I found one minor mistake in the Configure Cloud Foundry section: the path passed to bosh interpolate should be ~/deployments/vbox/deployment-vars.yml instead of the cf-deployment path.
Thank you! I’ve fixed it.
Hi, I am a Chinese dev, and your article is very helpful for me. I would like to reference your article on my [blog](http://edwardesire.com).
Pingback: Cloud Foundry Networking on Bosh-lite - christopherBANCK
Hi, thank you for the article – I tried it but I get always “Error: Timed out pinging to xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx after 600 seconds” for all vms during deploy step “Compiling packages” … do you have tips to pin down the error?
I thought it is my environment: proxy and mitm but also with direct internet access there are the same errors …
has been fixed: https://github.com/cloudfoundry/bosh-deployment/commit/cdf37d865e91a79311eee60c9b8c0a2f5d553c07
Hi Christopher Banck,
Thanks for such great doc,
one thing has changed in BOSH CLI v2: the field name for the cf login password is now cf_admin_password instead of uaa_scim_users_admin_password.
So the command will be now :
cf login -a https://api.bosh-lite.com --skip-ssl-validation -u admin -p $(bosh interpolate ~/deployments/vbox/deployment-vars.yml --path /cf_admin_password)
thank you! Fixed that!
Nice post.
I followed all the steps with default settings, but failed when updating cf instances at last. The error messages are as below:
17:39:06 | Updating instance api: api/7329016e-eb99-4793-bc8e-ef6360f6cd55 (0) (canary) (00:26:46)
L Error: ‘api/0 (7329016e-eb99-4793-bc8e-ef6360f6cd55)’ is not running after update. Review logs for failed jobs: cloud_controller_ng, routing-api
17:50:12 | Updating instance uaa: uaa/a8980f34-30cd-4d8c-9939-f8f71551d02e (0) (canary) (00:37:53)
L Error: Action Failed get_task: Task 00f0f09b-dcad-4b78-7381-e9593e70c5f3 result: 1 of 1 post-start scripts failed. Failed Jobs: uaa.
17:50:12 | Error: ‘api/0 (7329016e-eb99-4793-bc8e-ef6360f6cd55)’ is not running after update. Review logs for failed jobs: cloud_controller_ng, routing-api
After checking, it looks like a uaa problem:
uaa post-start.stderr.log :
Failed to connect to 127.0.0.1 port 8989: Connection refused
uaa monit.log:
‘uaa’ failed, cannot open a connection to INET[localhost:8989/healthz] via TCP
Is there any advice?
Thanks a lot.
Thank you for the detailed guide. I believe there is a minor mistake in the cf deployment section. Path to cloud-config.yml should be workspace/cf-deployment/iaas-support/bosh-lite/cloud-config.yml.
The new command will be:
bosh update-cloud-config ~/workspace/cf-deployment/iaas-support/bosh-lite/cloud-config.yml
Thank you!
Yes, it looks like the cf-deployment repo was refactored.
Hey Chris, Your article is very helpful.
I got a question.. why are we using the following command? where are we uploading stemcell?
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent?v=$STEMCELL_VERSION
The stemcell is uploaded to the BOSH Director. It would normally be used to spin up VMs, but in this test setup it uses Warden containers instead.
Hi,
Thanks for the detailed steps of installation, this has helped us a ton. Thank you so much.
But one issue, after the cf deployment we are executing the command “cf marketplace” , but it is not listing down any services..
Can you help me in getting the major services.
Thanks,
Deekshit
Slight improvement to the stemcell upload:
export STEMCELL_OS=$(bosh int ~/workspace/cf-deployment/cf-deployment.yml --path /stemcells/alias=default/os)
export STEMCELL_VERSION=$(bosh int ~/workspace/cf-deployment/cf-deployment.yml --path /stemcells/alias=default/version)
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-${STEMCELL_OS}-go_agent?v=${STEMCELL_VERSION}
Getting this error consistently during bosh cf deploy
Task 31
Task 31 | 13:06:48 | Preparing deployment: Preparing deployment (00:00:05)
Task 31 | 13:07:01 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 31 | 13:07:02 | Creating missing vms: nats/7c8218d3-8566-4aeb-bc68-602480e6426a (0)
Task 31 | 13:07:02 | Creating missing vms: adapter/f57be63d-cdab-484d-b343-311f7d01a54f (0)
Task 31 | 13:07:02 | Creating missing vms: singleton-blobstore/371e7c08-e9bb-469c-82bb-a156eef76e15 (0)
Task 31 | 13:07:02 | Creating missing vms: diego-api/5f2f4d54-5644-4dba-80ea-9d4698c7d413 (0)
Task 31 | 13:07:02 | Creating missing vms: uaa/23b3ecc0-018c-4b6a-8285-6cf5afd6b86a (0)
Task 31 | 13:07:02 | Creating missing vms: api/5777f8c6-a507-474c-8b36-ab47ba0010e2 (0)
Task 31 | 13:07:02 | Creating missing vms: database/cb745bea-6700-4b14-bbba-95525fd02c81 (0)
Task 31 | 13:07:02 | Creating missing vms: cc-worker/d2b999b8-182d-4c65-b21a-8810cbc05104 (0)
Task 31 | 13:07:02 | Creating missing vms: scheduler/89981132-488b-41e6-a288-b2cbe262fb1b (0)
Task 31 | 13:07:02 | Creating missing vms: router/581a27e9-5b5e-4956-8b16-efc91aa665f1 (0)
Task 31 | 13:07:02 | Creating missing vms: tcp-router/3cf474b9-f36f-4901-8ba5-becc7cbd5444 (0)
Task 31 | 13:07:02 | Creating missing vms: doppler/5707c1a4-328f-4eba-835e-6c7e6c75f914 (0)
Task 31 | 13:07:02 | Creating missing vms: credhub/94a1adc7-ec62-43e2-899e-680b13098361 (0)
Task 31 | 13:07:02 | Creating missing vms: diego-cell/be1f3be0-1583-4ead-aea3-88ea7132b7a0 (0)
Task 31 | 13:07:02 | Creating missing vms: log-api/5a1d69e3-d7f6-452e-99cd-b840fbcb4990 (0) (00:00:55)
Task 31 | 13:07:59 | Creating missing vms: uaa/23b3ecc0-018c-4b6a-8285-6cf5afd6b86a (0) (00:00:57)
Task 31 | 13:08:00 | Creating missing vms: tcp-router/3cf474b9-f36f-4901-8ba5-becc7cbd5444 (0) (00:00:58)
Task 31 | 13:08:00 | Creating missing vms: adapter/f57be63d-cdab-484d-b343-311f7d01a54f (0) (00:00:58)
Task 31 | 13:08:01 | Creating missing vms: singleton-blobstore/371e7c08-e9bb-469c-82bb-a156eef76e15 (0) (00:00:59)
Task 31 | 13:08:01 | Creating missing vms: diego-api/5f2f4d54-5644-4dba-80ea-9d4698c7d413 (0) (00:00:59)
Task 31 | 13:08:01 | Creating missing vms: database/cb745bea-6700-4b14-bbba-95525fd02c81 (0) (00:00:59)
Task 31 | 13:08:02 | Creating missing vms: scheduler/89981132-488b-41e6-a288-b2cbe262fb1b (0) (00:01:00)
Task 31 | 13:08:02 | Creating missing vms: credhub/94a1adc7-ec62-43e2-899e-680b13098361 (0) (00:01:00)
Task 31 | 13:08:02 | Creating missing vms: doppler/5707c1a4-328f-4eba-835e-6c7e6c75f914 (0) (00:01:00)
Task 31 | 13:08:03 | Creating missing vms: nats/7c8218d3-8566-4aeb-bc68-602480e6426a (0) (00:01:01)
Task 31 | 13:08:03 | Creating missing vms: router/581a27e9-5b5e-4956-8b16-efc91aa665f1 (0) (00:01:01)
Task 31 | 13:08:03 | Creating missing vms: cc-worker/d2b999b8-182d-4c65-b21a-8810cbc05104 (0) (00:01:01)
Task 31 | 13:08:03 | Creating missing vms: diego-cell/be1f3be0-1583-4ead-aea3-88ea7132b7a0 (0) (00:01:01)
Task 31 | 13:08:04 | Creating missing vms: api/5777f8c6-a507-474c-8b36-ab47ba0010e2 (0) (00:01:02)
Task 31 | 13:08:05 | Updating instance adapter: adapter/f57be63d-cdab-484d-b343-311f7d01a54f (0) (canary)
Task 31 | 13:08:05 | Updating instance nats: nats/7c8218d3-8566-4aeb-bc68-602480e6426a (0) (canary) (00:00:40)
Task 31 | 13:08:46 | Updating instance adapter: adapter/f57be63d-cdab-484d-b343-311f7d01a54f (0) (canary) (00:00:41)
Task 31 | 13:08:46 | Updating instance database: database/cb745bea-6700-4b14-bbba-95525fd02c81 (0) (canary) (00:01:43)
Task 31 | 13:10:29 | Updating instance diego-api: diego-api/5f2f4d54-5644-4dba-80ea-9d4698c7d413 (0) (canary) (00:20:14)
L Error: ‘diego-api/5f2f4d54-5644-4dba-80ea-9d4698c7d413 (0)’ is not running after update. Review logs for failed jobs: bbs, silk-controller, locket
Task 31 | 13:30:43 | Error: ‘diego-api/5f2f4d54-5644-4dba-80ea-9d4698c7d413 (0)’ is not running after update. Review logs for failed jobs: bbs, silk-controller, locket
Task 31 Started Fri Oct 12 13:06:48 UTC 2018
Task 31 Finished Fri Oct 12 13:30:43 UTC 2018
Task 31 Duration 00:23:55
Task 31 error
Updating deployment:
Expected task ’31’ to succeed but state is ‘error’
Exit code 1
Hi,
The
vboxmanage controlvm $(bosh int ~/deployments/vbox/state.json --path /current_vm_cid) savestate
vboxmanage startvm $(bosh int ~/deployments/vbox/state.json --path /current_vm_cid) --type headless
is a great workaround if it works. The savestate leaves the VM in a Saved state. But when I want to start it again, I get the following error:
Failed to open a session for the virtual machine vm-fc4fe56d-a93a-41bd-5a98-33b8454993a2.
Failed to load unit ‘lsilogicscsi’ (VERR_SSM_LOADED_TOO_LITTLE).
Result Code: NS_ERROR_FAILURE (0x80004005)
Component: ConsoleWrap
Interface: IConsole {872da645-4a9b-1727-bee2-5585105b9eed}
So now I’m in a deadlock, unable to do a delete-env because the VM is in a wrong state. Any suggestions?