Since I'm leaving for a one-week vacation, I thought I would write down the current status of our new OpenStack instance and a TODO list, just in case someone is desperate enough to do some fixes.
I updated docs.git/cloud.txt - mainly which playbooks we use right now and where to write down the IP when you add a new compute node.
Controller - should be OK. At least I see no problems there right now. Network is stable. I can log in to the EqualLogic (credentials are at the bottom of cinder.conf). Volumes are created correctly. I can reach the compute nodes. AMQP works and is reachable from the compute nodes (do not try to play with SSL & RabbitMQ; it will never work on RHEL 7). Horizon works (over https).
Compute nodes - it looks good until you try to start a VM. :) I fixed several problems, but new ones still pop up.
If you want to debug it, just go to the dashboard and start a new VM (note that m1.tiny is too small for the Fedora image) and on the controller run:

  tail -f /var/log/nova/nova-scheduler.log

and look for something like:

  Choosing host WeighedHost [host: fed-cloud13.cloud.fedoraproject.org, weight: 1.0] for instance 75f1b5ca-88d5-4e57-8c18-8d6554e1f2bc

Then log in to that host (right now root@fed-cloud09 can ssh directly as root@fed-cloudXX) and run:

  tail -f /var/log/nova/nova-compute.log /var/log/neutron/openvswitch-agent.log

When the spin-up of a VM fails, the controller tries the next 2 machines before giving up.
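If it helps, those two steps can be glued together roughly like this (just a sketch - the grep/sed pattern assumes the log line format above):

  # on the controller: find which compute node the scheduler picked last
  HOST=$(grep 'Choosing host WeighedHost' /var/log/nova/nova-scheduler.log \
         | tail -1 | sed 's/.*host: \([^,]*\),.*/\1/')
  # then follow the relevant logs on that compute node
  ssh "root@$HOST" tail -f /var/log/nova/nova-compute.log /var/log/neutron/openvswitch-agent.log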
Right now there is an error: TypeError: unsupported operand type(s) for +: 'NoneType' and 'str', which is new to me and which I will not manage to fix before I leave today. It may be the last problem, or there may be a dozen others still waiting in the queue. It's hard to tell.
Smaller fixes to do:
* The playbook hosts/fed-cloud09.cloud.fedoraproject.org.yml can be enhanced so that the machine is restarted after packstack execution. Right now I wait for the first error after packstack, then restart the machine manually and re-run the playbook. This is the last manual workaround; everything else is already automated.
* Routing between the compute nodes and the controller using public IPs does not work. Not fatal right now, but nice to have.
On Fri, 20 Feb 2015 15:32:15 +0100 Miroslav Suchý msuchy@redhat.com wrote:
Since I'm leaving for a one-week vacation, I thought I would write down the current status of our new OpenStack instance and a TODO list, just in case someone is desperate enough to do some fixes.
I poked at it some (with help) and made a bit more progress...
I updated docs.git/cloud.txt - mainly which playbooks we use right now and where to write down the IP when you add a new compute node.
Controller - should be OK. At least I see no problems there right now. Network is stable. I can log in to the EqualLogic (credentials are at the bottom of cinder.conf). Volumes are created correctly. I can reach the compute nodes. AMQP works and is reachable from the compute nodes (do not try to play with SSL & RabbitMQ; it will never work on RHEL 7). Horizon works (over https).
Compute nodes - it looks good until you try to start a VM. :) I fixed several problems, but new ones still pop up.
...snip...
Right now there is an error: TypeError: unsupported operand type(s) for +: 'NoneType' and 'str', which is new to me and which I will not manage to fix before I leave today. It may be the last problem, or there may be a dozen others still waiting in the queue. It's hard to tell.
We got past that and got instances to spin up. Seems like it just needed a restart on the compute node (of compute and ovs).
Then there was an issue with routing for the external ips. That needed an additional iptables rule on the compute nodes. I added that to the playbooks.
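Purely as an illustration of the shape of the change (the real rule is in the playbooks and may well differ - the chain and network here are assumptions):

  # let forwarded traffic for the tenant networks through on the compute nodes
  iptables -I FORWARD -s 172.16.0.0/12 -j ACCEPT
  iptables -I FORWARD -d 172.16.0.0/12 -j ACCEPT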
I also added nameservers to allow instances to get dns correctly.
Smaller fixes to do:
- The playbook hosts/fed-cloud09.cloud.fedoraproject.org.yml can be enhanced so that the machine is restarted after packstack execution. Right now I wait for the first error after packstack, then restart the machine manually and re-run the playbook. This is the last manual workaround; everything else is already automated.
I don't know that we want a reboot in the playbook; it should be idempotent, ie, we should be able to run it and reach the desired state, then re-run with 0 changes. I guess if it only rebooted after packstack first runs it could work.
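Something like this sketch could keep it idempotent (the answer file path and marker file are made up, not what the playbook actually uses):

  - name: run packstack (skipped once the marker file exists)
    command: packstack --answer-file=/root/packstack-answers.txt creates=/root/.packstack-ran
    register: packstack

  - name: leave a marker so packstack and the reboot only happen once
    file: path=/root/.packstack-ran state=touch
    when: packstack.changed

  - name: reboot only after that first packstack run
    command: shutdown -r +1
    when: packstack.changed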
- Routing between the compute nodes and the controller using public IPs does not work. Not fatal right now, but nice to have.
Yeah, not sure about that...
Other things:
https for keystone endpoint would be nice.
vnc consoles aren't working right.
Need to make sure we can get our ansible to spin up and manage instances, etc.
Perhaps we could spin up a dev copr on it to test... and if all looks well do another reinstall/reconfigure cycle and start using it. ;)
kevin
On 03/02/2015 04:00 AM, Kevin Fenzi wrote:
I guess if it only rebooted after packstack first runs it could work.
That is what I meant. Only needed once, but still nice to have it automated.
- Routing between the compute nodes and the controller using public IPs does not work. Not fatal right now, but nice to have.
Yeah, not sure about that...
Other things:
https for keystone endpoint would be nice.
Not sure if this is possible. All those services listen on a specific port, e.g.: http://fed-cloud09.cloud.fedoraproject.org:5000/v2.0 and I doubt that it can understand plain http and https on the same port. Hmm. I see in /etc/keystone/keystone.conf:

  [ssl]
  enable=False
I will investigate what needs to be done to enable it.
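If it is doable at all, it presumably boils down to something like this in keystone.conf (the paths below are placeholders, not our real certificate files):

  [ssl]
  enable = True
  # placeholder paths, not our real certificate files
  certfile = /etc/pki/tls/certs/fed-cloud09.pem
  keyfile = /etc/pki/tls/private/fed-cloud09.key
  ca_certs = /etc/pki/tls/certs/ca-bundle.crt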
Need to make sure we can get our ansible to spin up and manage instances, etc.
Perhaps we could spin up a dev copr on it to test... and if all looks well do another reinstall/reconfigure cycle and start using it. ;)
I will try to migrate copr-fe-dev.
All services are using SSL except novncproxy, which did not work for me and, according to some random notes on the internet, does not work over SSL due to some bugs. But novncproxy does not work for me even over plain http, and I do not know why. If somebody else can check it, that would be great. The strange thing is that telnet fed-cloud09.cloud.fedoraproject.org 6080 from my workstation is rejected, while on fed-cloud09 itself it passes. And iptables allows port 6080. Strange.
I tried to automatize adding of SSH keys using this:
TASK: [shell source /root/keystonerc_admin && F=$(mktemp) && {{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas msuchy') }}> "$F" && nova --os-username msuchy --os-password {{msuchy_password}} --os-tenant-name copr keypair-list | ( grep msuchy || nova --os-username msuchy --os-password {{msuchy_password}} --os-tenant-name copr keypair-add --pub_key "$F" msuchy ); rm -f "$F"] ***
which does not work, while executing this from the shell:
source /root/keystonerc_admin && F=$(mktemp) && cat id_rsa.pub > "$F" && nova --os-username msuchy --os-password "$PASSWORD" --os-tenant-name copr keypair-list | ( grep msuchy || nova --os-username msuchy --os-password "$PASSWORD" --os-tenant-name copr keypair-add --pub_key "$F" msuchy ); rm -f "$F"
works. So the problem is probably in that lookup(), and again I do not know why.
Anyway, I am able (again) to start VMs and log in to them.
My plan for next week is to migrate the dev instance to the new OpenStack (before it is re-provisioned) and see what needs to be changed.
On 03/06/2015 04:02 PM, Miroslav Suchý wrote:
I tried to automatize adding of SSH keys using this:
TASK: [shell source /root/keystonerc_admin && F=$(mktemp) && {{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas msuchy') }}> "$F" && nova --os-username msuchy --os-password {{msuchy_password}} --os-tenant-name copr keypair-list | ( grep msuchy || nova --os-username msuchy --os-password {{msuchy_password}} --os-tenant-name copr keypair-add --pub_key "$F" msuchy ); rm -f "$F"] ***
which does not work, while executing this from the shell:
source /root/keystonerc_admin && F=$(mktemp) && cat id_rsa.pub > "$F" && nova --os-username msuchy --os-password "$PASSWORD" --os-tenant-name copr keypair-list | ( grep msuchy || nova --os-username msuchy --os-password "$PASSWORD" --os-tenant-name copr keypair-add --pub_key "$F" msuchy ); rm -f "$F"
works. So the problem is probably in that lookup(), and again I do not know why.
Ok, I just found that there is an ansible module for that. And it works fine.
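For the record, the task looks roughly like this (a sketch only - the endpoint URL is assumed and the real task in the playbook may differ a bit):

  - name: upload SSH key for msuchy
    nova_keypair:
      auth_url: https://fed-cloud09.cloud.fedoraproject.org:5000/v2.0/  # assumed endpoint
      login_username: msuchy
      login_password: "{{ msuchy_password }}"
      login_tenant_name: copr
      name: msuchy
      public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas msuchy') }}"
      state: present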
On Fri, 06 Mar 2015 16:02:39 +0100 Miroslav Suchý msuchy@redhat.com wrote:
All services are using SSL except novncproxy, which did not work for me and, according to some random notes on the internet, does not work over SSL due to some bugs. But novncproxy does not work for me even over plain http, and I do not know why. If somebody else can check it, that would be great. The strange thing is that telnet fed-cloud09.cloud.fedoraproject.org 6080 from my workstation is rejected, while on fed-cloud09 itself it passes. And iptables allows port 6080. Strange.
I got this all fixed up and updated ansible.
Basically three issues:
1. novncproxy was listening only on the internal ip, so it wasn't answering for external people using the web browser.
2. It was not able to talk to vnc on the compute nodes due to firewall.
3. It was not using https links in nova config and in novncproxy sysconfig.
All that's set and I can see the console in the web dash again just fine for any of the instances I tried, and they are all https only.
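For the archives, the knobs involved are roughly these (values illustrative, not necessarily exactly what ended up in ansible):

  # nova.conf where the proxy runs (controller)
  novncproxy_host = 0.0.0.0      # listen on all interfaces, not just the internal ip
  novncproxy_port = 6080
  # handed out to browsers, so it has to be the public https url
  novncproxy_base_url = https://fed-cloud09.cloud.fedoraproject.org:6080/vnc_auto.html

  # nova.conf on the compute nodes
  vnc_enabled = True
  vncserver_listen = 0.0.0.0
  vncserver_proxyclient_address = <compute node internal ip>

  # plus the firewall on the compute nodes has to let the proxy reach the
  # vnc ports (5900 and up)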
I tried to automatize adding of SSH keys using this:
I wonder if we shouldn't have something to update/upload everyone's ssh keys. Might be handy but of course it's not a blocker/that important. We could even look at just tying into our existing fedmsg listener (when someone with a cloud account changes their ssh key, update the cloud).
Anyway, I am able (again) to start VMs and log in to them.
Me too. I uploaded the F22 Alpha cloud image and it worked fine. (aside from cloud-init taking about 35 seconds to run - it seemed to be timing out on some metadata?)
We should look at hooking our cloud image upload service into this soon so we can get images as soon as they are done.
My plan for next week is to migrate the dev instance to the new OpenStack (before it is re-provisioned) and see what needs to be changed.
Sounds good!
I think:
* We will of course need to change the variables it uses to point to the new cloud (credentials, ips, etc).
* We will need to adapt to not giving every instance a floating ip. For copr, I think this would be fine, as you don't care that they have external ips; they only need to talk to the backend, right?
* Might be a good time to look at moving copr to f21? and builders also to be f21? (they should come up faster and in general be better than the el6 ones currently used, IMHO)
* Can we adjust the default tenant quotas in the playbooks? They seem a bit low to me given the amount of resources we have.
* Right now ansible on lockbox01 is using euca2ools to manage cloud instances, perhaps we could/should just move to nova now? Or this could perhaps wait for us to move lockbox01 to rhel7.
Anyhow, I think we are making real progress now, let's keep it going!
kevin
Oh, a few more minor things before we bring the new cloud live:
* We should decide on a name and get a real ssl cert. Currently, cloud.fedoraproject.org goes to a doc page about Fedora cloud stuff. We could use 'openstack.fedoraproject.org' or 'private-cloud' or ? We could also add alternate CNs on it so it will work for fed-cloud09 too (and other nodes in case we move the controller).
* I see that the tenants have the same internal 172.16.0.0 net right now, can we make sure we separate them from each other? ie, I don't want an infrastructure instance being able to talk to a copr builder if we can avoid it.
* Do we want to also revisit the flavors available? Perhaps drop the builder one and just use m1.large for it? We should have the resources to use more cpus/mem, which should make copr builds faster/better.
* Is there any way to see how much space is available on the EqualLogic aside from just logging into it via ssh?
kevin
On 03/07/2015 07:29 PM, Kevin Fenzi wrote:
- I see that the tenants have the same internal 172.16.0.0 net right now, can we make sure we separate them from each other? ie, I don't want an infrastructure instance being able to talk to a copr builder if we can avoid it.
Are you sure? From playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml:

  # 172.16.0.1/12 -- 172.21.0.1/12 - Free to take
  # 172.23.0.1/12 - free (but used by old cloud)
  # 172.24.0.1/12 - RESERVED it is used internally for OS
  # 172.25.0.1/12 - Cloudintern
  # 172.26.0.1/12 - infrastructure
  # 172.27.0.1/12 - persistent
  # 172.28.0.1/12 - transient
  # 172.29.0.1/12 - scratch
  # 172.30.0.1/12 - copr
  # 172.31.0.1/12 - Free to take

And checking the dashboard I see infra in the .26 network and copr in .16. Hmm, that is a different one - copr should have .30. The playbook seems to be correct. Strange.
- Do we want to also revisit the flavors available? Perhaps drop the builder one and just use m1.large for it? We should have the resources to use more cpus/mem, which should make copr builds faster/better.
80 GB is too much, and 4 VCPUs too. I think having an extra flavor for the builders is nice, as we can change it any time without affecting other instances/tenants.
- Is there any way to see how much space is available on the EqualLogic aside from just logging into it via ssh?
Unfortunately no. I reported it as an RFE some time ago: https://bugs.launchpad.net/cinder/+bug/1380555 You can only see the amount of used space, using cinder list && cinder show <volume-id>.
On Mon, 09 Mar 2015 10:29:36 +0100 Miroslav Suchý msuchy@redhat.com wrote:
On 03/07/2015 07:29 PM, Kevin Fenzi wrote:
- I see that the tenants have the same internal 172.16.0.0 net right now, can we make sure we separate them from each other? ie, I don't want an infrastructure instance being able to talk to a copr builder if we can avoid it.
Are you sure? From playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml:

  # 172.16.0.1/12 -- 172.21.0.1/12 - Free to take
  # 172.23.0.1/12 - free (but used by old cloud)
  # 172.24.0.1/12 - RESERVED it is used internally for OS
  # 172.25.0.1/12 - Cloudintern
  # 172.26.0.1/12 - infrastructure
  # 172.27.0.1/12 - persistent
  # 172.28.0.1/12 - transient
  # 172.29.0.1/12 - scratch
  # 172.30.0.1/12 - copr
  # 172.31.0.1/12 - Free to take

And checking the dashboard I see infra in the .26 network and copr in .16. Hmm, that is a different one - copr should have .30. The playbook seems to be correct. Strange.
Yeah, I saw those comments, was looking at the dashboard:
https://fed-cloud09.cloud.fedoraproject.org/dashboard/admin/networks/
login as admin and see that page...
copr-subnet            172.16.0.0/12
infrastructure-subnet  172.16.0.0/12
Not sure if that's just because they are all in the same /12?
- Do we want to also revisit the flavors available? Perhaps drop the builder one and just use m1.large for it? We should have the resources to use more cpus/mem, which should make copr builds faster/better.
80 GB is too much, and 4 VCPUs too. I think having an extra flavor for the builders is nice, as we can change it any time without affecting other instances/tenants.
ok. I think more cpus (to make builds faster in many cases) would still be welcome though, as well as more memory. Disk I don't think matters as much.
- Is there any way to see how much space is available on the EqualLogic aside from just logging into it via ssh?
Unfortunately no. I reported it as an RFE some time ago: https://bugs.launchpad.net/cinder/+bug/1380555 You can only see the amount of used space, using cinder list && cinder show <volume-id>.
ok.
kevin
On 03/09/2015 10:29 AM, Miroslav Suchý wrote:
On 03/07/2015 07:29 PM, Kevin Fenzi wrote:
- I see that the tenants have the same internal 172.16.0.0 net right now, can we make sure we separate them from each other? ie, I don't want an infrastructure instance being able to talk to a copr builder if we can avoid it.
Are you sure? From playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml:

  # 172.16.0.1/12 -- 172.21.0.1/12 - Free to take
  # 172.23.0.1/12 - free (but used by old cloud)
  # 172.24.0.1/12 - RESERVED it is used internally for OS
  # 172.25.0.1/12 - Cloudintern
  # 172.26.0.1/12 - infrastructure
  # 172.27.0.1/12 - persistent
  # 172.28.0.1/12 - transient
  # 172.29.0.1/12 - scratch
  # 172.30.0.1/12 - copr
  # 172.31.0.1/12 - Free to take

And checking the dashboard I see infra in the .26 network and copr in .16. Hmm, that is a different one - copr should have .30. The playbook seems to be correct. Strange.
Ah. Of course the /12 is a mistake. It should be /16. However, with /16 I see we would have only 7 free subnets. I would rather use /20 subnets, which would give us 4094 IPs per subnet. That should be enough, and it gives us plenty of subnets to use.
So it would be:

  # 172.16.0.1/16 -- 172.21.0.1/20 - Free to take
  # 172.23.0.1/16 - free (but used by old cloud)
  # 172.24.0.1/24 - RESERVED it is used internally for OS
  # 172.25.0.1/20 - Cloudintern (172.25.0.1 - 172.25.15.254)
  # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
  # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
  # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
  # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
  # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
  # 172.25.96.1/20 -- 172.25.240.1/20 - free
  # 172.26.0.1/16 -- 172.31.0.1/16 - free
Comments?
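For illustration, each of those would be created with something like (names and the dns server below are just placeholders):

  neutron net-create copr-net
  neutron subnet-create copr-net 172.25.80.0/20 --name copr-subnet --dns-nameserver 8.8.8.8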
Hi guys :),
On Mon, Mar 9, 2015 at 2:39 PM, Miroslav Suchý msuchy@redhat.com wrote:
So it would be:

  # 172.16.0.1/16 -- 172.21.0.1/20 - Free to take
  # 172.23.0.1/16 - free (but used by old cloud)
  # 172.24.0.1/24 - RESERVED it is used internally for OS
  # 172.25.0.1/20 - Cloudintern (172.25.0.1 - 172.25.15.254)
  # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
  # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
  # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
  # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
  # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
  # 172.25.96.1/20 -- 172.25.240.1/20 - free
  # 172.26.0.1/16 -- 172.31.0.1/16 - free
Comments?
Seems like you forgot the 172.22.0.1/16 range, and also in the 172.21.0.1/16 range, having put a /20 on 172.21.0.1, you are leaving behind the ranges from 172.21.16.1/20 to 172.21.240.1/20.
Fabio
On Mon, 09 Mar 2015 14:39:52 +0100 Miroslav Suchý msuchy@redhat.com wrote:
So it would be:

  # 172.16.0.1/16 -- 172.21.0.1/20 - Free to take
  # 172.23.0.1/16 - free (but used by old cloud)
  # 172.24.0.1/24 - RESERVED it is used internally for OS
  # 172.25.0.1/20 - Cloudintern (172.25.0.1 - 172.25.15.254)
  # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
  # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
  # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
  # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
  # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
  # 172.25.96.1/20 -- 172.25.240.1/20 - free
  # 172.26.0.1/16 -- 172.31.0.1/16 - free
Comments?
Sounds good to me.
When we migrate from old->new we are going to have to deal with the floating ips. I guess we could make the new openstack have the entire range, then move those instances that expect to be at specific ips (and they can claim them), then move the rest and give them just the 'next ip' in the external range.
Also, I'd like to reserve some externals for the other cloud. ie, once we move to this new cloud, I want to keep say fed-cloud01/02 out and redo them with juno or kilo or whatever so we can more quickly move to a new cloud version if needed.
I guess that should be something like:
209.132.184:
.1 to .25 reserved for hardware nodes
26 to 30 reserved for 'test openstack'
31-250 reserved for 'production openstack'
(and of course some instances may have specific ips in the production range).
kevin
On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
All that's set and I can see the console in the web dash again just fine for any of the instances I tried, and they are all https only.
Works for me too. Nice. Thanks.
I tried to automatize adding of SSH keys using this:
I wonder if we shouldn't have something to update/upload everyone's ssh keys. Might be handy but of course it's not a blocker/that important. We could even look at just tying into our existing fedmsg listener (when someone with a cloud account changes their ssh key, update the cloud).
Done. Search for the "upload SSH keys for users" action. However, it works only initially; once a user alters his password it will fail. I ignore those cases with "ignore_errors: yes" though. I have a pending RFE for OpenStack so that an admin is able to upload ssh keys for a user.
I skipped (commented out) these users:
- twisted
- cockpit
as I do not know which ssh keys they use. Can somebody put the right values there?
Anyway, I am able (again) to start VMs and log in to them.
Me too. I uploaded the F22 Alpha cloud image and it worked fine. (aside from cloud-init taking about 35 seconds to run - it seemed to be timing out on some metadata?)
We should look at hooking our cloud image upload service into this soon so we can get images as soon as they are done.
I will leave this one for somebody else.
My plan for next week is to migrate the dev instance to the new OpenStack (before it is re-provisioned) and see what needs to be changed.
Sounds good!
I think:
- Might be a good time to look at moving copr to f21? and builders also to be f21? (they should come up faster and in general be better than the el6 ones currently used, IMHO)
I will start by moving the builders to F21 (this really limits us) and once that is finished I will move the backend and frontend. I'm afraid that by that time I will move them directly to F22 :)
- Right now ansible on lockbox01 is using euca2ools to manage cloud instances, perhaps we could/should just move to nova now? Or this could perhaps wait for us to move lockbox01 to rhel7.
I learned (the hard way) that the nova/cinder/neutron etc. commands are deprecated. The new preferred way is the "openstack" command from python-openstackclient. However, Icehouse uses version 0.3, and you should not think about using this command unless you have version 1.0 available (Juno or Kilo, not sure). It probably does not matter if you use the ansible modules, but you may consider it if you are calling the commands directly. #justsaying
On Mon, 09 Mar 2015 11:25:20 +0100 Miroslav Suchý msuchy@redhat.com wrote:
On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
All that's set and I can see the console in the web dash again just fine for any of the instances I tried, and they are all https only.
Works for me too. Nice. Thanks.
Cool.
I tried to automatize adding of SSH keys using this:
I wonder if we shouldn't have something to update/upload everyone's ssh keys. Might be handy but of course it's not a blocker/that important. We could even look at just tying into our existing fedmsg listener (when someone with a cloud account changes their ssh key, update the cloud).
Done. Search for the "upload SSH keys for users" action. However, it works only initially; once a user alters his password it will fail. I ignore those cases with "ignore_errors: yes" though. I have a pending RFE for OpenStack so that an admin is able to upload ssh keys for a user.
I skipped (commented out) these users:
- twisted
- cockpit
as I do not know which ssh keys they use. Can somebody put the right values there?
Will have to find out. Those groups aren't from fas...
Anyway, I am able (again) to start VMs and log in to them.
Me too. I uploaded the F22 Alpha cloud image and it worked fine. (aside from cloud-init taking about 35 seconds to run - it seemed to be timing out on some metadata?)
We should look at hooking our cloud image upload service into this soon so we can get images as soon as they are done.
I will leave this one for somebody else.
Yeah, will ping oddshocks on it, but possibly wait until our final re-install.
- Might be a good time to look at moving copr to f21? and builders
also to be f21? (they should come up faster and in general be better than the el6 ones currently used, IMHO)
I will start by moving the builders to F21 (this really limits us) and once that is finished I will move the backend and frontend. I'm afraid that by that time I will move them directly to F22 :)
Hopefully we can get there before then. ;)
- Right now ansible on lockbox01 is using euca2ools to manage cloud instances, perhaps we could/should just move to nova now? Or this could perhaps wait for us to move lockbox01 to rhel7.
I learned (the hard way) that the nova/cinder/neutron etc. commands are deprecated. The new preferred way is the "openstack" command from python-openstackclient. However, Icehouse uses version 0.3, and you should not think about using this command unless you have version 1.0 available (Juno or Kilo, not sure). It probably does not matter if you use the ansible modules, but you may consider it if you are calling the commands directly. #justsaying
ok. We may have to do some trial and error.
nova commands worked fine from here, but I didn't really try and do anything fancy. We could see if the euca stuff will just keep working for us for now.
kevin
On 03/09/2015 01:00 PM, Kevin Fenzi wrote:
nova commands worked fine from here, but I didn't really try and do anything fancy. We could see if the euca stuff will just keep working for us for now.
It works fine. It is just that if you miss some functionality (and I miss a lot) and file an RFE, it will likely be rejected saying you should now use the openstack command.
On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
- Can we adjust the default tenant quotas in the playbooks? They seem a bit low to me given the amount of resources we have.
I put in (and tested) the quota for Copr (it is at the bottom of the playbook). Can you please write the quotas for the other tenants (or post them to me)? I have no idea what the needs of those tenants are.
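In case it helps, the shape of it is roughly this (the numbers below are only an example of the form, not a suggestion):

  source /root/keystonerc_admin
  TENANT=$(keystone tenant-list | awk '/ infrastructure / {print $2}')
  nova quota-update --instances 20 --cores 40 --ram 102400 "$TENANT"
  cinder quota-update --volumes 20 --gigabytes 1000 "$TENANT"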
On Mon, 09 Mar 2015 13:00:20 +0100 Miroslav Suchý msuchy@redhat.com wrote:
On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
- Can we adjust the default tenant quotas in the playbooks? They
seem a bit low to me given the amount of resources we have.
I put in (and tested) the quota for Copr (it is at the bottom of the playbook). Can you please write the quotas for the other tenants (or post them to me)? I have no idea what the needs of those tenants are.
True, it could vary.
Alright, let's just leave the rest at the defaults and we can adjust as we go.
The new cloud should have a good deal more cpus and mem than the old one, but we will also need to see if the quota bugs are fixed (in the old cloud it would miscount things pretty badly sometimes).
kevin
On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
- We will need to adapt to not giving every instance a floating ip. For copr, I think this would be fine, as you don't care that they have
*nod* I was not sure how a VM behaves when it does not have a public IP, so I tested it. It is basically behind NAT and all of the internet is accessible. Therefore yes, Copr builders do not need a floating ip.
However, this instance of OpenStack behaves differently from the old one. When you start up a VM, you do not get a public IP automatically.
On Mon, 09 Mar 2015 13:48:49 +0100 Miroslav Suchý msuchy@redhat.com wrote:
On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
- We will need to adapt to not giving every instance a floating ip.
For copr, I think this would be fine, as you don't care that they have
*nod* I was not sure how a VM behaves when it does not have a public IP, so I tested it. It is basically behind NAT and all of the internet is accessible. Therefore yes, Copr builders do not need a floating ip.
Right. In fact it's nicer as they are no longer exposed on the net at all.
However, this instance of OpenStack behaves differently from the old one. When you start up a VM, you do not get a public IP automatically.
Yes. We changed that behavior deliberately. We thought it would be good to make sure all instances got an external floating ip. In retrospect this just caused us problems, so I think the default behavior in the new cloud is better. It does mean we may need to adjust some ansible scripts to make sure they request a floating ip once we move things over.
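ie, something along these lines in the ansible scripts (the pool name is a guess):

  # allocate a floating ip from the external pool and attach it to the instance
  IP=$(nova floating-ip-create external | awk '/external/ {print $2}')
  nova add-floating-ip <instance-name-or-id> "$IP"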
kevin