# Devstack w/OpenDaylight: Cirros DHCP not working

Additional info on this problem: it appears that the DHCP agent interface is in the down state on br-int (along with br-int itself), which then blocks the flow updates. The data is at the bottom of the details section. Can anyone explain why the interface is down? The DHCP process itself is up. There is also a "no such device" message in ovs-vswitchd.log. (Note: the additional data was captured after an unstack/stack, so the interface names changed; however, no changes were made between runs, so the problem is identical.)
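For anyone wanting to verify the same symptom, here is a minimal sketch of how to check the port state from both the OVS and kernel side; the tap name is taken from the `ovs-vsctl show` output further down, so substitute your own:

```bash
# How OVS sees the port (look at the admin_state and link_state columns).
sudo ovs-vsctl list Interface tapb6b82370-c4

# The kernel's view of the same device; a DOWN flag here corresponds
# to the PORT_DOWN/LINK_DOWN reported over OpenFlow.
ip link show tapb6b82370-c4

# Port status as reported to the OpenFlow controller.
sudo ovs-ofctl dump-ports-desc br-int -O OpenFlow13
```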

Original issue: I'm using an all-in-one configuration of DevStack with Lithium, and the Cirros VMs do not receive an IP from DHCP. The stack process appears to complete successfully, and when the VMs are instantiated they appear to bind to br-int properly. The DHCP process is running and is bound into br-int as well. When I put Wireshark on the various taps, I can see the request leave the VM, but it never appears on the tap towards the DHCP server. I therefore believe the problem is in the OpenFlow processing of the broadcast, but I'm at a loss as to what to check (or what I missed in the devstack setup).
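One way to narrow this down (also suggested in the comments below) is to dump the flow tables with their packet counters and watch which entries increment while a VM retries DHCP; a sketch, where the `in_port` number is an assumption you'd replace with the VM's actual tap port from `dump-ports-desc`:

```bash
# Dump all flows on br-int with their match, actions, and packet counters.
sudo ovs-ofctl dump-flows br-int -O OpenFlow13

# Trace how a DHCP discover (UDP 68 -> 67 broadcast) would traverse the
# pipeline; in_port=1 is a placeholder for the VM's tap port number.
sudo ovs-appctl ofproto/trace br-int \
    in_port=1,udp,dl_dst=ff:ff:ff:ff:ff:ff,tp_src=68,tp_dst=67
```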

Here are the Open vSwitch configuration and the local.conf:

```
[stack@localhost Desktop]$ sudo ovs-vsctl show
28cfb1fe-2d87-4952-9e29-12ebf07930e3
    Manager "tcp:127.0.0.1:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6653"
            is_connected: true
        fail_mode: secure
        Port "tapb6b82370-c4"
            Interface "tapb6b82370-c4"
        Port br-int
            Interface br-int
                type: internal
        Port patch-ext
            Interface patch-ext
                type: patch
                options: {peer=patch-int}
        Port "tap9ebd858f-2e"
            Interface "tap9ebd858f-2e"
        Port "tap421acffe-e3"
            Interface "tap421acffe-e3"
                type: internal
    Bridge br-ex
        Controller "tcp:127.0.0.1:6653"
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-ext}
    ovs_version: "2.4.0"
```

```
[[local|localrc]]
disable_service n-net
disable_service cinder
disable_service swift
enable_service n-cpu
enable_service n-cond
enable_service n-novnc
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service tempest
enable_service odl-server odl-compute

# ODL WITH ML2
#enable_plugin networking-odl https://github.com/stackforge/networking-odl
enable_plugin networking-odl http://git.openstack.org/openstack/networking-odl
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight
ODL_MODE=allinone
ODL_MGR_IP=127.0.0.1
ENABLE_TENANT_TUNNELS=True
ODL_L3=True

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[agent]
minimize_polling=True
[ml2_odl]
url=http://$ODL_MGR_IP:8181/controller/nb/v2/neutron
```
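As a sanity check on the ODL side, the northbound URL configured in `[ml2_odl]` above can be queried directly; a sketch, assuming the default admin/admin credentials of the devstack-installed controller:

```bash
# List the networks ODL has learned from Neutron via the ml2_odl url above.
# admin/admin is the default credential pair; adjust if you changed it.
curl -s -u admin:admin http://127.0.0.1:8181/controller/nb/v2/neutron/networks
```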


```
[stack@localhost devstack]$ sudo ovs-ofctl show br-int -O openflow13
OFPT_FEATURES_REPLY (OF1.3) (xid=0x2): dpid:0000a2f784af4747
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
OFPST_PORT_DESC reply (OF1.3) (xid=0x3):
 1(tap19caa118-09): addr:00:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 2(patch-ext): addr:2e:06:17:41:06:6e
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:a2:f7:84:af:47:47
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (OF1.3) (xid=0x5): frags=normal miss_send_len=0
2016-01-08 17:39:53,767 | INFO | ntDispatcherImpl | GatewayMacResolverService | 261 ...
```

## Comments

- Take a look at the flows and their stat counters to determine which flows are being hit. This will help figure out where the packets are lost in the pipeline. (2016-01-08 01:45:16 -0700)
- I had the same problem, and for me the solution was to set local_ip. Try the following command: `ovs-vsctl set Open_vSwitch <your_switch_uuid> other_config:local_ip=<your_data_ip>`, and then check whether you get IPs from DHCP. (2016-01-11 00:42:12 -0700)
- Your additional output seems to be from a different run/setup, so it doesn't help. Not sure about PORT_DOWN; I think it is always DOWN for tap ports in a namespace. You mentioned a 'no such device' message in the DHCP log; what is the device name giving the error? (2016-01-12 03:20:35 -0700)

## 1 answer

I found the cause of the problem. I don't know if it's coming from OpenStack or OpenDaylight, but the issue is that the transaction to set up IPv6 flows for the VM throws exceptions. You can see it in the OpenDaylight client:

```
java.lang.IllegalArgumentException: Supplied value "fd44:f1e1:98ba:0:f816:3eff:fe1e:2738/32" does not match required pattern "^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5]).){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))$"
```

Now, the subnet that OpenStack is requesting is the private one built by default (by devstack), so it has both an IPv4 and an IPv6 address attached to it. Either the request contains both IPv4 and IPv6 and is being processed as IPv4-only by OpenDaylight, or the formatting is something OpenDaylight isn't expecting. Either way, the transaction blows up and the flows are not set up; however, the VM is bound into the switch, so it shows success on the OpenStack side.
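The pattern in the exception only matches dotted-quad IPv4 addresses with a /0-32 prefix, so any IPv6 prefix fails it; a quick way to see this, reusing the pattern verbatim (with the separator dot escaped for clarity):

```bash
# The required pattern from the exception: an IPv4 CIDR, nothing else.
PATTERN='^(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))$'

# Matches: grep prints the address and exits 0.
echo "10.0.0.0/24" | grep -E "$PATTERN"

# No match: grep prints nothing and exits 1 -- the same mismatch ODL reports.
echo "fd44:f1e1:98ba:0:f816:3eff:fe1e:2738/32" | grep -E "$PATTERN"
```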

I built a completely different subnet for the VM using only IPv4, and it spins up and the DHCP request works.
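For reference, a sketch of creating an IPv4-only network and subnet with the neutron CLI of that era; the names, the 10.1.0.0/24 range, and the Cirros image name are my own placeholders:

```bash
# Create a network with no IPv6 subnet attached.
neutron net-create ipv4-only-net

# Attach a single IPv4 subnet; DHCP is enabled by default.
neutron subnet-create --name ipv4-only-subnet --ip-version 4 \
    ipv4-only-net 10.1.0.0/24

# Boot a Cirros VM on it (net-id from the net-create output).
nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny \
    --nic net-id=<ipv4-only-net-uuid> test-vm
```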


I use the following two settings in my devstack; they should be self-explanatory:

```
IP_VERSION=4
NEUTRON_CREATE_INITIAL_NETWORKS=False
```