[Req]

Does anyone have the vMX 16.2R2 just released in July?



38 replies to this topic

#29 qianxingnan

qianxingnan

    Junior Member

  • Members
  • PipPip
  • 5 posts
  • 105 thanks

Posted 26 July 2017 - 01:59 PM

HermanToothrot, on 26 July 2017 - 06:28 AM, said:

Well, crap. I missed that somehow. I grabbed 16.2R1 for you yesterday. Sorry about that!  

I wish Juniper would get around to releasing newer versions of vQFX, especially since the KVM version of the newer vRE was busted, and would crash for everybody. We had to resort to pulling the .vmdk out of the Vagrant version to get it working again. And it's still the same original vPFE. *harumph!*

No worries, Herman. I tried the 'older' version you uploaded anyway, and it is already working better than the v17 release - the interface came up on the very first boot, and none of those annoying console messages. I ran it for a couple of hours with no problems at all, and I think I will stick with v16 from now on. If you have a chance to download the current v16 release, can you share a copy with me? I would love to try that one out as well.

I haven't had any chance to try vQFX yet. Is the setup process similar to vMX VCP/VFP?

Thanks,
Daniel

#30 HermanToothrot

HermanToothrot

    Member

  • Members
  • PipPip
  • 32 posts
  • 17586 thanks
  • LocationMonkey Island

Posted 27 July 2017 - 09:38 PM

Yeah, I can grab 16.2R2 sometime tonight, and upload it to your server. I need to stay up all night finishing something urgent anyway.

vQFX is an awful lot like vMX, in that we have two VMs that we need to connect together. The vRE runs Junos (I'm curious if I can run 2 Junos instances at once, since that's standard on the physical switches), and then we've also got a vPFE that runs Wind River Yocto Linux.

Here's what the software setup looks like, on the real QFX-10K switches:

[image attachment: QFX10K software setup diagram]

Remember how I said in that setup guide for GNS3 to leave the interfaces set to Eth0, Eth1, Eth2, etc., since GNS3's interface naming schema gets weird when you have edge cases like vMX and vQFX?

You still need to connect Eth0 of both VMs to a dumb switch, since that's the mgmt interface. Then, you also cross-connect Eth1 of both VMs to each other, since that's the internal em1 interface that the routing engine and packet forwarding engine still use to communicate with each other. This time, though, we also need to leave Eth2 alone, since it's the internal em2 interface, which the active and standby Junos instances use to talk to each other. You connect the topology devices to the vRE starting with Eth3, which is xe-0/0/0; Eth4 is xe-0/0/1, and so on. I only assign the vPFE 2 interfaces, and give the vRE at least 13. We don't have to use virtio-net-pci this time, though.
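Just to make the wiring above concrete, here's a little throwaway Python sketch of the adapter-to-interface mapping (the function name and the exact internal-interface labels are my own guesses written from the description above, not anything official):

```python
# Hypothetical helper: map a GNS3 EthN adapter index on the vQFX vRE to its
# role, per the wiring described above: Eth0 = mgmt, Eth1 = em1 (vRE<->vPFE),
# Eth2 = em2 (active/standby Junos), Eth3 and up = revenue ports xe-0/0/N-3.

def vre_adapter_role(eth_index: int) -> str:
    """Return the Junos-side meaning of a given GNS3 EthN adapter on the vRE."""
    internal = {
        0: "mgmt (dumb switch)",
        1: "em1 (vRE<->vPFE internal link)",
        2: "em2 (active/standby Junos internal link)",
    }
    if eth_index < 0:
        raise ValueError("adapter index must be non-negative")
    if eth_index in internal:
        return internal[eth_index]
    # Eth3 maps to xe-0/0/0, Eth4 to xe-0/0/1, and so on.
    return f"xe-0/0/{eth_index - 3}"
```

So `vre_adapter_role(3)` gives `xe-0/0/0`, and anything below Eth3 is reserved plumbing you shouldn't attach topology devices to.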

Here's the setup I used (with KVM):

vPFE - 2 adapters, 2GB RAM, x86_64 qemu, only 1 vCPU.   That pfe vmdk file from June 9th of 2016 still works.

vRE - 15 adapters, 1GB RAM, x86_64 qemu, and 2 vCPUs. You can either stick to the original 15.1X53-D60 KVM release, or get the Vagrant file of 15.1X53-D63 and double-extract it with 7zip to get a usable vmdk file. The KVM version of 15.1X53-D63 is/was busted, and used to crash constantly. You can always use the two original KVM files, if you feel like it. I already have the three files, so I can upload those to your server, too.
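The "double-extract with 7zip" step works because a Vagrant .box file is normally just a gzip-compressed tar archive. If you'd rather script it, here's a rough Python equivalent (file names are illustrative; the actual member names inside the box will differ):

```python
# Sketch: pull the .vmdk disk image(s) out of a Vagrant .box file, assuming
# the .box is a gzipped tarball (which is the usual Vagrant packaging).
import tarfile
from pathlib import Path

def extract_vmdk(box_path: str, dest_dir: str) -> list[str]:
    """Extract every .vmdk member from a Vagrant .box into dest_dir."""
    extracted = []
    with tarfile.open(box_path, mode="r:*") as box:  # "r:*" autodetects gzip
        for member in box.getmembers():
            if member.name.endswith(".vmdk"):
                box.extract(member, path=dest_dir)
                extracted.append(str(Path(dest_dir) / member.name))
    return extracted
```

That's the whole trick: one decompress, one un-tar, and you're left with a vmdk you can hand straight to qemu.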

I'd honestly increase the RAM and vCPU allocations for both VMs though, and definitely set the pfe to lite-mode.  This thing takes a LOT longer than vMX to load up, and it always has. We're talking "start it up, and either walk the dogs, or go eat a sandwich".

You'll also likely need to delete dhcp from the xe-0/0/x interfaces, since I think it's set by default.
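For reference, that cleanup would look something like this from the Junos CLI (the interface name is just an example, and I'm assuming the default config hangs dhcp off `family inet` - check `show configuration interfaces` first to see what's actually there):

```
[edit]
root# delete interfaces xe-0/0/0 unit 0 family inet dhcp
root# commit
```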


EDIT:  One of these days, I'm going to see if I can get Master and Standby Junos instances running in both vMX and vQFX.  I helped someone get two logical systems to connect and talk to each other, in the same VM a couple of weeks ago.  I might need to make an Ubuntu server VM, and compile and install vMX in it, instead of running it in GNS3 via Qemu, since the images folder has a TON of files in there.

Edited by HermanToothrot, 27 July 2017 - 09:41 PM.


#31 qianxingnan

qianxingnan

    Junior Member

  • Members
  • PipPip
  • 5 posts
  • 105 thanks

Posted 28 July 2017 - 03:14 AM

HermanToothrot, on 27 July 2017 - 09:38 PM, said:

Yeah, I can grab 16.2R2 sometime tonight, and upload it to your server. [snip]

Thanks for the detailed explanation, Herman. Looks like the process of setting up vQFX is very similar to vMX. I am surprised to see the vPFE only needs 2GB RAM and has only 2 adapters, while the vRE has 15 adapters. This is quite different from vMX and I will definitely look more into it when I have the time.

Recently I have been running everything on KVM within a VMware guest - a nested virtualization environment. It works great for me so far, and with the nice user interface from Unetlab everything becomes so easy.
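If anyone wants to try the same nested-KVM-inside-a-hypervisor setup, the first thing to check inside the Linux guest is whether the kvm module actually sees nested virtualization. A tiny sketch (the Intel sysfs path is shown; AMD hosts use /sys/module/kvm_amd/parameters/nested instead):

```python
# Quick sanity check for nested virtualization support in a Linux guest.
# The default path below is for Intel CPUs and is an assumption on my part;
# pass a different path for kvm_amd, or if your distro lays things out oddly.
from pathlib import Path

def nested_virt_enabled(
    param_path: str = "/sys/module/kvm_intel/parameters/nested",
) -> bool:
    """Return True if the kvm module reports nested virtualization is on."""
    p = Path(param_path)
    if not p.exists():
        return False  # module not loaded, or wrong CPU vendor path
    return p.read_text().strip() in ("Y", "1")
```

You also need the outer hypervisor to expose the virtualization extensions to the guest in the first place (in VMware that's the "Virtualize Intel VT-x/EPT" checkbox on the VM's CPU settings), or this will report False no matter what.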

#32 HermanToothrot

HermanToothrot

    Member

  • Members
  • PipPip
  • 32 posts
  • 17586 thanks
  • LocationMonkey Island

Posted 28 July 2017 - 05:56 AM

I've been running it both via KVM in Workstation Pro in Win7, and natively in Linux. vQFX just loads slowly for me, even when I increase the resource allocations.

I'm hoping that the workstation I want to build just to run GNS3 in ESXi (I'd have 4 times the number of cores and threads, 4 times the amount of much faster RAM, and still have plenty of room left over for some fast SSDs and a few fat HDDs for backups) will help speed things up, as well as letting me run larger topologies than I can right now.

BTW, can we run any of those ASR 1000 images, like an ASR 1000v instance?  Cisco mentioned it in their SD-WAN, NFV, SDN, ACI, and other stuff, but I've caught them contradicting themselves several times in the documentation already...  :/

Once I finish downloading that new NX-OSv 9000 image (and maybe an ASR 1000 image), I'll start uploading the vMX and vQFX-10K images to your server. Just a heads up, but vQFX already comes with a root login pre-applied. It's "root/Juniper".

#33 HermanToothrot

HermanToothrot

    Member

  • Members
  • PipPip
  • 32 posts
  • 17586 thanks
  • LocationMonkey Island

Posted 28 July 2017 - 09:15 AM

Ah! I figured out how you were able to see a more recent version of 16.2 than I could. I started monkeying around with the Download option at the top of the vMX page, before I log in. Provided I picked vMX by either selection or type, I had an extra "vMX SR" in my drop-down menu. Once I did it that way, I found 16.2R2, and 17.2R1-S1 through 17.2R1-S4. The system would always reject my attempts to actually download them, though. I checked the help desk, and they're shut down for the day (but I don't think I have those privileges anyway).

Another super-weird thing was that I found a page listing all the updates for FreeBSD, and their Junos equivalents. I was locked out of grabbing those, and of trying to update Wind River Linux, too.

Basically, all I can do at this point is upload those 3 vQFX files (you can either stick with the two that have the same version number, or use the routing engine VM with the newest number, with the only pfe image available). I've also got a bunch of Juniper-specific books uploading too, since I'm catching a nap in a few.

Sorry I couldn't grab 16.2R2, but I tried multiple times, and could see it, but would always get an error about being unauthorized.  The help desk was closed today, so I couldn't nag anyone there, either.

EDIT: If I get the time tomorrow, I'll try one of those ASR1000 images I grabbed. It likely won't work, but I should be able to use xrv-k9-demo-6.1.2 in its place (or another CSR-1000v).

Edited by HermanToothrot, 28 July 2017 - 09:18 AM.


#34 qianxingnan

qianxingnan

    Junior Member

  • Members
  • PipPip
  • 5 posts
  • 105 thanks

Posted 28 July 2017 - 11:19 PM

That's a lot of really good stuff you uploaded, Herman.

I think anything Cisco or Juniper mentions a KVM version of should be able to run within a Linux VM. I have tried Juniper vMX, vSRX, Cisco XRv, XRv9K, CSR1000v, NX-OS9K.

I did not know there was an ASR1000v. Is it big? I will try it out if you can upload it to the FTP server.

Thanked by 1 Member:
HermanToothrot

#35 HermanToothrot

HermanToothrot

    Member

  • Members
  • PipPip
  • 32 posts
  • 17586 thanks
  • LocationMonkey Island

Posted 29 July 2017 - 03:09 AM

There's no ASR1000v, per se. I just saw someone mention it in that thread, and thought I'd download two or three images and give it a shot. My assumption is that we're stuck with just ASAv, CSR1000v, XRv 9000, XRv-k9-demo, NX-OSv 9000, that terrible Titanium NX-OS image from VIRL, and the vIOS and vIOS-L2 images that have been getting better. I did read in a few places on Cisco's website where they used the words "ASR 1000v", but it was either a mistake (I've seen them contradict themselves plenty of times), or they're referring to that Online Virtual Router demo you can sign up for and play with, kinda like I think you can do something similar with SD-WAN from either Cisco or Juniper.

Still, I'll find out one way or the other, if those ASR1000 images will actually work.

EDIT: Right now, I'm more worried about VMware pushing out ESXi updates for Epyc and Threadripper, since Supermicro just announced their first two servers for the chip. Sadly, they aren't 2-socket motherboards, and even the latest "Scalable Xeons" need ESXi to be updated to be guaranteed to work.

I had someone nag at me earlier in the week about how I should either "just chance it with Linux" (um, not after the way the Linux Ryzen launch went, no), or get a "cheap Intel server".

Well, I've checked multiple sites (Tyan, Penguin Computing, Supermicro, Dell, HPE, and NewEgg), and they're wrong about being able to build the equivalent of a TR workstation using older E5-2xxx v4/v3 Xeons (I even looked at getting two, just to try to keep the price low). The only way to compete with a 2-processor Epyc server (in core/thread count) is to use at least 4 of the newer "Scalable Xeons".

Edited by HermanToothrot, 29 July 2017 - 03:28 AM.


#36 qianxingnan

qianxingnan

    Junior Member

  • Members
  • PipPip
  • 5 posts
  • 105 thanks

Posted 31 July 2017 - 03:51 AM

HermanToothrot, on 29 July 2017 - 03:09 AM, said:

There's no ASR1000v, per se. [snip]


I have been using a used HP ProLiant server because I know from past experience that it is a stable server. I needed to get the lab environment up and running quickly, so I just installed VMware, as Unetlab has a VMware OVA template that can be imported for easy setup. Besides, VMware has a 'special' release for these HP servers, so installing ESXi on the server is a breeze. Later I found out the free version of the new ESXi only allows a maximum of 8 CPU cores now, but the server has 16 of them. That is OK for now, as my labs have no more than 30 devices and they seem to be working fine so far. One thing always on my mind is that one of these days I will replace VMware with KVM, so there will be no license restrictions on hardware resources. That will take much longer to set up to the state I am running now, mainly because I can't use a ready VMware OVA template to deploy the Unetlab VM. Instead it will be a manual installation of Unetlab on a standard Ubuntu Linux.

Thanks,
Daniel

Edited by qianxingnan, 31 July 2017 - 03:53 AM.


#37 HermanToothrot

HermanToothrot

    Member

  • Members
  • PipPip
  • 32 posts
  • 17586 thanks
  • LocationMonkey Island

Posted 31 July 2017 - 08:27 PM

There's a crack in the VMware section that's supposed to enable all the cores/memory your machine has. I haven't tried it out, since I don't have a spare machine to install it on. That massive VMware thread also includes patches, etc., for the HPE versions.

#38 qianxingnan

qianxingnan

    Junior Member

  • Members
  • PipPip
  • 5 posts
  • 105 thanks

Posted 31 July 2017 - 08:44 PM

HermanToothrot, on 31 July 2017 - 08:27 PM, said:

There's a crack in the VMware section that's supposed to enable all the cores/memory your machine has. I haven't tried it out, since I don't have a spare machine to install it on. That massive VMware thread also includes patches, etc., for the HPE versions.

It would be nice if I could lift the resource limit on the CPU. Thanks, Herman, for giving me hope. I will check out that thread.

Daniel

#39 HermanToothrot

HermanToothrot

    Member

  • Members
  • PipPip
  • 32 posts
  • 17586 thanks
  • LocationMonkey Island

Posted 19 August 2017 - 11:35 PM

Bumping this, since I'd love to see a more recent version of vMX besides 17.2R1. I found a goofy workaround (I forget how I did it) that would let me *see* newer versions, but I lack the permissions to download them. Same applies for vQFX-10K.

If anyone actually does have the proper permissions, could you please post the KVM and ESXi versions of both vMX and vQFX-10K? I'm sure we'd all be eternally grateful.



