I am installing ESXi 6.0 on six Dell PowerEdge R440s. They all have two default onboard 1GbE NICs installed, as well as a Broadcom 57412 dual-port 10GbE SFP+ PCIe adapter and a Broadcom 57416 dual-port 10GbE Base-T LOM mezzanine card. Both are on the HCL for this version of ESXi.
After installing 6.0 on the hardware, I went in to configure the network, and the only NICs I see are the onboard 1GbE NICs. From vCenter, all I can see are the two onboard NICs. I thought maybe it was a driver issue, so I researched and found the VIB file from VMware to install the driver on my host. When I ran the esxcli software vib command, it returned that it skipped the VIB for the driver.
I ran lspci and found all six NICs in the list. I ran esxcli network nic list, but only see the two onboard NICs. In vCenter I can find the additional four ports under host > Manage > Hardware > PCI Devices > Edit PCI Device Availability, but that enables passthrough so that VMs can access the ports directly, which is not what I want. I want them in a vSwitch so all VMs use those ports instead of the 1GbE ports.
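For reference, this is roughly what I ran; the bundle path is a placeholder for wherever I uploaded the download, and I am assuming these cards use the bnxtnet driver:
esxcli software vib install -d /vmfs/volumes/datastore1/bnxtnet-offline-bundle.zip
esxcli software vib list | grep -i bnx
From what I've read, a "skipped" result from esxcli usually means the host already has the same or a newer version of that VIB installed, so the driver may already be present and just not claiming the devices.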
Any suggestions/help would be greatly appreciated.
In my customer environment, we have a 100 GB LUN dedicated solely to swapping. However, it is now full, with only 322 MB remaining, and right now it doesn't allow any host vMotion, saying that there isn't enough free space. When I browse the relevant datastore I can see a lot of .vswp files and some .swp files (I am attaching the datastore listing). As far as I know, a VM reserves for itself a capacity on the swap space equal to its guest memory size. What would be the best way to figure out which .vswp files are required and which are not? I can easily expand this LUN on the VNX, but would that be good administration? Also, in the LUN properties I can see that it is only 4+% used; how does that square with what VMware is telling me? (I will ask that part in an EMC forum separately.)
Basically, is there any administration guide for virtual memory usage in VMware? Or should I simply expand the LUN and add the expanded capacity on the VMware side? And what is the difference between vmx-XXXX.vswp, XXXX.vswp, and sysswp-xyz.swp, where XXXX represents the VM name?
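In case it helps, this is how I have been trying to map swap files to VMs from the ESXi shell; the datastore and VM names below are placeholders for mine:
ls -lh /vmfs/volumes/SwapDatastore/        # lists each .vswp/.swp with its size
esxcli vm process list                     # lists running VMs and their .vmx paths
grep sched.swap.derivedName /vmfs/volumes/datastore1/MyVM/MyVM.vmx   # shows where a powered-on VM's swap file actually lives
My assumption is that any VM .vswp that doesn't match a powered-on VM is a leftover candidate, but I'd like confirmation before deleting anything.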
Today I installed a fresh homelab with ESXi 6.5, all with default settings.
After successfully creating an Ubuntu server and a Windows 10 workstation, I was ready to set up a new Windows Server 2016 VM. Right after starting up the server, it comes up with:
I thought I had done something wrong selecting the SCSI controller, but whatever I choose, I always get this message.
I tried the paravirtual SCSI controller and browsed to the driver on the VMware Tools ISO, but got the same error.
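In case the exact steps matter: I attached the VMware Tools windows.iso to the VM's CD drive and, at the "Where do you want to install Windows?" screen, clicked Load driver and browsed to a path like the one below (this is where I believe the pvscsi driver lives on the Tools ISO; D: is my CD drive letter):
D:\Program Files\VMware\VMware Tools\Drivers\pvscsi\Win8\amd64\pvscsi.inf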
Four working ESXi hosts (but only one powered on for this issue): two with ESXi 6.5, two with ESXi 6.7.
One Windows 10 Pro PC with VMware Workstation 14 installed, running the vCenter VM and a Content Library NAS (FreeNAS) VM.
Two other Windows 10 Pro PCs.
All static IP addresses; no Windows servers (no Active Directory, DHCP, etc.); BIND for DNS service (needed by vCenter).
All VMware products are currently on evaluation licenses.
Note: newbie VMware user...
Description:
Wanted to perform a disk test to compare a physical PC with a VM, so I wrote an application that appends the contents of one file (75 GB) to the end of an existing one (5 GB).
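(For what it's worth, the app isn't doing anything clever; it is essentially equivalent to appending one file to another from a command prompt, like this, with placeholder file names:
type C:\test\source75GB.bin >> C:\test\target5GB.bin
It reads the source sequentially and writes sequentially to the end of the target.)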
I ran the test app on an older, slower (compared to the ESXi host's hardware) physical Windows 10 Pro PC (SATA IDE); it took 15 minutes to complete.
The same test on a Windows 10 Pro VM running on an ESXi 6.7 host (SATA AHCI) took 37 minutes.
Basically, it's just this VM Host with the one VM running.
Why would there be such a big difference in time?
Hi, this issue started two days ago when I attempted to update to "VC-6.5.0U2b-Appliance-FP" (6.5.0.21000, build 8815520) in the vSphere Appliance Management UI. It stopped at 60% and prompted an error about "not being able to contact the server...". I then restarted the appliance, and from there it went downhill.
I figured out that if I ran the command "service vami-lighttp start", the UI came up again, but after some time I couldn't authenticate; it seems some services aren't running.
I will post the same output and hopefully figure this out. (I did notice those four commands, but I haven't used them yet since I don't quite understand what they do.)
service-control --start vmware-vpxd-svcs
Perform start operation. vmon_profile=None, svc_names=['vmware-vpxd-svcs'], include_coreossvcs=False, include_leafossvcs=False
2018-07-18T17:45:49.534Z Service vpxd-svcs state STARTED
Successfully started service vpxd-svcs
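In the meantime, the other thing I have tried is listing the state of all the appliance services to see what is actually running (run from the appliance shell):
service-control --status --all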
As you can see, I find the ID shown above split into two parts inside the value of guestCPUID.1 (marked in bold).
I need to change this ID to 0FEBFBFF000006F1. If I edit the guestCPUID.1 value to match the ID I need, I get the following result: after powering the VM on again, the value is instantly overwritten by the original value. I assume that the other part of the value, and maybe also guestCPUID.0, are related in some way.
Unfortunately, I can't find anything in the documentation or on Google about these parameters.
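The closest thing I have found is the cpuid.N.<register> mask entries in the .vmx file, which, if I understand them correctly, override what the guest sees rather than editing guestCPUID.1 directly. For the ID I need (EDX = 0FEBFBFF, EAX = 000006F1), my attempt looks like the lines below, written as 32-bit binary strings; I am not certain this is the supported approach, so corrections are welcome:
cpuid.1.eax = "0000:0000:0000:0000:0000:0110:1111:0001"
cpuid.1.edx = "0000:1111:1110:1011:1111:1011:1111:1111"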
In my environment I have ESX 3.5.0 (build 64607), 4.0, and 4.1 hosts. I just want to know: is there any impact if I restart the management service on production ESX hosts (3.5.0, 4.0, 4.1)?
Before restarting this service, do I need to check anything? Are there any prerequisites?
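To be specific, these are the commands I mean, run from the service console:
service mgmt-vmware restart    # restarts hostd, the host management service
service vmware-vpxa restart    # restarts the vCenter agent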
I have several ESXi 5.1 hosts in a cluster. There is one VM that I can't add to inventory. I have followed several KB articles and even removed the ESXi host from the cluster completely, after verifying via the MAC address that it was the ESXi host causing the lock. I am unable to view the vmware.log file for this virtual machine; I get an "invalid argument" error when trying to cat the vmware.log or the .vmx file. The lock file is "vmname.vmx.lck". I've rebooted and restarted the management agents several times. I'm just not sure how to proceed from here, as I've been reading about how to resolve this for about three hours and have yet to find anything that works. The directory contains the following files, if this helps at all:
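For reference, the command I used to identify the host holding the lock was this, run against my datastore path:
vmkfstools -D /vmfs/volumes/datastore1/vmname/vmname.vmx
The owner field in the output is what gave me the MAC address of the locking host.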
I'm trying to mount an NFS volume on ESXi 6, but keep running into this error. Googling hasn't helped, so here I am. Error:
Call "HostDatastoreSystem.CreateNasDatastore" for object "ha-datastoresystem" on ESXi "192.168.xx.xx" failed.
Operation failed, diagnostics report: Unable to get console path for volume, sample name.
The NFS share is located on a Synology NAS. I've checked permissions and configuration, and everything looks correct based on the various tips and KB articles.
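In case the exact call matters, I'm adding the datastore like this; the IP, export path, and label are examples of my values:
esxcli storage nfs add --host=192.168.xx.xx --share=/volume1/nfs_share --volume-name=nfs_ds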
Can vSphere 6.5 support automatic space reclamation on a datastore? For example, if I write and then delete 100 GB of files on one of the VMs, can the datastore reclaim that space?
Can vSphere 5.5 / 6.0 support manual space reclamation? If so, where do I run the reclaim function in the GUI, or does it need to be done from the command line?
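My understanding so far, which I'd like confirmed: on 5.5/6.0 the reclaim is manual only, via something like the following from the ESXi shell (the datastore name is a placeholder), whereas 6.5 with VMFS-6 is supposed to run UNMAP automatically:
esxcli storage vmfs unmap -l MyDatastore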