repo | pull_number | instance_id | issue_numbers | base_commit | patch | test_patch | problem_statement | hints_text | created_at |
---|---|---|---|---|---|---|---|---|---|
pytorch/examples | 196 | pytorch__examples-196 | [
"195"
] | fb9ca4d975f0fb2a5289ed67da6d29c83efae80d | diff --git a/imagenet/main.py b/imagenet/main.py
--- a/imagenet/main.py
+++ b/imagenet/main.py
@@ -15,6 +15,9 @@
import torchvision.datasets as datasets
import torchvision.models as models
+model_names = sorted(name for name in models.__dict__
+ if name.islower() and not name.startswith("__")
+ and callable(models.__dict__[name]))
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
parser.add_argument('data', metavar='DIR',
| Imagenet example broken
Looks like the following lines got deleted
```python3
model_names = sorted(name for name in models.__dict__
if name.islower() and not name.startswith("__")
and callable(models.__dict__[name]))
```
Causing an undefined variable error.
```bash
Traceback (most recent call last):
File "main.py", line 25, in <module>
choices=model_names,
NameError: name 'model_names' is not defined
```
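For context, the restored list feeds the `choices` of the architecture flag named in the traceback; a minimal sketch of how the pieces fit (the exact `--arch` flag spelling is an assumption here, not quoted from the issue):
```python
import argparse
import torchvision.models as models

# Collect the lowercase, callable model constructors exposed by torchvision.
model_names = sorted(name for name in models.__dict__
                     if name.islower() and not name.startswith("__")
                     and callable(models.__dict__[name]))

parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
# Without model_names defined above, this add_argument raises the NameError shown.
parser.add_argument('--arch', default='resnet18', choices=model_names,
                    help='model architecture: ' + ' | '.join(model_names))
args = parser.parse_args()
```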
| 2017-08-08T00:16:56 |
||
pytorch/examples | 228 | pytorch__examples-228 | [
"225"
] | 5f24730b505409624dc1d066790cafcb8ba67bd9 | diff --git a/vae/main.py b/vae/main.py
--- a/vae/main.py
+++ b/vae/main.py
@@ -4,6 +4,7 @@
import torch.utils.data
from torch import nn, optim
from torch.autograd import Variable
+from torch.nn import functional as F
from torchvision import datasets, transforms
from torchvision.utils import save_image
@@ -77,18 +78,15 @@ def forward(self, x):
if args.cuda:
model.cuda()
-reconstruction_function = nn.BCELoss()
-
def loss_function(recon_x, x, mu, logvar):
- BCE = reconstruction_function(recon_x, x.view(-1, 784))
+ BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784))
# see Appendix B from VAE paper:
# Kingma and Welling. Auto-Encoding Variational Bayes. ICLR, 2014
# https://arxiv.org/abs/1312.6114
# 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
- KLD_element = mu.pow(2).add_(logvar.exp()).mul_(-1).add_(1).add_(logvar)
- KLD = torch.sum(KLD_element).mul_(-0.5)
+ KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
# Normalise by same number of elements as in reconstruction
KLD /= args.batch_size * 784
@@ -131,8 +129,11 @@ def test(epoch):
recon_batch, mu, logvar = model(data)
test_loss += loss_function(recon_batch, data, mu, logvar).data[0]
if i == 0:
- save_image(recon_batch.data.cpu().view(args.batch_size, 1, 28, 28),
- 'reconstruction_' + str(epoch) + '.png')
+ n = min(data.size(0), 8)
+ comparison = torch.cat([data[:n],
+ recon_batch.view(args.batch_size, 1, 28, 28)[:n]])
+ save_image(comparison.data.cpu(),
+ 'results/reconstruction_' + str(epoch) + '.png', nrow=n)
test_loss /= len(test_loader.dataset)
print('====> Test set loss: {:.4f}'.format(test_loss))
@@ -145,4 +146,5 @@ def test(epoch):
if args.cuda:
sample = sample.cuda()
sample = model.decode(sample).cpu()
- save_image(sample.data.view(64, 1, 28, 28), 'sample_' + str(epoch) + '.png')
+ save_image(sample.data.view(64, 1, 28, 28),
+ 'results/sample_' + str(epoch) + '.png')
| VAE loss
According to the expression in line 95, the KL-divergence term should be calculated as
`0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)`
but I think the code in lines 96-97 actually computes
`0.5 * sum(1 + log(sigma^2) - mu^2 - sigma)`
This may not be critical, because the loss still decreases whether or not the last term is squared.
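For anyone checking the algebra: the closed form of KL(N(mu, sigma^2) || N(0, 1)) is exactly `-0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)`, which is what the patched line computes. A quick sketch cross-checking it against `torch.distributions`:
```python
import torch
from torch.distributions import Normal, kl_divergence

mu, logvar = torch.randn(5), torch.randn(5)

# Patched expression: KLD = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

# Analytic reference from torch.distributions (Normal takes the std dev).
ref = kl_divergence(Normal(mu, torch.exp(0.5 * logvar)),
                    Normal(torch.zeros(5), torch.ones(5))).sum()
print(torch.allclose(kld, ref))  # True
```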
| 2017-10-01T22:31:43 |
||
pytorch/examples | 229 | pytorch__examples-229 | [
"204"
] | 9fe431ed5be2ebe43d08d5506a8f8eb690399a80 | diff --git a/time_sequence_prediction/generate_sine_wave.py b/time_sequence_prediction/generate_sine_wave.py
--- a/time_sequence_prediction/generate_sine_wave.py
+++ b/time_sequence_prediction/generate_sine_wave.py
@@ -1,12 +1,13 @@
-import math
import numpy as np
import torch
+
+np.random.seed(2)
+
T = 20
L = 1000
N = 100
-np.random.seed(2)
+
x = np.empty((N, L), 'int64')
-x[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)
+x[:] = np.array(range(L)) + np.random.randint(-4 * T, 4 * T, N).reshape(N, 1)
data = np.sin(x / 1.0 / T).astype('float64')
torch.save(data, open('traindata.pt', 'wb'))
-
| Unused import of math in time_sequence_prediction example
The generate_sine_wave.py module imports math on the first line, but doesn't use it. This import should be removed.
| 2017-10-01T22:41:53 |
||
pytorch/examples | 353 | pytorch__examples-353 | [
"352"
] | eee5ca3b68d59aa2d6f33d1499499ee5c0963300 | diff --git a/mnist/main.py b/mnist/main.py
--- a/mnist/main.py
+++ b/mnist/main.py
@@ -6,48 +6,6 @@
import torch.optim as optim
from torchvision import datasets, transforms
-# Training settings
-parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
-parser.add_argument('--batch-size', type=int, default=64, metavar='N',
- help='input batch size for training (default: 64)')
-parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
- help='input batch size for testing (default: 1000)')
-parser.add_argument('--epochs', type=int, default=10, metavar='N',
- help='number of epochs to train (default: 10)')
-parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
- help='learning rate (default: 0.01)')
-parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
- help='SGD momentum (default: 0.5)')
-parser.add_argument('--no-cuda', action='store_true', default=False,
- help='disables CUDA training')
-parser.add_argument('--seed', type=int, default=1, metavar='S',
- help='random seed (default: 1)')
-parser.add_argument('--log-interval', type=int, default=10, metavar='N',
- help='how many batches to wait before logging training status')
-args = parser.parse_args()
-use_cuda = not args.no_cuda and torch.cuda.is_available()
-
-torch.manual_seed(args.seed)
-
-device = torch.device("cuda" if use_cuda else "cpu")
-
-
-kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
-train_loader = torch.utils.data.DataLoader(
- datasets.MNIST('../data', train=True, download=True,
- transform=transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize((0.1307,), (0.3081,))
- ])),
- batch_size=args.batch_size, shuffle=True, **kwargs)
-test_loader = torch.utils.data.DataLoader(
- datasets.MNIST('../data', train=False, transform=transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize((0.1307,), (0.3081,))
- ])),
- batch_size=args.test_batch_size, shuffle=True, **kwargs)
-
-
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
@@ -66,11 +24,7 @@ def forward(self, x):
x = self.fc2(x)
return F.log_softmax(x, dim=1)
-model = Net().to(device)
-
-optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
-
-def train(epoch):
+def train(args, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
@@ -84,7 +38,7 @@ def train(epoch):
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
-def test():
+def test(args, model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
@@ -101,7 +55,55 @@ def test():
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
+def main():
+ # Training settings
+ parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
+ parser.add_argument('--batch-size', type=int, default=64, metavar='N',
+ help='input batch size for training (default: 64)')
+ parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
+ help='input batch size for testing (default: 1000)')
+ parser.add_argument('--epochs', type=int, default=10, metavar='N',
+ help='number of epochs to train (default: 10)')
+ parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
+ help='learning rate (default: 0.01)')
+ parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
+ help='SGD momentum (default: 0.5)')
+ parser.add_argument('--no-cuda', action='store_true', default=False,
+ help='disables CUDA training')
+ parser.add_argument('--seed', type=int, default=1, metavar='S',
+ help='random seed (default: 1)')
+ parser.add_argument('--log-interval', type=int, default=10, metavar='N',
+ help='how many batches to wait before logging training status')
+ args = parser.parse_args()
+ use_cuda = not args.no_cuda and torch.cuda.is_available()
+
+ torch.manual_seed(args.seed)
+
+ device = torch.device("cuda" if use_cuda else "cpu")
+
+ kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
+ train_loader = torch.utils.data.DataLoader(
+ datasets.MNIST('../data', train=True, download=True,
+ transform=transforms.Compose([
+ transforms.ToTensor(),
+ transforms.Normalize((0.1307,), (0.3081,))
+ ])),
+ batch_size=args.batch_size, shuffle=True, **kwargs)
+ test_loader = torch.utils.data.DataLoader(
+ datasets.MNIST('../data', train=False, transform=transforms.Compose([
+ transforms.ToTensor(),
+ transforms.Normalize((0.1307,), (0.3081,))
+ ])),
+ batch_size=args.test_batch_size, shuffle=True, **kwargs)
+
+
+ model = Net().to(device)
+ optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
+
+ for epoch in range(1, args.epochs + 1):
+ train(args, model, device, train_loader, optimizer, epoch)
+ test(args, model, device, test_loader)
+
-for epoch in range(1, args.epochs + 1):
- train(epoch)
- test()
+if __name__ == '__main__':
+ main()
\ No newline at end of file
| Multiprocessing runtime error freeze_support() in Windows 64 bit
Copied from the original issue below, but it looks like it has to be fixed in this examples repo:
https://github.com/pytorch/pytorch/issues/7485
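For background (a general Windows/multiprocessing fact, not quoted from the linked issue): on Windows, multiprocessing uses the spawn start method, so DataLoader worker processes re-import the main module; any unguarded top-level code then re-executes in every worker, which is what raises the freeze_support() RuntimeError. The patch above fixes this by moving everything behind the standard entry-point guard, sketched here:
```python
def main():
    # Parse args and build the DataLoaders (num_workers > 0), model,
    # and training loop here, so workers never re-execute this code.
    ...

if __name__ == '__main__':
    # Only the parent process enters here; spawned workers merely import the module.
    main()
```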
| 2018-05-12T01:28:42 |
||
pytorch/examples | 379 | pytorch__examples-379 | [
"378"
] | f83508117b1ba9b752b227de992799093af3b215 | diff --git a/fast_neural_style/download_saved_models.py b/fast_neural_style/download_saved_models.py
new file mode 100644
--- /dev/null
+++ b/fast_neural_style/download_saved_models.py
@@ -0,0 +1,14 @@
+import os
+import zipfile
+
+from torch.utils.model_zoo import _download_url_to_file
+
+
+def unzip(source_filename, dest_dir):
+ with zipfile.ZipFile(source_filename) as zf:
+ zf.extractall(path=dest_dir)
+
+
+if __name__ == '__main__':
+ _download_url_to_file('https://www.dropbox.com/s/lrvwfehqdcxoza8/saved_models.zip?dl=1', 'saved_models.zip', None, True)
+ unzip('saved_models.zip', '.')
| Using Windows, can't train model, getting errors about vgg16
I am trying to work with the fast_neural_style example https://github.com/pytorch/examples/tree/master/fast_neural_style
I read through the readme several times and downloaded all needed files mentioned, then I ran
```
C:\Users\Aaron\Documents\fast-neural-style>python neural_style/neural_style.py train --dataset train2017 --style-image images/style-images/2.jpg --save-model-dir saved-models --epochs 2 --cuda 1
usage: neural_style.py train [-h] [--epochs EPOCHS] [--batch-size BATCH_SIZE]
--dataset DATASET [--style-image STYLE_IMAGE]
--vgg-model-dir VGG_MODEL_DIR --save-model-dir
SAVE_MODEL_DIR [--image-size IMAGE_SIZE]
[--style-size STYLE_SIZE] --cuda CUDA
[--seed SEED] [--content-weight CONTENT_WEIGHT]
[--style-weight STYLE_WEIGHT] [--lr LR]
[--log-interval LOG_INTERVAL]
neural_style.py train: error: the following arguments are required: --vgg-model-dir
```
The readme says nothing about this. I then made an empty directory in the project root called vgg and ran the command again
```
(base) C:\Users\Aaron\Documents\fast-neural-style>python neural_style/neural_style.py train --dataset train2017 --style-image images/style-images/2.jpg --vgg-model-dir vgg --save-model-dir saved-models --epochs 2 --cuda 1
C:\Users\Aaron\Anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
'wget' is not recognized as an internal or external command,
operable program or batch file.
Traceback (most recent call last):
File "neural_style/neural_style.py", line 210, in <module>
main()
File "neural_style/neural_style.py", line 204, in main
train(args)
File "neural_style/neural_style.py", line 41, in train
utils.init_vgg16(args.vgg_model_dir)
File "C:\Users\Aaron\Documents\fast-neural-style\neural_style\utils.py", line 70, in init_vgg16
vgglua = load_lua(os.path.join(model_folder, 'vgg16.t7'))
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 606, in load_lua
with open(filename, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'vgg\\vgg16.t7'
```
So I went and downloaded wget for Windows https://eternallybored.org/misc/wget/ and added the exe to my path, restarted my Anaconda prompt and then it downloaded the vgg16 model.
```
(base) C:\Users\Aaron\Documents\fast-neural-style>python neural_style/neural_style.py train --dataset train2017 --style-image images/style-images/2.jpg --vgg-model-dir vgg --save-model-dir saved-models --epochs 2 --cuda 1
C:\Users\Aaron\Anaconda3\lib\site-packages\torchvision\transforms\transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
--2018-06-29 18:06:05-- https://www.dropbox.com/s/76l3rt4kyi3s8x7/vgg16.t7?dl=1
Resolving www.dropbox.com (www.dropbox.com)... 162.125.3.1
Connecting to www.dropbox.com (www.dropbox.com)|162.125.3.1|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: /s/dl/76l3rt4kyi3s8x7/vgg16.t7 [following]
--2018-06-29 18:06:05-- https://www.dropbox.com/s/dl/76l3rt4kyi3s8x7/vgg16.t7
Reusing existing connection to www.dropbox.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://ucbb0bb9ba3b2c7e5d05c57f9bb8.dl.dropboxusercontent.com/cd/0/get/AKKR-EpfnYnls1gRv_prqzvUcfHyPNBGPkKr3WEMwMcxUlE_UfjKY6ilAGjky_PvlZh5Relm-vYj9pwE-IMHgu6s-kS6RaiKhVhrSw4MZ5Em334q_YXSRsW76rAchlu3wrl8lmayBboIaTfAdXUbPtEQm9f-YV03JYP-OdU3xKFypp438CuuX9l9kLZZ1rqeCe0/file?dl=1 [following]
--2018-06-29 18:06:06-- https://ucbb0bb9ba3b2c7e5d05c57f9bb8.dl.dropboxusercontent.com/cd/0/get/AKKR-EpfnYnls1gRv_prqzvUcfHyPNBGPkKr3WEMwMcxUlE_UfjKY6ilAGjky_PvlZh5Relm-vYj9pwE-IMHgu6s-kS6RaiKhVhrSw4MZ5Em334q_YXSRsW76rAchlu3wrl8lmayBboIaTfAdXUbPtEQm9f-YV03JYP-OdU3xKFypp438CuuX9l9kLZZ1rqeCe0/file?dl=1
Resolving ucbb0bb9ba3b2c7e5d05c57f9bb8.dl.dropboxusercontent.com (ucbb0bb9ba3b2c7e5d05c57f9bb8.dl.dropboxusercontent.com)... 162.125.3.6
Connecting to ucbb0bb9ba3b2c7e5d05c57f9bb8.dl.dropboxusercontent.com (ucbb0bb9ba3b2c7e5d05c57f9bb8.dl.dropboxusercontent.com)|162.125.3.6|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 553452665 (528M) [application/octet-stream]
Saving to: 'vgg/vgg16.t7'
vgg/vgg16.t7 100%[=================================================>] 527.81M 484KB/s in 13m 21s
2018-06-29 18:19:28 (675 KB/s) - 'vgg/vgg16.t7' saved [553452665/553452665]
Traceback (most recent call last):
File "neural_style/neural_style.py", line 210, in <module>
main()
File "neural_style/neural_style.py", line 204, in main
train(args)
File "neural_style/neural_style.py", line 41, in train
utils.init_vgg16(args.vgg_model_dir)
File "C:\Users\Aaron\Documents\fast-neural-style\neural_style\utils.py", line 70, in init_vgg16
vgglua = load_lua(os.path.join(model_folder, 'vgg16.t7'))
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 608, in load_lua
return reader.read()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 593, in read
return self.read_object()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 523, in wrapper
result = fn(self, *args, **kwargs)
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 546, in read_object
return reader_registry[cls_name](self, version)
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 243, in read_nn_class
attributes = reader.read()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 595, in read
return self.read_table()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 523, in wrapper
result = fn(self, *args, **kwargs)
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 571, in read_table
k = self.read()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 595, in read
return self.read_table()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 523, in wrapper
result = fn(self, *args, **kwargs)
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 572, in read_table
v = self.read()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 593, in read
return self.read_object()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 523, in wrapper
result = fn(self, *args, **kwargs)
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 546, in read_object
return reader_registry[cls_name](self, version)
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 318, in wrapper
obj = build_fn(reader, version)
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 318, in wrapper
obj = build_fn(reader, version)
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 243, in read_nn_class
attributes = reader.read()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 595, in read
return self.read_table()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 523, in wrapper
result = fn(self, *args, **kwargs)
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 572, in read_table
v = self.read()
File "C:\Users\Aaron\Anaconda3\lib\site-packages\torch\utils\serialization\read_lua_file.py", line 598, in read
"corrupted.".format(typeidx))
torch.utils.serialization.read_lua_file.T7ReaderException: unknown type id 1056626884. The file may be corrupted.
```
Not sure what to do next.
I made an issue for three reasons:
1. This should be mentioned in the README.
2. In case anyone else runs into this.
3. In case anyone has any suggestions.
| cc: @peterjc123 can you look at this issue to make sure we use a cross-platform download command. We can probably even use torch.utils.download_url (or something like that, which we have)
So this issue can be split into 2 parts:
1. Cross-platform file download utilities. Unfortunately, the `download_url` is in `pytorch/vision`, should I move that to `pytorch/pytorch`, @soumith ?
2. Probably cross-platform `load_lua`. I'm not sure how to do this. You may have to specify `long_size=8` by yourself.
@peterjc123 torch.utils.modelzoo has `_download_url_to_file`, which is probably sufficient?
https://github.com/pytorch/pytorch/blob/master/torch/utils/model_zoo.py#L69
about (2), I can re-host the model after it's loaded via `load_lua` and then saved back to disk in Python pickle format
@soumith Okay!
@soumith Should I change this in `model_zoo.py` or move it somewhere around?
just change it in model_zoo.py for now. | 2018-07-02T04:12:51 |
|
pytorch/examples | 575 | pytorch__examples-575 | [
"537"
] | 97304e232807082c2e7b54c597615dc0ad8f6173 | diff --git a/imagenet/main.py b/imagenet/main.py
--- a/imagenet/main.py
+++ b/imagenet/main.py
@@ -261,8 +261,10 @@ def train(train_loader, model, criterion, optimizer, epoch, args):
losses = AverageMeter('Loss', ':.4e')
top1 = AverageMeter('Acc@1', ':6.2f')
top5 = AverageMeter('Acc@5', ':6.2f')
- progress = ProgressMeter(len(train_loader), batch_time, data_time, losses, top1,
- top5, prefix="Epoch: [{}]".format(epoch))
+ progress = ProgressMeter(
+ len(train_loader),
+ [batch_time, data_time, losses, top1, top5],
+ prefix="Epoch: [{}]".format(epoch))
# switch to train mode
model.train()
@@ -296,7 +298,7 @@ def train(train_loader, model, criterion, optimizer, epoch, args):
end = time.time()
if i % args.print_freq == 0:
- progress.print(i)
+ progress.display(i)
def validate(val_loader, model, criterion, args):
@@ -332,7 +334,7 @@ def validate(val_loader, model, criterion, args):
end = time.time()
if i % args.print_freq == 0:
- progress.print(i)
+ progress.display(i)
# TODO: this should also be done with the ProgressMeter
print(' * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
@@ -372,12 +374,12 @@ def __str__(self):
class ProgressMeter(object):
- def __init__(self, num_batches, *meters, prefix=""):
+ def __init__(self, num_batches, meters, prefix=""):
self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
self.meters = meters
self.prefix = prefix
- def print(self, batch):
+ def display(self, batch):
entries = [self.prefix + self.batch_fmtstr.format(batch)]
entries += [str(meter) for meter in self.meters]
print('\t'.join(entries))
| Syntax error when running imagenet main.py
I got the following syntax error while running imagenet main.py
Command used:
```
% python main.py --help
```
*Error* :
```
File "main.py", line 300
progress.print(i)
^
SyntaxError: invalid syntax
```
I'm using python 2.7 with pytorch built from source.
| This is an error that appears only for Py2 and not for Py3. Can someone please fix it so we can run benchmarks on both Py2 and Py3 like the earlier version did? To be clear, this is a regression in functionality.
this is so weird, I can't see why this is not Py2 syntax.
Does it help to add at the top of the file `from __future__ import print_function` ?
That seems to have fixed this particular error, although the script now errors out on line 377:
```
root@9b47a454adcf:/data/pytorch_examples_latest/imagenet# python main.py
File "main.py", line 377
def __init__(self, num_batches, *meters, prefix=""):
^
SyntaxError: invalid syntax
```
@soumith I think I've introduced this with #529. But, just like you, I have no idea why this is a problem. | 2019-06-18T14:46:23 |
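For the record, both failures are Python 3-only syntax. In Python 2, `print` is a statement keyword, so the attribute call `progress.print(i)` is a SyntaxError until `from __future__ import print_function` turns `print` into an ordinary name; and keyword-only parameters after `*args` (PEP 3102), as in `def __init__(self, num_batches, *meters, prefix=""):`, do not exist in Python 2. A Py2-compatible spelling would pop the keyword from `**kwargs` instead:
```python
class ProgressMeter(object):
    # Python 2 cannot declare keyword-only arguments after *args,
    # so accept **kwargs and extract the option manually.
    def __init__(self, num_batches, *meters, **kwargs):
        prefix = kwargs.pop('prefix', "")
```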
|
pytorch/examples | 699 | pytorch__examples-699 | [
"698"
] | 0c1654d6913f77f09c0505fb284d977d89c17c1a | diff --git a/dcgan/main.py b/dcgan/main.py
--- a/dcgan/main.py
+++ b/dcgan/main.py
@@ -15,7 +15,7 @@
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', required=True, help='cifar10 | lsun | mnist |imagenet | folder | lfw | fake')
-parser.add_argument('--dataroot', required=True, help='path to dataset')
+parser.add_argument('--dataroot', required=False, help='path to dataset')
parser.add_argument('--workers', type=int, help='number of data loading workers', default=2)
parser.add_argument('--batchSize', type=int, default=64, help='input batch size')
parser.add_argument('--imageSize', type=int, default=64, help='the height / width of the input image to network')
@@ -51,6 +51,9 @@
if torch.cuda.is_available() and not opt.cuda:
print("WARNING: You have a CUDA device, so you should probably run with --cuda")
+
+if opt.dataroot is None and str(opt.dataset).lower() != 'fake':
+ raise ValueError("`dataroot` parameter is required for dataset \"%s\"" % opt.dataset)
if opt.dataset in ['imagenet', 'folder', 'lfw']:
# folder dataset
| dcgan fails on "fake" dataset
I have been using the dcgan example as a stress test for a machine. To save time, I have made use of the `fake` dataset.
`python main.py --dataset 'fake'` fails because `dataroot` is a required parameter. However, the `fake` dataset does not need such information.
| 2020-01-20T15:40:08 |
||
pytorch/examples | 832 | pytorch__examples-832 | [
"157"
] | 599654abf5ead6230dfa166133b067189a1b9275 | diff --git a/mnist/main.py b/mnist/main.py
--- a/mnist/main.py
+++ b/mnist/main.py
@@ -100,12 +100,14 @@ def main():
device = torch.device("cuda" if use_cuda else "cpu")
- kwargs = {'batch_size': args.batch_size}
+ train_kwargs = {'batch_size': args.batch_size}
+ test_kwargs = {'batch_size': args.test_batch_size}
if use_cuda:
- kwargs.update({'num_workers': 1,
+ cuda_kwargs = {'num_workers': 1,
'pin_memory': True,
- 'shuffle': True},
- )
+ 'shuffle': True}
+ train_kwargs.update(cuda_kwargs)
+ test_kwargs.update(cuda_kwargs)
transform=transforms.Compose([
transforms.ToTensor(),
@@ -115,8 +117,8 @@ def main():
transform=transform)
dataset2 = datasets.MNIST('../data', train=False,
transform=transform)
- train_loader = torch.utils.data.DataLoader(dataset1,**kwargs)
- test_loader = torch.utils.data.DataLoader(dataset2, **kwargs)
+ train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs)
+ test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)
model = Net().to(device)
optimizer = optim.Adadelta(model.parameters(), lr=args.lr)
| two lines of code in mnist/main.py
There are two arguments called batch_size and test_batch_size:
`parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')`
`parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')`
but batch_size is used here:
`test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.batch_size, shuffle=True, **kwargs)`
Also, what does this line (line 105) do:
`test_loss = test_loss`
and it seems that `epoch` is not used in test().
| 2020-10-10T02:10:56 |
||
pytorch/examples | 897 | pytorch__examples-897 | [
"892"
] | af111380839d35924e9e36437aeb9757b5a68f96 | diff --git a/snli/train.py b/snli/train.py
--- a/snli/train.py
+++ b/snli/train.py
@@ -119,7 +119,7 @@
epoch, iterations, 1+batch_idx, len(train_iter),
100. * (1+batch_idx) / len(train_iter), loss.item(), dev_loss.item(), train_acc, dev_acc))
- # update best valiation set accuracy
+ # update best validation set accuracy
if dev_acc > best_dev_acc:
# found a model with better validation set accuracy
| Typo
https://github.com/pytorch/examples/blob/4db11160c21d0e26634ca1fcb94a73ad8d870ba7/snli/train.py#L122
`validation` instead of `valiation`
| 2021-04-02T15:31:33 |
||
pytorch/examples | 980 | pytorch__examples-980 | [
"769",
"907"
] | 0352380e6c066ed212e570d6fe74e3674f496258 | diff --git a/imagenet/main.py b/imagenet/main.py
--- a/imagenet/main.py
+++ b/imagenet/main.py
@@ -18,6 +18,7 @@
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
+from torch.utils.data import Subset
model_names = sorted(name for name in models.__dict__
if name.islower() and not name.startswith("__")
@@ -214,24 +215,29 @@ def main_worker(gpu, ngpus_per_node, args):
normalize,
]))
+ val_dataset = datasets.ImageFolder(
+ valdir,
+ transforms.Compose([
+ transforms.Resize(256),
+ transforms.CenterCrop(224),
+ transforms.ToTensor(),
+ normalize,
+ ]))
+
if args.distributed:
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
+ val_sampler = torch.utils.data.distributed.DistributedSampler(val_dataset, shuffle=False, drop_last=True)
else:
train_sampler = None
+ val_sampler = None
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None),
num_workers=args.workers, pin_memory=True, sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(
- datasets.ImageFolder(valdir, transforms.Compose([
- transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- normalize,
- ])),
- batch_size=args.batch_size, shuffle=False,
- num_workers=args.workers, pin_memory=True)
+ val_dataset, batch_size=args.batch_size, shuffle=False,
+ num_workers=args.workers, pin_memory=True, sampler=val_sampler)
if args.evaluate:
validate(val_loader, model, criterion, args)
@@ -307,48 +313,64 @@ def train(train_loader, model, criterion, optimizer, epoch, args):
end = time.time()
if i % args.print_freq == 0:
- progress.display(i)
+ progress.display(i + 1)
def validate(val_loader, model, criterion, args):
+
+ def run_validate(loader, base_progress=0):
+ with torch.no_grad():
+ end = time.time()
+ for i, (images, target) in enumerate(loader):
+ i = base_progress + i
+ if args.gpu is not None:
+ images = images.cuda(args.gpu, non_blocking=True)
+ if torch.cuda.is_available():
+ target = target.cuda(args.gpu, non_blocking=True)
+
+ # compute output
+ output = model(images)
+ loss = criterion(output, target)
+
+ # measure accuracy and record loss
+ acc1, acc5 = accuracy(output, target, topk=(1, 5))
+ losses.update(loss.item(), images.size(0))
+ top1.update(acc1[0], images.size(0))
+ top5.update(acc5[0], images.size(0))
+
+ # measure elapsed time
+ batch_time.update(time.time() - end)
+ end = time.time()
+
+ if i % args.print_freq == 0:
+ progress.display(i + 1)
+
batch_time = AverageMeter('Time', ':6.3f', Summary.NONE)
losses = AverageMeter('Loss', ':.4e', Summary.NONE)
top1 = AverageMeter('Acc@1', ':6.2f', Summary.AVERAGE)
top5 = AverageMeter('Acc@5', ':6.2f', Summary.AVERAGE)
progress = ProgressMeter(
- len(val_loader),
+ len(val_loader) + (args.distributed and (len(val_loader.sampler) * args.world_size < len(val_loader.dataset))),
[batch_time, losses, top1, top5],
prefix='Test: ')
# switch to evaluate mode
model.eval()
- with torch.no_grad():
- end = time.time()
- for i, (images, target) in enumerate(val_loader):
- if args.gpu is not None:
- images = images.cuda(args.gpu, non_blocking=True)
- if torch.cuda.is_available():
- target = target.cuda(args.gpu, non_blocking=True)
-
- # compute output
- output = model(images)
- loss = criterion(output, target)
-
- # measure accuracy and record loss
- acc1, acc5 = accuracy(output, target, topk=(1, 5))
- losses.update(loss.item(), images.size(0))
- top1.update(acc1[0], images.size(0))
- top5.update(acc5[0], images.size(0))
-
- # measure elapsed time
- batch_time.update(time.time() - end)
- end = time.time()
+ run_validate(val_loader)
+ if args.distributed:
+ top1.all_reduce()
+ top5.all_reduce()
- if i % args.print_freq == 0:
- progress.display(i)
+ if args.distributed and (len(val_loader.sampler) * args.world_size < len(val_loader.dataset)):
+ aux_val_dataset = Subset(val_loader.dataset,
+ range(len(val_loader.sampler) * args.world_size, len(val_loader.dataset)))
+ aux_val_loader = torch.utils.data.DataLoader(
+ aux_val_dataset, batch_size=args.batch_size, shuffle=False,
+ num_workers=args.workers, pin_memory=True)
+ run_validate(aux_val_loader, len(val_loader))
- progress.display_summary()
+ progress.display_summary()
return top1.avg
@@ -384,6 +406,12 @@ def update(self, val, n=1):
self.count += n
self.avg = self.sum / self.count
+ def all_reduce(self):
+ total = torch.FloatTensor([self.sum, self.count])
+ dist.all_reduce(total, dist.ReduceOp.SUM, async_op=False)
+ self.sum, self.count = total.tolist()
+ self.avg = self.sum / self.count
+
def __str__(self):
fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
return fmtstr.format(**self.__dict__)
| Add DistributedSampler for validation data loader in Imagenet example
In examples/imagenet/main.py, we only set a distributed sampler (i.e., `train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)`) for `train_loader`, while neglecting to set one for `val_loader`. As a result, the loss and accuracy metrics in the test logs below are exactly the same across GPUs, leading to redundant computation.
How about adding a `DistributedSampler` to the validation loader as well?
```
Test: [ 0/62] Time 12.870 (12.870) Loss 2.1374e+00 (2.1374e+00) Acc@1 0.00 ( 0.00) Acc@5 100.00 (100.00)
Test: [10/62] Time 0.461 ( 1.547) Loss 2.1963e+00 (2.1630e+00) Acc@1 0.00 ( 0.00) Acc@5 98.44 ( 99.72)
Test: [20/62] Time 0.352 ( 0.969) Loss 2.1709e+00 (2.1958e+00) Acc@1 0.00 ( 0.00) Acc@5 98.44 ( 73.07)
Test: [30/62] Time 0.487 ( 0.810) Loss 2.3123e+00 (2.2959e+00) Acc@1 0.00 ( 0.00) Acc@5 79.69 ( 62.75)
Test: [40/62] Time 0.345 ( 0.714) Loss 2.8967e+00 (2.3489e+00) Acc@1 0.00 ( 0.11) Acc@5 1.56 ( 60.90)
Test: [50/62] Time 0.485 ( 0.666) Loss 2.4475e+00 (2.8809e+00) Acc@1 0.00 ( 12.56) Acc@5 0.00 ( 61.61)
Test: [60/62] Time 0.401 ( 0.620) Loss 2.2804e+00 (2.9429e+00) Acc@1 18.75 ( 11.96) Acc@5 21.88 ( 53.18)
* Acc@1 11.924 Acc@5 52.917
Test: [ 0/62] Time 12.978 (12.978) Loss 2.1374e+00 (2.1374e+00) Acc@1 0.00 ( 0.00) Acc@5 100.00 (100.00)
Test: [10/62] Time 0.297 ( 1.554) Loss 2.1963e+00 (2.1630e+00) Acc@1 0.00 ( 0.00) Acc@5 98.44 ( 99.72)
Test: [20/62] Time 0.150 ( 0.970) Loss 2.1709e+00 (2.1958e+00) Acc@1 0.00 ( 0.00) Acc@5 98.44 ( 73.07)
Test: [30/62] Time 0.267 ( 0.810) Loss 2.3123e+00 (2.2959e+00) Acc@1 0.00 ( 0.00) Acc@5 79.69 ( 62.75)
Test: [40/62] Time 0.145 ( 0.715) Loss 2.8967e+00 (2.3489e+00) Acc@1 0.00 ( 0.11) Acc@5 1.56 ( 60.90)
Test: [50/62] Time 0.247 ( 0.666) Loss 2.4475e+00 (2.8809e+00) Acc@1 0.00 ( 12.56) Acc@5 0.00 ( 61.61)
Test: [60/62] Time 0.164 ( 0.620) Loss 2.2804e+00 (2.9429e+00) Acc@1 18.75 ( 11.96) Acc@5 21.88 ( 53.18)
* Acc@1 11.924 Acc@5 52.917
```
Distribute validation set in imagenet example
also display progress only on rank 0, on both train and val
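The merged patch (above) also handles a subtlety the later comments hit: DistributedSampler must pad or drop samples so every rank gets an equal share, which skews the reported accuracy, so the patch uses drop_last and evaluates the leftover tail in an auxiliary loader, then all-reduces the meters. A minimal sketch of that metric aggregation, assuming the process group is initialized:
```python
import torch
import torch.distributed as dist

def global_average(local_sum, local_count):
    # Sum per-rank (sum, count) pairs so every rank ends up with the global mean,
    # mirroring AverageMeter.all_reduce() in the patch.
    total = torch.tensor([local_sum, local_count], dtype=torch.float32)
    dist.all_reduce(total, op=dist.ReduceOp.SUM)
    s, c = total.tolist()
    return s / c
```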
| I tried to do this in https://github.com/triomino/examples/blob/master/imagenet/main.py. A JoinableQueue is used to gather test results because I am not familiar with other multiprocessing APIs.
The program now works, but I am not sure whether it is correct or good practice. I am also waiting for the official version mentioned in https://github.com/pytorch/examples/issues/461.
Update: use dist.all_reduce() to collect testing results.
Update: DistributedSampler causes incorrect validation accuracy, as mentioned in https://github.com/pytorch/pytorch/issues/25162. | 2022-03-13T09:19:42 |
| 2022-03-13T09:19:42 |
|
pytorch/examples | 982 | pytorch__examples-982 | [
"771"
] | 0352380e6c066ed212e570d6fe74e3674f496258 | diff --git a/imagenet/main.py b/imagenet/main.py
--- a/imagenet/main.py
+++ b/imagenet/main.py
@@ -148,7 +148,7 @@ def main_worker(gpu, ngpus_per_node, args):
model.cuda(args.gpu)
# When using a single GPU per process and per
# DistributedDataParallel, we need to divide the batch size
- # ourselves based on the total number of GPUs we have
+ # ourselves based on the total number of GPUs of the current node.
args.batch_size = int(args.batch_size / ngpus_per_node)
args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
| Inconsistent batch size calculation and description in imagenet main.py
https://github.com/pytorch/examples/blob/b9f3b2ebb9464959bdbf0c3ac77124a704954828/imagenet/main.py#L146-L149
As shown above, `batch_size` should be adjusted by the *total number of GPUs we have*, which should be `world_size` rather than `ngpus_per_node`, according to the comment.
However, the description of `--batch-size` says it is the total batch size of a single node, as follows.
https://github.com/pytorch/examples/blob/b9f3b2ebb9464959bdbf0c3ac77124a704954828/imagenet/main.py#L39-L43
The inconsistent descriptions are confusing when trying to keep the same per-GPU batch size between multi-node and single-node runs.
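A worked example of the ambiguity (job sizes are hypothetical):
```python
# Suppose --batch-size 256 on 2 nodes x 4 GPUs, i.e. world_size = 8.
batch_size, ngpus_per_node, world_size = 256, 4, 8

per_gpu = batch_size // ngpus_per_node  # 64: the code divides by GPUs per *node*
global_batch = per_gpu * world_size     # 512: so the flag is a per-node total,
                                        # and the global batch grows with node count.
```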
| @yzs981130 would you like to make a PR for this fix? | 2022-03-17T01:55:28 |
|
pytorch/examples | 1,057 | pytorch__examples-1057 | [
"1056"
] | 32f0c1e1073913e75fa75b848bccdbf011b2a569 | diff --git a/imagenet/main.py b/imagenet/main.py
--- a/imagenet/main.py
+++ b/imagenet/main.py
@@ -88,6 +88,7 @@ def main():
random.seed(args.seed)
torch.manual_seed(args.seed)
cudnn.deterministic = True
+ cudnn.benchmark = False
warnings.warn('You have chosen to seed training. '
'This will turn on the CUDNN deterministic setting, '
'which can slow down your training considerably! '
@@ -204,7 +205,6 @@ def main_worker(gpu, ngpus_per_node, args):
else:
print("=> no checkpoint found at '{}'".format(args.resume))
- cudnn.benchmark = True
# Data loading code
if args.dummy:
| cudnn deterministic not guaranteed when seed already set in imagenet
## Context
* Pytorch version: not related
* Operating System and version: not related
## Your Environment
* Is it a CPU or GPU environment?: GPU
* Which example are you using: `imagenet/main.py` https://github.com/pytorch/examples/blob/32f0c1e1073913e75fa75b848bccdbf011b2a569/imagenet/main.py
## Expected Behavior
The whole training will be deterministic when the seed is set, according to the annotations here:
https://github.com/pytorch/examples/blob/32f0c1e1073913e75fa75b848bccdbf011b2a569/imagenet/main.py#L91-L95
## Current Behavior
The `cudnn.benchmark = True` here will indeed introduce nondeterminism, according to the [PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html#cuda-convolution-benchmarking)
https://github.com/pytorch/examples/blob/32f0c1e1073913e75fa75b848bccdbf011b2a569/imagenet/main.py#L207
I will send a pr soon.
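For reference, the seeding recipe the fix amounts to (a sketch; full determinism may also need torch.use_deterministic_algorithms() and seeded DataLoader workers, per the linked randomness notes):
```python
import random
import torch
import torch.backends.cudnn as cudnn

def seed_everything(seed):
    random.seed(seed)
    torch.manual_seed(seed)     # seeds CPU and all CUDA devices
    cudnn.deterministic = True  # restrict cuDNN to deterministic kernels
    cudnn.benchmark = False     # disable the nondeterministic autotuner
```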
| 2022-09-13T06:34:46 |
||
pytorch/examples | 1,084 | pytorch__examples-1084 | [
"1078"
] | 74a70e107dbfdc359a3b8c8f44994cfee38ff715 | diff --git a/distributed/ddp-tutorial-series/multinode.py b/distributed/ddp-tutorial-series/multinode.py
--- a/distributed/ddp-tutorial-series/multinode.py
+++ b/distributed/ddp-tutorial-series/multinode.py
@@ -37,7 +37,8 @@ def __init__(
self.model = DDP(self.model, device_ids=[self.local_rank])
def _load_snapshot(self, snapshot_path):
- snapshot = torch.load(snapshot_path)
+ loc = f"cuda:{self.gpu_id}"
+ snapshot = torch.load(snapshot_path, map_location=loc)
self.model.load_state_dict(snapshot["MODEL_STATE"])
self.epochs_run = snapshot["EPOCHS_RUN"]
print(f"Resuming training from snapshot at Epoch {self.epochs_run}")
| The GPU load is unbalanced
https://github.com/pytorch/examples/blob/2ee8d43dbe420be152fd5ce0d80b43b419a0e352/distributed/ddp-tutorial-series/multigpu_torchrun.py#L39
When I run the code and resume from an existing .pt file, the memory usage of GPU0 is significantly higher than that of the other GPUs.
It can be solved by adding a parameter "map_location".
`snapshot = torch.load(snapshot_path, map_location=torch.device('cuda', int(os.environ["LOCAL_RANK"])))`
## My Environment
cudatoolkit 10.2
pytorch 12.1
| @lianchengmingjue good catch! By default, torch.load() first loads the snapshot to CPU and then moves it to the device it was saved from (I guess that's GPU0). In this case, all ranks load the snapshot onto GPU0. We should always use "map_location" in torch.load() when loading files saved in another environment, because the file might have been saved from a GPU that doesn't exist on the current host, causing a failure during loading. Please feel free to send a PR for the fix.
cc: @suraj813 | 2022-10-18T15:47:54 |
|
pytorch/examples | 1,098 | pytorch__examples-1098 | [
"1093"
] | e6cba0aa46b2a33b01207e1451e0cd10ca96c04c | diff --git a/distributed/minGPT-ddp/mingpt/trainer.py b/distributed/minGPT-ddp/mingpt/trainer.py
--- a/distributed/minGPT-ddp/mingpt/trainer.py
+++ b/distributed/minGPT-ddp/mingpt/trainer.py
@@ -111,7 +111,7 @@ def _run_batch(self, source, targets, train: bool = True) -> float:
return loss.item()
def _run_epoch(self, epoch: int, dataloader: DataLoader, train: bool = True):
- self.dataloader.sampler.set_epoch(epoch)
+ dataloader.sampler.set_epoch(epoch)
for iter, (source, targets) in enumerate(dataloader):
step_type = "Train" if train else "Eval"
source = source.to(self.local_rank)
| minGPT-ddp: AttributeError: 'Trainer' object has no attribute 'dataloader'
This error is raised when executing examples/distributed/minGPT-ddp/mingpt/main.py and trying to train minGPT.
Python version: main branch
## Possible Solution
113 def _run_epoch(self, epoch: int, dataloader: DataLoader, train: bool = True):
114 #self.dataloader.sampler.set_epoch(epoch)
115 dataloader.sampler.set_epoch(epoch)
## Steps to Reproduce
Just run main.py
## Failure Logs [if any]
Traceback (most recent call last):
File "/mnt/tier1/project/lxp/fmansouri/pytorch/examples/distributed/minGPT-ddp/mingpt/main.py", line 41, in <module>
main()
File "/home/users/fmansouri/.local/lib/python3.10/site-packages/hydra/main.py", line 90, in decorated_main
_run_hydra(
File "/home/users/fmansouri/.local/lib/python3.10/site-packages/hydra/_internal/utils.py", line 389, in _run_hydra
_run_app(
File "/home/users/fmansouri/.local/lib/python3.10/site-packages/hydra/_internal/utils.py", line 452, in _run_app
run_and_report(
File "/home/users/fmansouri/.local/lib/python3.10/site-packages/hydra/_internal/utils.py", line 216, in run_and_report
raise ex
File "/home/users/fmansouri/.local/lib/python3.10/site-packages/hydra/_internal/utils.py", line 213, in run_and_report
return func()
File "/home/users/fmansouri/.local/lib/python3.10/site-packages/hydra/_internal/utils.py", line 453, in <lambda>
lambda: hydra.run(
File "/home/users/fmansouri/.local/lib/python3.10/site-packages/hydra/_internal/hydra.py", line 132, in run
_ = ret.return_value
File "/home/users/fmansouri/.local/lib/python3.10/site-packages/hydra/core/utils.py", line 260, in return_value
raise self._return_value
File "/home/users/fmansouri/.local/lib/python3.10/site-packages/hydra/core/utils.py", line 186, in run_job
ret.return_value = task_function(task_cfg)
File "/mnt/tier1/project/lxp/fmansouri/pytorch/examples/distributed/minGPT-ddp/mingpt/main.py", line 35, in main
trainer.train()
File "/mnt/tier1/project/lxp/fmansouri/pytorch/examples/distributed/minGPT-ddp/mingpt/trainer.py", line 144, in train
self._run_epoch(epoch, self.train_loader, train=True)
File "/mnt/tier1/project/lxp/fmansouri/pytorch/examples/distributed/minGPT-ddp/mingpt/trainer.py", line 114, in _run_epoch
self.dataloader.sampler.set_epoch(epoch)
AttributeError: 'Trainer' object has no attribute 'dataloader'. Did you mean: 'test_loader'?
| 2022-11-29T15:46:01 |
||
pytorch/examples | 1,109 | pytorch__examples-1109 | [
"911"
] | d8456a36d1bbb22f72b003f59406a19a0a0547c3 | diff --git a/word_language_model/model.py b/word_language_model/model.py
--- a/word_language_model/model.py
+++ b/word_language_model/model.py
@@ -121,7 +121,7 @@ def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
self.encoder = nn.Embedding(ntoken, ninp)
self.ninp = ninp
- self.decoder = nn.Linear(ninp, ntoken)
+ self.decoder = nn.Linear(nhid, ntoken)
self.init_weights()
| word Language Model bug
self.decoder = nn.Linear(**ninp**, ntoken) in model.py line 124 should be "nhid"
| Would you like to make a PR for this? Just make sure that code still runs and updated README as needed | 2023-02-02T20:31:57 |
|
pytorch/examples | 1,114 | pytorch__examples-1114 | [
"1111"
] | 0252bda5b651b1e49594b91a61bc75f165431c9c | diff --git a/mnist_forward_forward/main.py b/mnist_forward_forward/main.py
new file mode 100644
--- /dev/null
+++ b/mnist_forward_forward/main.py
@@ -0,0 +1,183 @@
+# This code is based on the implementation of Mohammad Pezeshki available at
+# https://github.com/mohammadpz/pytorch_forward_forward and licensed under the MIT License.
+# Modifications/Improvements to the original code have been made by Vivek V Patel.
+
+import argparse
+import torch
+import torch.nn as nn
+from torchvision.datasets import MNIST
+from torchvision.transforms import Compose, ToTensor, Normalize, Lambda
+from torch.utils.data import DataLoader
+from torch.optim import Adam
+
+
+def get_y_neg(y):
+ y_neg = y.clone()
+ for idx, y_samp in enumerate(y):
+ allowed_indices = list(range(10))
+ allowed_indices.remove(y_samp.item())
+ y_neg[idx] = torch.tensor(allowed_indices)[
+ torch.randint(len(allowed_indices), size=(1,))
+ ].item()
+ return y_neg.to(device)
+
+
+def overlay_y_on_x(x, y, classes=10):
+ x_ = x.clone()
+ x_[:, :classes] *= 0.0
+ x_[range(x.shape[0]), y] = x.max()
+ return x_
+
+
+class Net(torch.nn.Module):
+ def __init__(self, dims):
+
+ super().__init__()
+ self.layers = []
+ for d in range(len(dims) - 1):
+ self.layers = self.layers + [Layer(dims[d], dims[d + 1]).to(device)]
+
+ def predict(self, x):
+ goodness_per_label = []
+ for label in range(10):
+ h = overlay_y_on_x(x, label)
+ goodness = []
+ for layer in self.layers:
+ h = layer(h)
+ goodness = goodness + [h.pow(2).mean(1)]
+ goodness_per_label += [sum(goodness).unsqueeze(1)]
+ goodness_per_label = torch.cat(goodness_per_label, 1)
+ return goodness_per_label.argmax(1)
+
+ def train(self, x_pos, x_neg):
+ h_pos, h_neg = x_pos, x_neg
+ for i, layer in enumerate(self.layers):
+ print("training layer: ", i)
+ h_pos, h_neg = layer.train(h_pos, h_neg)
+
+
+class Layer(nn.Linear):
+ def __init__(self, in_features, out_features, bias=True, device=None, dtype=None):
+ super().__init__(in_features, out_features, bias, device, dtype)
+ self.relu = torch.nn.ReLU()
+ self.opt = Adam(self.parameters(), lr=args.lr)
+ self.threshold = args.threshold
+ self.num_epochs = args.epochs
+
+ def forward(self, x):
+ x_direction = x / (x.norm(2, 1, keepdim=True) + 1e-4)
+ return self.relu(torch.mm(x_direction, self.weight.T) + self.bias.unsqueeze(0))
+
+ def train(self, x_pos, x_neg):
+ for i in range(self.num_epochs):
+ g_pos = self.forward(x_pos).pow(2).mean(1)
+ g_neg = self.forward(x_neg).pow(2).mean(1)
+ loss = torch.log(
+ 1
+ + torch.exp(
+ torch.cat([-g_pos + self.threshold, g_neg - self.threshold])
+ )
+ ).mean()
+ self.opt.zero_grad()
+ loss.backward()
+ self.opt.step()
+ if i % args.log_interval == 0:
+ print("Loss: ", loss.item())
+ return self.forward(x_pos).detach(), self.forward(x_neg).detach()
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ "--epochs",
+ type=int,
+ default=1000,
+ metavar="N",
+ help="number of epochs to train (default: 10)",
+ )
+ parser.add_argument(
+ "--lr",
+ type=float,
+ default=0.03,
+ metavar="LR",
+ help="learning rate (default: 0.03)",
+ )
+ parser.add_argument(
+ "--no_cuda", action="store_true", default=False, help="disables CUDA training"
+ )
+ parser.add_argument(
+ "--no_mps", action="store_true", default=False, help="disables MPS training"
+ )
+ parser.add_argument(
+ "--seed", type=int, default=1, metavar="S", help="random seed (default: 1)"
+ )
+ parser.add_argument(
+ "--save_model",
+ action="store_true",
+ default=False,
+ help="For saving the current Model",
+ )
+ parser.add_argument(
+ "--train_size", type=int, default=50000, help="size of training set"
+ )
+ parser.add_argument(
+ "--threshold", type=float, default=2, help="threshold for training"
+ )
+ parser.add_argument("--test_size", type=int, default=10000, help="size of test set")
+ parser.add_argument(
+ "--save-model",
+ action="store_true",
+ default=False,
+ help="For Saving the current Model",
+ )
+ parser.add_argument(
+ "--log-interval",
+ type=int,
+ default=10,
+ metavar="N",
+ help="how many batches to wait before logging training status",
+ )
+ args = parser.parse_args()
+ use_cuda = not args.no_cuda and torch.cuda.is_available()
+ use_mps = not args.no_mps and torch.backends.mps.is_available()
+ if use_cuda:
+ device = torch.device("cuda")
+ elif use_mps:
+ device = torch.device("mps")
+ else:
+ device = torch.device("cpu")
+
+ train_kwargs = {"batch_size": args.train_size}
+ test_kwargs = {"batch_size": args.test_size}
+
+ if use_cuda:
+ cuda_kwargs = {"num_workers": 1, "pin_memory": True, "shuffle": True}
+ train_kwargs.update(cuda_kwargs)
+ test_kwargs.update(cuda_kwargs)
+
+ transform = Compose(
+ [
+ ToTensor(),
+ Normalize((0.1307,), (0.3081,)),
+ Lambda(lambda x: torch.flatten(x)),
+ ]
+ )
+ train_loader = DataLoader(
+ MNIST("./data/", train=True, download=True, transform=transform), **train_kwargs
+ )
+ test_loader = DataLoader(
+ MNIST("./data/", train=False, download=True, transform=transform), **test_kwargs
+ )
+ net = Net([784, 500, 500])
+ x, y = next(iter(train_loader))
+ x, y = x.to(device), y.to(device)
+ x_pos = overlay_y_on_x(x, y)
+ y_neg = get_y_neg(y)
+ x_neg = overlay_y_on_x(x, y_neg)
+ net.train(x_pos, x_neg)
+ print("train error:", 1.0 - net.predict(x).eq(y).float().mean().item())
+ x_te, y_te = next(iter(test_loader))
+ x_te, y_te = x_te.to(device), y_te.to(device)
+ if args.save_model:
+ torch.save(net.state_dict(), "mnist_ff.pt")
+ print("test error:", 1.0 - net.predict(x_te).eq(y_te).float().mean().item())
| 🚀 Feature request / I want to contribute an algorithm
## Is your feature request related to a problem? Please describe.
Currently, PyTorch/examples does not have an implementation of the [forward-forward algorithm](https://arxiv.org/abs/2212.13345). This algorithm is a new learning procedure for neural networks and a promising approach to training them. It is also becoming popular because it was proposed by Geoffrey Hinton, the father of deep learning. Its inclusion in PyTorch/examples would make it more accessible to a wider community of researchers and practitioners, and I would like to contribute it❤️. I've implemented this algorithm in my local notebook in pure PyTorch❤️. I am new here, so please let me know how I can contribute this algorithm to this repo.
Thanks,
Vivek
## Describe the solution
The solution is to implement/add the forward-forward algorithm in PyTorch/examples. This would include writing the code for the algorithm, as well as any docs or tutorial additions to the existing codebase.
## Describe alternative solutions
[https://keras.io/examples/vision/forwardforward/](https://keras.io/examples/vision/forwardforward/)
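For readers skimming the patch above: each layer's objective is a softplus margin on its "goodness" (the mean squared activation), pushed above a threshold for positive inputs and below it for negative ones. An equivalent rewrite of the loss in the patch (log(1 + exp(x)) is softplus):
```python
import torch
import torch.nn.functional as F

def ff_layer_loss(h_pos, h_neg, theta):
    # Goodness = per-sample mean of squared activations, as in the patch.
    g_pos = h_pos.pow(2).mean(1)
    g_neg = h_neg.pow(2).mean(1)
    # Positives are pushed above theta, negatives below it.
    return torch.cat([F.softplus(theta - g_pos),
                      F.softplus(g_neg - theta)]).mean()
```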
| hi vivek, this sounds like a good example to have. would love to see your contribution. | 2023-02-15T14:12:11 |
|
pytorch/examples | 1,189 | pytorch__examples-1189 | [
"1188"
] | 13009eff7a80ebcf6ae89ed217d5d176bd3e019d | diff --git a/mnist_hogwild/main.py b/mnist_hogwild/main.py
--- a/mnist_hogwild/main.py
+++ b/mnist_hogwild/main.py
@@ -30,7 +30,9 @@
parser.add_argument('--cuda', action='store_true', default=False,
help='enables CUDA training')
parser.add_argument('--mps', action='store_true', default=False,
- help='enables macOS GPU training')
+ help='enables macOS GPU training')
+parser.add_argument('--save_model', action='store_true', default=False,
+ help='save the trained model to state_dict')
parser.add_argument('--dry-run', action='store_true', default=False,
help='quickly check a single pass')
@@ -96,5 +98,8 @@ def forward(self, x):
for p in processes:
p.join()
+ if args.save_model:
+ torch.save(model.state_dict(), "MNIST_hogwild.pt")
+
# Once training is complete, we can test the model
test(args, model, device, dataset2, kwargs)
| Add `save_model` arg to `mnist_hogwild` example
Currently the example doesn't support the `--save_model` argument like the other examples do.
| 2023-09-01T03:11:17 |
||
ansible-collections/community.vmware | 47 | ansible-collections__community.vmware-47 | [
"43"
] | ed70749f4e684dc30a9ccd2e2a6efbbc385988c2 | diff --git a/plugins/modules/vmware_cluster.py b/plugins/modules/vmware_cluster.py
--- a/plugins/modules/vmware_cluster.py
+++ b/plugins/modules/vmware_cluster.py
@@ -60,12 +60,16 @@
enable_drs:
description:
- If set to C(yes), will enable DRS when the cluster is created.
+ - Use C(enable_drs) of M(vmware_cluster_drs) instead.
+ - Deprecated option, will be removed in version 2.12.
type: bool
default: 'no'
drs_enable_vm_behavior_overrides:
description:
- Determines whether DRS Behavior overrides for individual virtual machines are enabled.
- If set to C(True), overrides C(drs_default_vm_behavior).
+ - Use C(drs_enable_vm_behavior_overrides) of M(vmware_cluster_drs) instead.
+ - Deprecated option, will be removed in version 2.12.
type: bool
default: True
drs_default_vm_behavior:
@@ -77,16 +81,22 @@
for the placement with a host. vCenter should not implement the recommendations automatically.
- If set to C(fullyAutomated), then vCenter should automate both the migration of virtual machines
and their placement with a host at power on.
+ - Use C(drs_default_vm_behavior) of M(vmware_cluster_drs) instead.
+ - Deprecated option, will be removed in version 2.12.
default: fullyAutomated
choices: [ fullyAutomated, manual, partiallyAutomated ]
drs_vmotion_rate:
description:
- Threshold for generated ClusterRecommendations.
+ - Use C(drs_vmotion_rate) of M(vmware_cluster_drs) instead.
+ - Deprecated option, will be removed in version 2.12.
default: 3
choices: [ 1, 2, 3, 4, 5 ]
enable_ha:
description:
- If set to C(yes) will enable HA when the cluster is created.
+ - Use C(enable_ha) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
type: bool
default: 'no'
ha_host_monitoring:
@@ -95,6 +105,8 @@
- If set to C(enabled), HA restarts virtual machines after a host fails.
- If set to C(disabled), HA does not restart virtual machines after a host fails.
- If C(enable_ha) is set to C(no), then this value is ignored.
+ - Use C(ha_host_monitoring) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
choices: [ 'enabled', 'disabled' ]
default: 'enabled'
ha_vm_monitoring:
@@ -104,6 +116,8 @@
- If set to C(vmMonitoringDisabled), virtual machine health monitoring is disabled.
- If set to C(vmMonitoringOnly), HA response to virtual machine heartbeat failure.
- If C(enable_ha) is set to C(no), then this value is ignored.
+ - Use C(ha_vm_monitoring) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
choices: ['vmAndAppMonitoring', 'vmMonitoringOnly', 'vmMonitoringDisabled']
default: 'vmMonitoringDisabled'
ha_failover_level:
@@ -111,11 +125,15 @@
- Number of host failures that should be tolerated, still guaranteeing sufficient resources to
restart virtual machines on available hosts.
- Accepts integer values only.
+ - Use C(slot_based_admission_control), C(reservation_based_admission_control) or C(failover_host_admission_control) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
default: 2
ha_admission_control_enabled:
description:
- Determines if strict admission control is enabled.
- It is recommended to set this parameter to C(True), please refer documentation for more details.
+ - Use C(slot_based_admission_control), C(reservation_based_admission_control) or C(failover_host_admission_control) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
default: True
type: bool
ha_vm_failure_interval:
@@ -124,6 +142,8 @@
if no heartbeat has been received.
- This setting is only valid if C(ha_vm_monitoring) is set to, either C(vmAndAppMonitoring) or C(vmMonitoringOnly).
- Unit is seconds.
+ - Use C(ha_vm_failure_interval) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
default: 30
ha_vm_min_up_time:
description:
@@ -131,12 +151,16 @@
the virtual machine has been powered on.
- This setting is only valid if C(ha_vm_monitoring) is set to, either C(vmAndAppMonitoring) or C(vmMonitoringOnly).
- Unit is seconds.
+ - Use C(ha_vm_min_up_time) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
default: 120
ha_vm_max_failures:
description:
- Maximum number of failures and automated resets allowed during the time
that C(ha_vm_max_failure_window) specifies.
- This setting is only valid if C(ha_vm_monitoring) is set to, either C(vmAndAppMonitoring) or C(vmMonitoringOnly).
+ - Use C(ha_vm_max_failures) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
default: 3
ha_vm_max_failure_window:
description:
@@ -145,6 +169,8 @@
- This setting is only valid if C(ha_vm_monitoring) is set to, either C(vmAndAppMonitoring) or C(vmMonitoringOnly).
- Unit is seconds.
- Default specifies no failure window.
+ - Use C(ha_vm_max_failure_window) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
default: -1
ha_restart_priority:
description:
@@ -158,17 +184,23 @@
when there is insufficient capacity on hosts to meet all virtual machine needs.
- If set to C(low), then virtual machine with this priority have a lower chance of powering on after a failure,
when there is insufficient capacity on hosts to meet all virtual machine needs.
+ - Use C(ha_restart_priority) of M(vmware_cluster_ha) instead.
+ - Deprecated option, will be removed in version 2.12.
default: 'medium'
choices: [ 'disabled', 'high', 'low', 'medium' ]
enable_vsan:
description:
- If set to C(yes) will enable vSAN when the cluster is created.
+ - Use C(enable_vsan) of M(vmware_cluster_vsan) instead.
+ - Deprecated option, will be removed in version 2.12.
type: bool
default: 'no'
vsan_auto_claim_storage:
description:
- Determines whether the VSAN service is configured to automatically claim local storage
on VSAN-enabled hosts in the cluster.
+ - Use C(vsan_auto_claim_storage) of M(vmware_cluster_vsan) instead.
+ - Deprecated option, will be removed in version 2.12.
type: bool
default: False
state:
@@ -176,6 +208,10 @@
- Create C(present) or remove C(absent) a VMware vSphere cluster.
choices: [ absent, present ]
default: present
+seealso:
+- module: vmware_cluster_drs
+- module: vmware_cluster_ha
+- module: vmware_cluster_vsan
extends_documentation_fragment:
- community.vmware.vmware.documentation
 | Document `vmware_cluster` parameters that will be deprecated and removed in 2.12
##### SUMMARY
Since 2.9 (I think), we face a deprecated message when using the `vmware_cluster` module.
```
[DEPRECATION WARNING]: Configuring HA using vmware_cluster module is deprecated and will be removed in version 2.12. Please use vmware_cluster_ha module for the new functionality.. This feature will be removed in version 2.12. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Configuring DRS using vmware_cluster module is deprecated and will be removed in version 2.12. Please use vmware_cluster_drs module for the new functionality.. This feature will be removed in version 2.12. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
```
As far as I understand the messages and the discussion ansible/ansible#58023 and the associated PR ansible/ansible#58468, it's not the entire `vmware_cluster` module that will become deprecated, but only the arguments used to configure HA, DRS and vSAN.
Like it's done for the [`ip_address` param of `vmware_vmkernel`](https://docs.ansible.com/ansible/latest/modules/vmware_vmkernel_module.html#parameter-ip_address), I think it would be great to add a comment for each soon-to-be-deprecated parameter. That way it will be easier for everybody to identify whether a playbook needs updating.
I assume that all parameters starting with `[drs_|ha_|vsan]` will be deprecated, but it's not clear to me whether the same applies to `enable_[drs|ha|vsan]` …
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
- vmware_cluster
- vmware_cluster_drs
- vmware_cluster_ha
- vmware_cluster_vsan
##### ANSIBLE VERSION
```paste below
ansible 2.9.5
config file = /home/myuser/Projects/313/etc/ansible.cfg
configured module search path = ['/home/myuser/Projects/313/library']
ansible python module location = /home/myuser/.virtualenvs/313/lib/python3.7/site-packages/ansible
executable location = /home/myuser/.virtualenvs/313/bin/ansible
python version = 3.7.6 (default, Dec 19 2019, 22:50:01) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/home/myuser/Projects/313/etc/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/myuser/Projects/313/etc/ansible.cfg) = $HOME/.ansible/facts/
CACHE_PLUGIN_TIMEOUT(/home/myuser/Projects/313/etc/ansible.cfg) = 3600
DEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = ['/home/myuser/.virtualenvs/313/lib/python3.7/site-packages/ara/plugins/actions']
DEFAULT_CALLBACK_PLUGIN_PATH(env: ANSIBLE_CALLBACK_PLUGINS) = ['/home/myuser/.virtualenvs/313/lib/python3.7/site-packages/ara/plugins/callbacks']
DEFAULT_CALLBACK_WHITELIST(/home/myuser/Projects/313/etc/ansible.cfg) = ['profile_tasks']
DEFAULT_FILTER_PLUGIN_PATH(env: ANSIBLE_FILTER_PLUGINS) = ['/home/myuser/Projects/313/plugins/filter']
DEFAULT_GATHERING(/home/myuser/Projects/313/etc/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/myuser/Projects/313/etc/ansible.cfg) = ['/home/myuser/Projects/313/inventories/T40', '/home/myuser/Projects/313/inventories/setupenv']
DEFAULT_LOG_PATH(/home/myuser/Projects/313/etc/ansible.cfg) = /home/myuser/.ansible/log/ansible.log
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = ['/home/myuser/Projects/313/library']
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = ['/home/myuser/Projects/313/roles.galaxy', '/home/myuser/Projects/313/roles']
DEFAULT_STDOUT_CALLBACK(/home/myuser/Projects/313/etc/ansible.cfg) = yaml
GALAXY_ROLE_SKELETON(env: ANSIBLE_GALAXY_ROLE_SKELETON) = /home/myuser/Projects/313/etc/skel/default
HOST_KEY_CHECKING(/home/myuser/Projects/313/etc/ansible.cfg) = False
```
##### OS / ENVIRONMENT
na
##### ADDITIONAL INFORMATION
na
| Moved from ansible/ansible ([#68041](https://github.com/ansible/ansible/issues/68041)) to ansible-collections/vmware.
@xenlo Thanks for providing information and raising your concern. AFAIK, `enable_[ha|drs|vsan]` will also be deprecated along with the other options (since they are related). I think @mariolenz would be the right person to comment on this.
Thanks.
I'll have a look at this. | 2020-03-09T18:37:30 |
|
ansible-collections/community.vmware | 66 | ansible-collections__community.vmware-66 | [
"59"
] | dbd6032e1c9e5d437ad5820ddaf0854ed931f3f2 | diff --git a/plugins/modules/vmware_host_active_directory.py b/plugins/modules/vmware_host_active_directory.py
--- a/plugins/modules/vmware_host_active_directory.py
+++ b/plugins/modules/vmware_host_active_directory.py
@@ -162,14 +162,13 @@ def ensure(self):
results['result'][host.name]['membership_state'] = active_directory_info.domainMembershipStatus
results['result'][host.name]['joined_domain'] = active_directory_info.joinedDomain
results['result'][host.name]['trusted_domains'] = active_directory_info.trustedDomain
- msg = "Host is joined to AD domain, but "
+ msg = host.name + " is joined to AD domain, but "
if active_directory_info.domainMembershipStatus == 'clientTrustBroken':
msg += "the client side of the trust relationship is broken"
elif active_directory_info.domainMembershipStatus == 'inconsistentTrust':
msg += "unexpected domain controller responded"
elif active_directory_info.domainMembershipStatus == 'noServers':
- msg += "the host thinks it's part of a domain and " \
- "no domain controllers could be reached to confirm"
+ msg += "no domain controllers could be reached to confirm"
elif active_directory_info.domainMembershipStatus == 'serverTrustBroken':
msg += "the server side of the trust relationship is broken (or bad machine password)"
elif active_directory_info.domainMembershipStatus == 'otherProblem':
@@ -177,6 +176,7 @@ def ensure(self):
elif active_directory_info.domainMembershipStatus == 'unknown':
msg += "the Active Directory integration provider does not support domain trust checks"
results['result'][host.name]['msg'] = msg
+ self.module.fail_json(msg=msg)
# Enable and join AD domain
else:
if self.module.check_mode:
| vmware_host_active_directory: Problems with corner cases
##### SUMMARY
When there's a problem with the domain membership of a host, the module always reports a `change` but doesn't do anything (lines 160ff).
We ran into this issue when we accidentally removed the computer account of an existing ESXi host. Our playbook reported changing the AD membership over and over again, which was a bit weird. After all, after the first change I'd expect an `OK` on all later runs.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_host_active_directory
##### ANSIBLE VERSION
```
ansible 2.9.6
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Nov 14 2019, 00:43:46) [GCC 7.3.0]
```
##### CONFIGURATION
```
```
##### STEPS TO REPRODUCE
Join an ESXi host to AD, remove the computer account and run the task to join AD several times. It always reports `changed` but nothing happens.
##### EXPECTED RESULTS
I'm not sure... there are so many different problems that can occur (`clientTrustBroken`, `inconsistentTrust`, `noServers`, `serverTrustBroken` or even `otherProblem` or `unknown`) that it's probably impossible to deal with them all. Maybe the module should simply fail in these cases with an appropriate error message, something like "The ESXi host seems to be an AD member but reports an `unknown` problem". This would make it pretty obvious that manual troubleshooting is needed.
##### ACTUAL RESULTS
The module reports a change but doesn't do anything.
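
For reference, the merged fix takes the suggested route: instead of flagging a change, the module now builds a state-specific message and fails. A minimal Python sketch of that dispatch (assuming `module` is an AnsibleModule instance and the status string comes from the host's `domainMembershipStatus`, as in the patch above):

```python
# Sketch of the patched behaviour: map each broken membership state to a
# human-readable message and fail instead of endlessly reporting "changed".
BROKEN_STATES = {
    'clientTrustBroken': "the client side of the trust relationship is broken",
    'inconsistentTrust': "unexpected domain controller responded",
    'noServers': "no domain controllers could be reached to confirm",
    'serverTrustBroken': "the server side of the trust relationship is broken (or bad machine password)",
    'otherProblem': "there are some problems with the domain membership",
    'unknown': "the Active Directory integration provider does not support domain trust checks",
}

def check_membership(module, host_name, status):
    # Any status in BROKEN_STATES needs manual troubleshooting, so fail loudly.
    if status in BROKEN_STATES:
        module.fail_json(msg="%s is joined to AD domain, but %s" % (host_name, BROKEN_STATES[status]))
```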
| I agree with you @mariolenz. This should not happen. | 2020-03-24T16:57:13 |
|
ansible-collections/community.vmware | 114 | ansible-collections__community.vmware-114 | [
"113"
] | 0670cd5272a6e7d7581b53776fbae6a9d8bd1521 | diff --git a/plugins/modules/vmware_content_deploy_ovf_template.py b/plugins/modules/vmware_content_deploy_ovf_template.py
new file mode 100644
--- /dev/null
+++ b/plugins/modules/vmware_content_deploy_ovf_template.py
@@ -0,0 +1,235 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2020, Lev Goncharov <[email protected]>
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+ANSIBLE_METADATA = {
+ 'metadata_version': '1.1',
+ 'status': ['preview'],
+ 'supported_by': 'community'
+}
+
+DOCUMENTATION = r'''
+---
+module: vmware_content_deploy_ovf_template
+short_description: Deploy Virtual Machine from ovf template stored in content library.
+description:
+- Module to deploy virtual machine from ovf template in content library.
+- All variables and VMware object names are case sensitive.
+author:
+- Lev Goncharv (@ultral)
+notes:
+- Tested on vSphere 6.7
+requirements:
+- python >= 2.6
+- PyVmomi
+- vSphere Automation SDK
+options:
+ ovf_template:
+ description:
+ - The name of OVF template from which VM to be deployed.
+ type: str
+ required: True
+ aliases: ['ovf', 'template_src']
+ name:
+ description:
+ - The name of the VM to be deployed.
+ type: str
+ required: True
+ aliases: ['vm_name']
+ datacenter:
+ description:
+ - Name of the datacenter, where VM to be deployed.
+ type: str
+ required: True
+ datastore:
+ description:
+ - Name of the datastore to store deployed VM and disk.
+ type: str
+ required: True
+ folder:
+ description:
+ - Name of the folder in datacenter in which to place deployed VM.
+ type: str
+ required: True
+ host:
+ description:
+ - Name of the ESX Host in datacenter in which to place deployed VM.
+ type: str
+ required: True
+ resource_pool:
+ description:
+ - Name of the resourcepool in datacenter in which to place deployed VM.
+ type: str
+ required: False
+ cluster:
+ description:
+ - Name of the cluster in datacenter in which to place deployed VM.
+ type: str
+ required: False
+extends_documentation_fragment: vmware_rest_client.documentation
+'''
+
+EXAMPLES = r'''
+- name: Deploy Virtual Machine from OVF template in content library
+ vmware_content_deploy_ovf_template:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ ovf_template: rhel_test_template
+ datastore: Shared_NFS_Volume
+ folder: vm
+ datacenter: Sample_DC_1
+ name: Sample_VM
+ resource_pool: test_rp
+ validate_certs: False
+ delegate_to: localhost
+'''
+
+RETURN = r'''
+vm_deploy_info:
+ description: Virtual machine deployment message and vm_id
+ returned: on success
+ type: dict
+ sample: {
+ "msg": "Deployed Virtual Machine 'Sample_VM'.",
+ "vm_id": "vm-1009"
+ }
+'''
+
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.vmware_rest_client import VmwareRestClient
+from ansible.module_utils.vmware import PyVmomi
+
+HAS_VAUTOMATION_PYTHON_SDK = False
+try:
+ from com.vmware.vcenter.ovf_client import LibraryItem
+ HAS_VAUTOMATION_PYTHON_SDK = True
+except ImportError:
+ pass
+
+
+class VmwareContentDeployOvfTemplate(VmwareRestClient):
+ def __init__(self, module):
+ """Constructor."""
+ super(VmwareContentDeployOvfTemplate, self).__init__(module)
+ self.ovf_template_name = self.params.get('ovf_template')
+ self.vm_name = self.params.get('name')
+ self.datacenter = self.params.get('datacenter')
+ self.datastore = self.params.get('datastore')
+ self.folder = self.params.get('folder')
+ self.resourcepool = self.params.get('resource_pool')
+ self.cluster = self.params.get('cluster')
+ self.host = self.params.get('host')
+
+ def deploy_vm_from_ovf_template(self):
+ # Find the datacenter by the given datacenter name
+ self.datacenter_id = self.get_datacenter_by_name(datacenter_name=self.datacenter)
+ if not self.datacenter_id:
+ self.module.fail_json(msg="Failed to find the datacenter %s" % self.datacenter)
+ # Find the datastore by the given datastore name
+ self.datastore_id = self.get_datastore_by_name(self.datacenter, self.datastore)
+ if not self.datastore_id:
+ self.module.fail_json(msg="Failed to find the datastore %s" % self.datastore)
+ # Find the LibraryItem (Template) by the given LibraryItem name
+ self.library_item_id = self.get_library_item_by_name(self.ovf_template_name)
+ if not self.library_item_id:
+ self.module.fail_json(msg="Failed to find the library Item %s" % self.ovf_template_name)
+ # Find the folder by the given folder name
+ self.folder_id = self.get_folder_by_name(self.datacenter, self.folder)
+ if not self.folder_id:
+ self.module.fail_json(msg="Failed to find the folder %s" % self.folder)
+ # Find the Host by given HostName
+ self.host_id = self.get_host_by_name(self.datacenter, self.host)
+ if not self.host_id:
+ self.module.fail_json(msg="Failed to find the Host %s" % self.host)
+ # Find the resourcepool by the given resourcepool name
+ self.resourcepool_id = None
+ if self.resourcepool:
+ self.resourcepool_id = self.get_resource_pool_by_name(self.datacenter, self.resourcepool)
+ if not self.resourcepool_id:
+ self.module.fail_json(msg="Failed to find the resource_pool %s" % self.resourcepool)
+ # Find the Cluster by the given Cluster name
+ self.cluster_id = None
+ if self.cluster:
+ self.cluster_id = self.get_cluster_by_name(self.datacenter, self.cluster)
+ if not self.cluster_id:
+ self.module.fail_json(msg="Failed to find the Cluster %s" % self.cluster)
+
+ deployment_target = LibraryItem.DeploymentTarget(
+ resource_pool_id=self.resourcepool_id,
+ folder_id=self.folder_id
+ )
+
+ self.ovf_summary = self.api_client.vcenter.ovf.LibraryItem.filter(
+ ovf_library_item_id=self.library_item_id,
+ target=deployment_target)
+
+ self.deploy_spec = LibraryItem.ResourcePoolDeploymentSpec(
+ name=self.vm_name,
+ annotation=self.ovf_summary.annotation,
+ accept_all_eula=True,
+ network_mappings=None,
+ storage_mappings=None,
+ storage_provisioning=None,
+ storage_profile_id=None,
+ locale=None,
+ flags=None,
+ additional_parameters=None,
+ default_datastore_id=None)
+
+ result = self.api_client.vcenter.ovf.LibraryItem.deploy(self.library_item_id, deployment_target, self.deploy_spec)
+
+ if result.succeeded:
+ self.module.exit_json(
+ changed=True,
+ vm_deploy_info=dict(
+ msg="Deployed Virtual Machine '%s'." % self.vm_name,
+ vm_id=result.resource_id.id,
+ )
+ )
+ self.module.exit_json(changed=False,
+ vm_deploy_info=dict(msg="Virtual Machine deployment failed", vm_id=''))
+
+
+def main():
+ argument_spec = VmwareRestClient.vmware_client_argument_spec()
+ argument_spec.update(
+ ovf_template=dict(type='str', aliases=['template_src', 'ovf'], required=True),
+ name=dict(type='str', required=True, aliases=['vm_name']),
+ datacenter=dict(type='str', required=True),
+ datastore=dict(type='str', required=True),
+ folder=dict(type='str', required=True),
+ host=dict(type='str', required=True),
+ resource_pool=dict(type='str', required=False),
+ cluster=dict(type='str', required=False),
+ )
+ module = AnsibleModule(argument_spec=argument_spec,
+ supports_check_mode=True)
+ result = {'failed': False, 'changed': False}
+ pyv = PyVmomi(module=module)
+ vm = pyv.get_vm()
+ if vm:
+ module.exit_json(
+ changed=False,
+ vm_deploy_info=dict(
+ msg="Virtual Machine '%s' already Exists." % module.params['name'],
+ vm_id=vm._moId,
+ )
+ )
+ vmware_contentlib_create = VmwareContentDeployOvfTemplate(module)
+ if module.check_mode:
+ result.update(
+ vm_name=module.params['name'],
+ changed=True,
+ desired_operation='Create VM with PowerOff State',
+ )
+ module.exit_json(**result)
+ vmware_contentlib_create.deploy_vm_from_ovf_template()
+
+
+if __name__ == '__main__':
+ main()
 | vmware_content_deploy_template doesn't support creating a VM from OVF templates
##### SUMMARY
vmware_content_deploy_template doesn't support creating a VM from OVF templates
##### ISSUE TYPE
- Feature Idea
##### ADDITIONAL INFORMATION
```yaml
$ ansible --version
ansible 2.9.6.post0
config file = None
configured module search path = [u'/home/lgonchar/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/lgonchar/ansible/lib/ansible
executable location = /home/lgonchar/ansible/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
$ ansible-playbook ../some_repo/ansible/playbooks/sandbox/create_vm_vcenter.yml
....
id='com.vmware.vdcs.vmtx-main.invalid_item_type',
default_message="The library item 'Packer_centos7' (ID: 4f31f87a-9ebd-4b64-97a3-2ed77ca87fcb) has type 'ovf', but needs to be of type 'vm-template'.",
```
| 2020-04-13T13:51:13 |
||
ansible-collections/community.vmware | 145 | ansible-collections__community.vmware-145 | [
"143"
] | 191c9cc213489d558b5c1406db8b3928410f38df | diff --git a/plugins/modules/vmware_dvs_portgroup_find.py b/plugins/modules/vmware_dvs_portgroup_find.py
--- a/plugins/modules/vmware_dvs_portgroup_find.py
+++ b/plugins/modules/vmware_dvs_portgroup_find.py
@@ -141,7 +141,7 @@ def vlan_match(self, pgup, userup, vlanlst):
for ln in vlanlst:
if '-' in ln:
arr = ln.split('-')
- if arr[0] < self.vlan and self.vlan < arr[1]:
+ if int(arr[0]) < self.vlan and self.vlan < int(arr[1]):
res = True
elif ln == str(self.vlan):
res = True
| vmware_dvs_portgroup_find TypeError: '<' not supported between instances of 'str' and 'int'
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_dvs_portgroup_find
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
docker -> ansible/awx_task:11
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Confirm if vlan 15 is present
vmware_dvs_portgroup_find:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
vlanid: '15'
validate_certs: no
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```paste below
"msg": {
"changed": false,
"dvs_portgroups": [
{
"name": "Test_1",
"trunk": false,
"pvlan": false,
"vlan_id": "15",
"dvswitch": "myDVSwitch"
},
{
"name": "My-Uplinks",
"trunk": true,
"pvlan": false,
"vlan_id": "0-4094",
"dvswitch": "myDVSwitch"
},
...
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
"msg": {
"module_stdout": "",
"module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1587460742.276308-276582575233091/AnsiballZ_vmware_dvs_portgroup_find.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1587460742.276308-276582575233091/AnsiballZ_vmware_dvs_portgroup_find.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1587460742.276308-276582575233091/AnsiballZ_vmware_dvs_portgroup_find.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_dvs_portgroup_find', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_vmware_dvs_portgroup_find_payload_7hp8sciy/ansible_vmware_dvs_portgroup_find_payload.zip/ansible/modules/cloud/vmware/vmware_dvs_portgroup_find.py\", line 213, in <module>\n File \"/tmp/ansible_vmware_dvs_portgroup_find_payload_7hp8sciy/ansible_vmware_dvs_portgroup_find_payload.zip/ansible/modules/cloud/vmware/vmware_dvs_portgroup_find.py\", line 209, in main\n File \"/tmp/ansible_vmware_dvs_portgroup_find_payload_7hp8sciy/ansible_vmware_dvs_portgroup_find_payload.zip/ansible/modules/cloud/vmware/vmware_dvs_portgroup_find.py\", line 173, in get_dvs_portgroup\n File \"/tmp/ansible_vmware_dvs_portgroup_find_payload_7hp8sciy/ansible_vmware_dvs_portgroup_find_payload.zip/ansible/modules/cloud/vmware/vmware_dvs_portgroup_find.py\", line 141, in vlan_match\nTypeError: '<' not supported between instances of 'str' and 'int'\n",
"exception": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1587460742.276308-276582575233091/AnsiballZ_vmware_dvs_portgroup_find.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1587460742.276308-276582575233091/AnsiballZ_vmware_dvs_portgroup_find.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1587460742.276308-276582575233091/AnsiballZ_vmware_dvs_portgroup_find.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_dvs_portgroup_find', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_vmware_dvs_portgroup_find_payload_7hp8sciy/ansible_vmware_dvs_portgroup_find_payload.zip/ansible/modules/cloud/vmware/vmware_dvs_portgroup_find.py\", line 213, in <module>\n File \"/tmp/ansible_vmware_dvs_portgroup_find_payload_7hp8sciy/ansible_vmware_dvs_portgroup_find_payload.zip/ansible/modules/cloud/vmware/vmware_dvs_portgroup_find.py\", line 209, in main\n File \"/tmp/ansible_vmware_dvs_portgroup_find_payload_7hp8sciy/ansible_vmware_dvs_portgroup_find_payload.zip/ansible/modules/cloud/vmware/vmware_dvs_portgroup_find.py\", line 173, in get_dvs_portgroup\n File \"/tmp/ansible_vmware_dvs_portgroup_find_payload_7hp8sciy/ansible_vmware_dvs_portgroup_find_payload.zip/ansible/modules/cloud/vmware/vmware_dvs_portgroup_find.py\", line 141, in vlan_match\nTypeError: '<' not supported between instances of 'str' and 'int'\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1,
"_ansible_no_log": false,
"changed": false
}
```
##### POSSIBLE SOLUTION?
cast as integer
```paste below
filename: vmware_dvs_portgroup_find.py
line 133ff:
def vlan_match(self, pgup, userup, vlanlst):
res = False
if pgup and userup:
return True
for ln in vlanlst:
if '-' in ln:
arr = ln.split('-')
if int(arr[0]) < self.vlan and self.vlan < int(arr[1]):
res = True
elif ln == str(self.vlan):
res = True
return res
```
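
The underlying cause: Python 3 forbids ordering comparisons between str and int (Python 2 allowed them silently), and the boundaries parsed out of a range string like '0-4094' are strings. A minimal reproduction of the bug and of the cast applied by the merged patch:

```python
vlan = 15
low, high = '0-4094'.split('-')   # both boundaries are strings

# low < vlan  # Python 3: TypeError: '<' not supported between 'str' and 'int'

# Cast before comparing, as the patch does:
if int(low) < vlan < int(high):
    print("VLAN %d falls inside the trunk range" % vlan)
```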
| 2020-04-22T16:46:09 |
||
ansible-collections/community.vmware | 214 | ansible-collections__community.vmware-214 | [
"212"
] | 24273235f07ff07ae962fb4d62345b435ae0215f | diff --git a/plugins/modules/vmware_guest_disk.py b/plugins/modules/vmware_guest_disk.py
--- a/plugins/modules/vmware_guest_disk.py
+++ b/plugins/modules/vmware_guest_disk.py
@@ -84,6 +84,9 @@
- ' - C(persistent) Changes are immediately and permanently written to the virtual disk. This is default.'
- ' - C(independent_persistent) Same as persistent, but not affected by snapshots.'
- ' - C(independent_nonpersistent) Changes to virtual disk are made to a redo log and discarded at power off, but not affected by snapshots.'
+ - ' - C(sharing) (bool): The sharing mode of the virtual disk. The default value is no sharing.'
+ - ' Setting C(sharing) means that multiple virtual machines can write to the virtual disk.'
+ - ' Sharing can only be set if C(type) is C(eagerzeroedthick).'
- ' - C(datastore) (string): Name of datastore or datastore cluster to be used for the disk.'
- ' - C(autoselect_datastore) (bool): Select the less used datastore. Specify only if C(datastore) is not specified.'
- ' - C(scsi_controller) (integer): SCSI controller number. Valid value range from 0 to 3.'
@@ -344,13 +347,14 @@ def create_scsi_controller(self, scsi_type, scsi_bus_number):
return scsi_ctl
@staticmethod
- def create_scsi_disk(scsi_ctl_key, disk_index, disk_mode, disk_filename):
+ def create_scsi_disk(scsi_ctl_key, disk_index, disk_mode, disk_filename, sharing):
"""
Create Virtual Device Spec for virtual disk
Args:
scsi_ctl_key: Unique SCSI Controller Key
disk_index: Disk unit number at which disk needs to be attached
disk_mode: Disk mode
+ sharing: Disk sharing mode
disk_filename: Path to the disk file on the datastore
Returns: Virtual Device Spec for virtual disk
@@ -361,6 +365,7 @@ def create_scsi_disk(scsi_ctl_key, disk_index, disk_mode, disk_filename):
disk_spec.device = vim.vm.device.VirtualDisk()
disk_spec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
disk_spec.device.backing.diskMode = disk_mode
+ disk_spec.device.backing.sharing = sharing
disk_spec.device.controllerKey = scsi_ctl_key
disk_spec.device.unitNumber = disk_index
@@ -416,6 +421,28 @@ def get_ioandshares_diskconfig(self, disk_spec, disk):
disk_spec.device.storageIOAllocation = io_disk_spec
return disk_spec
+ def get_sharing(self, disk, disk_type, disk_index):
+ """
+ Get the sharing mode of the virtual disk
+ Args:
+ disk: Virtual disk data object
+ disk_type: Disk type of the virtual disk
+ disk_index: Disk unit number at which disk needs to be attached
+
+ Returns:
+ sharing_mode: The sharing mode of the virtual disk
+
+ """
+ sharing = disk.get('sharing')
+ if sharing and disk_type != 'eagerzeroedthick':
+ self.module.fail_json(msg="Invalid 'sharing' mode specified for disk index [%s]. 'disk_mode'"
+ " must be 'eagerzeroedthick' when 'sharing'." % disk_index)
+ if sharing:
+ sharing_mode = 'sharingMultiWriter'
+ else:
+ sharing_mode = 'sharingNone'
+ return sharing_mode
+
def ensure_disks(self, vm_obj=None):
"""
Manage internal state of virtual machine disks
@@ -466,7 +493,7 @@ def ensure_disks(self, vm_obj=None):
scsi_controller = disk['scsi_controller'] + 1000 # VMware auto assign 1000 + SCSI Controller
if disk['disk_unit_number'] not in current_scsi_info[scsi_controller]['disks'] and disk['state'] == 'present':
# Add new disk
- disk_spec = self.create_scsi_disk(scsi_controller, disk['disk_unit_number'], disk['disk_mode'], disk['filename'])
+ disk_spec = self.create_scsi_disk(scsi_controller, disk['disk_unit_number'], disk['disk_mode'], disk['filename'], disk['sharing'])
if disk['filename'] is None:
disk_spec.device.capacityInKB = disk['size']
if disk['disk_type'] == 'thin':
@@ -480,6 +507,7 @@ def ensure_disks(self, vm_obj=None):
if disk['filename'] is not None:
disk_spec.device.backing.fileName = disk['filename']
disk_spec.device.backing.datastore = disk['datastore']
+ disk_spec.device.backing.sharing = disk['sharing']
disk_spec = self.get_ioandshares_diskconfig(disk_spec, disk)
self.config_spec.deviceChange.append(disk_spec)
disk_change = True
@@ -552,7 +580,8 @@ def sanitize_disk_inputs(self):
autoselect_datastore=True,
disk_unit_number=0,
scsi_controller=0,
- disk_mode='persistent')
+ disk_mode='persistent',
+ sharing=False)
# Check state
if 'state' in disk:
if disk['state'] not in ['absent', 'present']:
@@ -718,6 +747,9 @@ def sanitize_disk_inputs(self):
" 'disk_mode' value from ['persistent', 'independent_persistent', 'independent_nonpersistent']." % disk_index)
current_disk['disk_mode'] = temp_disk_mode
+ # Sharing mode of disk
+ current_disk['sharing'] = self.get_sharing(disk, disk_type, disk_index)
+
# SCSI Controller Type
scsi_contrl_type = disk.get('scsi_type', 'paravirtual').lower()
if scsi_contrl_type not in self.scsi_device_type.keys():
@@ -807,6 +839,7 @@ def gather_disk_facts(vm_obj):
backing_filename=disk.backing.fileName,
backing_datastore=disk.backing.datastore.name,
backing_disk_mode=disk.backing.diskMode,
+ backing_sharing=disk.backing.sharing,
backing_writethrough=disk.backing.writeThrough,
backing_thinprovisioned=disk.backing.thinProvisioned,
backing_eagerlyscrub=bool(disk.backing.eagerlyScrub),
| diff --git a/tests/integration/targets/vmware_guest_disk/tasks/main.yml b/tests/integration/targets/vmware_guest_disk/tasks/main.yml
--- a/tests/integration/targets/vmware_guest_disk/tasks/main.yml
+++ b/tests/integration/targets/vmware_guest_disk/tasks/main.yml
@@ -86,7 +86,7 @@
datacenter: "{{ dc1 }}"
validate_certs: no
name: "{{ virtual_machines[0].name }}"
- disk:
+ disk:
- size_gb: 1
type: eagerzeroedthick
datastore: "{{ rw_datastore }}"
@@ -106,7 +106,7 @@
assert:
that:
- test_custom_shares is changed
-
+
- name: create new disk with custom IO limits and shares in IO Limits
vmware_guest_disk:
hostname: "{{ vcenter_hostname }}"
@@ -115,7 +115,7 @@
datacenter: "{{ dc1 }}"
validate_certs: no
name: "{{ virtual_machines[0].name }}"
- disk:
+ disk:
- size_gb: 1
type: eagerzeroedthick
datastore: "{{ rw_datastore }}"
@@ -146,7 +146,7 @@
datacenter: "{{ dc1 }}"
validate_certs: no
name: "{{ virtual_machines[0].name }}"
- disk:
+ disk:
- size_gb: 2
type: eagerzeroedthick
datastore: "{{ rw_datastore }}"
@@ -177,7 +177,7 @@
datacenter: "{{ dc1 }}"
validate_certs: no
name: "{{ virtual_machines[0].name }}"
- disk:
+ disk:
- size_gb: 3
type: eagerzeroedthick
datastore: "{{ rw_datastore }}"
@@ -208,7 +208,7 @@
datacenter: "{{ dc1 }}"
validate_certs: no
name: "{{ virtual_machines[0].name }}"
- disk:
+ disk:
- size_gb: 4
type: eagerzeroedthick
datastore: "{{ rw_datastore }}"
@@ -280,3 +280,60 @@
assert:
that:
- test_recreate_disk is changed
+
+- name: create new disk with sharing (multi-writer) mode
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: no
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_persistent"
+ scsi_controller: 0
+ scsi_type: 'paravirtual'
+ size_gb: 1
+ state: present
+ type: eagerzeroedthick
+ sharing: True
+ unit_number: 6
+ register: test_create_disk_sharing
+
+- debug:
+ msg: "{{ test_create_disk_sharing }}"
+
+- name: assert that changes were made
+ assert:
+ that:
+ - test_create_disk_sharing is changed
+
+- name: create new disk with invalid disk type for sharing (multi-writer) mode
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: no
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_persistent"
+ scsi_controller: 0
+ scsi_type: 'paravirtual'
+ size_gb: 1
+ state: present
+ type: thin
+ unit_number: 5
+ sharing: True
+ register: test_create_disk_sharing_invalid
+ ignore_errors: True
+
+- debug:
+ msg: "{{ test_create_disk_sharing_invalid }}"
+
+- name: assert that changes were not made
+ assert:
+ that:
+ - not(test_create_disk_sharing_invalid is changed)
| vmware_guest_disk: add support for setting the sharing mode of virtual disks
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
This feature should allow setting the sharing mode of the virtual disk.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
plugins/modules/vmware_guest_disk.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Setting the sharing mode to multi-writer means that multiple virtual machines can write to the virtual disk. This sharing mode is allowed only for eagerly zeroed thick virtual disks [1].
References:
[1] vSphere WS API: https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.vm.device.VirtualDisk.FlatVer2BackingInfo.html
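
Because the multi-writer flag is only valid on eagerly zeroed thick disks, the implementation has to validate the combination before building the disk backing spec. A minimal sketch of that rule, mirroring the `get_sharing()` logic in the patch above:

```python
def sharing_mode(sharing, disk_type):
    """Map the boolean 'sharing' flag to a vSphere disk sharing string."""
    # vSphere rejects the multi-writer flag for any other disk type.
    if sharing and disk_type != 'eagerzeroedthick':
        raise ValueError("'sharing' requires disk type 'eagerzeroedthick'")
    return 'sharingMultiWriter' if sharing else 'sharingNone'
```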
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| 2020-05-31T22:51:37 |
|
ansible-collections/community.vmware | 218 | ansible-collections__community.vmware-218 | [
"230"
] | 85a2b8675044250a752e82a00f3b9d8b01cb78fe | diff --git a/plugins/modules/vmware_tag_manager.py b/plugins/modules/vmware_tag_manager.py
--- a/plugins/modules/vmware_tag_manager.py
+++ b/plugins/modules/vmware_tag_manager.py
@@ -47,7 +47,7 @@
description:
- Type of object to work with.
required: True
- choices: [ VirtualMachine, Datacenter, ClusterComputeResource, HostSystem, DistributedVirtualSwitch, DistributedVirtualPortgroup ]
+ choices: [ VirtualMachine, Datacenter, ClusterComputeResource, HostSystem, DistributedVirtualSwitch, DistributedVirtualPortgroup, Datastore, DatastoreCluster, ResourcePool, Folder ]
type: str
object_name:
description:
@@ -175,12 +175,25 @@ def __init__(self, module):
if self.object_type == 'VirtualMachine':
self.managed_object = self.pyv.get_vm_or_template(self.object_name)
+ if self.object_type == 'Folder':
+ self.managed_object = self.pyv.find_folder_by_name(self.object_name)
+
if self.object_type == 'Datacenter':
self.managed_object = self.pyv.find_datacenter_by_name(self.object_name)
+ if self.object_type == 'Datastore':
+ self.managed_object = self.pyv.find_datastore_by_name(self.object_name)
+
+ if self.object_type == 'DatastoreCluster':
+ self.managed_object = self.pyv.find_datastore_cluster_by_name(self.object_name)
+ self.object_type = 'StoragePod'
+
if self.object_type == 'ClusterComputeResource':
self.managed_object = self.pyv.find_cluster_by_name(self.object_name)
+ if self.object_type == 'ResourcePool':
+ self.managed_object = self.pyv.find_resource_pool_by_name(self.object_name)
+
if self.object_type == 'HostSystem':
self.managed_object = self.pyv.find_hostsystem_by_name(self.object_name)
@@ -302,7 +315,8 @@ def main():
object_name=dict(type='str', required=True),
object_type=dict(type='str', required=True, choices=['VirtualMachine', 'Datacenter', 'ClusterComputeResource',
'HostSystem', 'DistributedVirtualSwitch',
- 'DistributedVirtualPortgroup']),
+ 'DistributedVirtualPortgroup', 'Datastore', 'ResourcePool',
+ 'Folder', 'DatastoreCluster']),
)
module = AnsibleModule(argument_spec=argument_spec)
| vmware_tag_manager: allow tags to datastore clusters
##### SUMMARY
vmware_tag_manager: allow tags to be applied to datastore clusters, much like they can be to switches, VMs, etc.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_tag_manager
##### ADDITIONAL INFORMATION
As a VI admin, one commonly needs to apply tags to all items in VMware in order to track information about those items. Additionally, in the case of datastore clusters, these are often used to create storage policies within the environment.
```yaml
- name: "apply tags to datastore cluster"
vmware_tag_manager:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: false
tag_names: ["Storage:gold"]
object_name: "High-Performance-Datastore-Cluster"
object_type: "DatastoreCluster"
state: "present"
```
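
One detail worth noting from the patch: the vSphere API models a datastore cluster as a StoragePod managed object, so the module remaps the user-facing DatastoreCluster type after looking the object up. A condensed sketch of that dispatch (`pyv` stands for the connected PyVmomi helper used in the module; the finder names are those added by the patch):

```python
def find_managed_object(pyv, object_type, name):
    # Finder helpers as used in the patch above.
    finders = {
        'Datastore': pyv.find_datastore_by_name,
        'DatastoreCluster': pyv.find_datastore_cluster_by_name,
        'ResourcePool': pyv.find_resource_pool_by_name,
        'Folder': pyv.find_folder_by_name,
    }
    obj = finders[object_type](name)
    # Tagging identifies a datastore cluster by its API type 'StoragePod'.
    api_type = 'StoragePod' if object_type == 'DatastoreCluster' else object_type
    return obj, api_type
```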
| 2020-06-02T20:37:11 |
||
ansible-collections/community.vmware | 237 | ansible-collections__community.vmware-237 | [
"236"
] | d8da2a5ec707d8540bac740e1aa7785c1d00a19e | diff --git a/plugins/modules/vmware_vm_storage_policy.py b/plugins/modules/vmware_vm_storage_policy.py
new file mode 100644
--- /dev/null
+++ b/plugins/modules/vmware_vm_storage_policy.py
@@ -0,0 +1,357 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# Copyright: (c) 2020, Ansible Project
+# Copyright: (c) 2020, Dustin Scott <[email protected]>
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+DOCUMENTATION = r'''
+---
+module: vmware_vm_storage_policy
+short_description: Create vSphere storage policies
+description:
+- A vSphere storage policy defines metadata that describes storage requirements
+ for virtual machines and storage capabilities of storage providers.
+- Currently, only tag-based storage policy creation is supported.
+version_added: '2.10'
+author:
+- Dustin Scott (@scottd018)
+notes:
+- Tested on vSphere 6.5
+requirements:
+- python >= 2.7
+- PyVmomi
+options:
+ name:
+ description:
+ - Name of the storage policy to create, update, or delete.
+ required: True
+ type: str
+ description:
+ description:
+ - Description of the storage policy to create or update.
+ - This parameter is ignored when C(state=absent).
+ type: str
+ required: False
+ tag_category:
+ description:
+ - Name of the pre-existing tag category to assign to the storage policy.
+ - This parameter is ignored when C(state=absent).
+ - This parameter is required when C(state=present).
+ required: False
+ type: str
+ tag_name:
+ description:
+ - Name of the pre-existing tag to assign to the storage policy.
+ - This parameter is ignored when C(state=absent).
+ - This parameter is required when C(state=present).
+ required: False
+ type: str
+ tag_affinity:
+ description:
+ - If set to C(true), the storage policy enforces that virtual machines require the existence of a tag for datastore placement.
+ - If set to C(false), the storage policy enforces that virtual machines require the absence of a tag for datastore placement.
+ - This parameter is ignored when C(state=absent).
+ required: False
+ type: bool
+ default: True
+ state:
+ description:
+ - State of storage policy.
+ - If set to C(present), the storage policy is created.
+ - If set to C(absent), the storage policy is deleted.
+ default: present
+ choices: [ absent, present ]
+ type: str
+extends_documentation_fragment:
+- community.vmware.vmware.documentation
+extends_documentation_fragment: vmware.documentation
+'''
+
+EXAMPLES = r'''
+- name: Create or update a vSphere tag-based storage policy
+ community.vmware.vmware_vm_storage_policy:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: no
+ name: "vSphere storage policy"
+ description: "vSphere storage performance policy"
+ tag_category: "performance_tier"
+ tag_name: "gold"
+ tag_affinity: true
+ state: "present"
+ delegate_to: localhost
+
+- name: Remove a vSphere tag-based storage policy
+ community.vmware.vmware_vm_storage_policy:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: no
+ name: "vSphere storage policy"
+ state: "absent"
+ delegate_to: localhost
+'''
+
+RETURN = r'''
+vmware_vm_storage_policy:
+ description: dictionary of information for the storage policy
+ returned: success
+ type: dict
+ sample: {
+ "vmware_vm_storage_policy": {
+ "description": "Storage policy for gold-tier storage",
+ "id": "aa6d5a82-1c88-45da-85d3-3d74b91a5bad",
+ "name": "gold"
+ }
+ }
+'''
+
+try:
+ from pyVmomi import pbm
+except ImportError:
+ pass
+
+from ansible.module_utils.basic import AnsibleModule
+from ansible_collections.community.vmware.plugins.module_utils.vmware_spbm import SPBM
+from ansible_collections.community.vmware.plugins.module_utils.vmware import vmware_argument_spec
+
+
+class VmwareStoragePolicyManager(SPBM):
+ def __init__(self, module):
+ super(VmwareStoragePolicyManager, self).__init__(module)
+
+ #
+ # MOB METHODS
+ #
+ # These will generate the individual items with the following expected structure (see
+ # https://github.com/vmware/pyvmomi/blob/master/pyVmomi/PbmObjects.py):
+ #
+ # PbmProfile: array
+ # - name: string
+ # description: string
+ # constraints: PbmCapabilityConstraints
+ # subProfiles: ArrayOfPbmCapabilitySubProfile
+ # - name: string
+ # capability: ArrayOfPbmCapabilityInstance
+ # - constraint: ArrayOfCapabilityConstraintInstance
+ # - id: string
+ # value: anyType
+ # values: arrayOfStrings
+ # - tags
+ #
+ #
+ def create_mob_tag_values(self, tags):
+ return pbm.capability.types.DiscreteSet(values=tags)
+
+ def create_mob_capability_property_instance(self, tag_id, tag_operator, tags):
+ return pbm.capability.PropertyInstance(
+ id=tag_id,
+ operator=tag_operator,
+ value=self.create_mob_tag_values(tags)
+ )
+
+ def create_mob_capability_constraint_instance(self, tag_id, tag_operator, tags):
+ return pbm.capability.ConstraintInstance(
+ propertyInstance=[self.create_mob_capability_property_instance(tag_id, tag_operator, tags)]
+ )
+
+ def create_mob_capability_metadata_uniqueid(self, tag_category):
+ return pbm.capability.CapabilityMetadata.UniqueId(
+ namespace="http://www.vmware.com/storage/tag",
+ id=tag_category
+ )
+
+ def create_mob_capability_instance(self, tag_id, tag_operator, tags, tag_category):
+ return pbm.capability.CapabilityInstance(
+ id=self.create_mob_capability_metadata_uniqueid(tag_category),
+ constraint=[self.create_mob_capability_constraint_instance(tag_id, tag_operator, tags)]
+ )
+
+ def create_mob_capability_constraints_subprofile(self, tag_id, tag_operator, tags, tag_category):
+ return pbm.profile.SubProfileCapabilityConstraints.SubProfile(
+ name="Tag based placement",
+ capability=[self.create_mob_capability_instance(tag_id, tag_operator, tags, tag_category)]
+ )
+
+ def create_mob_capability_subprofile(self, tag_id, tag_operator, tags, tag_category):
+ return pbm.profile.SubProfileCapabilityConstraints(
+ subProfiles=[self.create_mob_capability_constraints_subprofile(tag_id, tag_operator, tags, tag_category)]
+ )
+
+ def create_mob_pbm_update_spec(self, tag_id, tag_operator, tags, tag_category, description):
+ return pbm.profile.CapabilityBasedProfileUpdateSpec(
+ description=description,
+ constraints=self.create_mob_capability_subprofile(tag_id, tag_operator, tags, tag_category)
+ )
+
+ def create_mob_pbm_create_spec(self, tag_id, tag_operator, tags, tag_category, description, name):
+ return pbm.profile.CapabilityBasedProfileCreateSpec(
+ name=name,
+ description=description,
+ resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
+ category="REQUIREMENT",
+ constraints=self.create_mob_capability_subprofile(tag_id, tag_operator, tags, tag_category)
+ )
+
+ def get_tag_constraints(self, capabilities):
+ """
+ Return tag constraints for a profile given its capabilities
+ """
+ tag_constraints = {}
+ for capability in capabilities:
+ for constraint in capability.constraint:
+ if hasattr(constraint, 'propertyInstance'):
+ for propertyInstance in constraint.propertyInstance:
+ if hasattr(propertyInstance.value, 'values'):
+ tag_constraints['id'] = propertyInstance.id
+ tag_constraints['values'] = propertyInstance.value.values
+ tag_constraints['operator'] = propertyInstance.operator
+
+ return tag_constraints
+
+ def get_profile_manager(self):
+ self.get_spbm_connection()
+
+ return self.spbm_content.profileManager
+
+ def get_storage_policies(self, profile_manager):
+ profile_ids = profile_manager.PbmQueryProfile(
+ resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
+ profileCategory="REQUIREMENT"
+ )
+ profiles = []
+ if profile_ids:
+ profiles = profile_manager.PbmRetrieveContent(profileIds=profile_ids)
+
+ return profiles
+
+ def format_profile(self, profile):
+ formatted_profile = {
+ 'name': profile.name,
+ 'id': profile.profileId.uniqueId,
+ 'description': profile.description
+ }
+
+ return formatted_profile
+
+ def format_tag_mob_id(self, tag_category):
+ return "com.vmware.storage.tag." + tag_category + ".property"
+
+ def format_results_and_exit(self, results, policy, changed):
+ results['vmware_vm_storage_policy'] = self.format_profile(policy)
+ results['changed'] = changed
+
+ self.module.exit_json(**results)
+
+ def update_storage_policy(self, policy, pbm_client, results):
+ expected_description = self.params.get('description')
+ expected_tags = [self.params.get('tag_name')]
+ expected_tag_category = self.params.get('tag_category')
+ expected_tag_id = self.format_tag_mob_id(expected_tag_category)
+ expected_operator = "NOT"
+ if self.params.get('tag_affinity'):
+ expected_operator = None
+
+ needs_change = False
+
+ if policy.description != expected_description:
+ needs_change = True
+
+ if hasattr(policy.constraints, 'subProfiles'):
+ for subprofile in policy.constraints.subProfiles:
+ tag_constraints = self.get_tag_constraints(subprofile.capability)
+ if tag_constraints['id'] == expected_tag_id:
+ if tag_constraints['values'] != expected_tags:
+ needs_change = True
+ else:
+ needs_change = True
+
+ if tag_constraints['operator'] != expected_operator:
+ needs_change = True
+ else:
+ needs_change = True
+
+ if needs_change:
+ pbm_client.PbmUpdate(
+ profileId=policy.profileId,
+ updateSpec=self.create_mob_pbm_update_spec(expected_tag_id, expected_operator, expected_tags, expected_tag_category, expected_description)
+ )
+
+ self.format_results_and_exit(results, policy, needs_change)
+
+ def remove_storage_policy(self, policy, pbm_client, results):
+ pbm_client.PbmDelete(profileId=[policy.profileId])
+
+ self.format_results_and_exit(results, policy, True)
+
+ def create_storage_policy(self, policy, pbm_client, results):
+ profile_ids = pbm_client.PbmCreate(
+ createSpec=self.create_mob_pbm_create_spec(
+ self.format_tag_mob_id(self.params.get('tag_category')),
+ None,
+ [self.params.get('tag_name')],
+ self.params.get('tag_category'),
+ self.params.get('description'),
+ self.params.get('name')
+ )
+ )
+
+ policy = pbm_client.PbmRetrieveContent(profileIds=[profile_ids])
+
+ self.format_results_and_exit(results, policy[0], True)
+
+ def ensure_state(self):
+ client = self.get_profile_manager()
+ policies = self.get_storage_policies(client)
+ policy_name = self.params.get('name')
+ results = dict(changed=False, vmware_vm_storage_policy={})
+
+ if self.params.get('state') == 'present':
+ if self.params.get('tag_category') is None:
+ self.module.fail_json(msg="tag_category is required when 'state' is 'present'")
+
+ if self.params.get('tag_name') is None:
+ self.module.fail_json(msg="tag_name is required when 'state' is 'present'")
+
+ # loop through and update the first match
+ for policy in policies:
+ if policy.name == policy_name:
+ self.update_storage_policy(policy, client, results)
+
+ # if we didn't exit by now create the profile
+ self.create_storage_policy(policy, client, results)
+
+ if self.params.get('state') == 'absent':
+ # loop through and delete the first match
+ for policy in policies:
+ if policy.name == policy_name:
+ self.remove_storage_policy(policy, client, results)
+
+ # if we didn't exit by now exit without changing anything
+ self.module.exit_json(**results)
+
+
+def main():
+ argument_spec = vmware_argument_spec()
+ argument_spec.update(
+ name=dict(type='str', required=True),
+ description=dict(type='str', required=False),
+ tag_name=dict(type='str', required=False),
+ tag_category=dict(type='str', required=False),
+ tag_affinity=dict(type='bool', default=True),
+ state=dict(type='str', choices=['absent', 'present'], default='present')
+ )
+
+ module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)
+ manager = VmwareStoragePolicyManager(module)
+
+ manager.ensure_state()
+
+
+if __name__ == '__main__':
+ main()
| diff --git a/tests/integration/targets/vmware_vm_storage_policy/aliases b/tests/integration/targets/vmware_vm_storage_policy/aliases
new file mode 100644
--- /dev/null
+++ b/tests/integration/targets/vmware_vm_storage_policy/aliases
@@ -0,0 +1,3 @@
+needs/target/prepare_vmware_tests
+zuul/vmware/vcenter_only
+cloud/vcenter
\ No newline at end of file
diff --git a/tests/integration/targets/vmware_vm_storage_policy/tasks/main.yml b/tests/integration/targets/vmware_vm_storage_policy/tasks/main.yml
new file mode 100644
--- /dev/null
+++ b/tests/integration/targets/vmware_vm_storage_policy/tasks/main.yml
@@ -0,0 +1,119 @@
+# Test code for the vmware_tag CRUD Operations.
+# Copyright: (c) 2020, Dustin Scott <[email protected]>
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+- import_role:
+ name: prepare_vmware_tests
+
+- block:
+ - name: Create category
+ vmware_category:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: no
+ category_name: "{{ cat_one }}"
+ category_cardinality: 'multiple'
+ state: present
+ register: category_one_create
+
+ - name: Set category one id
+ set_fact: cat_one_id={{ category_one_create['category_results']['category_id'] }}
+
+ - name: Create tag
+ vmware_tag:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: no
+ tag_name: "{{ tag_one }}"
+ category_id: "{{ cat_one_id }}"
+ state: present
+ register: tag_one_create
+
+ - name: Check tag is created
+ assert:
+ that:
+ - tag_one_create.changed
+
+ - &policy_create
+ name: Create or update a vSphere tag-based storage policy
+ community.vmware.vmware_vm_storage_policy:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: no
+ name: "{{ policy_one }}"
+ description: "{{ policy_one }}"
+ tag_category: "{{ cat_one }}"
+ tag_name: "{{ tag_one }}"
+ tag_affinity: true
+ state: "present"
+ register: policy_create
+
+ - name: Check policy is created
+ assert:
+ that:
+ - policy_create.changed
+
+ - <<: *policy_create
+ name: Create policy again
+
+ - name: Check policy is created
+ assert:
+ that:
+ - not policy_create.changed
+
+ - &policy_delete
+ name: Remove a vSphere tag-based storage policy
+ community.vmware.vmware_vm_storage_policy:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: no
+ name: "{{ policy_one }}"
+ state: "absent"
+ register: policy_delete
+
+ - name: Check policy is deleted
+ assert:
+ that:
+ - policy_delete.changed
+
+ - <<: *policy_delete
+ name: Delete policy again
+
+ - name: Check policy is deleted
+ assert:
+ that:
+ - not policy_delete.changed
+
+ always:
+
+ - name: Delete Tags
+ vmware_tag:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: no
+ tag_name: "{{ item }}"
+ state: absent
+ register: delete_tag
+ with_items:
+ - "{{ tag_one }}"
+
+ - name: Delete Categories
+ vmware_category:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: no
+ category_name: "{{ item }}"
+ state: absent
+ register: delete_categories
+ with_items:
+ - "{{ cat_one }}"
+ vars:
+ cat_one: category_1004
+ tag_one: tag_1004
+ policy_one: policy_1004
| Add support for vmware_vm_storage_policy
##### SUMMARY
Add support to allow creation of VMware storage policies in vCenter. These policies allow the VI Admin to enforce placement of virtual machines based on their storage requirements.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_vm_storage_policy (non-existent currently)
##### ADDITIONAL INFORMATION
This is a common item that VI Admins typically control within vCenter environments.
```yaml
- name: Create or update a vSphere tag-based storage policy
community.vmware.vmware_vm_storage_policy:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
name: "vSphere storage policy"
description: "vSphere storage performance policy"
tag_category: "performance_tier"
tag_name: "gold"
tag_affinity: true
state: "present"
delegate_to: localhost
- name: Remove a vSphere tag-based storage policy
community.vmware.vmware_vm_storage_policy:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
name: "vSphere storage policy"
state: "absent"
delegate_to: localhost
```
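
Under the hood the module talks to vCenter's Storage Policy Based Management (SPBM) endpoint. A condensed sketch of how it enumerates existing VM storage policies, lifted from the `get_storage_policies()` logic above (`profile_manager` is the pbm profileManager obtained from the SPBM connection):

```python
from pyVmomi import pbm

def list_storage_policies(profile_manager):
    # Query only STORAGE/REQUIREMENT profiles, i.e. VM storage policies,
    # then resolve the returned IDs into full profile objects.
    profile_ids = profile_manager.PbmQueryProfile(
        resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
        profileCategory="REQUIREMENT",
    )
    return profile_manager.PbmRetrieveContent(profileIds=profile_ids) if profile_ids else []
```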
<!--- HINT: You can also paste gist.github.com links for larger files -->
| 2020-06-16T16:57:38 |
|
ansible-collections/community.vmware | 252 | ansible-collections__community.vmware-252 | [
"243"
] | 7abdaa5ce49d19f6ba7cae83c3f287ed7affcda8 | diff --git a/plugins/modules/vmware_guest_snapshot_info.py b/plugins/modules/vmware_guest_snapshot_info.py
--- a/plugins/modules/vmware_guest_snapshot_info.py
+++ b/plugins/modules/vmware_guest_snapshot_info.py
@@ -48,7 +48,7 @@
folder:
description:
- Destination folder, absolute or relative path to find an existing guest.
- - This is required only, if multiple virtual machines with same name are found on given vCenter.
+ - This is required parameter, if C(name) is supplied.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter
- 'Examples:'
- ' folder: /ha-datacenter/vm'
@@ -78,6 +78,7 @@
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
+ folder: "/{{ datacenter_name }}/vm/"
name: "{{ guest_name }}"
delegate_to: localhost
register: snapshot_info
| vmware_guest_snapshot_info documentation is outdated on website, as folder is now a required parameter
_From @NeverUsedID on Jun 17, 2020 08:57_
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Like in https://github.com/ansible/ansible/issues/22644 for vmware_guest_snapshot, the documentation for vmware_guest_snapshot_info is outdated, as the folder parameter is now REQUIRED.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
The "Edit on Github Button" does not work, as it throws a 404
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
vmware_guest_snapshot_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ansible/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Apr 15 2020, 17:07:12) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
_Copied from original issue: ansible/ansible#70111_
| 2020-06-20T19:35:06 |
||
ansible-collections/community.vmware | 277 | ansible-collections__community.vmware-277 | [
"274"
] | 23c166d4b2a9199106b5fc55d71da5e6214ab0aa | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -1285,18 +1285,28 @@ def sanitize_cdrom_params(self):
if cdrom_spec['type'] == 'iso' and not cdrom_spec.get('iso_path'):
self.module.fail_json(msg="cdrom.iso_path is mandatory when cdrom.type is set to iso.")
- if cdrom_spec['controller_type'] == 'ide' and \
- (cdrom_spec.get('controller_number') not in [0, 1] or cdrom_spec.get('unit_number') not in [0, 1]):
+ if 'controller_number' not in cdrom_spec or 'unit_number' not in cdrom_spec:
+ self.module.fail_json(msg="'cdrom.controller_number' and 'cdrom.unit_number' are required"
+                                  " parameters when configuring the CDROM list.")
+ try:
+ cdrom_ctl_num = int(cdrom_spec.get('controller_number'))
+ cdrom_ctl_unit_num = int(cdrom_spec.get('unit_number'))
+ except ValueError:
+ self.module.fail_json(msg="'cdrom.controller_number' and 'cdrom.unit_number' attributes should be "
+ "integer values.")
+
+ if cdrom_spec['controller_type'] == 'ide' and (cdrom_ctl_num not in [0, 1] or cdrom_ctl_unit_num not in [0, 1]):
self.module.fail_json(msg="Invalid cdrom.controller_number: %s or cdrom.unit_number: %s, valid"
" values are 0 or 1 for IDE controller."
% (cdrom_spec.get('controller_number'), cdrom_spec.get('unit_number')))
- if cdrom_spec['controller_type'] == 'sata' and \
- (cdrom_spec.get('controller_number') not in range(0, 4) or cdrom_spec.get('unit_number') not in range(0, 30)):
+ if cdrom_spec['controller_type'] == 'sata' and (cdrom_ctl_num not in range(0, 4) or cdrom_ctl_unit_num not in range(0, 30)):
self.module.fail_json(msg="Invalid cdrom.controller_number: %s or cdrom.unit_number: %s,"
" valid controller_number value is 0-3, valid unit_number is 0-29"
" for SATA controller." % (cdrom_spec.get('controller_number'),
cdrom_spec.get('unit_number')))
+ cdrom_spec['controller_number'] = cdrom_ctl_num
+ cdrom_spec['unit_number'] = cdrom_ctl_unit_num
ctl_exist = False
for exist_spec in cdrom_specs:
@@ -2191,8 +2201,11 @@ def sanitize_disk_parameters(self, vm_obj):
" mandatory parameters when configure multiple disk controllers and disks.")
try:
ctl_num = int(disk_spec['controller_number'])
+ ctl_unit_num = int(disk_spec['unit_number'])
except ValueError:
- self.module.fail_json(msg="Failed to parse 'disk.controller_number' value, valid type is integer.")
+ self.module.fail_json(msg="'disk.controller_number' and 'disk.unit_number' attributes should be integer"
+ " values.")
+ disk_spec['unit_number'] = ctl_unit_num
ctl_type = disk_spec['controller_type'].lower()
# max number of same type disk controller is 4
if ctl_num > 3:
| vmware_guest: parameters like controller_type or unit_number should be converted to int first
##### SUMMARY
Values of parameters typed as "int" should be converted to int first, e.g. using an "integer_value" helper function. Otherwise, when the value is supplied through a templated variable, it is passed on as a string.
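A minimal sketch of the failure mode, with a hypothetical spec dict standing in for a templated task parameter:

```python
# Values rendered from "{{ var }}" templates arrive as strings, so
# membership checks against integer lists always fail.
cdrom_spec = {'controller_number': '0', 'unit_number': '1'}

print(cdrom_spec['controller_number'] in [0, 1])       # False: '0' is a str
print(int(cdrom_spec['controller_number']) in [0, 1])  # True once cast to int
```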
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest
##### STEPS TO REPRODUCE
```yaml
- name: "Reconfigure VM CDROM"
vmware_guest:
hostname: "{{ vc }}"
username: "{{ vc_user }}"
password: "{{ vc_password }}"
validate_certs: "{{ valid_cert }}"
datacenter: "{{ vc_datacenter }}"
folder: "{{ vm_folder }}"
name: "{{ vm_name }}"
cdrom:
- type: "{{ cdrom_type }}"
iso_path: "{{ cdrom_iso_file | default(omit) }}"
controller_type: "{{ cdrom_controller | default(omit) }}"
controller_number: "{{ cdrom_controller_num }}"
unit_number: "{{ cdrom_unit_num }}"
state: "present"
register: vm_result
```
##### EXPECTED RESULTS
When an int-typed parameter takes its value from another variable, and that variable is not set to an integer directly, the passed value is treated as a string. So in the code, we need to convert it to int before validating its value.
##### ACTUAL RESULTS
```
TASK [Reconfigure VM CDROM] **************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Invalid cdrom.controller_number: 0 or cdrom.unit_number: 1, valid values are 0 or 1 for IDE controller."}
```
| 2020-07-07T03:06:46 |
||
ansible-collections/community.vmware | 289 | ansible-collections__community.vmware-289 | [
"260"
] | dba5fb5b63423703746182850b1cf5d439e4ddcd | diff --git a/plugins/modules/vmware_cluster_vsan.py b/plugins/modules/vmware_cluster_vsan.py
--- a/plugins/modules/vmware_cluster_vsan.py
+++ b/plugins/modules/vmware_cluster_vsan.py
@@ -20,9 +20,11 @@
author:
- Joseph Callen (@jcpowermac)
- Abhijeet Kasurde (@Akasurde)
+- Mario Lenz (@mariolenz)
requirements:
- - Tested on ESXi 5.5 and 6.5.
+ - Tested on ESXi 6.7.
- PyVmomi installed.
+ - vSAN Management SDK, which needs to be downloaded from VMware and installed manually.
options:
cluster_name:
description:
@@ -46,6 +48,33 @@
on VSAN-enabled hosts in the cluster.
type: bool
default: False
+ advanced_options:
+ version_added: "1.1.0"
+ description:
+ - Advanced VSAN Options.
+ suboptions:
+ automatic_rebalance:
+ description:
+ - If enabled, vSAN automatically rebalances (moves the data among disks) when a capacity disk fullness hits proactive rebalance threshold.
+ type: bool
+ disable_site_read_locality:
+ description:
+ - For vSAN stretched clusters, reads to vSAN objects occur on the site the VM resides on.
+ - Setting to C(True) will force reads across all mirrors.
+ type: bool
+ large_cluster_support:
+ description:
+ - Allow > 32 VSAN hosts per cluster; if this is changed on an existing vSAN cluster, all hosts are required to reboot to apply this change.
+ type: bool
+ object_repair_timer:
+ description:
+ - Delay time in minutes for VSAN to wait for the absent component to come back before starting to repair it.
+ type: int
+ thin_swap:
+ description:
+ - When C(enabled), swap objects would not reserve 100% space of their size on vSAN datastore.
+ type: bool
+ type: dict
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -62,6 +91,18 @@
enable_vsan: yes
delegate_to: localhost
+- name: Enable vSAN and automatic rebalancing
+ community.vmware.vmware_cluster_vsan:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ datacenter_name: datacenter
+ cluster_name: cluster
+ enable_vsan: yes
+ advanced_options:
+ automatic_rebalance: True
+ delegate_to: localhost
+
- name: Enable vSAN and claim storage automatically
community.vmware.vmware_cluster_vsan:
hostname: "{{ vcenter_hostname }}"
@@ -78,12 +119,21 @@
RETURN = r"""#
"""
+import traceback
+
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
-from ansible.module_utils.basic import AnsibleModule
+try:
+ import vsanapiutils
+ HAS_VSANPYTHONSDK = True
+except ImportError:
+ VSANPYTHONSDK_IMP_ERR = traceback.format_exc()
+ HAS_VSANPYTHONSDK = False
+
+from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible_collections.community.vmware.plugins.module_utils.vmware import (
PyVmomi,
TaskError,
@@ -101,6 +151,7 @@ def __init__(self, module):
self.enable_vsan = module.params['enable_vsan']
self.datacenter = None
self.cluster = None
+ self.advanced_options = None
self.datacenter = find_datacenter_by_name(self.content, self.datacenter_name)
if self.datacenter is None:
@@ -110,6 +161,14 @@ def __init__(self, module):
if self.cluster is None:
self.module.fail_json(msg="Cluster %s does not exist." % self.cluster_name)
+ if module.params['advanced_options'] is not None:
+ self.advanced_options = module.params['advanced_options']
+ client_stub = self.si._GetStub()
+ ssl_context = client_stub.schemeArgs.get('context')
+ apiVersion = vsanapiutils.GetLatestVmodlVersion(module.params['hostname'])
+ vcMos = vsanapiutils.GetVsanVcMos(client_stub, context=ssl_context, version=apiVersion)
+ self.vsanClusterConfigSystem = vcMos['vsan-cluster-config-system']
+
def check_vsan_config_diff(self):
"""
Check VSAN configuration diff
@@ -121,6 +180,25 @@ def check_vsan_config_diff(self):
if vsan_config.enabled != self.enable_vsan or \
vsan_config.defaultConfig.autoClaimStorage != self.params.get('vsan_auto_claim_storage'):
return True
+
+ if self.advanced_options is not None:
+ vsan_config_info = self.vsanClusterConfigSystem.GetConfigInfoEx(self.cluster).extendedConfig
+ if self.advanced_options['automatic_rebalance'] is not None and \
+ self.advanced_options['automatic_rebalance'] != vsan_config_info.proactiveRebalanceInfo.enabled:
+ return True
+ if self.advanced_options['disable_site_read_locality'] is not None and \
+ self.advanced_options['disable_site_read_locality'] != vsan_config_info.disableSiteReadLocality:
+ return True
+ if self.advanced_options['large_cluster_support'] is not None and \
+ self.advanced_options['large_cluster_support'] != vsan_config_info.largeScaleClusterSupport:
+ return True
+ if self.advanced_options['object_repair_timer'] is not None and \
+ self.advanced_options['object_repair_timer'] != vsan_config_info.objectRepairTimer:
+ return True
+ if self.advanced_options['thin_swap'] is not None and \
+ self.advanced_options['thin_swap'] != vsan_config_info.enableCustomizedSwapObject:
+ return True
+
return False
def configure_vsan(self):
@@ -132,14 +210,32 @@ def configure_vsan(self):
if self.check_vsan_config_diff():
if not self.module.check_mode:
- cluster_config_spec = vim.cluster.ConfigSpecEx()
- cluster_config_spec.vsanConfig = vim.vsan.cluster.ConfigInfo()
- cluster_config_spec.vsanConfig.enabled = self.enable_vsan
- cluster_config_spec.vsanConfig.defaultConfig = vim.vsan.cluster.ConfigInfo.HostDefaultInfo()
- cluster_config_spec.vsanConfig.defaultConfig.autoClaimStorage = self.params.get('vsan_auto_claim_storage')
+ vSanSpec = vim.vsan.ReconfigSpec(
+ modify=True,
+ )
+ vSanSpec.vsanClusterConfig = vim.vsan.cluster.ConfigInfo(
+ enabled=self.enable_vsan
+ )
+ vSanSpec.vsanClusterConfig.defaultConfig = vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
+ autoClaimStorage=self.params.get('vsan_auto_claim_storage')
+ )
+ if self.advanced_options is not None:
+ vSanSpec.extendedConfig = vim.vsan.VsanExtendedConfig()
+ if self.advanced_options['automatic_rebalance'] is not None:
+ vSanSpec.extendedConfig.proactiveRebalanceInfo = vim.vsan.ProactiveRebalanceInfo(
+ enabled=self.advanced_options['automatic_rebalance']
+ )
+ if self.advanced_options['disable_site_read_locality'] is not None:
+ vSanSpec.extendedConfig.disableSiteReadLocality = self.advanced_options['disable_site_read_locality']
+ if self.advanced_options['large_cluster_support'] is not None:
+ vSanSpec.extendedConfig.largeScaleClusterSupport = self.advanced_options['large_cluster_support']
+ if self.advanced_options['object_repair_timer'] is not None:
+ vSanSpec.extendedConfig.objectRepairTimer = self.advanced_options['object_repair_timer']
+ if self.advanced_options['thin_swap'] is not None:
+ vSanSpec.extendedConfig.enableCustomizedSwapObject = self.advanced_options['thin_swap']
try:
- task = self.cluster.ReconfigureComputeResource_Task(cluster_config_spec, True)
- changed, result = wait_for_task(task)
+ task = self.vsanClusterConfigSystem.VsanClusterReconfig(self.cluster, vSanSpec)
+ changed, result = wait_for_task(vim.Task(task._moId, self.si._stub))
except vmodl.RuntimeFault as runtime_fault:
self.module.fail_json(msg=to_native(runtime_fault.msg))
except vmodl.MethodFault as method_fault:
@@ -163,6 +259,13 @@ def main():
# VSAN
enable_vsan=dict(type='bool', default=False),
vsan_auto_claim_storage=dict(type='bool', default=False),
+ advanced_options=dict(type='dict', options=dict(
+ automatic_rebalance=dict(type='bool', required=False),
+ disable_site_read_locality=dict(type='bool', required=False),
+ large_cluster_support=dict(type='bool', required=False),
+ object_repair_timer=dict(type='int', required=False),
+ thin_swap=dict(type='bool', required=False),
+ )),
))
module = AnsibleModule(
@@ -170,6 +273,9 @@ def main():
supports_check_mode=True,
)
+ if not HAS_VSANPYTHONSDK:
+ module.fail_json(msg=missing_required_lib('vSAN Management SDK for Python'), exception=VSANPYTHONSDK_IMP_ERR)
+
vmware_cluster_vsan = VMwareCluster(module)
vmware_cluster_vsan.configure_vsan()
| diff --git a/tests/integration/targets/vmware_cluster_vsan/aliases b/tests/integration/targets/vmware_cluster_vsan/aliases
--- a/tests/integration/targets/vmware_cluster_vsan/aliases
+++ b/tests/integration/targets/vmware_cluster_vsan/aliases
@@ -1,3 +1,4 @@
+disabled
cloud/vcenter
needs/target/prepare_vmware_tests
zuul/vmware/vcenter_only
diff --git a/tests/integration/targets/vmware_cluster_vsan/tasks/main.yml b/tests/integration/targets/vmware_cluster_vsan/tasks/main.yml
--- a/tests/integration/targets/vmware_cluster_vsan/tasks/main.yml
+++ b/tests/integration/targets/vmware_cluster_vsan/tasks/main.yml
@@ -35,7 +35,62 @@
that:
- "{{ cluster_vsan_result_0001.changed == true }}"
- # Testcase 0002: Disable vSAN
+ # Testcase 0002: Enable vSAN again (check for idempotency)
+ - name: Enable vSAN again (check for idempotency)
+ vmware_cluster_vsan:
+ validate_certs: False
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_vsan
+ enable_vsan: yes
+ register: cluster_vsan_result_0002
+
+ - name: Ensure vSAN is not enabled again
+ assert:
+ that:
+ - "{{ cluster_vsan_result_0002.changed == false }}"
+
+ # Testcase 0003: Change object repair timer
+ - name: Change object repair timer
+ vmware_cluster_vsan:
+ validate_certs: False
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_vsan
+ enable_vsan: yes
+ advanced_options:
+ object_repair_timer: 67
+ register: cluster_vsan_result_0003
+
+ - name: Ensure object repair timer is changed
+ assert:
+ that:
+ - "{{ cluster_vsan_result_0003.changed == true }}"
+
+ # Testcase 0004: Change object repair timer again (check for idempotency)
+ - name: Change object repair timer again (check for idempotency)
+ vmware_cluster_vsan:
+ validate_certs: False
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_vsan
+ enable_vsan: yes
+ advanced_options:
+ object_repair_timer: 67
+ register: cluster_vsan_result_0004
+
+ - name: Ensure object repair timer is not changed again
+ assert:
+ that:
+ - "{{ cluster_vsan_result_0004.changed == false }}"
+
+ # Testcase 0005: Disable vSAN
- name: Disable vSAN
vmware_cluster_vsan:
validate_certs: False
@@ -45,12 +100,29 @@
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_vsan
enable_vsan: no
- register: cluster_vsan_result_0002
+ register: cluster_vsan_result_0005
- name: Ensure vSAN is disabled
assert:
that:
- - "{{ cluster_vsan_result_0002.changed == true }}"
+ - "{{ cluster_vsan_result_0005.changed == true }}"
+
+ # Testcase 0006: Disable vSAN again (check for idempotency)
+ - name: Disable vSAN again (check for idempotency)
+ vmware_cluster_vsan:
+ validate_certs: False
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_vsan
+ enable_vsan: no
+ register: cluster_vsan_result_0006
+
+ - name: Ensure vSAN is not disabled again
+ assert:
+ that:
+ - "{{ cluster_vsan_result_0006.changed == false }}"
# Delete test cluster
- name: Delete test cluster
| vmware_cluster_vsan - add support for advanced parameters
##### SUMMARY
While configuring vSAN clusters, setting the advanced parameters of the vSAN service (object repair timer, thin swap, large cluster support, etc.) should be possible.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_cluster_vsan
##### ADDITIONAL INFORMATION
These parameters often need to be tweaked while configuring vsan clusters.
```yaml
- name: Enable vSAN and set advanced parameters
vmware_cluster_vsan:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: datacenter
cluster_name: cluster
enable_vsan: yes
advanced_parameters:
object_repair_timer: 120
site_read_locality: enabled
thin_swap: enabled
large_cluster_support: enabled
automatic_rebalance: disabled
delegate_to: localhost
```
| I'd like to work on this but don't have much time at the moment. For the record: I think `object_repair_timer`, `site_read_locality`, `large_cluster_support ` and maybe `thin_swap` can be controlled through [VsanExtendedConfig](https://vdc-download.vmware.com/vmwb-repository/dcr-public/b525fb12-61bb-4ede-b9e3-c4a1f8171510/99ba073a-60e9-4933-8690-149860ce8754/doc/vim.vsan.VsanExtendedConfig.html). Haven't found anything to implement `automatic_rebalance` yet, though.
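Judging from the merged patch above, a rough sketch of what driving `VsanExtendedConfig` could look like; this assumes the vSAN Management SDK bindings are installed so the `vim.vsan.*` types resolve, the field values are arbitrary, and `vsan_cluster_config_system` is a placeholder for the managed object returned by `vsanapiutils.GetVsanVcMos()`:

```python
from pyVmomi import vim  # vim.vsan.* only resolves with the vSAN SDK bindings loaded

spec = vim.vsan.ReconfigSpec(modify=True)
spec.extendedConfig = vim.vsan.VsanExtendedConfig()
spec.extendedConfig.objectRepairTimer = 120            # minutes
spec.extendedConfig.enableCustomizedSwapObject = True  # "thin swap"
spec.extendedConfig.largeScaleClusterSupport = False   # > 32 hosts per cluster
# automatic rebalancing turned out to live here as well:
spec.extendedConfig.proactiveRebalanceInfo = vim.vsan.ProactiveRebalanceInfo(enabled=True)

# Apply via the vSAN cluster config system obtained from vsanapiutils:
# task = vsan_cluster_config_system.VsanClusterReconfig(cluster, spec)
```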
> These parameters often need to be tweaked while configuring vsan clusters.
We've been running VSAN clusters for years and never had to change these settings at all. I don't know your use cases, maybe these parameters are really important to you... but I think most people never touch them. As an ops guy, I would especially question the design decision to build VSAN clusters with > 32 hosts... that's a fairly big failure domain ;-)
Anyway, I'll try to find some time to work on this.
In my use case I only had to tweak `object_repair_timer` and `thin_swap`, but I thought implementing them all might make sense while someone is at it.
I stumbled across the problem that `VsanExtendedConfig` is part of the vSAN API, and I had to use additional modules from VMware (`vsanmgmtObjects.py`). Only the stuff from the vSphere Web Services API is available in `pyvmomi`, if I understand that correctly? I'm very new to VMware and their APIs, so I might be talking nonsense here.
Great to hear that someone wants to work on it though :-)
> In my use case I only had to tweak `object_repair_timer` and `thin_swap`, but I thought implementing them all might make sense while someone is at it.
Yes, I can imagine use cases where you want to tweak `object_repair_timer` and `thin_swap`. And I agree that, when implementing these two, it makes sense to implement the other advanced settings as well.
> Great to hear that someone wants to work on it though :-)
Well... yes... wants to... I really do, it's just a question of finding the time. At the end of the day, I _use_ these modules and fix bugs or implement features that affect us. Everything else, I do in my spare time... but I'll try. After all, sooner or later we might want to tweak these settings, too ;-)
Bad news: It looks like pyVmomi doesn't know about these advanced VSAN options at all, you really need to install the [vSAN Management SDK for Python](https://storagehub.vmware.com/t/vsan-api-cookbook-for-python/virtual-san-sdks/) first. _(edit: But you've stumbled across this already)_ I have a really bad feeling about introducing this as a dependency... @Akasurde @goneri Or do you think differently?
Btw: Feel free to kick VMware for not providing this SDK in an easily consumable way ;-)
Maybe I could implement this with the vSphere Automation SDK for Python but I'm still waiting for it to be published on PyPI (vmware/vsphere-automation-sdk-python#38).
Hi @lupa95 and @mariolenz,
`vmware_vsan_health_info` already depends on the vSAN SDK (a.k.a. vSAN Management SDK for Python). The extra dependency is mentioned in the requirements of the module. We cannot properly test the module in the CI, since we cannot easily pip install the dependency.
I was also reluctant to include the module, but at the time, there was no real alternative. I don't know if the situation has changed.
So I would say I'm OK to merge the patch, BUT the extra dependency must be mentioned in the documentation, and if someone tries to run the module without the dependency, the error message should be as clear as possible.
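For reference, a small sketch of the guarded-import pattern that achieves this, matching what the merged patch above does:

```python
import traceback

try:
    import vsanapiutils  # ships with the vSAN Management SDK, not on PyPI
    HAS_VSANPYTHONSDK = True
except ImportError:
    VSANPYTHONSDK_IMP_ERR = traceback.format_exc()
    HAS_VSANPYTHONSDK = False

# later, inside main(), fail with a clear message instead of a traceback
# (missing_required_lib comes from ansible.module_utils.basic):
# if not HAS_VSANPYTHONSDK:
#     module.fail_json(msg=missing_required_lib('vSAN Management SDK for Python'),
#                      exception=VSANPYTHONSDK_IMP_ERR)
```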
> `vmware_vsan_health_info` already depends on the vSAN SDK (a.k.a. vSAN Management SDK for Python).
Didn't know this, but that means I'm not introducing a new dependency. OK, then I'll give it a try.
> I was also reluctant to include the module, but at the time, there was no real alternative. I don't know if the situation has changed.
To the best of my knowledge the situation hasn't changed... unfortunately :-(
@lupa95 @goneri fyi: vmware/pyvmomi#909
Hi,
maybe we can extend the feature request for the following settings:
- claim unused disk
- configure stretched cluster and fault domains
Regards,
Stephan
> maybe we can extend the feature request for the following settings:
>
> * claim unused disk
Isn't auto-claiming deprecated? I'm sure I've read something about it...
> * configure stretched cluster and fault domains
This might be a bit too much for a single PR, but I can't tell at the moment. I'm still trying to understand the vSAN Management SDK for Python and it kind of keeps fighting back... but since this afternoon, it looks like I'm winning :-)=)
I'll have a look at it and will let you know whether I think it's a good idea to implement this in one go or not.
> Isn't auto-claiming deprecated? I'm sure I've read something about it...
Yes, you are right, but there must be another way to do this, because the VMware web client allows doing it more or less automatically. Maybe this doc helps: https://vdc-download.vmware.com/vmwb-repository/dcr-public/424d010b-c80e-40de-b1a3-25f6e9861e6a/3b934f51-98b6-4ea1-9336-b1bac1f23403/vsan-sdk-67.pdf **"Claiming and Managing Disks"**
> This might be a bit too much for a single PR, but I can't tell at the moment. I'm still trying to understand the vSAN Management SDK for Python and it kind of keeps fighting back... but since this afternoon, it looks like I'm winning :-)=)
>
> I'll have a look at it and will let you know whether I think it's a good idea to implement this in one go or not.
Thanks and regards,
Stephan
| 2020-07-10T15:42:22 |
ansible-collections/community.vmware | 306 | ansible-collections__community.vmware-306 | [
"196"
] | 099118c8db3b153f008a70420685d8b35eacef59 | diff --git a/plugins/modules/vmware_guest_disk.py b/plugins/modules/vmware_guest_disk.py
--- a/plugins/modules/vmware_guest_disk.py
+++ b/plugins/modules/vmware_guest_disk.py
@@ -68,86 +68,136 @@
type: bool
disk:
description:
- - A list of disks to add.
+ - A list of disks to add or remove.
- The virtual disk related information is provided using this list.
- All values and parameters are case sensitive.
- - 'Valid attributes are:'
- - ' - C(size[_tb,_gb,_mb,_kb]) (integer): Disk storage size in specified unit.'
- - ' If C(size) specified then unit must be specified. There is no space allowed in between size number and unit.'
- - ' Only first occurrence in disk element will be considered, even if there are multiple size* parameters available.'
- - ' - C(type) (string): Valid values are:'
- - ' - C(thin) thin disk'
- - ' - C(eagerzeroedthick) eagerzeroedthick disk'
- - ' - C(thick) thick disk'
- - ' Default: C(thick) thick disk, no eagerzero.'
- - ' - C(disk_mode) (string): Type of disk mode. Valid values are:'
- - ' - C(persistent) Changes are immediately and permanently written to the virtual disk. This is default.'
- - ' - C(independent_persistent) Same as persistent, but not affected by snapshots.'
- - ' - C(independent_nonpersistent) Changes to virtual disk are made to a redo log and discarded at power off, but not affected by snapshots.'
- - ' - C(sharing) (bool): The sharing mode of the virtual disk. The default value is no sharing.'
- - ' Setting C(sharing) means that multiple virtual machines can write to the virtual disk.'
- - ' Sharing can only be set if C(type) is C(eagerzeroedthick).'
- - ' - C(datastore) (string): Name of datastore or datastore cluster to be used for the disk.'
- - ' - C(autoselect_datastore) (bool): Select the less used datastore. Specify only if C(datastore) is not specified.'
- - ' - C(scsi_controller) (integer): SCSI controller number. Valid value range from 0 to 3.'
- - ' Only 4 SCSI controllers are allowed per VM.'
- - ' Care should be taken while specifying C(scsi_controller) is 0 and C(unit_number) as 0 as this disk may contain OS.'
- - ' - C(unit_number) (integer): Disk Unit Number. Valid value range from 0 to 15. Only 15 disks are allowed per SCSI Controller.'
- - ' - C(scsi_type) (string): Type of SCSI controller. This value is required only for the first occurrence of SCSI Controller.'
- - ' This value is ignored, if SCSI Controller is already present or C(state) is C(absent).'
- - ' Valid values are C(buslogic), C(lsilogic), C(lsilogicsas) and C(paravirtual).'
- - ' C(paravirtual) is default value for this parameter.'
- - ' - C(destroy) (bool): If C(state) is C(absent), make sure the disk file is deleted from the datastore (default C(yes)).'
- - ' Added in version 2.10.'
- - ' - C(filename) (string): Existing disk image to be used. Filename must already exist on the datastore.'
- - ' Specify filename string in C([datastore_name] path/to/file.vmdk) format. Added in version 2.10.'
- - ' - C(state) (string): State of disk. This is either "absent" or "present".'
- - ' If C(state) is set to C(absent), disk will be removed permanently from virtual machine configuration and from VMware storage.'
- - ' If C(state) is set to C(present), disk will be added if not present at given SCSI Controller and Unit Number.'
- - ' If C(state) is set to C(present) and disk exists with different size, disk size is increased.'
- - ' Reducing disk size is not allowed.'
suboptions:
- iolimit:
+ size:
+ description:
+ - Disk storage size.
+ - If size specified then unit must be specified. There is no space allowed in between size number and unit.
+ - Only first occurrence in disk element will be considered, even if there are multiple size* parameters available.
+ type: str
+ size_kb:
+ description: Disk storage size in kb.
+ type: int
+ size_mb:
+ description: Disk storage size in mb.
+ type: int
+ size_gb:
+ description: Disk storage size in gb.
+ type: int
+ size_tb:
+ description: Disk storage size in tb.
+ type: int
+ type:
+ description: The type of disk, if not specified then use C(thick) type for new disk, no eagerzero.
+ type: str
+ choices: ['thin', 'eagerzeroedthick', 'thick']
+ disk_mode:
+ description:
+ - Type of disk mode. If not specified then use C(persistent) mode for new disk.
+ - If set to 'persistent' mode, changes are immediately and permanently written to the virtual disk.
+ - If set to 'independent_persistent' mode, same as persistent, but not affected by snapshots.
+ - If set to 'independent_nonpersistent' mode, changes to virtual disk are made to a redo log and discarded
+ at power off, but not affected by snapshots.
+ type: str
+ choices: ['persistent', 'independent_persistent', 'independent_nonpersistent']
+ sharing:
+ description:
+ - The sharing mode of the virtual disk.
+ - Setting sharing means that multiple virtual machines can write to the virtual disk.
+ - Sharing can only be set if C(type) is set to C(eagerzeroedthick).
+ type: bool
+ default: False
+ datastore:
+ description: Name of datastore or datastore cluster to be used for the disk.
+ type: str
+ autoselect_datastore:
+ description: Select the less used datastore. Specify only if C(datastore) is not specified.
+ type: bool
+ scsi_controller:
+ description:
+ - SCSI controller number. Only 4 SCSI controllers are allowed per VM.
+ - Care should be taken while specifying 'scsi_controller' is 0 and 'unit_number' as 0 as this disk may contain OS.
+ type: int
+ choices: [0, 1, 2, 3]
+ unit_number:
+ description:
+ - Disk Unit Number.
+ - Valid value range from 0 to 15, except 7 for SCSI Controller.
+ - Valid value range from 0 to 29 for SATA controller.
+ - Valid value range from 0 to 14 for NVME controller.
+ type: int
+ required: True
+ scsi_type:
+ description:
+ - Type of SCSI controller. This value is required only for the first occurrence of SCSI Controller.
+ - This value is ignored, if SCSI Controller is already present or C(state) is C(absent).
+ type: str
+ choices: ['buslogic', 'lsilogic', 'lsilogicsas', 'paravirtual']
+ destroy:
+ description: If C(state) is C(absent), make sure the disk file is deleted from the datastore. Added in version 2.10.
+ type: bool
+ default: True
+ filename:
+ description:
+ - Existing disk image to be used. Filename must already exist on the datastore.
+ - Specify filename string in C([datastore_name] path/to/file.vmdk) format. Added in version 2.10.
+ type: str
+ state:
+ description:
+ - State of disk.
+ - If set to 'absent', disk will be removed permanently from virtual machine configuration and from VMware storage.
+ - If set to 'present', disk will be added if not present at given Controller and Unit Number.
+ - or disk exists with different size, disk size is increased, reducing disk size is not allowed.
+ type: str
+ choices: ['present', 'absent']
+ default: 'present'
+ controller_type:
description:
- - Section specifies the shares and limit for storage I/O resource.
+ - This parameter is added for managing disks attaching other types of controllers, e.g., SATA or NVMe.
+ - If either C(controller_type) or C(scsi_type) is not specified, then use C(paravirtual) type.
+ type: str
+ choices: ['buslogic', 'lsilogic', 'lsilogicsas', 'paravirtual', 'sata', 'nvme']
+ controller_number:
+ description: This parameter is used with C(controller_type) for specifying controller bus number.
+ type: int
+ choices: [0, 1, 2, 3]
+ iolimit:
+ description: Section specifies the shares and limit for storage I/O resource.
suboptions:
limit:
- description:
- - Section specifies values for limit where the utilization of a virtual machine will not exceed, even if there are available resources.
+ description: Section specifies values for limit where the utilization of a virtual machine will not exceed, even if there are available resources.
+ type: int
shares:
- description:
- - Specifies different types of shares user can add for the given disk.
+ description: Specifies different types of shares user can add for the given disk.
suboptions:
level:
- description:
- - Specifies different level for the shares section.
- - Valid values are low, normal, high, custom.
+ description: Specifies different level for the shares section.
+ type: str
+ choices: ['low', 'normal', 'high', 'custom']
level_value:
- description:
- - Custom value when C(level) is set as C(custom).
+ description: Custom value when C(level) is set as C(custom).
type: int
- type: list
- elements: dict
+ type: dict
+ type: dict
shares:
- description:
- - section for iolimit section tells about what are all different types of shares user can add for disk.
+ description: Section for iolimit section tells about what are all different types of shares user can add for disk.
suboptions:
level:
- description:
- - tells about different level for the shares section, valid values are low,normal,high,custom.
+ description: Tells about different level for the shares section.
type: str
+ choices: ['low', 'normal', 'high', 'custom']
level_value:
- description:
- - custom value when level is set as custom.
+ description: Custom value when C(level) is set as C(custom).
type: int
- type: list
- elements: dict
+ type: dict
default: []
type: list
elements: dict
extends_documentation_fragment:
- community.vmware.vmware.documentation
-
'''
EXAMPLES = r'''
@@ -271,6 +321,33 @@
destroy: no
delegate_to: localhost
register: disk_facts
+
+- name: Add disks to virtual machine using UUID to SATA and NVMe controller
+ community.vmware.vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ datacenter_name }}"
+ validate_certs: no
+ uuid: 421e4592-c069-924d-ce20-7e7533fab926
+ disk:
+ - size_mb: 256
+ type: thin
+ datastore: datacluster0
+ state: present
+ controller_type: sata
+ controller_number: 1
+ unit_number: 1
+ disk_mode: 'persistent'
+ - size_gb: 1
+ state: present
+ autoselect_datastore: True
+ controller_type: nvme
+ controller_number: 2
+ unit_number: 3
+ disk_mode: 'independent_persistent'
+ delegate_to: localhost
+ register: disk_facts
'''
RETURN = r'''
@@ -304,9 +381,11 @@
except ImportError:
pass
+from random import randint
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
-from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec, wait_for_task, find_obj, get_all_objs
+from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec,\
+ wait_for_task, find_obj, get_all_objs, get_parent_datacenter
class PyVmomiHelper(PyVmomi):
@@ -318,6 +397,8 @@ def __init__(self, module):
paravirtual=vim.vm.device.ParaVirtualSCSIController,
buslogic=vim.vm.device.VirtualBusLogicController,
lsilogicsas=vim.vm.device.VirtualLsiLogicSASController)
+ self.ctl_device_type = self.scsi_device_type.copy()
+ self.ctl_device_type.update({'sata': vim.vm.device.VirtualAHCIController, 'nvme': vim.vm.device.VirtualNVMEController})
self.config_spec = vim.vm.ConfigSpec()
self.config_spec.deviceChange = []
@@ -334,24 +415,52 @@ def create_scsi_controller(self, scsi_type, scsi_bus_number):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
scsi_ctl.device = self.scsi_device_type[scsi_type]()
- scsi_ctl.device.unitNumber = 3
+ scsi_ctl.device.deviceInfo = vim.Description()
scsi_ctl.device.busNumber = scsi_bus_number
- scsi_ctl.device.hotAddRemove = True
scsi_ctl.device.sharedBus = 'noSharing'
+ scsi_ctl.device.hotAddRemove = True
+ scsi_ctl.device.key = -randint(1000, 9999)
scsi_ctl.device.scsiCtlrUnitNumber = 7
return scsi_ctl
+ def create_sata_controller(self, sata_bus_number):
+ sata_ctl = vim.vm.device.VirtualDeviceSpec()
+ sata_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
+ sata_ctl.device = vim.vm.device.VirtualAHCIController()
+ sata_ctl.device.deviceInfo = vim.Description()
+ sata_ctl.device.busNumber = sata_bus_number
+ sata_ctl.device.key = -randint(15000, 19999)
+
+ return sata_ctl
+
+ def create_nvme_controller(self, nvme_bus_number):
+ nvme_ctl = vim.vm.device.VirtualDeviceSpec()
+ nvme_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
+ nvme_ctl.device = vim.vm.device.VirtualNVMEController()
+ nvme_ctl.device.deviceInfo = vim.Description()
+ nvme_ctl.device.busNumber = nvme_bus_number
+ nvme_ctl.device.key = -randint(31000, 39999)
+
+ return nvme_ctl
+
+ def find_disk_by_key(self, disk_key, disk_unit_number):
+ found_disk = None
+ for device in self.vm.config.hardware.device:
+ if isinstance(device, vim.vm.device.VirtualDisk) and device.key == disk_key:
+ if device.unitNumber == disk_unit_number:
+ found_disk = device
+ break
+
+ return found_disk
+
@staticmethod
- def create_scsi_disk(scsi_ctl_key, disk_index, disk_mode, disk_filename, sharing):
+ def create_disk(ctl_key, disk):
"""
Create Virtual Device Spec for virtual disk
Args:
- scsi_ctl_key: Unique SCSI Controller Key
- disk_index: Disk unit number at which disk needs to be attached
- disk_mode: Disk mode
- sharing: Disk sharing mode
- disk_filename: Path to the disk file on the datastore
+ ctl_key: Unique SCSI Controller Key
+ disk: The disk configurations dict
Returns: Virtual Device Spec for virtual disk
@@ -360,15 +469,14 @@ def create_scsi_disk(scsi_ctl_key, disk_index, disk_mode, disk_filename, sharing
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.device = vim.vm.device.VirtualDisk()
disk_spec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
- disk_spec.device.backing.diskMode = disk_mode
- disk_spec.device.backing.sharing = sharing
- disk_spec.device.controllerKey = scsi_ctl_key
- disk_spec.device.unitNumber = disk_index
-
- if disk_filename is not None:
- disk_spec.device.backing.fileName = disk_filename
- else:
- disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
+ disk_spec.device.backing.diskMode = disk['disk_mode']
+ disk_spec.device.backing.sharing = disk['sharing']
+ disk_spec.device.controllerKey = ctl_key
+ disk_spec.device.unitNumber = disk['disk_unit_number']
+ if disk['disk_type'] == 'thin':
+ disk_spec.device.backing.thinProvisioned = True
+ elif disk['disk_type'] == 'eagerzeroedthick':
+ disk_spec.device.backing.eagerlyScrub = True
return disk_spec
@@ -388,7 +496,7 @@ def reconfigure_vm(self, config_spec, device_type):
task = self.vm.ReconfigVM_Task(spec=config_spec)
changed, results = wait_for_task(task)
except vim.fault.InvalidDeviceSpec as invalid_device_spec:
- self.module.fail_json(msg="Failed to manage %s on given virtual machine due to invalid"
+ self.module.fail_json(msg="Failed to manage '%s' on given virtual machine due to invalid"
" device spec : %s" % (device_type, to_native(invalid_device_spec.msg)),
details="Please check ESXi server logs for more details.")
except vim.fault.RestrictedVersion as e:
@@ -448,107 +556,131 @@ def ensure_disks(self, vm_obj=None):
"""
# Set vm object
self.vm = vm_obj
+ vm_files_datastore = self.vm.config.files.vmPathName.split(' ')[0].strip('[]')
# Sanitize user input
disk_data = self.sanitize_disk_inputs()
- # Create stateful information about SCSI devices
- current_scsi_info = dict()
+ ctl_changed = False
+ disk_change_list = list()
results = dict(changed=False, disk_data=None, disk_changes=dict())
+ new_added_disk_ctl = list()
- # Deal with SCSI Controller
- for device in vm_obj.config.hardware.device:
- if isinstance(device, tuple(self.scsi_device_type.values())):
- # Found SCSI device
- if device.busNumber not in current_scsi_info:
- device_bus_number = 1000 + device.busNumber
- current_scsi_info[device_bus_number] = dict(disks=dict())
-
- scsi_changed = False
+ # Deal with controller
for disk in disk_data:
- scsi_controller = disk['scsi_controller'] + 1000
- if scsi_controller not in current_scsi_info and disk['state'] == 'present':
- scsi_ctl = self.create_scsi_controller(disk['scsi_type'], disk['scsi_controller'])
- current_scsi_info[scsi_controller] = dict(disks=dict())
- self.config_spec.deviceChange.append(scsi_ctl)
- scsi_changed = True
- if scsi_changed:
- self.reconfigure_vm(self.config_spec, 'SCSI Controller')
+ ctl_found = False
+ # check if disk controller is in the new adding queue
+ for new_ctl in new_added_disk_ctl:
+ if new_ctl['controller_type'] == disk['controller_type'] and new_ctl['controller_number'] == disk['controller_number']:
+ ctl_found = True
+ break
+ # check if disk controller already exists
+ if not ctl_found:
+ for device in self.vm.config.hardware.device:
+ if isinstance(device, self.ctl_device_type[disk['controller_type']]):
+ if device.busNumber == disk['controller_number']:
+ ctl_found = True
+ break
+ # create disk controller when not found and disk state is present
+ if not ctl_found and disk['state'] == 'present':
+ # Create new controller
+ if disk['controller_type'] in self.scsi_device_type.keys():
+ ctl_spec = self.create_scsi_controller(disk['controller_type'], disk['controller_number'])
+ elif disk['controller_type'] == 'sata':
+ ctl_spec = self.create_sata_controller(disk['controller_number'])
+ elif disk['controller_type'] == 'nvme':
+ ctl_spec = self.create_nvme_controller(disk['controller_number'])
+ new_added_disk_ctl.append({'controller_type': disk['controller_type'], 'controller_number': disk['controller_number']})
+ ctl_changed = True
+ self.config_spec.deviceChange.append(ctl_spec)
+ elif not ctl_found and disk['state'] == 'absent':
+ self.module.fail_json(msg="Not found 'controller_type': '%s', 'controller_number': '%s', so can not"
+ " remove this disk, please make sure 'controller_type' and"
+ " 'controller_number' are correct." % (disk['controller_type'], disk['controller_number']))
+ if ctl_changed:
+ self.reconfigure_vm(self.config_spec, 'Disk Controller')
self.config_spec = vim.vm.ConfigSpec()
self.config_spec.deviceChange = []
# Deal with Disks
- for device in vm_obj.config.hardware.device:
- if isinstance(device, vim.vm.device.VirtualDisk):
- # Found Virtual Disk device
- if device.controllerKey not in current_scsi_info:
- current_scsi_info[device.controllerKey] = dict(disks=dict())
- current_scsi_info[device.controllerKey]['disks'][device.unitNumber] = device
-
- disk_change_list = []
for disk in disk_data:
+ disk_found = False
disk_change = False
- scsi_controller = disk['scsi_controller'] + 1000 # VMware auto assign 1000 + SCSI Controller
- if disk['disk_unit_number'] not in current_scsi_info[scsi_controller]['disks'] and disk['state'] == 'present':
- # Add new disk
- disk_spec = self.create_scsi_disk(scsi_controller, disk['disk_unit_number'], disk['disk_mode'], disk['filename'], disk['sharing'])
- if disk['filename'] is None:
- disk_spec.device.capacityInKB = disk['size']
- if disk['disk_type'] == 'thin':
- disk_spec.device.backing.thinProvisioned = True
- elif disk['disk_type'] == 'eagerzeroedthick':
- disk_spec.device.backing.eagerlyScrub = True
- # get Storage DRS recommended datastore from the datastore cluster
- if disk['datastore_cluster'] is not None:
- datastore_name = self.get_recommended_datastore(datastore_cluster_obj=disk['datastore_cluster'], disk_spec_obj=disk_spec)
- disk['datastore'] = find_obj(self.content, [vim.Datastore], datastore_name)
- if disk['filename'] is not None:
- disk_spec.device.backing.fileName = disk['filename']
- disk_spec.device.backing.datastore = disk['datastore']
- disk_spec.device.backing.sharing = disk['sharing']
- disk_spec = self.get_ioandshares_diskconfig(disk_spec, disk)
- self.config_spec.deviceChange.append(disk_spec)
- disk_change = True
- current_scsi_info[scsi_controller]['disks'][disk['disk_unit_number']] = disk_spec.device
- results['disk_changes'][disk['disk_index']] = "Disk created."
- elif disk['disk_unit_number'] in current_scsi_info[scsi_controller]['disks']:
- if disk['state'] == 'present':
- disk_spec = vim.vm.device.VirtualDeviceSpec()
- # set the operation to edit so that it knows to keep other settings
- disk_spec.device = current_scsi_info[scsi_controller]['disks'][disk['disk_unit_number']]
- # Edit and no resizing allowed
- if disk['size'] < disk_spec.device.capacityInKB:
- self.module.fail_json(msg="Given disk size at disk index [%s] is smaller than found (%d < %d)."
- "Reducing disks is not allowed." % (disk['disk_index'],
- disk['size'],
- disk_spec.device.capacityInKB))
- if disk['size'] != disk_spec.device.capacityInKB:
- disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
+ ctl_found = False
+ for device in self.vm.config.hardware.device:
+ if isinstance(device, self.ctl_device_type[disk['controller_type']]) and device.busNumber == disk['controller_number']:
+ for disk_key in device.device:
+ disk_device = self.find_disk_by_key(disk_key, disk['disk_unit_number'])
+ if disk_device is not None:
+ disk_found = True
+ if disk['state'] == 'present':
+ disk_spec = vim.vm.device.VirtualDeviceSpec()
+ # set the operation to edit so that it knows to keep other settings
+ disk_spec.device = disk_device
+ # Edit and no resizing allowed
+ if disk['size'] < disk_spec.device.capacityInKB:
+ self.module.fail_json(msg="Given disk size at disk index [%s] is smaller than found"
+ " (%d < %d). Reducing disks is not allowed."
+ % (disk['disk_index'], disk['size'],
+ disk_spec.device.capacityInKB))
+ if disk['size'] != disk_spec.device.capacityInKB:
+ disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
+ disk_spec = self.get_ioandshares_diskconfig(disk_spec, disk)
+ disk_spec.device.capacityInKB = disk['size']
+ self.config_spec.deviceChange.append(disk_spec)
+ disk_change = True
+ disk_change_list.append(disk_change)
+ results['disk_changes'][disk['disk_index']] = "Disk reconfigured."
+ elif disk['state'] == 'absent':
+ # Disk already exists, deleting
+ disk_spec = vim.vm.device.VirtualDeviceSpec()
+ disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
+ if disk['destroy'] is True:
+ disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.destroy
+ disk_spec.device = disk_device
+ self.config_spec.deviceChange.append(disk_spec)
+ disk_change = True
+ disk_change_list.append(disk_change)
+ results['disk_changes'][disk['disk_index']] = "Disk deleted."
+ break
+ if disk_found:
+ break
+ if not disk_found and disk['state'] == 'present':
+ # Add new disk
+ disk_spec = self.create_disk(device.key, disk)
+ # get Storage DRS recommended datastore from the datastore cluster
+ if disk['filename'] is None:
+ if disk['datastore_cluster'] is not None:
+ datastore_name = self.get_recommended_datastore(datastore_cluster_obj=disk['datastore_cluster'], disk_spec_obj=disk_spec)
+ disk['datastore'] = find_obj(self.content, [vim.Datastore], datastore_name)
+ disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
+ disk_spec.device.capacityInKB = disk['size']
+ # Set backing filename when datastore is configured and not the same as VM datastore
+ # If datastore is not configured or backing filename is not set, default is VM datastore
+ if disk['datastore'] is not None and disk['datastore'].name != vm_files_datastore:
+ disk_spec.device.backing.datastore = disk['datastore']
+ disk_spec.device.backing.fileName = "[%s] %s/%s_%s_%s_%s.vmdk" % (disk['datastore'].name,
+ self.vm.name,
+ self.vm.name,
+ device.key,
+ str(disk['disk_unit_number']),
+ str(randint(1, 10000)))
+ elif disk['filename'] is not None:
+ disk_spec.device.backing.fileName = disk['filename']
disk_spec = self.get_ioandshares_diskconfig(disk_spec, disk)
- disk_spec.device.capacityInKB = disk['size']
self.config_spec.deviceChange.append(disk_spec)
disk_change = True
- results['disk_changes'][disk['disk_index']] = "Disk size increased."
- else:
- results['disk_changes'][disk['disk_index']] = "Disk already exists."
-
- elif disk['state'] == 'absent':
- # Disk already exists, deleting
- disk_spec = vim.vm.device.VirtualDeviceSpec()
- disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
- if disk['destroy'] is True:
- disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.destroy
- disk_spec.device = current_scsi_info[scsi_controller]['disks'][disk['disk_unit_number']]
- self.config_spec.deviceChange.append(disk_spec)
- disk_change = True
- results['disk_changes'][disk['disk_index']] = "Disk deleted."
-
+ disk_change_list.append(disk_change)
+ results['disk_changes'][disk['disk_index']] = "Disk created."
+ break
+ if not disk_found and disk['state'] == 'absent':
+ self.module.fail_json(msg="Not found disk with 'controller_type': '%s',"
+ " 'controller_number': '%s', 'unit_number': '%s' to remove."
+ % (disk['controller_type'], disk['controller_number'], disk['disk_unit_number']))
if disk_change:
# Adding multiple disks in a single attempt raises weird errors
# So adding single disk at a time.
self.reconfigure_vm(self.config_spec, 'disks')
self.config_spec = vim.vm.ConfigSpec()
self.config_spec.deviceChange = []
- disk_change_list.append(disk_change)
-
if any(disk_change_list):
results['changed'] = True
results['disk_data'] = self.gather_disk_facts(vm_obj=self.vm)
@@ -562,8 +694,8 @@ def sanitize_disk_inputs(self):
"""
disks_data = list()
if not self.desired_disks:
- self.module.exit_json(changed=False, msg="No disks provided for virtual"
- " machine '%s' for management." % self.vm.name)
+ self.module.exit_json(changed=False, msg="No disks provided for virtual machine '%s' for management."
+ % self.vm.name)
for disk_index, disk in enumerate(self.desired_disks):
# Initialize default value for disk
@@ -575,33 +707,76 @@ def sanitize_disk_inputs(self):
datastore=None,
autoselect_datastore=True,
disk_unit_number=0,
- scsi_controller=0,
+ controller_number=0,
disk_mode='persistent',
+ disk_type='thick',
sharing=False)
# Check state
- if 'state' in disk:
- if disk['state'] not in ['absent', 'present']:
- self.module.fail_json(msg="Invalid state provided '%s' for disk index [%s]."
- " State can be either - 'absent', 'present'" % (disk['state'],
- disk_index))
- else:
- current_disk['state'] = disk['state']
+ if disk['state'] is not None:
+ current_disk['state'] = disk['state']
+
+ # Check controller type
+ if disk['scsi_type'] is not None and disk['controller_type'] is None:
+ current_disk['controller_type'] = disk['scsi_type']
+ elif disk['scsi_type'] is None and disk['controller_type'] is None:
+ current_disk['controller_type'] = 'paravirtual'
+ elif disk['controller_type'] is not None and disk['scsi_type'] is None:
+ current_disk['controller_type'] = disk['controller_type']
+ else:
+ self.module.fail_json(msg="Please specify either 'scsi_type' or 'controller_type' for disk index [%s]."
+ % disk_index)
+
+ # Check controller bus number
+ if disk['scsi_controller'] is not None and disk['controller_number'] is None and disk['controller_type'] is None:
+ temp_disk_controller = disk['scsi_controller']
+ elif disk['controller_number'] is not None and disk['scsi_controller'] is None and disk['scsi_type'] is None:
+ temp_disk_controller = disk['controller_number']
+ else:
+ self.module.fail_json(msg="Please specify 'scsi_controller' with 'scsi_type', or 'controller_number'"
+ " with 'controller_type' under disk parameter for disk index [%s], which is"
+ " required while creating or configuring disk." % disk_index)
+ try:
+ disk_controller = int(temp_disk_controller)
+ except ValueError:
+ self.module.fail_json(msg="Invalid controller bus number '%s' specified"
+ " for disk index [%s]" % (temp_disk_controller, disk_index))
+ current_disk['controller_number'] = disk_controller
+
+ try:
+ temp_disk_unit_number = int(disk['unit_number'])
+ except ValueError:
+ self.module.fail_json(msg="Invalid Disk unit number ID '%s' specified at index [%s]."
+ % (disk['unit_number'], disk_index))
+ if current_disk['controller_type'] in self.scsi_device_type.keys():
+ if temp_disk_unit_number not in range(0, 16):
+ self.module.fail_json(msg="Invalid Disk unit number ID specified for disk [%s] at index [%s],"
+ " please specify value between 0 to 15 only (excluding 7)."
+ % (temp_disk_unit_number, disk_index))
+ if temp_disk_unit_number == 7:
+ self.module.fail_json(msg="Invalid Disk unit number ID specified for disk at index [%s], please"
+ " specify value other than 7 as it is reserved for SCSI Controller."
+ % disk_index)
+ elif current_disk['controller_type'] == 'sata' and temp_disk_unit_number not in range(0, 30):
+ self.module.fail_json(msg="Invalid Disk unit number ID specified for SATA disk [%s] at index [%s],"
+ " please specify value between 0 to 29" % (temp_disk_unit_number, disk_index))
+ elif current_disk['controller_type'] == 'nvme' and temp_disk_unit_number not in range(0, 15):
+ self.module.fail_json(msg="Invalid Disk unit number ID specified for NVMe disk [%s] at index [%s],"
+ " please specify value between 0 to 14" % (temp_disk_unit_number, disk_index))
+ current_disk['disk_unit_number'] = temp_disk_unit_number
# By default destroy file from datastore if 'destroy' parameter is not provided
if current_disk['state'] == 'absent':
current_disk['destroy'] = disk.get('destroy', True)
elif current_disk['state'] == 'present':
# Select datastore or datastore cluster
- if 'datastore' in disk:
- if 'autoselect_datastore' in disk:
- self.module.fail_json(msg="Please specify either 'datastore' "
- "or 'autoselect_datastore' for disk index [%s]" % disk_index)
-
+ if disk['datastore'] is not None:
+ if disk['autoselect_datastore'] is not None:
+ self.module.fail_json(msg="Please specify either 'datastore' or 'autoselect_datastore' for"
+ " disk index [%s]" % disk_index)
# Check if given value is datastore or datastore cluster
datastore_name = disk['datastore']
datastore_cluster = find_obj(self.content, [vim.StoragePod], datastore_name)
datastore = find_obj(self.content, [vim.Datastore], datastore_name)
-
if datastore is None and datastore_cluster is None:
self.module.fail_json(msg="Failed to find datastore or datastore cluster named '%s' "
"in given configuration." % disk['datastore'])
@@ -609,14 +784,19 @@ def sanitize_disk_inputs(self):
# If user specified datastore cluster, keep track of that for determining datastore later
current_disk['datastore_cluster'] = datastore_cluster
elif datastore:
+ ds_datacenter = get_parent_datacenter(datastore)
+ if ds_datacenter.name != self.module.params['datacenter']:
+ self.module.fail_json(msg="Get datastore '%s' in datacenter '%s', not the configured"
+ " datacenter '%s'" % (datastore.name, ds_datacenter.name,
+ self.module.params['datacenter']))
current_disk['datastore'] = datastore
current_disk['autoselect_datastore'] = False
- elif 'autoselect_datastore' in disk:
+ elif disk['autoselect_datastore'] is not None:
# Find datastore which fits requirement
datastores = get_all_objs(self.content, [vim.Datastore])
if not datastores:
- self.module.fail_json(msg="Failed to gather information about"
- " available datastores in given datacenter.")
+ self.module.fail_json(msg="Failed to gather information about available datastores in given"
+ " datacenter '%s'." % self.module.params['datacenter'])
datastore = None
datastore_freespace = 0
for ds in datastores:
@@ -626,18 +806,13 @@ def sanitize_disk_inputs(self):
datastore_freespace = ds.summary.freeSpace
current_disk['datastore'] = datastore
- if 'datastore' not in disk and 'autoselect_datastore' not in disk and 'filename' not in disk:
- self.module.fail_json(msg="Either 'datastore' or 'autoselect_datastore' is"
- " required parameter while creating disk for "
- "disk index [%s]." % disk_index)
-
- if 'filename' in disk:
+ if disk['filename'] is not None:
current_disk['filename'] = disk['filename']
- if [x for x in disk.keys() if x.startswith('size_') or x == 'size']:
+ if [x for x in disk.keys() if ((x.startswith('size_') or x == 'size') and disk[x] is not None)]:
# size, size_tb, size_gb, size_mb, size_kb
disk_size_parse_failed = False
- if 'size' in disk:
+ if disk['size'] is not None:
size_regex = re.compile(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])')
disk_size_m = size_regex.match(disk['size'])
if disk_size_m:
@@ -657,7 +832,7 @@ def sanitize_disk_inputs(self):
else:
# Even multiple size_ parameter provided by user,
# consider first value only
- param = [x for x in disk.keys() if x.startswith('size_')][0]
+ param = [x for x in disk.keys() if (x.startswith('size_') and disk[x] is not None)][0]
unit = param.split('_')[-1]
disk_size = disk[param]
if isinstance(disk_size, (float, int)):
@@ -685,79 +860,25 @@ def sanitize_disk_inputs(self):
current_disk['size'] = expected * (1024 ** disk_units[unit])
else:
self.module.fail_json(msg="%s is not a supported unit for disk size for disk index [%s]."
- " Supported units are ['%s']." % (unit,
- disk_index,
- "', '".join(disk_units.keys())))
-
+ " Supported units are ['%s']." % (unit, disk_index, "', '".join(disk_units.keys())))
elif current_disk['filename'] is None:
# No size found but disk, fail
self.module.fail_json(msg="No size, size_kb, size_mb, size_gb or size_tb"
" attribute found into disk index [%s] configuration." % disk_index)
- # Check SCSI controller key
- if 'scsi_controller' in disk:
- try:
- temp_disk_controller = int(disk['scsi_controller'])
- except ValueError:
- self.module.fail_json(msg="Invalid SCSI controller ID '%s' specified"
- " at index [%s]" % (disk['scsi_controller'], disk_index))
- if temp_disk_controller not in range(0, 4):
- # Only 4 SCSI controllers are allowed per VM
- self.module.fail_json(msg="Invalid SCSI controller ID specified [%s],"
- " please specify value between 0 to 3 only." % temp_disk_controller)
- current_disk['scsi_controller'] = temp_disk_controller
- else:
- self.module.fail_json(msg="Please specify 'scsi_controller' under disk parameter"
- " at index [%s], which is required while creating disk." % disk_index)
- # Check for disk unit number
- if 'unit_number' in disk:
- try:
- temp_disk_unit_number = int(disk['unit_number'])
- except ValueError:
- self.module.fail_json(msg="Invalid Disk unit number ID '%s'"
- " specified at index [%s]" % (disk['unit_number'], disk_index))
- if temp_disk_unit_number not in range(0, 16):
- self.module.fail_json(msg="Invalid Disk unit number ID specified for disk [%s] at index [%s],"
- " please specify value between 0 to 15"
- " only (excluding 7)." % (temp_disk_unit_number, disk_index))
-
- if temp_disk_unit_number == 7:
- self.module.fail_json(msg="Invalid Disk unit number ID specified for disk at index [%s],"
- " please specify value other than 7 as it is reserved"
- "for SCSI Controller" % disk_index)
- current_disk['disk_unit_number'] = temp_disk_unit_number
- else:
- self.module.fail_json(msg="Please specify 'unit_number' under disk parameter"
- " at index [%s], which is required while creating disk." % disk_index)
-
- # Type of Disk
- disk_type = disk.get('type', 'thick').lower()
- if disk_type not in ['thin', 'thick', 'eagerzeroedthick']:
- self.module.fail_json(msg="Invalid 'disk_type' specified for disk index [%s]. Please specify"
- " 'disk_type' value from ['thin', 'thick', 'eagerzeroedthick']." % disk_index)
- current_disk['disk_type'] = disk_type
-
- # Mode of Disk
- temp_disk_mode = disk.get('disk_mode', 'persistent').lower()
- if temp_disk_mode not in ['persistent', 'independent_persistent', 'independent_nonpersistent']:
- self.module.fail_json(msg="Invalid 'disk_mode' specified for disk index [%s]. Please specify"
- " 'disk_mode' value from ['persistent', 'independent_persistent', 'independent_nonpersistent']." % disk_index)
- current_disk['disk_mode'] = temp_disk_mode
-
- # Sharing mode of disk
- current_disk['sharing'] = self.get_sharing(disk, disk_type, disk_index)
-
- # SCSI Controller Type
- scsi_contrl_type = disk.get('scsi_type', 'paravirtual').lower()
- if scsi_contrl_type not in self.scsi_device_type.keys():
- self.module.fail_json(msg="Invalid 'scsi_type' specified for disk index [%s]. Please specify"
- " 'scsi_type' value from ['%s']" % (disk_index,
- "', '".join(self.scsi_device_type.keys())))
- current_disk['scsi_type'] = scsi_contrl_type
- if 'shares' in disk:
- current_disk['shares'] = disk['shares']
- if 'iolimit' in disk:
- current_disk['iolimit'] = disk['iolimit']
+ # Type of Disk
+ if disk['type'] is not None:
+ current_disk['disk_type'] = disk['type']
+ # Mode of Disk
+ if disk['disk_mode'] is not None:
+ current_disk['disk_mode'] = disk['disk_mode']
+ # Sharing mode of disk
+ current_disk['sharing'] = self.get_sharing(disk, current_disk['disk_type'], disk_index)
+
+ if disk['shares'] is not None:
+ current_disk['shares'] = disk['shares']
+ if disk['iolimit'] is not None:
+ current_disk['iolimit'] = disk['iolimit']
disks_data.append(current_disk)
return disks_data
@@ -863,14 +984,55 @@ def main():
moid=dict(type='str'),
folder=dict(type='str'),
datacenter=dict(type='str', required=True),
- disk=dict(type='list', default=[], elements='dict'),
use_instance_uuid=dict(type='bool', default=False),
+ disk=dict(
+ type='list',
+ default=[],
+ elements='dict',
+ options=dict(
+ size=dict(type='str'),
+ size_kb=dict(type='int'),
+ size_mb=dict(type='int'),
+ size_gb=dict(type='int'),
+ size_tb=dict(type='int'),
+ type=dict(type='str', choices=['thin', 'eagerzeroedthick', 'thick']),
+ disk_mode=dict(type='str', choices=['persistent', 'independent_persistent', 'independent_nonpersistent']),
+ sharing=dict(type='bool', default=False),
+ datastore=dict(type='str'),
+ autoselect_datastore=dict(type='bool'),
+ scsi_controller=dict(type='int', choices=[0, 1, 2, 3]),
+ unit_number=dict(type='int', required=True),
+ scsi_type=dict(type='str', choices=['buslogic', 'lsilogic', 'paravirtual', 'lsilogicsas']),
+ destroy=dict(type='bool', default=True),
+ filename=dict(type='str'),
+ state=dict(type='str', default='present', choices=['present', 'absent']),
+ controller_type=dict(type='str', choices=['buslogic', 'lsilogic', 'paravirtual', 'lsilogicsas', 'sata', 'nvme']),
+ controller_number=dict(type='int', choices=[0, 1, 2, 3]),
+ iolimit=dict(
+ type='dict',
+ options=dict(
+ limit=dict(type='int'),
+ shares=dict(
+ type='dict',
+ options=dict(
+ level=dict(type='str', choices=['low', 'high', 'normal', 'custom']),
+ level_value=dict(type='int'),
+ ),
+ ),
+ )),
+ shares=dict(
+ type='dict',
+ options=dict(
+ level=dict(type='str', choices=['low', 'high', 'normal', 'custom']),
+ level_value=dict(type='int'),
+ ),
+ ),
+ ),
+ ),
)
module = AnsibleModule(
argument_spec=argument_spec,
- required_one_of=[
- ['name', 'uuid', 'moid']
- ]
+ required_one_of=[['name', 'uuid', 'moid']],
)
if module.params['folder']:
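For context, here is a minimal task exercising the new SATA/NVMe controller options that the sub-options spec above validates. This is an illustrative sketch rather than part of the patch: the connection variables and VM name are placeholders, and the disk values mirror the integration tests below. Note also that because `disk` now carries a sub-options spec, every declared key is always present (set to `None` when the user omits it), which is why the module body switches from `'key' in disk` checks to `disk['key'] is not None`.

```yaml
- name: Add a thin disk on an NVMe controller (illustrative sketch)
  community.vmware.vmware_guest_disk:
    hostname: "{{ vcenter_hostname }}"   # placeholder connection vars
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    datacenter: "{{ dc1 }}"
    name: "{{ vm_name }}"                # hypothetical VM name variable
    disk:
      - datastore: "{{ rw_datastore }}"
        controller_type: nvme            # new option from this patch
        controller_number: 0
        unit_number: 1
        size_gb: 1
        type: thin
        state: present
  delegate_to: localhost
```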
| diff --git a/tests/integration/targets/vmware_guest_disk/tasks/main.yml b/tests/integration/targets/vmware_guest_disk/tasks/main.yml
--- a/tests/integration/targets/vmware_guest_disk/tasks/main.yml
+++ b/tests/integration/targets/vmware_guest_disk/tasks/main.yml
@@ -16,14 +16,14 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - datastore: "{{ rw_datastore }}"
- disk_mode: "invalid_disk_mode"
- scsi_controller: 0
- scsi_type: 'paravirtual'
- size_gb: 10
- state: present
- type: eagerzeroedthick
- unit_number: 2
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "invalid_disk_mode"
+ scsi_controller: 0
+ scsi_type: 'paravirtual'
+ size_gb: 10
+ state: present
+ type: eagerzeroedthick
+ unit_number: 2
register: test_create_disk1
ignore_errors: true
@@ -33,7 +33,7 @@
- name: assert that changes were not made
assert:
that:
- - not(test_create_disk1 is changed)
+ - not(test_create_disk1 is changed)
- name: create new disk(s) with valid disk mode
vmware_guest_disk:
@@ -44,30 +44,30 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - datastore: "{{ rw_datastore }}"
- disk_mode: "independent_persistent"
- scsi_controller: 0
- scsi_type: 'paravirtual'
- size_gb: 1
- state: present
- type: eagerzeroedthick
- unit_number: 2
- - datastore: "{{ rw_datastore }}"
- disk_mode: "independent_nonpersistent"
- scsi_controller: 0
- scsi_type: 'paravirtual'
- size_gb: 1
- state: present
- type: eagerzeroedthick
- unit_number: 3
- - datastore: "{{ rw_datastore }}"
- disk_mode: "persistent"
- scsi_controller: 0
- scsi_type: 'paravirtual'
- size_gb: 1
- state: present
- type: eagerzeroedthick
- unit_number: 4
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_persistent"
+ scsi_controller: 0
+ scsi_type: 'paravirtual'
+ size_gb: 1
+ state: present
+ type: eagerzeroedthick
+ unit_number: 2
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_nonpersistent"
+ scsi_controller: 0
+ scsi_type: 'paravirtual'
+ size_gb: 1
+ state: present
+ type: eagerzeroedthick
+ unit_number: 3
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "persistent"
+ scsi_controller: 0
+ scsi_type: 'paravirtual'
+ size_gb: 1
+ state: present
+ type: eagerzeroedthick
+ unit_number: 4
register: test_create_disk2
- debug:
@@ -76,7 +76,7 @@
- name: assert that changes were made
assert:
that:
- - test_create_disk2 is changed
+ - test_create_disk2 is changed
- name: create new disk with custom shares
vmware_guest_disk:
@@ -87,16 +87,16 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - size_gb: 1
- type: eagerzeroedthick
- datastore: "{{ rw_datastore }}"
- disk_mode: "independent_nonpersistent"
- scsi_controller: 1
- state: present
- unit_number: 4
- shares:
- level: custom
- level_value: 1300
+ - size_gb: 1
+ type: eagerzeroedthick
+ datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_nonpersistent"
+ scsi_controller: 1
+ state: present
+ unit_number: 4
+ shares:
+ level: custom
+ level_value: 1300
register: test_custom_shares
- debug:
@@ -105,7 +105,7 @@
- name: assert that changes were made
assert:
that:
- - test_custom_shares is changed
+ - test_custom_shares is changed
- name: create new disk with custom IO limits and shares in IO Limits
vmware_guest_disk:
@@ -116,18 +116,18 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - size_gb: 1
- type: eagerzeroedthick
- datastore: "{{ rw_datastore }}"
- disk_mode: "independent_nonpersistent"
- scsi_controller: 2
- state: present
- unit_number: 4
- iolimit:
- limit: 1506
- shares:
- level: custom
- level_value: 1305
+ - size_gb: 1
+ type: eagerzeroedthick
+ datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_nonpersistent"
+ scsi_controller: 2
+ state: present
+ unit_number: 4
+ iolimit:
+ limit: 1506
+ shares:
+ level: custom
+ level_value: 1305
register: test_custom_IoLimit_shares
- debug:
@@ -136,71 +136,117 @@
- name: assert that changes were made
assert:
that:
- - test_custom_IoLimit_shares is changed
-
-- name: Update disk for custom IO limits in IO Limits
- vmware_guest_disk:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- datacenter: "{{ dc1 }}"
- validate_certs: false
- name: "{{ virtual_machines[0].name }}"
- disk:
- - size_gb: 2
- type: eagerzeroedthick
- datastore: "{{ rw_datastore }}"
- disk_mode: "independent_nonpersistent"
- scsi_controller: 2
- state: present
- unit_number: 4
- iolimit:
- limit: 1500
- shares:
- level: custom
- level_value: 1305
- register: test_custom_IoLimit
-
-- debug:
- msg: "{{ test_custom_IoLimit }}"
-
-- name: assert that changes were made
- assert:
- that:
- - test_custom_IoLimit is changed
-
-- name: Update disk for shares of IO limits
- vmware_guest_disk:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- datacenter: "{{ dc1 }}"
- validate_certs: false
- name: "{{ virtual_machines[0].name }}"
- disk:
- - size_gb: 3
- type: eagerzeroedthick
- datastore: "{{ rw_datastore }}"
- disk_mode: "independent_nonpersistent"
- scsi_controller: 2
- state: present
- unit_number: 4
- iolimit:
- limit: 1500
- shares:
- level: low
- level_value: 650
- register: test_shares_IoLimit
-
-- debug:
- msg: "{{ test_shares_IoLimit }}"
+ - test_custom_IoLimit_shares is changed
+
+# TODO: vcsim does not support reconfiguration of disk mode, fails with types.InvalidDeviceSpec
+- when: vcsim is not defined
+ block:
+ - name: Update disk for custom IO limits in IO Limits
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: false
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - size_gb: 2
+ scsi_controller: 2
+ state: present
+ unit_number: 4
+ iolimit:
+ limit: 1500
+ shares:
+ level: custom
+ level_value: 1305
+ register: test_custom_IoLimit
+
+ - debug:
+ msg: "{{ test_custom_IoLimit }}"
+
+ - name: assert that changes were made
+ assert:
+ that:
+ - test_custom_IoLimit is changed
+
+ - name: Update disk for shares of IO limits
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: false
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - size_gb: 3
+ scsi_controller: 2
+ state: present
+ unit_number: 4
+ iolimit:
+ limit: 1500
+ shares:
+ level: low
+ register: test_shares_IoLimit
+
+ - debug:
+ msg: "{{ test_shares_IoLimit }}"
+
+ - name: assert that changes were made
+ assert:
+ that:
+ - test_shares_IoLimit is changed
+
+ - name: Update disk for shares and IoLimits of IO limits
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: false
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - size_gb: 4
+ scsi_controller: 2
+ state: present
+ unit_number: 4
+ iolimit:
+ limit: 1507
+ shares:
+ level: high
+ register: test_shares_IoLimits
+
+ - debug:
+ msg: "{{ test_shares_IoLimits }}"
+
+ - name: assert that changes were made
+ assert:
+ that:
+ - test_shares_IoLimits is changed
+
+ - name: remove disks without destroy file
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: false
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - state: "absent"
+ scsi_controller: 0
+ unit_number: 2
+ destroy: false
+ register: test_remove_without_destroy
+
+ - debug:
+ msg: "{{ test_remove_without_destroy }}"
+
+ - name: assert that changes were made
+ assert:
+ that:
+ - test_remove_without_destroy is changed
-- name: assert that changes were made
- assert:
- that:
- - test_shares_IoLimit is changed
-
-- name: Update disk for shares and IoLimits of IO limits
+- name: re-create disk with valid disk mode
vmware_guest_disk:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -209,29 +255,25 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - size_gb: 4
- type: eagerzeroedthick
- datastore: "{{ rw_datastore }}"
- disk_mode: "independent_nonpersistent"
- scsi_controller: 2
- state: present
- unit_number: 4
- iolimit:
- limit: 1507
- shares:
- level: high
- level_value: 1200
- register: test_shares_IoLimits
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "persistent"
+ scsi_controller: 0
+ scsi_type: 'paravirtual'
+ size_gb: 1
+ state: present
+ type: eagerzeroedthick
+ unit_number: 8
+ register: test_recreate_disk
- debug:
- msg: "{{ test_shares_IoLimits }}"
+ msg: "{{ test_recreate_disk }}"
- name: assert that changes were made
assert:
- that:
- - test_shares_IoLimits is changed
+ that:
+ - test_recreate_disk is changed
-- name: remove disks without destroy file
+- name: create new disk with sharing (multi-writer) mode
vmware_guest_disk:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -240,21 +282,26 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - state: "absent"
- scsi_controller: 0
- unit_number: 4
- destroy: false
- register: test_remove_without_destroy
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_persistent"
+ scsi_controller: 0
+ scsi_type: 'paravirtual'
+ size_gb: 1
+ state: present
+ type: eagerzeroedthick
+ sharing: true
+ unit_number: 6
+ register: test_create_disk_sharing
- debug:
- msg: "{{ test_remove_without_destroy }}"
+ msg: "{{ test_create_disk_sharing }}"
- name: assert that changes were made
assert:
that:
- - test_remove_without_destroy is changed
+ - test_create_disk_sharing is changed
-- name: re-create disk with valid disk mode
+- name: create new disk with invalid disk type for sharing (multi-writer) mode
vmware_guest_disk:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -263,25 +310,55 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - datastore: "{{ rw_datastore }}"
- disk_mode: "persistent"
- scsi_controller: 0
- scsi_type: 'paravirtual'
- size_gb: 1
- state: present
- type: eagerzeroedthick
- unit_number: 4
- register: test_recreate_disk
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_persistent"
+ scsi_controller: 0
+ scsi_type: 'paravirtual'
+ size_gb: 1
+ state: present
+ type: thin
+ unit_number: 5
+ sharing: true
+ register: test_create_disk_sharing_invalid
+ ignore_errors: true
- debug:
- msg: "{{ test_recreate_disk }}"
+ msg: "{{ test_create_disk_sharing_invalid }}"
-- name: assert that changes were made
+- name: assert that changes were not made
assert:
that:
- - test_recreate_disk is changed
-
-- name: create new disk with sharing (multi-writer) mode
+ - not(test_create_disk_sharing_invalid is changed)
+
+- when: vcsim is not defined
+ block:
+ - name: remove disk with destroy file
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: false
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - state: "absent"
+ scsi_controller: 0
+ unit_number: 3
+ destroy: true
+ - state: "absent"
+ scsi_controller: 0
+ unit_number: 4
+ register: test_remove_with_destroy
+
+ - debug:
+ msg: "{{ test_remove_with_destroy }}"
+
+ - name: assert that changes were made
+ assert:
+ that:
+ - test_remove_with_destroy is changed
+
+- name: create new disk with SATA controller
vmware_guest_disk:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -290,26 +367,25 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - datastore: "{{ rw_datastore }}"
- disk_mode: "independent_persistent"
- scsi_controller: 0
- scsi_type: 'paravirtual'
- size_gb: 1
- state: present
- type: eagerzeroedthick
- sharing: true
- unit_number: 6
- register: test_create_disk_sharing
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_persistent"
+ controller_type: 'sata'
+ controller_number: 1
+ unit_number: 3
+ size_gb: 1
+ state: present
+ type: thin
+ register: test_create_sata_disk
- debug:
- msg: "{{ test_create_disk_sharing }}"
+ msg: "{{ test_create_sata_disk }}"
- name: assert that changes were made
assert:
that:
- - test_create_disk_sharing is changed
+ - test_create_sata_disk is changed
-- name: create new disk with invalid disk type for sharing (multi-writer) mode
+- name: create new disk with NVMe controller
vmware_guest_disk:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -318,27 +394,25 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - datastore: "{{ rw_datastore }}"
- disk_mode: "independent_persistent"
- scsi_controller: 0
- scsi_type: 'paravirtual'
- size_gb: 1
- state: present
- type: thin
- unit_number: 5
- sharing: true
- register: test_create_disk_sharing_invalid
- ignore_errors: true
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_persistent"
+ controller_type: 'nvme'
+ controller_number: 0
+ unit_number: 1
+ size: 1gb
+ state: present
+ type: thin
+ register: test_create_nvme_disk
- debug:
- msg: "{{ test_create_disk_sharing_invalid }}"
+ msg: "{{ test_create_nvme_disk }}"
-- name: assert that changes were not made
+- name: assert that changes were made
assert:
that:
- - not(test_create_disk_sharing_invalid is changed)
+ - test_create_nvme_disk is changed
-- name: remove disk with destroy file
+- name: create 2 new disks on existing SATA controller and NVMe controller
vmware_guest_disk:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -347,19 +421,115 @@
validate_certs: false
name: "{{ virtual_machines[0].name }}"
disk:
- - state: "absent"
- scsi_controller: 0
- unit_number: 3
- destroy: true
- - state: "absent"
- scsi_controller: 0
- unit_number: 4
- register: test_remove_with_destroy
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_persistent"
+ controller_type: 'sata'
+ controller_number: 1
+ unit_number: 6
+ size_gb: 1
+ state: present
+ - datastore: "{{ rw_datastore }}"
+ disk_mode: "independent_persistent"
+ controller_type: 'nvme'
+ controller_number: 0
+ unit_number: 4
+ size: 1gb
+ state: present
+ type: thin
+ register: test_create_two_disks
- debug:
- msg: "{{ test_remove_with_destroy }}"
+ msg: "{{ test_create_two_disks }}"
- name: assert that changes were made
assert:
that:
- - test_remove_with_destroy is changed
+ - test_create_two_disks is changed
+
+- when: vcsim is not defined
+ block:
+ - name: re-configure SATA disk
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: false
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - disk_mode: "independent_nonpersistent"
+ controller_type: 'sata'
+ controller_number: 1
+ unit_number: 3
+ size_gb: 2
+ state: present
+ shares:
+ level: custom
+ level_value: 1200
+ register: test_reconfig_sata_disk
+
+ - debug:
+ msg: "{{ test_reconfig_sata_disk }}"
+
+ - name: assert that changes were made
+ assert:
+ that:
+ - test_reconfig_sata_disk is changed
+
+ - name: re-configure NVMe disk
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: false
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - controller_type: 'nvme'
+ controller_number: 0
+ unit_number: 1
+ size_gb: 2
+ state: present
+ iolimit:
+ limit: 1507
+ shares:
+ level: custom
+ level_value: 1000
+ register: test_reconfig_nvme_disk
+
+ - debug:
+ msg: "{{ test_reconfig_nvme_disk }}"
+
+ - name: assert that changes were made
+ assert:
+ that:
+ - test_reconfig_nvme_disk is changed
+
+ - name: remove SATA and NVMe disks
+ vmware_guest_disk:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ validate_certs: false
+ name: "{{ virtual_machines[0].name }}"
+ disk:
+ - state: "absent"
+ controller_type: 'sata'
+ controller_number: 1
+ unit_number: 3
+ destroy: false
+ - state: "absent"
+ controller_type: 'nvme'
+ controller_number: 0
+ unit_number: 1
+ destroy: true
+ register: test_remove_sata_nvme_disk
+
+ - debug:
+ msg: "{{ test_remove_sata_nvme_disk }}"
+
+ - name: assert that changes were made
+ assert:
+ that:
+ - test_remove_sata_nvme_disk is changed
| vmware_guest_disk: add support for NVME and SATA disk controller type
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest_disk
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
We want to configure disks attached to other controller types using vmware_guest_disk module.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
Hi @Akasurde, I want to make some changes to the "vmware_guest_disk" module to support NVME and SATA controllers. What do you think? Thanks.
> I want to make some changes to the "vmware_guest_disk" module to support NVME and SATA controllers. What do you think?
@Akasurde didn't object, so I'd say he agreed implicitly. Go ahead! Personally, I think it would be great if you would implement this.
@Tomorrow9 Now that #201 is merged, I think you should close this issue. Or is there anything left to be implemented?
@mariolenz I think this module should handle SATA and NVMe disks as well. And vmware_guest cannot remove disks; it can only add and reconfigure disks.
> I think this module should handle SATA and NVMe disks as well. And vmware_guest cannot remove disks; it can only add and reconfigure disks.
Sorry, my mistake... I've just realized that this issue is about vmware_guest **_disk** but the PR implemented SATA and NVME disk controller types for **vmware_guest**. I've overlooked that they are both about SATA and NVME, but address different modules.
You're absolutely right, there's no reason to close this issue. Sorry again :-/
No need to be sorry :). Some functions are duplicated between these two modules, and I had the same concern, but since new VM creation in the "vmware_guest" module also needs the other disk controller types, I added that support there first. Thanks. | 2020-07-16T09:22:14 |
ansible-collections/community.vmware | 332 | ansible-collections__community.vmware-332 | [
"286"
] | dac471e2ad9b6c29e268b116c3c41e20fae8edf0 | diff --git a/plugins/module_utils/vmware.py b/plugins/module_utils/vmware.py
--- a/plugins/module_utils/vmware.py
+++ b/plugins/module_utils/vmware.py
@@ -855,14 +855,16 @@ def is_truthy(value):
# options is the dict as defined in the module parameters, current_options is
# the list of the currently set options as returned by the vSphere API.
-def option_diff(options, current_options):
+# When truthy_strings_as_bool is True, strings like 'true', 'off' or 'yes'
+# are converted to booleans.
+def option_diff(options, current_options, truthy_strings_as_bool=True):
current_options_dict = {}
for option in current_options:
current_options_dict[option.key] = option.value
change_option_list = []
for option_key, option_value in options.items():
- if is_boolean(option_value):
+ if truthy_strings_as_bool and is_boolean(option_value):
option_value = VmomiSupport.vmodlTypes['bool'](is_truthy(option_value))
elif isinstance(option_value, int):
option_value = VmomiSupport.vmodlTypes['int'](option_value)
diff --git a/plugins/modules/vmware_cluster_ha.py b/plugins/modules/vmware_cluster_ha.py
--- a/plugins/modules/vmware_cluster_ha.py
+++ b/plugins/modules/vmware_cluster_ha.py
@@ -266,7 +266,7 @@ def __init__(self, module):
self.advanced_settings = self.params.get('advanced_settings')
if self.advanced_settings:
- self.changed_advanced_settings = option_diff(self.advanced_settings, self.cluster.configurationEx.dasConfig.option)
+ self.changed_advanced_settings = option_diff(self.advanced_settings, self.cluster.configurationEx.dasConfig.option, False)
else:
self.changed_advanced_settings = None
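To illustrate the effect of `truthy_strings_as_bool=False`: HA advanced options like `das.includeFTcomplianceChecks` are now passed through as the literal string 'false' instead of being coerced to a boolean, which is what this vSphere API call expects. A sketch mirroring the integration test below; the connection variables are placeholders.

```yaml
- name: Set an HA advanced option that expects the string 'false' (sketch)
  community.vmware.vmware_cluster_ha:
    hostname: "{{ vcenter_hostname }}"   # placeholder connection vars
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    datacenter_name: "{{ dc1 }}"
    cluster_name: test_cluster_ha
    advanced_settings:
      'das.includeFTcomplianceChecks': 'false'  # stays a string end to end
  delegate_to: localhost
```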
| diff --git a/tests/integration/targets/vmware_cluster_ha/tasks/main.yml b/tests/integration/targets/vmware_cluster_ha/tasks/main.yml
--- a/tests/integration/targets/vmware_cluster_ha/tasks/main.yml
+++ b/tests/integration/targets/vmware_cluster_ha/tasks/main.yml
@@ -162,6 +162,39 @@
that:
- not change_num_heartbeat_ds_again.changed
+ - name: Change advanced setting "das.includeFTcomplianceChecks" (check-mode)
+ vmware_cluster_ha: &change_includeFTcomplianceChecks
+ validate_certs: False
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ advanced_settings:
+ 'das.includeFTcomplianceChecks': 'false'
+ check_mode: yes
+ register: change_includeFTcomplianceChecks_check
+
+ - assert:
+ that:
+ - change_includeFTcomplianceChecks_check.changed
+
+ - name: Change advanced setting "das.includeFTcomplianceChecks"
+ vmware_cluster_ha: *change_includeFTcomplianceChecks
+ register: change_includeFTcomplianceChecks
+
+ - assert:
+ that:
+ - change_includeFTcomplianceChecks.changed
+
+ - name: Change advanced setting "das.includeFTcomplianceChecks" again
+ vmware_cluster_ha: *change_includeFTcomplianceChecks
+ register: change_includeFTcomplianceChecks_again
+
+ - assert:
+ that:
+ - not change_includeFTcomplianceChecks_again.changed
+
# Delete test cluster
- name: Delete test cluster
vmware_cluster:
diff --git a/tests/integration/targets/vmware_host_config_manager/tasks/main.yml b/tests/integration/targets/vmware_host_config_manager/tasks/main.yml
--- a/tests/integration/targets/vmware_host_config_manager/tasks/main.yml
+++ b/tests/integration/targets/vmware_host_config_manager/tasks/main.yml
@@ -88,3 +88,39 @@
assert:
that:
- all_hosts_result_check_mode.changed
+
+ # Test that PR 332 doesn't break boolean settings for this module.
+ - name: Change a boolean setting for a given host (check-mode)
+ vmware_host_config_manager: &change_logDirUnique
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ esxi_hostname: '{{ esxi1 }}'
+ options:
+ 'Syslog.global.logDirUnique': true
+ validate_certs: no
+ check_mode: yes
+ register: change_logDirUnique_check
+
+ - name: ensure changes would be done to given host
+ assert:
+ that:
+ - change_logDirUnique_check.changed
+
+ - name: Change a boolean setting for a given host
+ vmware_host_config_manager: *change_logDirUnique
+ register: change_logDirUnique
+
+ - name: ensure changes are done to given host
+ assert:
+ that:
+ - change_logDirUnique.changed
+
+ - name: Change a boolean setting for a given host again
+ vmware_host_config_manager: *change_logDirUnique
+ register: change_logDirUnique_again
+
+ - name: ensure changes are not done to given host
+ assert:
+ that:
+ - not change_logDirUnique_again.changed
| vmware_cluster_(ha|drs) - Addition of Advanced Options to HA vCenter cluster throws errors due to incompatible types
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Unable to add Advanced Options to vCenter HA Enabled cluster due to incompatible types being sent to the API
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_cluster_ha
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
None
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
**OS**: Photon OS
**vCenter version**: 6.7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Simply copy/paste the example from the vmware_cluster_ha module documentation, fill in the correct values, and make sure you are using the collection module, not the one shipped with Ansible 2.9.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Enable HA without admission control
vmware_cluster_ha:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: '{{ datacenter }}'
cluster_name: '{{ cluster_name }}'
enable_ha: yes
validate_certs: False
advanced_settings:
'das.includeFTcomplianceChecks': 'false'
delegate_to: localhost
tags:
- vcenter_configuration
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
{
"changed": true,
"result": null,
"invocation": {
"module_args": {
"hostname": "vcenter_host",
"username": "vcenter_user",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"datacenter_name": "datacenter_name",
"cluster_name": "cluster_name",
"enable_ha": true,
"validate_certs": false,
"advanced_settings": {
"das.includeFTcomplianceChecks": "false"
},
"datacenter": "datacenter_name",
"port": 443,
"ha_host_monitoring": "enabled",
"host_isolation_response": "none",
"ha_vm_monitoring": "vmMonitoringDisabled",
"ha_vm_failure_interval": 30,
"ha_vm_min_up_time": 120,
"ha_vm_max_failures": 3,
"ha_vm_max_failure_window": -1,
"ha_restart_priority": "medium",
"proxy_host": null,
"proxy_port": null,
"slot_based_admission_control": null,
"reservation_based_admission_control": null,
"failover_host_admission_control": null
}
},
"_ansible_no_log": false,
"_ansible_delegated_vars": {
"ansible_host": "localhost"
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
{
"msg": "('The request refers to an unexpected or unknown type.', None)",
"exception": " File \"/tmp/ansible_community.vmware.vmware_cluster_ha_payload_brhahxm0/ansible_community.vmware.vmware_cluster_ha_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_cluster_ha.py\", line 409, in configure_ha\n File \"/tmp/ansible_community.vmware.vmware_cluster_ha_payload_brhahxm0/ansible_community.vmware.vmware_cluster_ha_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 80, in wait_for_task\n raise_from(TaskError(error_msg, host_thumbprint), task.info.error)\n File \"<string>\", line 3, in raise_from\n",
"invocation": {
"module_args": {
"hostname": "vcenter_host",
"username": "vcenter_user",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"datacenter_name": "datacenter_name",
"cluster_name": "cluster_name",
"enable_ha": true,
"validate_certs": false,
"advanced_settings": {
"das.includeFTcomplianceChecks": "no"
},
"datacenter": "datacenter_name",
"port": 443,
"ha_host_monitoring": "enabled",
"host_isolation_response": "none",
"ha_vm_monitoring": "vmMonitoringDisabled",
"ha_vm_failure_interval": 30,
"ha_vm_min_up_time": 120,
"ha_vm_max_failures": 3,
"ha_vm_max_failure_window": -1,
"ha_restart_priority": "medium",
"proxy_host": null,
"proxy_port": null,
"slot_based_admission_control": null,
"reservation_based_admission_control": null,
"failover_host_admission_control": null
}
},
"_ansible_no_log": false,
"changed": false,
"_ansible_delegated_vars": {
"ansible_host": "localhost"
}
}
```
| The problem comes from https://github.com/ansible-collections/vmware/blob/main/plugins/module_utils/vmware.py#L864.
It tries to parse the value to a boolean type, but this API only accepts string values.
We have monkey patched it the following way in order to suit our case:
```
for option_key, option_value in options.items():
if self.is_boolean(option_value):
option_value = VmomiSupport.vmodlTypes['string'](option_value)
```
I.e. since we are only passing boolean values, we only changed this line, but it will probably throw the same error for other types different than strings.
@Akasurde @goneri may you have a look at it, please?
> The problem comes from https://github.com/ansible-collections/vmware/blob/main/plugins/module_utils/vmware.py#L864.
> It tries to parse the value to a boolean type, but this API only accepts string values.
>
> We have monkey patched it the following way in order to suit our case:
>
> ```
> for option_key, option_value in options.items():
> if self.is_boolean(option_value):
> option_value = VmomiSupport.vmodlTypes['string'](option_value)
> ```
The problem I see here is that if someone wants to set `das.includeFTcomplianceChecks` that expects a string `false`, and also `das.someOtherSetting` that expects a boolean `false` your solution would set a string value although `das.someOtherSetting` requires a bool. I can't think of a solution for this at the moment.
For ESXi hosts, we can query the API for supported advanced settings including their type. Unfortunately, it looks like we can't do this for advanced cluster settings... at least, I haven’t found a way to do this.
I've more or less stolen this from vmware_host_config_manager (ansible/ansible#62801 and ansible/ansible#65675) in the hope it would just work. Obviously, I was wrong :-(
For the record: I don't mention this to blame the authors of vmware_host_config_manager (I don't know about any problems with this module, so the code works well there), it's just that we lost a lot of commit history when these modules were migrated to a collection. And linking to the ansible PRs makes it easier for people to understand how and why this code was written... if nothing else, it will make it easier for _me_ to understand what I did and why ^^
@aleksandar-kinanov @mariolenz Do you think we can add an extra parameter to override the boolean check in the `option_diff` function?
From VMware documentation - https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.option.OptionValue.html -
```
The value of the option. The Any data object type enables you to define any value for the option. Typically, however, the value of an option is of type String or Integer.
```
Let me know.
> Do you think we can add an extra parameter to override the boolean check in the `option_diff` function?
The problem is: How do we distinguish between advanced settings that require a string `true` / `false` and those that require a boolean `true` / `false`? Anyway, I think we should give it a try and hope HA advanced settings always work with strings and never with booleans. I'll work on a PR and then we'll just see what happens, OK?
Personally, I think this is a case of bad API design. If it's a question of `true` or `false`, why use strings instead of booleans? Feel free to kick VMware for this... and if they complain, tell me and I'll kick 'em again.
@mariolenz Are you working on this? | 2020-08-04T16:15:34 |
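For reference, here is a sketch contrasting the two cases discussed in this thread, with values taken from the integration tests above. Whether a given advanced setting expects a string or a boolean depends on the module and the setting itself; the connection variables are placeholders.

```yaml
# Sketch: the same false/true literal is typed differently per module.
- name: HA cluster option that must remain a string
  community.vmware.vmware_cluster_ha:
    hostname: "{{ vcenter_hostname }}"   # placeholders
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    datacenter_name: "{{ dc1 }}"
    cluster_name: test_cluster_ha
    advanced_settings:
      'das.includeFTcomplianceChecks': 'false'   # string, not coerced
  delegate_to: localhost

- name: Host option that is a genuine boolean
  community.vmware.vmware_host_config_manager:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    esxi_hostname: "{{ esxi1 }}"
    options:
      'Syslog.global.logDirUnique': true         # boolean still works
  delegate_to: localhost
```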
ansible-collections/community.vmware | 354 | ansible-collections__community.vmware-354 | [
"163"
] | 06814b22dc8a27b68062ce4366c287c76889a71d | diff --git a/plugins/modules/vmware_content_deploy_ovf_template.py b/plugins/modules/vmware_content_deploy_ovf_template.py
--- a/plugins/modules/vmware_content_deploy_ovf_template.py
+++ b/plugins/modules/vmware_content_deploy_ovf_template.py
@@ -52,14 +52,14 @@
required: True
host:
description:
- - Name of the ESX Host in datacenter in which to place deployed VM.
+ - Name of the ESX Host in datacenter in which to place deployed VM. The host has to be a member of the cluster that contains the resource pool.
type: str
required: True
resource_pool:
description:
- Name of the resourcepool in datacenter in which to place deployed VM.
type: str
- required: False
+ required: True
cluster:
description:
- Name of the cluster in datacenter in which to place deployed VM.
@@ -165,11 +165,9 @@ def deploy_vm_from_ovf_template(self):
if not self.host_id:
self.module.fail_json(msg="Failed to find the Host %s" % self.host)
# Find the resourcepool by the given resourcepool name
- self.resourcepool_id = None
- if self.resourcepool:
- self.resourcepool_id = self.get_resource_pool_by_name(self.datacenter, self.resourcepool)
- if not self.resourcepool_id:
- self.module.fail_json(msg="Failed to find the resource_pool %s" % self.resourcepool)
+ self.resourcepool_id = self.get_resource_pool_by_name(self.datacenter, self.resourcepool)
+ if not self.resourcepool_id:
+ self.module.fail_json(msg="Failed to find the resource_pool %s" % self.resourcepool)
# Find the Cluster by the given Cluster name
self.cluster_id = None
if self.cluster:
@@ -230,7 +228,7 @@ def main():
datastore=dict(type='str', required=True),
folder=dict(type='str', required=True),
host=dict(type='str', required=True),
- resource_pool=dict(type='str', required=False),
+ resource_pool=dict(type='str', required=True),
cluster=dict(type='str', required=False),
storage_provisioning=dict(type='str',
required=False,
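With `resource_pool` now mandatory, a minimal deploy task must name one explicitly. The sketch below reuses the values from the issue's expected results; they are environment-specific placeholders, and the host must belong to the cluster that contains the resource pool, per the documentation change above.

```yaml
- name: Deploy a VM from a content-library OVF template (sketch)
  community.vmware.vmware_content_deploy_ovf_template:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    name: filebeat02                 # values from the issue report
    ovf_template: centos-8.1
    datacenter: PC
    datastore: esxi2-datastore
    folder: vm
    host: esxi2.ddiguru.net          # must be in the resource pool's cluster
    resource_pool: test_rp           # now a required parameter
  delegate_to: localhost
```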
| vmware_content_deploy_ovf_template fails requiring resource_pool_id
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Attempting to create a VM using the vmware_content_deploy_ovf_template module. It's failing with:
```File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/converter.py\", line 257, in _visit_vapi_struct\n raise CoreException(msg)\nvmware.vapi.exception.CoreException: Field resource_pool_id missing from Structure com.vmware.vcenter.ovf.library_item.deployment_target
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_content_deploy_ovf_template module
##### ANSIBLE VERSION
```paste below
ansible 2.9.7
config file = /Users/ppiper/workspace/ansible/ansible.cfg
configured module search path = ['/Users/ppiper/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.7 (default, Mar 10 2020, 15:43:33) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_FORKS(/Users/ppiper/workspace/ansible/ansible.cfg) = 10
DEFAULT_HOST_LIST(/Users/ppiper/workspace/ansible/ansible.cfg) = ['/Users/ppiper/workspace/ansible/inventories/production/hosts']
DEFAULT_REMOTE_USER(/Users/ppiper/workspace/ansible/ansible.cfg) = ansible
HOST_KEY_CHECKING(/Users/ppiper/workspace/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/Users/ppiper/workspace/ansible/ansible.cfg) = auto
```
##### OS / ENVIRONMENT
```
System Version: macOS 10.15.4 (19E287)
Kernel Version: Darwin 19.4.0
Model Name: Mac mini
Model Identifier: Macmini8,1
Processor Name: 6-Core Intel Core i7
Processor Speed: 3.2 GHz
Number of Processors: 1
Total Number of Cores: 6
L2 Cache (per Core): 256 KB
L3 Cache: 12 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB
Boot ROM Version: 1037.100.362.0.0 (iBridge: 17.16.14281.0.0,0)
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```
ansible-playbook test.yml -l test-centos -vvv
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# This playbook creates a single virtual machine from vcenter using vmware_guest module
- hosts: all
serial: 1
connection: local
gather_facts: no
tasks:
- name: Create a VM from a template
community.vmware.vmware_content_deploy_ovf_template:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: "{{ inventory_hostname }}"
ovf_template: "{{ vmware_template }}"
datacenter: "{{ vmware_datacenter }}"
datastore: "{{ esxi_hostname }}-datastore"
host: "{{ esxi_hostname }}.ddiguru.net"
folder: "{{ vmware_folder }}"
delegate_to: localhost
register: deploy
- name: Show Debug results
debug:
msg: "{{ deploy }}"
tags:
- always
when: debug == 'yes'
```
NOTE: when i created a Resource Pool called "test_rp" and added that to the above config. it works.
##### EXPECTED RESULTS
```TASK [Create a VM from a template] ***************************************************************************************************************************************************
task path: /Users/ppiper/workspace/ansible/test.yml:52
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: ppiper
<localhost> EXEC /bin/sh -c 'echo ~ppiper && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/ppiper/.ansible/tmp `"&& mkdir /Users/ppiper/.ansible/tmp/ansible-tmp-1588623536.459615-35952-273111745314636 && echo ansible-tmp-1588623536.459615-35952-273111745314636="` echo /Users/ppiper/.ansible/tmp/ansible-tmp-1588623536.459615-35952-273111745314636 `" ) && sleep 0'
Using module file /Users/ppiper/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_content_deploy_ovf_template.py
<localhost> PUT /Users/ppiper/.ansible/tmp/ansible-local-35947j491acn4/tmp1iw3kd_8 TO /Users/ppiper/.ansible/tmp/ansible-tmp-1588623536.459615-35952-273111745314636/AnsiballZ_vmware_content_deploy_ovf_template.py
<localhost> EXEC /bin/sh -c 'chmod u+x /Users/ppiper/.ansible/tmp/ansible-tmp-1588623536.459615-35952-273111745314636/ /Users/ppiper/.ansible/tmp/ansible-tmp-1588623536.459615-35952-273111745314636/AnsiballZ_vmware_content_deploy_ovf_template.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/local/bin/python3 /Users/ppiper/.ansible/tmp/ansible-tmp-1588623536.459615-35952-273111745314636/AnsiballZ_vmware_content_deploy_ovf_template.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /Users/ppiper/.ansible/tmp/ansible-tmp-1588623536.459615-35952-273111745314636/ > /dev/null 2>&1 && sleep 0'
changed: [filebeat02 -> localhost] => {
"changed": true,
"invocation": {
"module_args": {
"cluster": null,
"datacenter": "PC",
"datastore": "esxi2-datastore",
"folder": "vm",
"host": "esxi2.ddiguru.net",
"hostname": "vcenter.ddiguru.net",
"name": "filebeat02",
"ovf_template": "centos-8.1",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"protocol": "https",
"resource_pool": "test_rp",
"username": "[email protected]",
"validate_certs": false
}
},
"vm_deploy_info": {
"msg": "Deployed Virtual Machine 'filebeat02'.",
"vm_id": "vm-2529"
}
}
```
NOTE: I SHOULD NOT have to supply the resource_pool "test_rp" to create the VM from OVF content library template.
##### ACTUAL RESULTS
```paste below
ap test.yml -l filebeat02 -vvv
ansible-playbook 2.9.7
config file = /Users/ppiper/workspace/ansible/ansible.cfg
configured module search path = ['/Users/ppiper/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.7.7 (default, Mar 10 2020, 15:43:33) [Clang 11.0.0 (clang-1100.0.33.17)]
Using /Users/ppiper/workspace/ansible/ansible.cfg as config file
host_list declined parsing /Users/ppiper/workspace/ansible/inventories/production/hosts as it did not pass its verify_file() method
script declined parsing /Users/ppiper/workspace/ansible/inventories/production/hosts as it did not pass its verify_file() method
auto declined parsing /Users/ppiper/workspace/ansible/inventories/production/hosts as it did not pass its verify_file() method
Parsed /Users/ppiper/workspace/ansible/inventories/production/hosts inventory source with ini plugin
PLAYBOOK: test.yml *******************************************************************************************************************************************************************
1 plays in test.yml
Enter Template [centos-8.1]:
Enter vlan number [40]:
PLAY [all] ***************************************************************************************************************************************************************************
META: ran handlers
TASK [Create a VM from a template] ***************************************************************************************************************************************************
task path: /Users/ppiper/workspace/ansible/test.yml:52
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: ppiper
<localhost> EXEC /bin/sh -c 'echo ~ppiper && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/ppiper/.ansible/tmp `"&& mkdir /Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703 && echo ansible-tmp-1588623093.0298822-35766-66306551630703="` echo /Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703 `" ) && sleep 0'
Using module file /Users/ppiper/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_content_deploy_ovf_template.py
<localhost> PUT /Users/ppiper/.ansible/tmp/ansible-local-357616k4gnoqi/tmp5hy3c78z TO /Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703/AnsiballZ_vmware_content_deploy_ovf_template.py
<localhost> EXEC /bin/sh -c 'chmod u+x /Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703/ /Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703/AnsiballZ_vmware_content_deploy_ovf_template.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/local/bin/python3 /Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703/AnsiballZ_vmware_content_deploy_ovf_template.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703/ > /dev/null 2>&1 && sleep 0'
fatal: [filebeat02 -> localhost]: FAILED! => {
"changed": false,
"module_stderr": "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\n/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\n/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\n/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\n/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\n/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\n/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\n/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\n/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/converter.py\", line 251, in _visit_vapi_struct\n raise AttributeError\nAttributeError\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703/AnsiballZ_vmware_content_deploy_ovf_template.py\", line 102, in <module>\n _ansiballz_main()\n File \"/Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703/AnsiballZ_vmware_content_deploy_ovf_template.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/ppiper/.ansible/tmp/ansible-tmp-1588623093.0298822-35766-66306551630703/AnsiballZ_vmware_content_deploy_ovf_template.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_content_deploy_ovf_template', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/var/folders/zs/v62z4ydn6f757kkthdr67z_r0000gn/T/ansible_community.vmware.vmware_content_deploy_ovf_template_payload_ffs446d1/ansible_community.vmware.vmware_content_deploy_ovf_template_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_content_deploy_ovf_template.py\", line 235, in <module>\n File \"/var/folders/zs/v62z4ydn6f757kkthdr67z_r0000gn/T/ansible_community.vmware.vmware_content_deploy_ovf_template_payload_ffs446d1/ansible_community.vmware.vmware_content_deploy_ovf_template_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_content_deploy_ovf_template.py\", line 231, in main\n File \"/var/folders/zs/v62z4ydn6f757kkthdr67z_r0000gn/T/ansible_community.vmware.vmware_content_deploy_ovf_template_payload_ffs446d1/ansible_community.vmware.vmware_content_deploy_ovf_template_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_content_deploy_ovf_template.py\", line 169, in deploy_vm_from_ovf_template\n File \"/usr/local/lib/python3.7/site-packages/com/vmware/vcenter/ovf_client.py\", line 2985, in filter\n 'target': target,\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/stub.py\", line 345, in _invoke\n return self._api_interface.native_invoke(ctx, _method_name, kwargs)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/stub.py\", line 230, in native_invoke\n self._rest_converter_mode)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/converter.py\", line 1117, in convert_to_vapi\n binding_type.accept(visitor)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/type.py\", line 40, in accept\n visitor.visit(self)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/lib/visitor.py\", line 43, in visit\n return method(value)\n File 
\"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/converter.py\", line 327, in visit_struct\n self._visit_python_dict(typ)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/converter.py\", line 306, in _visit_python_dict\n self.visit(field_type)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/lib/visitor.py\", line 43, in visit\n return method(value)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/converter.py\", line 434, in visit_reference\n self.visit(typ.resolved_type)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/lib/visitor.py\", line 43, in visit\n return method(value)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/converter.py\", line 319, in visit_struct\n self._visit_vapi_struct(typ)\n File \"/usr/local/lib/python3.7/site-packages/vmware/vapi/bindings/converter.py\", line 257, in _visit_vapi_struct\n raise CoreException(msg)\nvmware.vapi.exception.CoreException: Field resource_pool_id missing from Structure com.vmware.vcenter.ovf.library_item.deployment_target\n/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vcenter.ddiguru.net'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning,\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
| Current Python modules installed include:
```
Package Version
---------------------------------- ------------------------
ansible 2.9.7
asn1crypto 1.3.0
certifi 2020.4.5.1
cffi 1.14.0
chardet 3.0.4
click 7.1.2
colorama 0.4.3
cryptography 2.9.2
dnspython 1.16.0
docopt 0.6.2
et-xmlfile 1.0.1
Faker 4.0.3
ibx 0.0.post0.dev17+g2bee154
idna 2.9
iptools 0.7.0
jdcal 1.4.1
Jinja2 2.11.2
lxml 4.5.0
MarkupSafe 1.1.1
mysql 0.0.2
mysql-connector 2.2.9
mysqlclient 1.4.6
netaddr 0.7.19
nsx-policy-python-sdk 2.5.1.0.1.15419398
nsx-python-sdk 2.5.1.0.1.15419398
nsx-vmc-aws-integration-python-sdk 2.5.1.0.1.15419398
nsx-vmc-policy-python-sdk 2.5.1.0.1.15419398
openpyxl 3.0.3
passlib 1.7.2
pexpect 4.8.0
pip 20.1
prompt-toolkit 3.0.5
protobuf 3.11.4
ptyprocess 0.6.0
pycodestyle 2.5.0
pycparser 2.20
pyflakes 2.2.0
Pygments 2.6.1
pyOpenSSL 19.1.0
python-dateutil 2.8.1
python-dotenv 0.13.0
python-nmcli 0.1.1
pyvmomi 7.0
PyYAML 5.3.1
requests 2.23.0
setuptools 46.1.3
six 1.14.0
suds-jurko 0.6
termcolor 1.1.0
text-unidecode 1.3
tqdm 4.46.0
urllib3 1.25.9
vapi-client-bindings 3.2.0
vapi-common-client 2.14.0
vapi-runtime 2.14.0
vmc-client-bindings 1.23.0
vmc-draas-client-bindings 1.3.0
vSphere-Automation-SDK 1.25.0
wcwidth 0.1.9
wheel 0.34.2
```
cc @pgbidkar @anusha94
cc @ngp-star
Is there a solution to this problem? @ultral @anusha94 @pgbidkar
cc @Tomorrow9 @goneri @lparkes @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify ---> | 2020-08-20T06:02:37 |
|
ansible-collections/community.vmware | 364 | ansible-collections__community.vmware-364 | [
"362"
] | 7e0f26f1aacd3f2130088e01e65acbaeb9e0153d | diff --git a/plugins/modules/vmware_vmkernel.py b/plugins/modules/vmware_vmkernel.py
--- a/plugins/modules/vmware_vmkernel.py
+++ b/plugins/modules/vmware_vmkernel.py
@@ -614,11 +614,11 @@ def host_vmk_update(self):
changed_services = changed_service_prov = True
if (self.enable_replication and self.vnic.device not in service_type_vmks['vSphereReplication']) or \
- (not self.enable_provisioning and self.vnic.device in service_type_vmks['vSphereReplication']):
+ (not self.enable_replication and self.vnic.device in service_type_vmks['vSphereReplication']):
changed_services = changed_service_rep = True
if (self.enable_replication_nfc and self.vnic.device not in service_type_vmks['vSphereReplicationNFC']) or \
- (not self.enable_provisioning and self.vnic.device in service_type_vmks['vSphereReplicationNFC']):
+ (not self.enable_replication_nfc and self.vnic.device in service_type_vmks['vSphereReplicationNFC']):
changed_services = changed_service_rep_nfc = True
if changed_services:
changed_list.append("services")
| vmware_vmkernel: incorrectly handles idempotence for Repl and Repl_NFC services
---
name: 🐛 Bug report
about: Create a report to help us improve
---
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
vmware_vmkernel incorrectly handles Repl and Repl_NFC services
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Module: vmware_vmkernel
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = None
configured module search path = ['/ansible/playbooks/library']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:25:15) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = ['/ansible/playbooks/library']
DEFAULT_MODULE_UTILS_PATH(env: ANSIBLE_MODULE_UTILS) = ['/ansible/playbooks/library/module_utils']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
VMware vCenter 6.7
ESXi 6.7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Playbook that enables replication and replication_nfc.
Run playbook and it will configure VMK correctly.
Running it again will also show changed, even when the services are correctly configured.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Make VMK port with Replication services enabled
vmware_vmkernel:
hostname: "{{ vcenter_server }}"
username: "{{ vc_service_account }}"
password: "{{ vc_service_account_password }}"
esxi_hostname: "{{ inventory_hostname }}"
validate_certs: False
dvswitch_name: "{{ dvs_name }}"
portgroup_name: "{{ replication_pg }}"
network:
type: 'static'
ip_address: "{{ vmk_rep_ip }}"
subnet_mask: "{{ vmk_rep_subnetmask }}"
enable_replication: True
enable_replication_nfc: True
mtu: 9000
state: present
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
When Repl and Repl_NFC services are enabled the playbook is idempotent.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Every execution of the playbook shows changed, reporting that the VMkernel Adapter services would be updated.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [esxi-makevmk : Make VMK port with Replication services enabled] *********************************************************************
changed: [esxi-host.company.com -> localhost] => {"changed": true, "device": "vmk5", "ipv4": "static", "ipv4_gw": "No override", "ipv4_ip": "192.168.8.17", "ipv4_sm": "255.255.255.0", "msg": "VMkernel Adapter services would be updated", "mtu": 9000, "portgroup": "replication_pg", "services": "Repl, Repl_NFC", "services_previous": "Repl, Repl_NFC", "switch": "dvswitch", "tcpip_stack": "default"}
```
| Files identified in the description:
* [`plugins/modules/vmware_vmkernel.py`](https://github.com/ansible-collections/vmware/blob/main/plugins/modules/vmware_vmkernel.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
The issue appears to be with the following code block:
```
if (self.enable_replication and self.vnic.device not in service_type_vmks['vSphereReplication']) or \
(not **self.enable_provisioning** and self.vnic.device in service_type_vmks['vSphereReplication']):
changed_services = changed_service_rep = True
if (self.enable_replication_nfc and self.vnic.device not in service_type_vmks['vSphereReplicationNFC']) or \
(not **self.enable_provisioning** and self.vnic.device in service_type_vmks['vSphereReplicationNFC']):
changed_services = changed_service_rep_nfc = True
```
Should be
```
if (self.enable_replication and self.vnic.device not in service_type_vmks['vSphereReplication']) or \
(not self.enable_replication and self.vnic.device in service_type_vmks['vSphereReplication']):
changed_services = changed_service_rep = True
if (self.enable_replication_nfc and self.vnic.device not in service_type_vmks['vSphereReplicationNFC']) or \
(not self.enable_replication_nfc and self.vnic.device in service_type_vmks['vSphereReplicationNFC']):
changed_services = changed_service_rep_nfc = True
``` | 2020-08-26T14:56:32 |
|
ansible-collections/community.vmware | 376 | ansible-collections__community.vmware-376 | [
"375"
] | aee551dc1d1f8a57f58f2da47bef7678b2461973 | diff --git a/plugins/modules/vmware_cluster.py b/plugins/modules/vmware_cluster.py
--- a/plugins/modules/vmware_cluster.py
+++ b/plugins/modules/vmware_cluster.py
@@ -411,9 +411,6 @@ def state_create_cluster(self):
if not self.module.check_mode:
self.datacenter.hostFolder.CreateClusterEx(self.cluster_name, cluster_config_spec)
self.module.exit_json(changed=True)
- except vim.fault.DuplicateName:
- # To match other vmware_* modules
- pass
except vmodl.fault.InvalidArgument as invalid_args:
self.module.fail_json(msg="Cluster configuration specification"
" parameter is invalid : %s" % to_native(invalid_args.msg))
| vmware_cluster: Stop eating exceptions
##### SUMMARY
vmware_cluster ignores a `vim.fault.DuplicateName` which is never a good idea:
https://github.com/ansible-collections/vmware/blob/aee551dc1d1f8a57f58f2da47bef7678b2461973/plugins/modules/vmware_cluster.py#L414-L416
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_cluster
##### ANSIBLE VERSION
```
ansible 2.10.1rc2
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Aug 9 2020, 02:16:00) [GCC 7.3.0]
```
##### EXPECTED RESULTS
The module should fail instead of crashing.
##### ACTUAL RESULTS
The module crashes.
| 2020-09-02T17:59:00 |
||
ansible-collections/community.vmware | 383 | ansible-collections__community.vmware-383 | [
"335"
] | df1bc69d5f184cbc6236b2f8ac12ed590b6e3c54 | diff --git a/plugins/module_utils/vmware.py b/plugins/module_utils/vmware.py
--- a/plugins/module_utils/vmware.py
+++ b/plugins/module_utils/vmware.py
@@ -131,6 +131,8 @@ def find_object_by_name(content, name, obj_type, folder=None, recurse=True):
if not isinstance(obj_type, list):
obj_type = [obj_type]
+ name = name.strip()
+
objects = get_all_objs(content, obj_type, folder=folder, recurse=recurse)
for obj in objects:
if unquote(obj.name) == name:
| VMware: Unable to create vSphere Cluster in idempotent manner
_From @kclinden on Nov 03, 2019 01:58_
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Playbook fails on create vSphere Cluster when running second time
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_cluster
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/klinden/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/klinden/.local/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/home/klinden/hosts']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu on Windows 2019 WSL
```
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run playbook a second time on an empty vCenter instance.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# Configure vCenter
- name: Configure vCenter
hosts: localhost
gather_facts: false
connection: local
vars:
vcenter_hostname: vcsa-01a.corp.local
vcenter_username: '[email protected]'
vcenter_password: 'VMware1!'
datacenter_name: 'OneCloud'
cluster_name: 'Cluster-01'
esxi_hostname: esx-01a.corp.local
esxi_username: 'root'
esxi_password: 'VMware1!'
validate_certs: false
state: present
tasks:
- name: Create Datacenter
vmware_datacenter:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: '{{ datacenter_name }}'
validate_certs: '{{ validate_certs }}'
state: present
- name: Create Cluster
vmware_cluster:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: '{{ validate_certs }}'
datacenter_name: '{{ datacenter_name }}'
cluster_name: '{{ cluster_name }} '
state: present
- name: Add ESXi Host to vCenter
vmware_host:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: '{{ validate_certs }}'
datacenter: '{{ datacenter_name }}'
cluster: '{{ cluster_name }} '
esxi_hostname: '{{ esxi_hostname }}'
esxi_username: '{{ esxi_username }}'
esxi_password: '{{ esxi_password }}'
state: present
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
_Copied from original issue: ansible/ansible#64354_
| _From @Akasurde on Nov 15, 2019 11:21_
@kclinden Thanks for reporting this issue. Could you please provide `-vvvv` output for the cluster task for the second run?
Thanks
needs_info
_From @kclinden on Nov 21, 2019 17:32_
> @kclinden Thanks for reporting this issue. Could you please provide `-vvvv` output for the cluster task for the second run?
>
> Thanks
>
> needs_info
I will try to get more verbosity for you in the next week.
_From @Akasurde on Jan 19, 2020 05:25_
@kclinden Any news?
_From @kclinden on Jan 19, 2020 05:44_
Unfortunately I haven’t had any time to dive back into this and have since started to focus on terraform instead.
@Akasurde
Please close this issue. I neither see an expected nor an actual result, and anyway @kclinden moved to Terraform and therefore won't work with us on this.
I don't even understand the problem. The issue is called "Unable to create vSphere Cluster in idempotent manner", but this doesn't make any sense if the steps to reproduce are "run playbook a second time on an empty vCenter instance". If you run the playbook on an _empty_ vCenter, there's no question about idempotency.
Sorry, but I really don't get the problem. If you understand it, please explain it... otherwise, I think you should close this issue.
So the issue that I was seeing was that when I tried to run the same playbook to create a vSphere Cluster with name "Foo" it would error the second time because the cluster already existed. I would think that I could run the same playbook over and over. If a cluster with the name "Foo" already existed then it would report no changes.
@kclinden
I wanted to have a look into this today but, unfortunately, didn't find the time. I hope I will next week.
You ran into this problem with ansible 2.9.0, correct? This is quite old, the latest stable release is 2.9.12. Do you run into the same problem with this version?
Anyway, seeing your actual results would be quite helpful.
I'll try to have a closer look at this next week, but I won't promise anything ;-)
cc @Tomorrow9 @goneri @lparkes @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
@kclinden Is this really correct: `cluster_name: '{{ cluster_name }} '`? I think we have a problem with the trailing space, can you please remove it (`cluster_name: '{{ cluster_name }}'`) and try again?
You see, I think the module searches for 'Cluster-01 ' (literally, that is, with a trailing space) and doesn't find it, but when trying to create the cluster the vCenter trims this to 'Cluster-01' and the module fails because the cluster already exists.
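A minimal sketch of that mismatch (hypothetical values; the patch merged for this issue adds exactly this normalization via `name = name.strip()` in `find_object_by_name`):
```python
# Hypothetical illustration: vCenter stores the trimmed name,
# while the module searches with the raw playbook parameter.
stored_name = 'Cluster-01'    # what vCenter keeps after trimming
param_name = 'Cluster-01 '    # what the playbook passed

print(stored_name == param_name)          # False -> lookup misses, module tries to create
print(stored_name == param_name.strip())  # True  -> idempotent lookup
```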
@Akasurde @Tomorrow9 @goneri @lparkes @pgbidkar @warthog9
I think the problem is that we just pass a vim.fault.DuplicateName exception instead of failing the module:
https://github.com/ansible-collections/vmware/blob/aee551dc1d1f8a57f58f2da47bef7678b2461973/plugins/modules/vmware_cluster.py#L414-L416
However, in theory `state_create_cluster` shouldn't even be called. I think we have a general problem with trailing or leading whitespace, and I also think it's possible that other modules might be affected, too.
Thank you @mariolenz for raising this point. vCenter indeed silently removes the trailing whitespace. So we must do the same operation ourselves if we want to do a resource lookup.
I agree, the `vim.fault.DuplicateName` exception should not be ignored.
@goneri
> Thank you @mariolenz for raising this point. vCenter indeed silently removes the trailing whitespace. So we must do the same operation ourselves if we want to do a resource look up.
The problem is that I'm not sure where to do it. I see three possibilities:
1. trim the parameter in the module: quick'n'dirty, but this doesn't prevent us from running into similar issues with other modules
2. trim in module_utils/vmware.py: tricky; might avoid similar problems with other modules in the future, but might also introduce new ones
3. ansible-base could optionally trim (a parameter option `trim` that can be set to true?) parameters: a lot of modules could possibly use this instead of trimming parameters themselves, but this would mean a change in ansible itself
Another possibility: A parameter option `whitespace_problematic` that would give a warning if a parameter has leading or trailing whitespace; pros and cons similar to 3.
> I agree, the `vim.fault.DuplicateName` exception should not be ignored.
I consider this a different issue and opened #375 to address this.
I would go with Option 2. But I agree, this can also be addressed at the `argument_spec` level. | 2020-09-07T17:51:46 |
|
ansible-collections/community.vmware | 384 | ansible-collections__community.vmware-384 | [
"381"
] | 10310e77f270e0ac47a2309e87e71b184b5ef41b | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -181,7 +181,7 @@
description:
- The Virtual machine hardware versions.
- Default is 10 (ESXi 5.5 and onwards).
- - If value specified as C(latest), version is set to the most current virtual hardware supported on the host.
+ - If set to C(latest), the specified virtual machine will be upgraded to the most current hardware version supported on the host.
- C(latest) is added in Ansible 2.10.
- Please check VMware documentation for correct virtual machine hardware version.
- Incorrect hardware version may lead to failure in deployment. If hardware version is already equal to the given.
@@ -191,8 +191,10 @@
virt_based_security:
type: bool
description:
- - Enable Virtualization Based Security feature for Windows 10.
- - Supported from Virtual machine hardware version 14, Guest OS Windows 10 64 bit, Windows Server 2016.
+ - Enable Virtualization Based Security feature for Windows on ESXi 6.7 and later, from hardware version 14.
+ - Supported Guest OS are Windows 10 64 bit, Windows Server 2016, Windows Server 2019 and later.
+ - The firmware of virtual machine must be EFI.
+ - Deploy on unsupported ESXi, hardware version or firmware may lead to failure or deployed VM with unexpected configurations.
guest_id:
type: str
description:
@@ -270,7 +272,8 @@
description:
- Type of disk controller.
- Valid values are C(buslogic), C(lsilogic), C(lsilogicsas), C(paravirtual), C(sata) and C(nvme).
- - C(nvme) support starts from hardware C(version) 13 and ESXi version 6.5.
+ - C(nvme) controller type support starts on ESXi 6.5 with VM hardware version C(version) 13.
+ Set this type on not supported ESXi or VM hardware version will lead to failure in deployment.
- When set to C(sata), please make sure C(unit_number) is correct and not used by SATA CDROMs.
- If set to C(sata) type, please make sure C(controller_number) and C(unit_number) are set correctly when C(cdrom) also set to C(sata) type.
controller_number:
@@ -1815,31 +1818,16 @@ def configure_hardware_params(self, vm_obj):
pass
if 'virt_based_security' in self.params['hardware']:
- host_version = self.select_host().summary.config.product.version
- if int(host_version.split('.')[0]) < 6 or (int(host_version.split('.')[0]) == 6 and int(host_version.split('.')[1]) < 7):
- self.module.fail_json(msg="ESXi version %s not support VBS." % host_version)
- guest_ids = ['windows9_64Guest', 'windows9Server64Guest']
- if vm_obj is None:
- guestid = self.configspec.guestId
- else:
- guestid = vm_obj.summary.config.guestId
- if guestid not in guest_ids:
- self.module.fail_json(msg="Guest '%s' not support VBS." % guestid)
- if (vm_obj is None and int(self.configspec.version.split('-')[1]) >= 14) or \
- (vm_obj and int(vm_obj.config.version.split('-')[1]) >= 14 and (vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOff)):
+ virt_based_security_set = bool(self.params['hardware']['virt_based_security'])
+ if vm_obj is None or vm_obj.config.flags.vbsEnabled != virt_based_security_set:
self.configspec.flags = vim.vm.FlagInfo()
- self.configspec.flags.vbsEnabled = bool(self.params['hardware']['virt_based_security'])
- if bool(self.params['hardware']['virt_based_security']):
+ self.configspec.flags.vbsEnabled = virt_based_security_set
+ if virt_based_security_set:
self.configspec.flags.vvtdEnabled = True
self.configspec.nestedHVEnabled = True
- if (vm_obj is None and self.configspec.firmware == 'efi') or \
- (vm_obj and vm_obj.config.firmware == 'efi'):
- self.configspec.bootOptions = vim.vm.BootOptions()
- self.configspec.bootOptions.efiSecureBootEnabled = True
- else:
- self.module.fail_json(msg="Not support VBS when firmware is BIOS.")
- if vm_obj is None or self.configspec.flags.vbsEnabled != vm_obj.config.flags.vbsEnabled:
- self.change_detected = True
+ self.configspec.bootOptions = vim.vm.BootOptions()
+ self.configspec.bootOptions.efiSecureBootEnabled = True
+ self.change_detected = True
def get_device_by_type(self, vm=None, type=None):
device_list = []
@@ -2503,21 +2491,6 @@ def sanitize_disk_parameters(self, vm_obj):
self.module.fail_json(msg="'disk.controller_number' value is invalid, valid value is from 0 to 3.")
if ctl_type not in ['buslogic', 'paravirtual', 'lsilogic', 'lsilogicsas', 'sata', 'nvme']:
self.module.fail_json(msg="Disk controller type: '%s' is not supported or invalid." % disk_spec['controller_type'])
- # nvme support starts from hardware version 13 on ESXi 6.5
- if ctl_type == 'nvme':
- if 'version' in self.params['hardware'] and self.params['hardware']['version'] < 13:
- self.module.fail_json(msg="Configured hardware version '%d' not support nvme controller."
- % self.params['hardware']['version'])
- elif self.params['esxi_hostname'] is not None:
- if not self.host_version_at_least(version=(6, 5, 0), host_name=self.params['esxi_hostname']):
- self.module.fail_json(msg="ESXi host '%s' version < 6.5.0, not support nvme controller."
- % self.params['esxi_hostname'])
- elif vm_obj is not None:
- try:
- if int(vm_obj.config.version.split('-')[1]) < 13:
- self.module.fail_json(msg="VM hardware version < 13 not support nvme controller.")
- except ValueError:
- self.module.fail_json(msg="Failed to get VM hardware version to check if nvme is supported.")
if len(controllers) != 0:
ctl_exist = False
vmware_guest: TypeError when creating a VM with NVMe disk controller and hw version set to 'latest'
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When creating a new VM with the disk controller type set to "nvme" and the hardware version set to "latest", a TypeError is raised, because the string "latest" is compared with the integer 13.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0
config file = /root/workspace/newgos_testing/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Create a new VM"
vmware_guest:
hostname: "{{ host_name }}"
username: "{{ host_user }}"
password: "{{ host_user_password }}"
validate_certs: no
datacenter: "test_dc"
folder: ""
name: "{{ vm_name }}"
guest_id: "{{ guest_id }}"
hardware:
memory_mb: 2048
num_cpus: 2
num_cpu_cores_per_socket: 1
version: 'latest'
boot_firmware: 'efi'
disk:
- size_gb: 32
type: thin
datastore: 'test_ds'
controller_type: 'nvme'
controller_number: 0
unit_number: 0
networks:
- device_type: 'vmxnet3'
name: "VM Network"
type: "dhcp"
register: vm_create_result
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible-tmp-1599463857.4835405-608-24613985225299/AnsiballZ_vmware_guest.py", line 102, in <module>
_ansiballz_main()
File "/tmp/ansible-tmp-1599463857.4835405-608-24613985225299/AnsiballZ_vmware_guest.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/tmp/ansible-tmp-1599463857.4835405-608-24613985225299/AnsiballZ_vmware_guest.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_guest', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib/python3.6/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_vmware_guest_payload_toz5zt3c/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 3570, in <module>
File "/tmp/ansible_vmware_guest_payload_toz5zt3c/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 3559, in main
File "/tmp/ansible_vmware_guest_payload_toz5zt3c/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 3072, in deploy_vm
File "/tmp/ansible_vmware_guest_payload_toz5zt3c/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 2649, in configure_disks
File "/tmp/ansible_vmware_guest_payload_toz5zt3c/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 2585, in configure_multiple_controllers_disks
File "/tmp/ansible_vmware_guest_payload_toz5zt3c/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 2508, in sanitize_disk_parameters
TypeError: '<' not supported between instances of 'str' and 'int'
```
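For context, the traceback boils down to an untyped comparison; a minimal sketch of the failure mode (the `version` variable here is illustrative — the merged patch for this issue removes these hard-coded version checks entirely rather than guarding them):
```python
version = 'latest'  # hardware.version may be a string, not an int

try:
    version < 13  # what the check in sanitize_disk_parameters effectively did
except TypeError as e:
    print(e)  # '<' not supported between instances of 'str' and 'int'

# A hedged guard, if such a check were kept instead of removed:
if isinstance(version, int) and version < 13:
    print("hardware version too old for an nvme controller")
```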
| 2020-09-08T07:39:48 |
||
ansible-collections/community.vmware | 400 | ansible-collections__community.vmware-400 | [
"359"
] | 13bdb32ccc380ad2f6d1b9efd36d3eaed3956ec6 | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -3157,10 +3157,14 @@ def deploy_vm(self):
snapshotDirectory=None,
suspendDirectory=None,
vmPathName="[" + datastore_name + "]")
+ esx_host = None
+ # Only select specific host when ESXi hostname is provided
+ if self.params['esxi_hostname']:
+ esx_host = self.select_host()
clone_method = 'CreateVM_Task'
try:
- task = destfolder.CreateVM_Task(config=self.configspec, pool=resource_pool)
+ task = destfolder.CreateVM_Task(config=self.configspec, pool=resource_pool, host=esx_host)
except vmodl.fault.InvalidRequest as e:
self.module.fail_json(msg="Failed to create virtual machine due to invalid configuration "
"parameter %s" % to_native(e.msg))
| Provision VM on a preferred ESXi Host with `vmware_guest`
##### SUMMARY
According to the [documentation](https://docs.ansible.com/ansible/latest/modules/vmware_guest_module.html#parameter-esxi_hostname), the `esxi_hostname` attribute defines on which ESXi host the virtual machine will run. But it doesn't actually behave like that.
When I use `vmware_guest` against the vCenter, I didn't find a way to define a preferred ESXi host for my VM to run on.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- plugins/modules/vmware_guest.py
##### ANSIBLE VERSION
```
ansible 2.9.12
config file = /home/xenlo/Projects/463/etc/ansible.cfg
configured module search path = ['/home/xenlo/Projects/463/library']
ansible python module location = /home/xenlo/.virtualenvs/463/lib/python3.8/site-packages/ansible
executable location = /home/xenlo/.virtualenvs/463/bin/ansible
python version = 3.8.5 (default, Aug 12 2020, 00:00:00) [GCC 10.2.1 20200723 (Red Hat 10.2.1-1)]
```
##### CONFIGURATION
```
ANSIBLE_SSH_ARGS(/home/xenlo/Projects/463/etc/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=60
CACHE_PLUGIN(/home/xenlo/Projects/463/etc/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/xenlo/Projects/463/etc/ansible.cfg) = $HOME/.ansible/facts/
CACHE_PLUGIN_TIMEOUT(/home/xenlo/Projects/463/etc/ansible.cfg) = 3600
DEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = ['/home/xenlo/.virtualenvs/463/lib/python3.8/site-packages/ara/plugins/action']
DEFAULT_CALLBACK_PLUGIN_PATH(env: ANSIBLE_CALLBACK_PLUGINS) = ['/home/xenlo/.virtualenvs/463/lib/python3.8/site-packages/ara/plugins/callback']
DEFAULT_CALLBACK_WHITELIST(/home/xenlo/Projects/463/etc/ansible.cfg) = ['profile_tasks']
DEFAULT_FILTER_PLUGIN_PATH(env: ANSIBLE_FILTER_PLUGINS) = ['/home/xenlo/Projects/463/plugins/filter']
DEFAULT_GATHERING(/home/xenlo/Projects/463/etc/ansible.cfg) = smart
DEFAULT_LOG_PATH(/home/xenlo/Projects/463/etc/ansible.cfg) = /home/xenlo/.ansible/log/ansible.log
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = ['/home/xenlo/Projects/463/library']
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = ['/home/xenlo/Projects/463/roles.galaxy', '/home/xenlo/Projects/463/roles']
DEFAULT_STDOUT_CALLBACK(/home/xenlo/Projects/463/etc/ansible.cfg) = yaml
GALAXY_ROLE_SKELETON(env: ANSIBLE_GALAXY_ROLE_SKELETON) = /home/xenlo/Projects/463/etc/skel/default
HOST_KEY_CHECKING(/home/xenlo/Projects/463/etc/ansible.cfg) = False
```
##### OS / ENVIRONMENT
- target: vsphere 6.7.0
##### STEPS TO REPRODUCE
A simple loop task which creates VMs with a preferred host as the `esxi_hostname` attribute.
Here is a sample:
```yaml
---
# Deploy some VMS
- name: Provisions VM hosts
hosts: vcenter
gather_facts: no
vars:
domain_name: "my_domain.net"
vcenter_sso_pass: "{{ ansible_ssh_pass }}"
datacenter_name: "my_little_DC"
datastore_name: "my_vmfs_01"
vm_list:
- name: little-vm-1-01
pref_node: node01
- name: little-vm-1-02
pref_node: node01
- name: little-vm-1-03
pref_node: node01
- name: little-vm-2-02
pref_node: node02
- name: little-vm-2-01
pref_node: node02
- name: little-vm-3-01
pref_node: node03
- name: little-vm-3-02
pref_node: node03
- name: little-vm-3-03
pref_node: node03
- name: little-vm-3-04
pref_node: node03
- name: little-vm-3-05
pref_node: node03
tasks:
- name: Provision VM
vmware_guest:
hostname: "{{ ansible_host }}"
esxi_hostname: "{{ vm.pref_node }}.{{ domain_name }}"
username: "Administrator@{{ domain_name }}"
password: "{{ vcenter_sso_pass }}"
validate_certs: no
name: "{{ vm.name }}.{{ domain_name }}"
state: poweredoff
folder: "/"
datacenter: "{{ datacenter_name }}"
guest_id: "debian9_64Guest"
hardware:
num_cpus: 1
memory_mb: 1024
disk:
- size_gb: 2
type: thin
datastore: "{{ datastore_name }}"
loop: "{{ vm_list }}"
loop_control:
loop_var: vm
delegate_to: localhost
```
##### EXPECTED RESULTS
The VMs are provisioned on their preferred nodes.
##### ACTUAL RESULTS
The VMs are created but on random ESXi hosts.
| Files identified in the description:
* [`plugins/modules/vmware_guest.py`](https://github.com/ansible-collections/vmware/blob/main/plugins/modules/vmware_guest.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @nerzhul @pdellaert @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
I already did some investigation with @goneri.
He created PR #357 to test some changes (setting `self.relospec.host` independently of whether we use a template or not).
And from my side, I tried to force a `RelocateVM_Task(…)` after the `CreateVM_Task(…)`, but without any success. | 2020-09-24T14:24:27 |
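The patch merged for this issue resolves it by handing the selected host straight to the create call instead of relocating afterwards; a trimmed sketch of that change (pyVmomi, names as in the module):
```python
esx_host = None
# Only pin the VM to a specific host when esxi_hostname was provided;
# host=None keeps the previous behaviour and lets vCenter pick the host.
if self.params['esxi_hostname']:
    esx_host = self.select_host()

task = destfolder.CreateVM_Task(config=self.configspec,
                                pool=resource_pool,
                                host=esx_host)
```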
|
ansible-collections/community.vmware | 414 | ansible-collections__community.vmware-414 | [
"412"
] | c1a375edfef64e886250a414945ba453113f46ef | diff --git a/plugins/modules/vmware_guest_custom_attributes.py b/plugins/modules/vmware_guest_custom_attributes.py
--- a/plugins/modules/vmware_guest_custom_attributes.py
+++ b/plugins/modules/vmware_guest_custom_attributes.py
@@ -199,7 +199,8 @@ def set_custom_field(self, vm, user_fields):
def check_exists(self, field):
for x in self.custom_field_mgr:
- if x.name == field:
+ # The custom attribute should be either global (managedObjectType == None) or VM specific
+ if x.managedObjectType in (None, vim.VirtualMachine) and x.name == field:
return x
return False
| vmware_guest_custom_attributes module crashes when trying to set a VirtualMachine attribute with the same name as an existing HostSystem attribute
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When running a task with `vmware_guest_custom_attributes`, if the name of any attribute already exists as a HostSystem attribute, the module will crash with an unhandled exception.
```
pyVmomi.VmomiSupport.InvalidArgument: (vmodl.fault.InvalidArgument) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'A specified parameter was not correct: entity',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
invalidProperty = u'entity'
}
```
The crash is due to the module finding the HostSystem attribute and trying to do a `self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)` with the key of the wrong type of attribute.
The issue happens because in the line https://github.com/ansible-collections/community.vmware/blob/a92ccb0a07cc833e22b13cb838d0696b16ebf64d/plugins/modules/vmware_guest_custom_attributes.py#L202 there is no explicit filtering for VirtualMachine custom attributes. If the loop's first match is a HostSystem attribute, the function will return the wrong type.
This would work if the `check_exists` function were something like:
```
def check_exists(self, field):
for x in self.custom_field_mgr:
if x.name == field and x.managedObjectType == vim.VirtualMachine:
return x
return False
```
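One nuance the merged patch above adds on top of this suggestion: global custom attributes have a `managedObjectType` of `None`, so the check accepts both global and VM-specific definitions:
```python
def check_exists(self, field):
    for x in self.custom_field_mgr:
        # The custom attribute should be either global
        # (managedObjectType == None) or VM specific.
        if x.managedObjectType in (None, vim.VirtualMachine) and x.name == field:
            return x
    return False
```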
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest_custom_attributes
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.13
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/user1/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/user1/virtualenvs/ansible-2.9/lib/python2.7/site-packages/ansible
executable location = /home/user1/virtualenvs/ansible-2.9/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/user1/ansible.aso/ansible.cfg) =
ANSIBLE_SSH_RETRIES(/home/user1/ansible.aso/ansible.cfg) = 2
CACHE_PLUGIN(/home/user1/ansible.aso/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/user1/ansible.aso/ansible.cfg) = inventory/.cache
CACHE_PLUGIN_TIMEOUT(/home/user1/ansible.aso/ansible.cfg) = 86400
DEFAULT_GATHERING(/home/user1/ansible.aso/ansible.cfg) = smart
DEFAULT_LOG_PATH(/home/user1/ansible.aso/ansible.cfg) = /home/user1/ansible.aso/ansible.log
DEFAULT_MANAGED_STR(/home/user1/ansible.aso/ansible.cfg) = Managed by Ansible - DO NOT MODIFY
DEFAULT_TIMEOUT(/home/user1/ansible.aso/ansible.cfg) = 30
HOST_KEY_CHECKING(/home/user1/ansible.aso/ansible.cfg) = False
INVENTORY_CACHE_ENABLED(/home/user1/ansible.aso/ansible.cfg) = True
INVENTORY_CACHE_PLUGIN(/home/user1/ansible.aso/ansible.cfg) = jsonfile
INVENTORY_CACHE_PLUGIN_CONNECTION(/home/user1/ansible.aso/ansible.cfg) = inventory/.cache
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
`CentOS Linux release 7.6.1810 (Core)`
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
With the following playbook:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Write VM custom attributes"
hosts: all
gather_facts: false
tasks:
- name: Add virtual machine custom attributes
vmware_guest_custom_attributes:
hostname: "{{ vm_vcenter_host | default(lookup('env', 'VMWARE_HOST')) }}"
username: "{{ vm_vcenter_user | default(lookup('env', 'VMWARE_USER')) }}"
password: "{{ vm_vcenter_pass | default(lookup('env', 'VMWARE_PASSWORD')) }}"
name: "{{ inventory_hostname }}"
validate_certs: no
state: present
attributes:
- name: "Department"
value: "{{ custom_attribute_department | default('undefined') }}"
delegate_to: localhost
register: attributes
```
vcenter has the following Custom Attributes:
```
(vim.CustomFieldsManager.FieldDef) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
key = 630,
name = 'Department',
type = str,
managedObjectType = vim.HostSystem,
fieldDefPrivileges = <unset>,
fieldInstancePrivileges = <unset>
}
(vim.CustomFieldsManager.FieldDef) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
key = 1044,
name = 'Department',
type = str,
managedObjectType = vim.VirtualMachine,
fieldDefPrivileges = <unset>,
fieldInstancePrivileges = <unset>
}
```
and run as:
`ansible-playbook -i inventory/vm_inventory_testvm.ini playbook_vcenter_custom_annotations.yml -l testvm02 -D --flush-cache -vvv`
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Should create / update the VM custom attribute
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Crashes with exception:
<!--- Paste verbatim command output between quotes -->
```paste below
pyVmomi.VmomiSupport.InvalidArgument: (vmodl.fault.InvalidArgument) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'A specified parameter was not correct: entity',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
invalidProperty = u'entity'
}
```
| Now _that's_ a great issue. Thank you very much for giving us so many details! This should make it much easier for us to fix this. Thanks :-) | 2020-09-30T16:29:49 |
|
ansible-collections/community.vmware | 427 | ansible-collections__community.vmware-427 | [
"426"
] | 50dc6212845e0e562fa80647ef073df4bf04566a | diff --git a/plugins/modules/vmware_object_role_permission.py b/plugins/modules/vmware_object_role_permission.py
--- a/plugins/modules/vmware_object_role_permission.py
+++ b/plugins/modules/vmware_object_role_permission.py
@@ -11,7 +11,7 @@
__metaclass__ = type
-DOCUMENTATION = '''
+DOCUMENTATION = r'''
---
module: vmware_object_role_permission
short_description: Manage local roles on an ESXi host
@@ -73,9 +73,12 @@
'''
-EXAMPLES = '''
+EXAMPLES = r'''
- name: Assign user to VM folder
community.vmware.vmware_object_role_permission:
+ hostname: '{{ esxi_hostname }}'
+ username: '{{ esxi_username }}'
+ password: '{{ esxi_password }}'
role: Admin
principal: user_bob
object_name: services
@@ -84,6 +87,9 @@
- name: Remove user from VM folder
community.vmware.vmware_object_role_permission:
+ hostname: '{{ esxi_hostname }}'
+ username: '{{ esxi_username }}'
+ password: '{{ esxi_password }}'
role: Admin
principal: user_bob
object_name: services
@@ -92,6 +98,9 @@
- name: Assign finance group to VM folder
community.vmware.vmware_object_role_permission:
+ hostname: '{{ esxi_hostname }}'
+ username: '{{ esxi_username }}'
+ password: '{{ esxi_password }}'
role: Limited Users
group: finance
object_name: Accounts
@@ -100,6 +109,9 @@
- name: Assign view_user Read Only permission at root folder
community.vmware.vmware_object_role_permission:
+ hostname: '{{ esxi_hostname }}'
+ username: '{{ esxi_username }}'
+ password: '{{ esxi_password }}'
role: ReadOnly
principal: view_user
object_name: rootFolder
| vmware_object_role_permission: Missing required parameters in examples
##### SUMMARY
The examples are missing the required fields hostname, username, and password, while other modules' examples include these.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
vmware_object_role_permission
##### ADDITIONAL INFORMATION
PR #422
| 2020-10-07T16:44:19 |
||
ansible-collections/community.vmware | 441 | ansible-collections__community.vmware-441 | [
"434"
] | 1985d55ef7d587a9d300c1b99dc8db72fcfba932 | diff --git a/plugins/inventory/vmware_vm_inventory.py b/plugins/inventory/vmware_vm_inventory.py
--- a/plugins/inventory/vmware_vm_inventory.py
+++ b/plugins/inventory/vmware_vm_inventory.py
@@ -280,7 +280,6 @@
import base64
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.module_utils._text import to_text, to_native
-from ansible.module_utils.common.dict_transformations import dict_merge
from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
from ansible.module_utils.common.dict_transformations import _snake_to_camel
from ansible.utils.display import Display
@@ -388,6 +387,7 @@ def _login(self):
service_instance = connect.SmartConnect(host=self.hostname, user=self.username,
pwd=self.password, sslContext=ssl_context,
port=self.port)
+
except vim.fault.InvalidLogin as e:
raise AnsibleParserError("Unable to log on to vCenter or ESXi API at %s:%s as %s: %s" % (self.hostname, self.port, self.username, e.msg))
except vim.fault.NoPermission as e:
@@ -554,6 +554,21 @@ def build_containers(containers, vim_type, names, filters):
return []
+def in_place_merge(a, b):
+ """
+ Recursively merges second dict into the first.
+
+ """
+ if not isinstance(b, dict):
+ return b
+ for k, v in b.items():
+ if k in a and isinstance(a[k], dict):
+ a[k] = in_place_merge(a[k], v)
+ else:
+ a[k] = v
+ return a
+
+
def to_nested_dict(vm_properties):
"""
Parse properties from dot notation to dict
@@ -568,7 +583,7 @@ def to_nested_dict(vm_properties):
for k in prop_parents:
prop_dict = {k: prop_dict}
- host_properties = dict_merge(host_properties, prop_dict)
+ host_properties = in_place_merge(host_properties, prop_dict)
return host_properties
@@ -721,6 +736,8 @@ def _populate_from_source(self):
query_props = None
vm_properties.remove('all')
else:
+ if 'runtime.connectionState' not in vm_properties:
+ vm_properties.append('runtime.connectionState')
query_props = [x for x in vm_properties if x != "customValue"]
objects = self.pyv.get_managed_objects_properties(
@@ -743,14 +760,13 @@ def _populate_from_source(self):
hostnames = self.get_option('hostnames')
for vm_obj in objects:
- if not vm_obj.obj.config:
- # Sometime orphaned VMs return no configurations
- continue
-
properties = dict()
for vm_obj_property in vm_obj.propSet:
properties[vm_obj_property.name] = vm_obj_property.val
+ if (properties.get('runtime.connectionState') or properties['runtime'].connectionState) == 'orphaned':
+ continue
+
# Custom values
if 'customValue' in vm_properties:
field_mgr = []
| Inventory plugin is extremely slow
##### SUMMARY
It takes about 1.5min to get ~350 hosts from vCenter
Related issue: https://github.com/ansible/ansible/issues/56786
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
community.vmware.vmware_vm_inventory
##### ANSIBLE VERSION
```
ansible 2.10.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/lib/python-exec/python3.7/ansible
python version = 3.7.9 (default, Aug 21 2020, 01:37:20) [GCC 9.3.0]
```
##### CONFIGURATION
```
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto_legacy_silent
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['host_list', 'yaml', 'constructed', 'vmware_vm_inventory']
RETRY_FILES_SAVE_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible/retry-files
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Gentoo GNU/Linux
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Plugin configuration:
```yaml
plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: "server"
username: "user"
password: "*****"
validate_certs: False
with_tags: False
properties:
- 'name'
- 'guest.ipAddress'
- 'config.name'
- 'config.uuid'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The Python script from https://github.com/vmware/vsphere-automation-sdk-python#connect-to-a-vcenter-server
does the work in ~3 sec.
```python
import requests
import urllib3
from vmware.vapi.vsphere.client import create_vsphere_client
session = requests.session()
session.verify = False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
client = create_vsphere_client(server=hostname,
username=username,
password=password,
session=session)
print(client.vcenter.VM.list())
```
```
real 0m3.198s
user 0m0.366s
sys 0m0.037s
API invocations: 4
```
##### ACTUAL RESULTS
Ansible inventory - almost 1.5min
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
time ansible-inventory -vvvv --list -i inventory.vmware.yml
<!--- Paste verbatim command output between quotes -->
```paste below
real 1m26.505s
user 0m6.451s
sys 0m0.180s
API invocations: 5
```
| @Perlovka Thanks for reporting this issue.
@dacrystal Would you like to provide your views on this? Thanks.
I have run it on a vCenter with ~100 VMs using the same plugin configuration, and it didn't take that long:
```sh
real 0m3.818s
user 0m1.999s
sys 0m0.172s
```
@Perlovka I assume when you said 350 hosts, you also meant 350 guests(vms) of vCenter, right?
From your time, the process is waiting on I/O (blocking) for **79.874s** while the actual process time is only **6.631s**. It seems there is a huge delay in getting the data from vCenter.
Would you mind generating profile stats and sharing them? You can run the following:
```sh
python -m cProfile -o profile.pstats $(command -v ansible-inventory) -i inventory.vmware.yml --list
```
As a workaround, you can enable `cache` and set a `cache_timeout`.
@dacrystal yes, 350 VMs, not ESXi hosts )
It looks like it makes a query to vCenter for every VM, and because the connection is slow, the total time is so big.
Is there a way to get info about all VMs in one API call? Or does the VMware API not allow this?
```
Tue Oct 13 09:08:03 2020 profile.pstats
16868969 function calls (16677172 primitive calls) in 78.814 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1184 55.276 0.047 55.276 0.047 {method 'read' of '_ssl._SSLSocket' objects}
189660 2.177 0.000 6.865 0.000 SoapAdapter.py:660(StartElementHandler)
189661 1.864 0.000 5.398 0.000 SoapAdapter.py:721(EndElementHandler)
388 1.705 0.004 14.575 0.038 {method 'ParseFile' of 'pyexpat.xmlparser' objects}
539869 1.170 0.000 1.170 0.000 VmomiSupport.py:461(GetPropertyInfo)
47597 1.148 0.000 2.330 0.000 VmomiSupport.py:624(__init__)
4 1.113 0.278 1.113 0.278 {method 'do_handshake' of '_ssl._SSLSocket' objects}
2676028 0.914 0.000 0.990 0.000 {built-in method builtins.isinstance}
```
From the numbers it seems your connection to vCenter is slow. You can see each **read** of `SSLSocket` takes **0.047s**.
I'm running a test over VPN (to a nearby Data Center):
```
708 2.491 0.004 2.491 0.004 {method 'read' of '_ssl._SSLSocket' objects}
```
The plugin is calling vCenter in one pyVmomi API call.
```py
return self.content.propertyCollector.RetrieveContents([filter_spec])
```
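(For reference, a self-contained sketch of what that single-round-trip retrieval looks like end to end with pyVmomi's PropertyCollector; the vCenter hostname and credentials below are placeholders, and the property list mirrors the plugin configuration from the report.)
```python
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim, vmodl

# Placeholder credentials -- not taken from the report.
si = SmartConnectNoSSL(host='vcenter.example.com', user='user', pwd='secret')
content = si.RetrieveContent()

# A container view over every VM in the inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

traversal = vmodl.query.PropertyCollector.TraversalSpec(
    name='traverseView', path='view', skip=False, type=vim.view.ContainerView)
obj_spec = vmodl.query.PropertyCollector.ObjectSpec(
    obj=view, skip=True, selectSet=[traversal])
prop_spec = vmodl.query.PropertyCollector.PropertySpec(
    type=vim.VirtualMachine, all=False,
    pathSet=['name', 'guest.ipAddress', 'config.name', 'config.uuid'])
filter_spec = vmodl.query.PropertyCollector.FilterSpec(
    objectSet=[obj_spec], propSet=[prop_spec])

# One round trip fetches all VMs with all requested properties.
for obj in content.propertyCollector.RetrieveContents([filter_spec]):
    print({p.name: p.val for p in obj.propSet})

view.Destroy()
Disconnect(si)
```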
However, there seem to be rogue calls triggered for each VM. Would you mind removing the following lines in `vmware/plugins/inventory/vmware_vm_inventory.py` (751-753):
```py
if not vm_obj.obj.config:
# Sometime orphaned VMs return no configurations
continue
```
And report the profile of `{method 'read' of '_ssl._SSLSocket' objects}`; this should improve it.
---
@Akasurde I'm planning a refactor for optimization. Here is a list of improvements I have in mind:
- Remove the orphaned-VM checks (TBH, I'm not sure why they are there :sweat_smile:)
- Optimize `to_nested_dict`. Maybe exploring `VmomiSupport.VmomiJSONEncoder`.
- Use `SmartConnect` and `SmartConnectNoSSL`, instead of providing `SSLContext` manually.
- Orphaned VM check is added for VMs which are not completely deleted.
- OK
- OK
For that case, we can internally add `runtime.connectionState` to `properties` if not already added, and then check for `orphaned`:
```py
if properties['runtime.connectionState'] == 'orphaned':
continue
```
| 2020-10-13T11:52:04 |
|
ansible-collections/community.vmware | 477 | ansible-collections__community.vmware-477 | [
"451"
] | c3e4d52e4b8dc8c23fa28d60b6c44aafeeda3c3e | diff --git a/plugins/modules/vmware_cluster_ha.py b/plugins/modules/vmware_cluster_ha.py
--- a/plugins/modules/vmware_cluster_ha.py
+++ b/plugins/modules/vmware_cluster_ha.py
@@ -173,6 +173,20 @@
- A dictionary of advanced HA settings.
default: {}
type: dict
+ apd_response:
+ description:
+ - VM storage protection setting for storage failures categorized as All Paths Down (APD).
+ type: str
+ default: 'warning'
+ choices: [ 'disabled', 'warning', 'restartConservative', 'restartAggressive' ]
+ version_added: '1.4.0'
+ pdl_response:
+ description:
+ - VM storage protection setting for storage failures categorized as Permenant Device Loss (PDL).
+ type: str
+ default: 'warning'
+ choices: [ 'disabled', 'warning', 'restartAggressive' ]
+ version_added: '1.4.0'
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -314,6 +328,10 @@ def check_ha_config_diff(self):
!= self.params.get("ha_vm_max_failures")
or das_config.defaultVmSettings.vmToolsMonitoringSettings.maxFailureWindow
!= self.params.get("ha_vm_max_failure_window")
+ or das_config.defaultVmSettings.vmComponentProtectionSettings.vmStorageProtectionForAPD
+ != self.params.get("apd_response")
+ or das_config.defaultVmSettings.vmComponentProtectionSettings.vmStorageProtectionForPDL
+ != self.params.get("pdl_response")
):
return True
@@ -373,6 +391,11 @@ def configure_ha(self):
das_vm_config.restartPriority = self.params.get('ha_restart_priority')
das_vm_config.isolationResponse = self.host_isolation_response
das_vm_config.vmToolsMonitoringSettings = vm_tool_spec
+
+ das_vm_config.vmComponentProtectionSettings = vim.cluster.VmComponentProtectionSettings()
+ das_vm_config.vmComponentProtectionSettings.vmStorageProtectionForAPD = self.params.get('apd_response')
+ das_vm_config.vmComponentProtectionSettings.vmStorageProtectionForPDL = self.params.get('pdl_response')
+
cluster_config_spec.dasConfig.defaultVmSettings = das_vm_config
cluster_config_spec.dasConfig.admissionControlEnabled = self.ha_admission_control
@@ -461,6 +484,12 @@ def main():
failover_host_admission_control=dict(type='dict', options=dict(
failover_hosts=dict(type='list', elements='str', required=True),
)),
+ apd_response=dict(type='str',
+ choices=['disabled', 'warning', 'restartConservative', 'restartAggressive'],
+ default='warning'),
+ pdl_response=dict(type='str',
+ choices=['disabled', 'warning', 'restartAggressive'],
+ default='warning'),
))
module = AnsibleModule(
| diff --git a/tests/integration/targets/vmware_cluster_ha/tasks/main.yml b/tests/integration/targets/vmware_cluster_ha/tasks/main.yml
--- a/tests/integration/targets/vmware_cluster_ha/tasks/main.yml
+++ b/tests/integration/targets/vmware_cluster_ha/tasks/main.yml
@@ -16,7 +16,6 @@
cluster_name: test_cluster_ha
state: present
-# Testcase 0001: Enable HA
- name: Enable HA
vmware_cluster_ha:
validate_certs: false
@@ -26,109 +25,174 @@
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
enable_ha: true
- register: cluster_ha_result_0001
+ register: enable_ha_result
- name: Ensure HA is enabled
assert:
that:
- - "{{ cluster_ha_result_0001.changed == true }}"
+ - enable_ha_result.changed
-# Testcase 0002: Enable Slot based Admission Control
-- name: Enable Slot based Admission Control
- vmware_cluster_ha:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- datacenter_name: "{{ dc1 }}"
- cluster_name: test_cluster_ha
- enable_ha: true
- slot_based_admission_control:
- failover_level: 1
- register: cluster_ha_result_0002
+- when: vcsim is not defined
+ block:
+ - name: Change APD response to "restartAggressive" (check-mode)
+ vmware_cluster_ha: &change_apd_response
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ enable_ha: true
+ apd_response: 'restartAggressive'
+ check_mode: true
+ register: change_apd_response_check
-- name: Ensure Admission Cotrol is enabled
- assert:
- that:
- - "{{ cluster_ha_result_0002.changed == true }}"
+ - assert:
+ that:
+ - change_apd_response_check.changed
-# Testcase 0003: Enable Cluster resource Percentage based Admission Control
-- name: Enable Cluster resource Percentage based Admission Control
- vmware_cluster_ha:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- datacenter_name: "{{ dc1 }}"
- cluster_name: test_cluster_ha
- enable_ha: true
- reservation_based_admission_control:
- auto_compute_percentages: false
- failover_level: 1
- cpu_failover_resources_percent: 33
- memory_failover_resources_percent: 33
- register: cluster_ha_result_0003
-
-- name: Ensure Admission Cotrol is enabled
- assert:
- that:
- - "{{ cluster_ha_result_0003.changed == true }}"
+ - name: Change APD response to "restartAggressive"
+ vmware_cluster_ha: *change_apd_response
+ register: change_apd_response
-# Testcase 0004: Set Isolation Response to powerOff
-- name: Set Isolation Response to powerOff
- vmware_cluster_ha:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- datacenter_name: "{{ dc1 }}"
- cluster_name: test_cluster_ha
- enable_ha: true
- host_isolation_response: 'powerOff'
- register: cluster_ha_result_0004
+ - assert:
+ that:
+ - change_apd_response.changed
-- name: Ensure Isolation Response is enabled
- assert:
- that:
- - "{{ cluster_ha_result_0004.changed == true }}"
+ - name: Change APD response to "restartAggressive" again
+ vmware_cluster_ha: *change_apd_response
+ register: change_apd_response_again
-# Testcase 0005: Set Isolation Response to shutdown
-- name: Set Isolation Response to shutdown
- vmware_cluster_ha:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- datacenter_name: "{{ dc1 }}"
- cluster_name: test_cluster_ha
- enable_ha: true
- host_isolation_response: 'shutdown'
- register: cluster_ha_result_0005
+ - assert:
+ that:
+ - not change_apd_response_again.changed
-- name: Ensure Isolation Response is enabled
- assert:
- that:
- - "{{ cluster_ha_result_0005.changed == true }}"
+ - name: Change APD response back to default
+ vmware_cluster_ha:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ enable_ha: true
-# Testcase 0006: Disable HA
-- name: Disable HA
- vmware_cluster_ha:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- datacenter_name: "{{ dc1 }}"
- cluster_name: test_cluster_ha
- enable_ha: false
- register: cluster_ha_result_0006
+ - name: Change PDL response to "restartAggressive" (check-mode)
+ vmware_cluster_ha: &change_pdl_response
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ enable_ha: true
+ pdl_response: 'restartAggressive'
+ check_mode: true
+ register: change_pdl_response_check
-- name: Ensure HA is disabled
- assert:
- that:
- - "{{ cluster_ha_result_0006.changed == true }}"
+ - assert:
+ that:
+ - change_pdl_response_check.changed
+
+ - name: Change PDL response to "restartAggressive"
+ vmware_cluster_ha: *change_pdl_response
+ register: change_pdl_response
+
+ - assert:
+ that:
+ - change_pdl_response.changed
+
+ - name: Change PDL response to "restartAggressive" again
+ vmware_cluster_ha: *change_pdl_response
+ register: change_pdl_response_again
+
+ - assert:
+ that:
+ - not change_pdl_response_again.changed
+
+ - name: Change PDL response back to default
+ vmware_cluster_ha:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ enable_ha: true
+
+ - name: Enable Slot based Admission Control
+ vmware_cluster_ha:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ enable_ha: true
+ slot_based_admission_control:
+ failover_level: 1
+ register: enable_slot_based_admission_control_result
+
+ - name: Ensure Admission Cotrol is enabled
+ assert:
+ that:
+ - enable_slot_based_admission_control_result.changed
+
+ - name: Enable Cluster resource Percentage based Admission Control
+ vmware_cluster_ha:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ enable_ha: true
+ reservation_based_admission_control:
+ auto_compute_percentages: false
+ failover_level: 1
+ cpu_failover_resources_percent: 33
+ memory_failover_resources_percent: 33
+ register: enable_percentage_based_admission_control_result
+
+ - name: Ensure Admission Cotrol is enabled
+ assert:
+ that:
+ - enable_percentage_based_admission_control_result.changed
+
+ - name: Set Isolation Response to powerOff
+ vmware_cluster_ha:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ enable_ha: true
+ host_isolation_response: 'powerOff'
+ register: isolation_response_poweroff_result
+
+ - name: Ensure Isolation Response is enabled
+ assert:
+ that:
+ - isolation_response_poweroff_result.changed
+
+ - name: Set Isolation Response to shutdown
+ vmware_cluster_ha:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ enable_ha: true
+ host_isolation_response: 'shutdown'
+ register: isolation_response_shutdown_result
+
+ - name: Ensure Isolation Response is enabled
+ assert:
+ that:
+ - isolation_response_shutdown_result.changed
-- when: vcsim is not defined
- block:
- name: Change advanced setting "number of heartbeat datastores" (check-mode)
vmware_cluster_ha: &change_num_heartbeat_ds
validate_certs: false
@@ -137,6 +201,7 @@
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
+ enable_ha: true
advanced_settings:
'das.heartbeatDsPerHost': '4'
check_mode: true
@@ -170,8 +235,9 @@
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
+ enable_ha: true
advanced_settings:
- 'das.includeFTcomplianceChecks': 'false'
+ 'das.includeFTcomplianceChecks': 'False'
check_mode: true
register: change_includeFTcomplianceChecks_check
@@ -195,6 +261,22 @@
that:
- not change_includeFTcomplianceChecks_again.changed
+ - name: Disable HA
+ vmware_cluster_ha:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter_name: "{{ dc1 }}"
+ cluster_name: test_cluster_ha
+ enable_ha: false
+ register: disable_ha_result
+
+ - name: Ensure HA is disabled
+ assert:
+ that:
+ - disable_ha_result.changed
+
# Delete test cluster
- name: Delete test cluster
vmware_cluster:
| Feature: community.vmware.vmware_cluster_ha missing options
Hi
After pinging @Akasurde on Slack, he asked me to raise a feature request.
So in HA I'm missing options for things like:
Response for Host Isolation
Datastore with PDL
Datastore with APD
seems to be in the API https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.cluster.VmComponentProtectionSettings.html
Also, I can't see options for advanced settings, which we use a lot: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.avail.doc/GUID-E0161CB5-BD3F-425F-A7E0-BF83B005FECA.html
dasconfig


What Ansible version are we talking about?
> So in HA I'm missing options for things like:
>
> Response for Host Isolation
Actually, in 2.10 there is a [host_isolation_response](https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_cluster_ha_module.html) option.
> Datastore with PDL
> Datastore with APD
Yes, I think these are really missing. I'd love to have a look at this but I don't have much time at the moment :-(
> Also, I can't see options for advanced settings, which we use a lot: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.avail.doc/GUID-E0161CB5-BD3F-425F-A7E0-BF83B005FECA.html
In Ansible 2.10, there should also be an `advanced_settings` option for this module. I think there's no example for this, but it should work similarly to the one in [vmware_cluster_drs](https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_cluster_drs_module.html#examples).
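For illustration, a minimal sketch of what this could look like (untested; the datacenter and cluster names are placeholder variables, and the setting is the one used in this module's integration tests):

```yaml
- name: Set an HA advanced setting (illustrative sketch)
  community.vmware.vmware_cluster_ha:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter_name: "{{ datacenter_name }}"
    cluster_name: "{{ cluster_name }}"
    enable_ha: true
    advanced_settings:
      # Number of heartbeat datastores per host
      'das.heartbeatDsPerHost': '4'
  delegate_to: localhost
```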
Hi -
So these three in particular, I think. And I think you're correct: "Response for Host Isolation" is there - there was confusion between this and "host failure response".

I'm on 2.9.13, so I'll upgrade to 2.10 and see if that helps with the HA advanced settings.
There's also a [ha_vm_monitoring](https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_cluster_ha_module.html) option in 2.10; I think this deals with `VM Monitoring`.
So, actually, I think we're down to missing options for `Datastore with PDL` and `Datastore with APD`.
> I'm on 2.9.13, so I'll upgrade to 2.10 and see if that helps with the HA advanced settings.
Alternatively, you can try to stay on Ansible 2.9.13 and install [the collection](https://galaxy.ansible.com/community/vmware). For me, at least at the moment, it's easier to use Ansible 2.10 via pip instead of using the collection on top of Ansible 2.9. That's not really a technical problem, more a compliance thing... but maybe it's the other way round for you.
So I'm using 2.9.13 with the collection on top. The advanced settings are there - apologies, it was staring me in the face.
So yes, I think it's just the PDL/APD settings I'm missing.
thanks
Could you please have a look at PR #477? This should implement the PDL/APD settings you're missing. | 2020-11-03T19:12:45 |
ansible-collections/community.vmware | 510 | ansible-collections__community.vmware-510 | [
"507"
] | 6ece2545af6ea504d57bcf2ca4873f35eb03dcd9 | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -510,6 +510,15 @@
- Domain name for this network interface (Windows).
- Optional per entry.
- Used for OS customization.
+ connected:
+ type: bool
+ description:
+ - Indicates whether the NIC is currently connected.
+ version_added: '1.5.0'
+ start_connected:
+ type: bool
+ description:
+ - Specifies whether or not to connect the device when the virtual machine starts.
customization:
description:
- Parameters for OS customization when cloning from the template or the virtual machine, or apply to the existing virtual machine directly.
@@ -1224,7 +1233,7 @@ def create_nic(self, device_type, device_label, device_infos):
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = bool(device_infos.get('start_connected', True))
nic.device.connectable.allowGuestControl = bool(device_infos.get('allow_guest_control', True))
- nic.device.connectable.connected = True
+ nic.device.connectable.connected = bool(device_infos.get('connected', True))
if 'mac' in device_infos and is_mac(device_infos['mac']):
nic.device.addressType = 'manual'
nic.device.macAddress = device_infos['mac']
@@ -1950,34 +1959,21 @@ def configure_network(self, vm_obj):
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic.device = current_net_devices[key]
- if "wake_on_lan" in network_devices[
- key
- ] and nic.device.wakeOnLanEnabled != network_devices[key].get(
- "wake_on_lan"
- ):
- nic.device.wakeOnLanEnabled = network_devices[key].get(
- "wake_on_lan"
- )
+ if "wake_on_lan" in network_devices[key] and \
+ nic.device.wakeOnLanEnabled != network_devices[key].get("wake_on_lan"):
+ nic.device.wakeOnLanEnabled = network_devices[key].get("wake_on_lan")
nic_change_detected = True
- if "start_connected" in network_devices[
- key
- ] and nic.device.connectable.startConnected != network_devices[key].get(
- "start_connected"
- ):
- nic.device.connectable.startConnected = network_devices[key].get(
- "start_connected"
- )
+ if "start_connected" in network_devices[key] and \
+ nic.device.connectable.startConnected != network_devices[key].get("start_connected"):
+ nic.device.connectable.startConnected = network_devices[key].get("start_connected")
nic_change_detected = True
- if "allow_guest_control" in network_devices[
- key
- ] and nic.device.connectable.allowGuestControl != network_devices[
- key
- ].get(
- "allow_guest_control"
- ):
- nic.device.connectable.allowGuestControl = network_devices[key].get(
- "allow_guest_control"
- )
+ if "connected" in network_devices[key] and \
+ nic.device.connectable.connected != network_devices[key].get("connected"):
+ nic.device.connectable.connected = network_devices[key].get("connected")
+ nic_change_detected = True
+ if "allow_guest_control" in network_devices[key] and \
+ nic.device.connectable.allowGuestControl != network_devices[key].get("allow_guest_control"):
+ nic.device.connectable.allowGuestControl = network_devices[key].get("allow_guest_control")
nic_change_detected = True
if nic.device.deviceInfo.summary != network_name:
@@ -2003,9 +1999,9 @@ def configure_network(self, vm_obj):
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_change_detected = True
- if hasattr(self.cache.get_network(network_name), 'portKeys'):
+ net_obj = self.cache.get_network(network_name)
+ if hasattr(net_obj, 'portKeys'):
# VDS switch
-
pg_obj = None
if 'dvswitch_name' in network_devices[key]:
dvs_name = network_devices[key]['dvswitch_name']
@@ -2052,10 +2048,10 @@ def configure_network(self, vm_obj):
nic.device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
nic.device.backing.port = dvs_port_connection
- elif isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
+ elif isinstance(net_obj, vim.OpaqueNetwork):
# NSX-T Logical Switch
nic.device.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
- network_id = self.cache.get_network(network_name).summary.opaqueNetworkId
+ network_id = net_obj.summary.opaqueNetworkId
nic.device.backing.opaqueNetworkType = 'nsx.LogicalSwitch'
nic.device.backing.opaqueNetworkId = network_id
nic.device.deviceInfo.summary = 'nsx.LogicalSwitch: %s' % network_id
@@ -2066,7 +2062,6 @@ def configure_network(self, vm_obj):
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic_change_detected = True
- net_obj = self.cache.get_network(network_name)
if nic.device.backing.network != net_obj:
nic.device.backing.network = net_obj
nic_change_detected = True
@@ -2079,7 +2074,7 @@ def configure_network(self, vm_obj):
# Change to fix the issue found while configuring opaque network
# VMs cloned from a template with opaque network will get disconnected
# Replacing deprecated config parameter with relocation Spec
- if isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
+ if isinstance(net_obj, vim.OpaqueNetwork):
self.relospec.deviceChange.append(nic)
else:
self.configspec.deviceChange.append(nic)
| vmware_guest: Network parameters are missing in the documentation
##### SUMMARY
The **networks** parameter is missing two sub-parameters in the documentation section, and that's impacting template deployments that need to attach a different dvpg than the one actually available in the template:
- connected
- start_connected
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
For Ansible 2.10, there is no edit option available on the documentation page.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
- networks
##### ANSIBLE VERSION
```
$ pip3 show ansible
Name: ansible
Version: 2.10.3
Summary: Radically simple IT automation
Home-page: https://ansible.com/
Author: Ansible, Inc.
Author-email: [email protected]
License: GPLv3+
Location: /usr/local/lib/python3.8/dist-packages
Requires: ansible-base
Required-by:
```
##### PLAYBOOK
This is an example of a play from a playbook that has already been tested and works perfectly:
```
- name: Deploy the vm '{{ inventory_hostname }}'
vmware_guest:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter: '{{ vsphere_datacenter }}'
cluster: '{{ vsphere_cluster }}'
datastore: '{{ vsphere_datastore }}'
name: '{{ inventory_hostname }}'
template: '{{ win_temp }}'
folder: '/{{ vsphere_datacenter }}/vm/{{ vsphere_folder }}'
validate_certs: 'no'
networks:
- name: '{{ Mgmt_network }}'
ip: "{{ Mgmt_network_ipv4 }}"
netmask: '{{ Mgmt_network_nmv4 }}'
gateway: '{{ Mgmt_network_gwv4 }}'
device_type: vmxnet3
connected: True
start_connected: True
type: static
dns_servers:
- '{{ dns_server1 }}'
- '{{ dns_server2 }}'
state: present
wait_for_ip_address: yes
customization:
hostname: "{{ vsphere_vm_hostname }}"
dns_suffix: '{{ ad_domain }}'
domainadmin: '{{ ad_domain_admin }}'
domainadminpassword: '{{ ad_domain_password }}'
joindomain: '{{ ad_domain }}'
timezone: '{{ timezone }}'
wait_for_customization: yes
delegate_to: localhost
```
Without the following two parameters, the deployment won't attach and activate the NIC, so other plays that depend on the network will fail:
```
connected: True
start_connected: True
```
| 2020-11-19T20:05:03 |
||
ansible-collections/community.vmware | 515 | ansible-collections__community.vmware-515 | [
"514"
] | e0b89e454e528629e01c8dbc6758a72addbff152 | diff --git a/plugins/modules/vmware_content_deploy_ovf_template.py b/plugins/modules/vmware_content_deploy_ovf_template.py
--- a/plugins/modules/vmware_content_deploy_ovf_template.py
+++ b/plugins/modules/vmware_content_deploy_ovf_template.py
@@ -29,6 +29,12 @@
type: str
required: True
aliases: ['ovf', 'template_src']
+ content_library:
+ description:
+ - The name of the content library from where the template resides.
+ type: str
+ required: False
+ version_added: '1.5.0'
name:
description:
- The name of the VM to be deployed.
@@ -132,6 +138,7 @@ def __init__(self, module):
"""Constructor."""
super(VmwareContentDeployOvfTemplate, self).__init__(module)
self.ovf_template_name = self.params.get('ovf_template')
+ self.content_library_name = self.params.get('content_library')
self.vm_name = self.params.get('name')
self.datacenter = self.params.get('datacenter')
self.datastore = self.params.get('datastore')
@@ -151,9 +158,15 @@ def deploy_vm_from_ovf_template(self):
if not self.datastore_id:
self.module.fail_json(msg="Failed to find the datastore %s" % self.datastore)
# Find the LibraryItem (Template) by the given LibraryItem name
- self.library_item_id = self.get_library_item_by_name(self.ovf_template_name)
- if not self.library_item_id:
- self.module.fail_json(msg="Failed to find the library Item %s" % self.ovf_template_name)
+ if self.content_library_name:
+ self.library_item_id = self.get_library_item_from_content_library_name(
+ self.ovf_template_name, self.content_library_name)
+ if not self.library_item_id:
+ self.module.fail_json(msg="Failed to find the library Item %s in content library %s" % (self.ovf_template_name, self.content_library_name))
+ else:
+ self.library_item_id = self.get_library_item_by_name(self.ovf_template_name)
+ if not self.library_item_id:
+ self.module.fail_json(msg="Failed to find the library Item %s" % self.ovf_template_name)
# Find the folder by the given folder name
self.folder_id = self.get_folder_by_name(self.datacenter, self.folder)
if not self.folder_id:
@@ -221,6 +234,7 @@ def main():
argument_spec = VmwareRestClient.vmware_client_argument_spec()
argument_spec.update(
ovf_template=dict(type='str', aliases=['template_src', 'ovf'], required=True),
+ content_library=dict(type='str', required=False),
name=dict(type='str', required=True, aliases=['vm_name']),
datacenter=dict(type='str', required=True),
datastore=dict(type='str', required=True),
| Allow Users to Choose Content Library for OVF Deployments
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Unlike the Module "community.vmware.vmware_content_deploy_template" Where you have the ability to pick which Subscription you wish to deploy from. The module Deploying OVF from Content Library doesn't have this feature, which it appears to pick at random a subscription to deploy from. In doing so, to double the time needed for a deployment workflow.
##### ISSUE TYPE
- Add the Feature content_library
##### COMPONENT NAME
community.vmware.vmware_content_deploy_ovf_template
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name:
hosts: localhost
gather_facts: false
connection: local
tasks:
- community.vmware.vmware_content_deploy_ovf_template:
host: "{{ esxi_hostname }}"
hostname: "{{ vcenter_hostname }}"
username: "{{ user }}"
password: "{{ pass }}"
datacenter: "{{ datacenter }}"
cluster: "{{ cluster }}"
content_library: "{{ library_name }}"
datastore: "{{ datastore }}"
resource_pool: "{{ vmname }}-resource"
validate_certs: False
folder: vm
ovf_template: "{{ TemplateName }}"
name: "{{ vmname }}"
delegate_to: localhost
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| 2020-11-20T17:21:11 |
||
ansible-collections/community.vmware | 524 | ansible-collections__community.vmware-524 | [
"492"
] | 4be445d3af8eacea0067b0a82f12514e3aafb342 | diff --git a/plugins/modules/vmware_resource_pool.py b/plugins/modules/vmware_resource_pool.py
--- a/plugins/modules/vmware_resource_pool.py
+++ b/plugins/modules/vmware_resource_pool.py
@@ -24,14 +24,21 @@
options:
datacenter:
description:
- - Name of the datacenter to add the host.
+ - Name of the datacenter.
required: True
type: str
cluster:
description:
- - Name of the cluster to add the host.
- required: True
+ - Name of the cluster to configure the resource pool.
+ - This parameter is required if C(esxi_hostname) is not specified.
+ type: str
+ esxi_hostname:
+ description:
+ - Name of the host to configure the resource pool.
+ - The host must not be member of a cluster.
+ - This parameter is required if C(cluster) is not specified.
type: str
+ version_added: '1.5.0'
resource_pool:
description:
- Resource pool name to manage.
@@ -190,7 +197,7 @@
from ansible.module_utils._text import to_native
from ansible_collections.community.vmware.plugins.module_utils.vmware import get_all_objs, vmware_argument_spec, find_datacenter_by_name, \
- find_cluster_by_name, wait_for_task, PyVmomi
+ find_cluster_by_name, find_object_by_name, wait_for_task, PyVmomi
from ansible.module_utils.basic import AnsibleModule
@@ -199,7 +206,6 @@ class VMwareResourcePool(PyVmomi):
def __init__(self, module):
super(VMwareResourcePool, self).__init__(module)
self.datacenter = module.params['datacenter']
- self.cluster = module.params['cluster']
self.resource_pool = module.params['resource_pool']
self.hostname = module.params['hostname']
self.username = module.params['username']
@@ -217,10 +223,22 @@ def __init__(self, module):
self.cpu_reservation = module.params['cpu_reservation']
self.cpu_expandable_reservations = module.params[
'cpu_expandable_reservations']
- self.dc_obj = None
- self.cluster_obj = None
self.resource_pool_obj = None
+ self.dc_obj = find_datacenter_by_name(self.content, self.datacenter)
+ if self.dc_obj is None:
+ self.module.fail_json(msg="Unable to find datacenter with name %s" % self.datacenter)
+
+ if module.params['cluster']:
+ self.compute_resource_obj = find_cluster_by_name(self.content, module.params['cluster'], datacenter=self.dc_obj)
+ if self.compute_resource_obj is None:
+ self.module.fail_json(msg="Unable to find cluster with name %s" % module.params['cluster'])
+
+ if module.params['esxi_hostname']:
+ self.compute_resource_obj = find_object_by_name(self.content, module.params['esxi_hostname'], [vim.ComputeResource], folder=self.dc_obj.hostFolder)
+ if self.compute_resource_obj is None:
+ self.module.fail_json(msg="Unable to find host with name %s" % module.params['esxi_hostname'])
+
def select_resource_pool(self):
pool_obj = None
@@ -397,17 +415,9 @@ def state_add_rp(self):
if self.module.check_mode:
self.module.exit_json(changed=changed)
- self.dc_obj = find_datacenter_by_name(self.content, self.datacenter)
- if self.dc_obj is None:
- self.module.fail_json(msg="Unable to find datacenter with name %s" % self.datacenter)
-
- self.cluster_obj = find_cluster_by_name(self.content, self.cluster, datacenter=self.dc_obj)
- if self.cluster_obj is None:
- self.module.fail_json(msg="Unable to find cluster with name %s" % self.cluster)
-
rp_spec = self.generate_rp_config()
- rootResourcePool = self.cluster_obj.resourcePool
+ rootResourcePool = self.compute_resource_obj.resourcePool
rootResourcePool.CreateResourcePool(self.resource_pool, rp_spec)
resource_pool_config = self.generate_rp_config_return_value(True)
@@ -424,7 +434,8 @@ def check_rp_state(self):
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(dict(datacenter=dict(required=True, type='str'),
- cluster=dict(required=True, type='str'),
+ cluster=dict(type='str', required=False),
+ esxi_hostname=dict(type='str', required=False),
resource_pool=dict(required=True, type='str'),
mem_shares=dict(type='str', default="normal", choices=[
'high', 'custom', 'normal', 'low']),
@@ -447,6 +458,12 @@ def main():
['mem_shares', 'custom', ['mem_allocation_shares']],
['cpu_shares', 'custom', ['cpu_allocation_shares']]
],
+ required_one_of=[
+ ['cluster', 'esxi_hostname'],
+ ],
+ mutually_exclusive=[
+ ['cluster', 'esxi_hostname'],
+ ],
supports_check_mode=True)
if not HAS_PYVMOMI:
| vmware_resource_pool: cluster_name should not be required
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When creating a resource pool using the **vmware_resource_pool** module, it should _not_ be required to specify **cluster** when an ESXi host is specified instead (e.g. `esxi_hostname: foobarhost`). This problem was part of the issue linked here but was not resolved - see: https://github.com/ansible/ansible/issues/38300#issuecomment-428048898
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_resource_pool
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = /home/user/gitrepos/therepo/ansible_roles/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/gitrepos/therepo/autopytoolchain/venvp3/lib/python3.6/site-packages/ansible
executable location = /home/user/gitrepos/therepo/autopytoolchain/venvp3/bin/ansible
python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_FORCE_HANDLERS(/home/user/gitrepos/therepo/ansible_roles/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/user/gitrepos/therepo/ansible_roles/ansible.cfg) = debug
DEFAULT_TIMEOUT(/home/user/gitrepos/therepo/ansible_roles/ansible.cfg) = 30
HOST_KEY_CHECKING(/home/user/gitrepos/therepo/ansible_roles/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/user/gitrepos/therepo/ansible_roles/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* ansible host OS: Ubuntu 1804 LTS
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
* setup 1-n esxi hosts in a vmware-datacenter
* do _not_ create a vmware-cluster
* try to create a resource_pool via module vmware_resource_pool
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: ensure resourcepool exists
vmware_resource_pool:
hostname: "{{ _host.vcenter.hostname }}"
username: "{{ vmware_vcenter_username }}"
password: "{{ vmware_vcenter_password }}"
validate_certs: "{{ _host.vcenter.validate_certs }}"
datacenter: '{{ _host.vmware.datacenter }}'
esxi_hostname: "{{ _host.vmware.esxi_hostname }}"
resource_pool: "{{ _host.vmware.resource_pool }}"
state: present
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
* the specified resource_pool is created on the esxi host if it does not exist
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
* module errors out saying that cluster is required / esxi_hostname is not supported
<!--- Paste verbatim command output between quotes -->
```paste below
...
Unsupported parameters for (vmware_resource_pool) module: esxi_hostname Supported parameters include: cluster, cpu_expandable_reservations, cpu_limit, cpu_reservation, cpu_shares, datacenter, hostname, mem_expandable_reservations, mem_limit, mem_reservation, mem_shares, password, port, proxy_host, proxy_port, resource_pool, state, username, validate_certs
...
```
| @dmelha Thanks for reporting this issue. `resource_pool` is associated with the cluster and not with esxi (correct me if I am wrong). So it makes sense to specify the cluster name.
Also, while creating a new resource pool, we need to specify the parent resource pool under which the new resource pool will be created. To get the value of the parent resource pool, the cluster is a required value.
We can add support for `esxi_hostname` in the module, but AFAIK there is no VMware API to get the parent cluster from a given ESXi host system. We would need to implement some logic to gather the parent cluster when an ESXi name is specified.
I hope this makes sense. Thanks.
@sky-joker @goneri @mariolenz What do you think about this?
@Akasurde
> @dmelha Thanks for reporting this issue. `resource_pool` is associated with the cluster and not with esxi (correct me if I am wrong). So it makes sense to specify the cluster name.
I'm working with multiple vCenters/datacenters/ESXi hosts that are not organized in clusters but have separate resource pools defined per host. VMware in no way requires a cluster to be defined for a resource pool to be created on an ESXi host.
>
> Also, while creating a new resource pool, we need to specify the parent resource pool under which the new resource pool will be created. To get the value of the parent resource pool, the cluster is a required value.
>
> We can add support for `esxi_hostname` in the module, but AFAIK there is no VMware API to get the parent cluster from a given ESXi host system. We would need to implement some logic to gather the parent cluster when an ESXi name is specified.
>
While I don't know the details of the VMware API(s) + pyvmomi (yet), looking at ***govc/govmomi*** may help, as govc can create a resource pool on an ESXi host without specifying any cluster or parent resource pool - I just need to specify the datacenter & ESXi host there via `govc pool.create -dc foodc "myesxi/Resources/barpool"`
@dmelha Thanks for providing information.
I tried -
```
$ govc pool.create -dc Asia-Datacenter1 "10.65.201.106/Resources/nested"
govc: cannot create resource pool 'nested': parent not found
```
Where 10.65.201.106 is an ESXi host. Do you know what is wrong here?
> @dmelha Thanks for providing information.
>
> I tried -
>
> ```
> $ govc pool.create -dc Asia-Datacenter1 "10.65.201.106/Resources/nested"
> govc: cannot create resource pool 'nested': parent not found
> ```
>
> Where 10.65.201.106 is esxi. Do you know what is wrong here?
@Akasurde I can only guess that the parent folder(?) (maybe of type vim.ComputeResource) named "Resources" is missing on that host!?
On my VMware infrastructure, some resource pools are always created manually on the hosts via the vSphere web client; only afterwards do I interact with vCenters & hosts via govc + pyvmomi. I guess creating a resource pool manually could create that "Resources" folder(?) on the ESXi host.
@Akasurde
> `resource_pool` is associated with the cluster and not with esxi
Nope, a [ResourcePool](https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.ResourcePool.html) is associated with a [ComputeResource](https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.ComputeResource.html). In most cases, I should say this will be a cluster (there might be use cases for stand-alone ESXi hosts, but HA and DRS are something I don't want to miss), but it doesn't have to be. A [ClusterComputeResource](https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.ClusterComputeResource.html) is just a sub-type of ComputeResource.
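A quick pyVmomi sketch of this type relationship (illustrative only):

```python
from pyVmomi import vim

# ClusterComputeResource is a Python subclass of ComputeResource in pyVmomi
print(issubclass(vim.ClusterComputeResource, vim.ComputeResource))  # True
```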
I'll try to have a closer look at this.
@dmelha
I'm sorry, but this feature request (I don't consider it a bug) isn't 100% clear to me.
As far as I understand, you connect to a **vCenter** and want to configure a resource pool on an ESXi host instead of a cluster. You do **not** want to connect to the ESXi host directly and configure a resource pool. Is this correct?
> @dmelha
> I'm sorry, but this feature request (I don't consider it a bug) isn't 100% clear to me.
>
> As far as I understand, you connect to a **vCenter** and want to configure a resource pool on an ESXi host instead of a cluster. You do **not** want to connect to the ESXi host directly and configure a resource pool. Is this correct?
@mariolenz Yes, I connect to a vCenter and want to create a resource pool on a specific ESXi host. I do not want to connect directly to the ESXi host for resource pool creation.
Side note: while I understand this seems like a feature request to you, to me it seems like a bug, as the vSphere API supports this use case but the resource_pool module prevents it.
> Yes, I connect to a vCenter and want to create a resource pool on a specific ESXi host. I do not want to connect directly to the ESXi host for resource pool creation.
Great! I hope this'll be easier to implement :-)
@sky-joker @Tomorrow9
As far as I can see, an ESXi host has two representations in vCenter. It's a [HostSystem](https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.HostSystem.html) (which is what we're using in our modules to configure firewall rules and similar), but it's also a [ComputeResource](https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.ComputeResource.html) (which we would need for this issue).
Do you, by any chance, know how to get the ComputeResource object of an ESXi host? Does it, possibly, have the same name? Unfortunately, I'm away on leave at the moment and don't have access to our environment to check this myself.
For the record:
```python
import ssl
from pyVim import connect
from pyVmomi import vim
context = ssl._create_unverified_context()
si = connect.SmartConnect(host="vcenter.example.com", user="[email protected]", pwd="T0pS3cret!", port=443, sslContext=context)
content = si.content
container = content.viewManager.CreateContainerView(content.rootFolder, [vim.ComputeResource], True)
for managed_object_ref in container.view:
    print(managed_object_ref.name)
```
This finds all clusters (ClusterComputeResource is a sub-type of ComputeResource) and all hosts (ComputeResource) that are **not** part of a cluster. I've removed one host from a cluster (esx1.example.com) and now there's a ComputeResource with this name.
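As a follow-up, a minimal sketch building on the snippet above (reusing `container`; the host name is a placeholder) that picks the ComputeResource of one stand-alone host and reads its root resource pool:

```python
host_name = "esx1.example.com"  # placeholder: a host that is not part of a cluster
compute_resource = next((cr for cr in container.view if cr.name == host_name), None)
if compute_resource is not None:
    # Every ComputeResource carries a root resource pool (the "Resources" pool)
    print(compute_resource.resourcePool.name)
```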
JFYI, I used this hacky custom module as a (temporary) workaround for now: https://gist.github.com/dmelha/9b25d757e9f075c045b82dce784b8cd6 | 2020-11-26T11:41:12 |
|
ansible-collections/community.vmware | 528 | ansible-collections__community.vmware-528 | [
"527"
] | 0b4813b5a3dec6c3aa7c9a503461711ea5f6d576 | diff --git a/plugins/module_utils/vmware.py b/plugins/module_utils/vmware.py
--- a/plugins/module_utils/vmware.py
+++ b/plugins/module_utils/vmware.py
@@ -15,6 +15,7 @@
import ssl
import time
import traceback
+import datetime
from collections import OrderedDict
from distutils.version import StrictVersion
from random import randint
@@ -430,6 +431,36 @@ def gather_vm_facts(content, vm):
return facts
+def ansible_date_time_facts(timestamp):
+ # timestamp is a datetime.datetime object
+ date_time_facts = {}
+ if timestamp is None:
+ return date_time_facts
+
+ utctimestamp = timestamp.astimezone(datetime.timezone.utc)
+
+ date_time_facts['year'] = timestamp.strftime('%Y')
+ date_time_facts['month'] = timestamp.strftime('%m')
+ date_time_facts['weekday'] = timestamp.strftime('%A')
+ date_time_facts['weekday_number'] = timestamp.strftime('%w')
+ date_time_facts['weeknumber'] = timestamp.strftime('%W')
+ date_time_facts['day'] = timestamp.strftime('%d')
+ date_time_facts['hour'] = timestamp.strftime('%H')
+ date_time_facts['minute'] = timestamp.strftime('%M')
+ date_time_facts['second'] = timestamp.strftime('%S')
+ date_time_facts['epoch'] = timestamp.strftime('%s')
+ date_time_facts['date'] = timestamp.strftime('%Y-%m-%d')
+ date_time_facts['time'] = timestamp.strftime('%H:%M:%S')
+ date_time_facts['iso8601_micro'] = utctimestamp.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
+ date_time_facts['iso8601'] = utctimestamp.strftime("%Y-%m-%dT%H:%M:%SZ")
+ date_time_facts['iso8601_basic'] = timestamp.strftime("%Y%m%dT%H%M%S%f")
+ date_time_facts['iso8601_basic_short'] = timestamp.strftime("%Y%m%dT%H%M%S")
+ date_time_facts['tz'] = timestamp.strftime("%Z")
+ date_time_facts['tz_offset'] = timestamp.strftime("%z")
+
+ return date_time_facts
+
+
def deserialize_snapshot_obj(obj):
return {'id': obj.id,
'name': obj.name,
diff --git a/plugins/modules/vmware_host_facts.py b/plugins/modules/vmware_host_facts.py
--- a/plugins/modules/vmware_host_facts.py
+++ b/plugins/modules/vmware_host_facts.py
@@ -232,7 +232,12 @@
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.common.text.formatters import bytes_to_human
-from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec, find_obj
+from ansible_collections.community.vmware.plugins.module_utils.vmware import (
+ PyVmomi,
+ vmware_argument_spec,
+ find_obj,
+ ansible_date_time_facts
+)
try:
from pyVmomi import vim
@@ -253,8 +258,10 @@ def __init__(self, module):
if len(self.host) > 1:
self.module.fail_json(msg="esxi_hostname matched multiple hosts")
self.host = self.host[0]
+ self.esxi_time = None
else:
self.host = find_obj(self.content, [vim.HostSystem], None)
+ self.esxi_time = self.si.CurrentTime()
if self.host is None:
self.module.fail_json(msg="Failed to find host system.")
@@ -268,6 +275,7 @@ def all_facts(self):
ansible_facts.update(self.get_system_facts())
ansible_facts.update(self.get_vsan_facts())
ansible_facts.update(self.get_cluster_facts())
+ ansible_facts.update({'host_date_time': ansible_date_time_facts(self.esxi_time)})
if self.params.get('show_tag'):
vmware_client = VmwareRestClient(self.module)
tag_info = {
| diff --git a/tests/integration/targets/vmware_host_facts/tasks/main.yml b/tests/integration/targets/vmware_host_facts/tasks/main.yml
--- a/tests/integration/targets/vmware_host_facts/tasks/main.yml
+++ b/tests/integration/targets/vmware_host_facts/tasks/main.yml
@@ -91,3 +91,32 @@
- "'numCpuCores' in facts['ansible_facts']['hardware']['cpuInfo']"
- "'product' in facts['ansible_facts']['config']"
- "'apiVersion' in facts['ansible_facts']['config']['product']"
+
+ - name: check host current time through a vcenter
+ vmware_host_facts:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ esxi_hostname: '{{ esxi1 }}'
+ register: facts
+ - debug: var=facts
+ - name: verify host_date_time
+ assert:
+ that:
+ - "'host_date_time' in facts['ansible_facts']"
+ - facts['ansible_facts']['host_date_time'] == {}
+
+ - name: check host current time through a host
+ vmware_host_facts:
+ validate_certs: false
+ hostname: '{{ esxi1 }}'
+ username: '{{ esxi_user }}'
+ password: '{{ esxi_password }}'
+ register: facts
+ - debug: var=facts
+ - name: verify host_date_time
+ assert:
+ that:
+ - "'host_date_time' in facts['ansible_facts']"
+ - facts['ansible_facts']['host_date_time'] != {}
| vmware_host_facts: add host current time info in returned host facts
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host_facts
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
We want the host's current time returned by the "vmware_host_facts" module. Since we could not find a suitable property on the ESXi host object for the current time, we request adding this information to the gathered facts via "self.si.CurrentTime()" when the module is pointed at an ESXi host.
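A minimal pyVmomi sketch of the idea (host name and credentials are placeholders; certificate validation is disabled only for brevity):

```python
import ssl
from pyVim import connect

context = ssl._create_unverified_context()
si = connect.SmartConnect(host="esxi1.example.com", user="root",
                          pwd="secret", port=443, sslContext=context)
# CurrentTime() on the ServiceInstance returns a timezone-aware datetime
now = si.CurrentTime()
print(now.isoformat())
```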
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| 2020-11-30T08:59:59 |
|
ansible-collections/community.vmware | 552 | ansible-collections__community.vmware-552 | [
"545"
] | 6f025b3f452e5cdc85cd0e6537b6880537062e64 | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -1174,6 +1174,7 @@ def create_hard_disk(self, disk_ctl, disk_index=None):
diskspec = vim.vm.device.VirtualDeviceSpec()
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
diskspec.device = vim.vm.device.VirtualDisk()
+ diskspec.device.key = -randint(20000, 24999)
diskspec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
diskspec.device.controllerKey = disk_ctl.device.key
@@ -1224,6 +1225,7 @@ def get_device(self, device_type, name):
def create_nic(self, device_type, device_label, device_infos):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device(device_type, device_infos['name'])
+ nic.device.key = -randint(25000, 29999)
nic.device.wakeOnLanEnabled = bool(device_infos.get('wake_on_lan', True))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.label = device_label
| vmware_guest: Fails to add two disks and 1 NIC
##### SUMMARY
`vmware_guest` fails to clone a template and add two disks and one NIC. Worked with vCenter 6.7U3, doesn't work with 7.0U1 any more.
Just two disks work; one disk and one NIC also works.
In vCenter I see:
```
Status: A specified parameter was not correct: deviceChange[2].device.key
Error stack:
-> Cannot add multiple devices using the same device key.
```
Possibly related: #373
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest
##### ANSIBLE VERSION
```
ansible 2.10.3
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /bin/ansible
python version = 3.6.1 (default, Oct 26 2017, 01:54:52) [GCC 6.3.0]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
vCenter 7.0U1
##### STEPS TO REPRODUCE
```
- name: "deploy: create vm"
vmware_guest:
hostname: "{{ vcenter_ip }}"
port: 443
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ vcenter_datacenter }}"
cluster: "{{ vcenter_cluster }}"
folder: "{{ vcenter_folder }}"
name: "{{ vm_name }}"
guest_id: "{{ vm_guest_id }}"
template: "{{ vm_template }}"
hardware:
boot_firmware: "efi"
hotadd_cpu: true
hotadd_memory: true
memory_mb: 4
memory_reservation_lock: true
nested_virt: true
num_cpus: 2
scsi: paravirtual
virt_based_security: true
disk:
- size_gb: 75
type: thin
datastore: "{{ datastore_cluster }}"
- size_gb: 5
type: thin
datastore: "{{ datastore_cluster }}"
networks:
- name: "{{ port_group }}"
device_type: vmxnet3
start_connected: true
state: poweredoff
validate_certs: false
```
##### EXPECTED RESULTS
Template gets cloned wirh two disks and one NIC.
##### ACTUAL RESULTS
```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to create a virtual machine : A specified parameter was not correct: deviceChange[2].device.key"}
```
| I've had a closer look at `clonespec.config` used to clone the template:
https://github.com/ansible-collections/community.vmware/blob/0b4813b5a3dec6c3aa7c9a503461711ea5f6d576/plugins/modules/vmware_guest.py#L3098-L3106
with [q](https://pypi.org/project/q/) like this:
```python
for virtualDeviceSpec in clonespec.config.deviceChange:
    q(virtualDeviceSpec.device)
    q(virtualDeviceSpec.device.key)
```
And indeed, there are duplicate device keys:
```
0.8s deploy_vm:
(vim.vm.device.VirtualDisk) {
dynamicType = <... <unset>,
nativeUnmanagedLinkedClone = false
}
0.8s deploy_vm: 'key: 2000'
0.8s deploy_vm:
(vim.vm.device.VirtualDisk) {
dynamicType = <...unset>,
nativeUnmanagedLinkedClone = <unset>
}
0.8s deploy_vm: 'key: 0'
0.8s deploy_vm:
(vim.vm.device.VirtualVmxnet3) {
dynamicType ...= <unset>,
uptCompatibilityEnabled = <unset>
}
0.8s deploy_vm: 'key: 0'
```
First disk has key 2000 (maybe inherited from the template?), both the second disk and the NIC have key 0.
When I force the NIC's key to be something else, like this:
```python
clonespec.config.deviceChange[2].device.key = 42
```
it works.
Interesting, because the documentation clearly states that [this property is not read-only, but the client cannot control its value](https://code.vmware.com/apis/704/vsphere/vim.vm.device.VirtualDevice.html). Sounds like the duplicate keys would be ignored by vCenter, but obviously they aren't (any more).
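For reference, a minimal sketch of the workaround the fix above applies - giving each new device spec a unique temporary negative key so vCenter can tell the devices apart (vCenter assigns the real key when the device is created):

```python
from random import randint
from pyVmomi import vim

diskspec = vim.vm.device.VirtualDeviceSpec()
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
diskspec.device = vim.vm.device.VirtualDisk()
# Temporary key; it only needs to be unique within this deviceChange list
diskspec.device.key = -randint(20000, 24999)
```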
_edit:_
Btw, it looks like the key really _is_ ignored. The NIC has a key 4000, **not** 42. | 2020-12-07T17:29:58 |
|
ansible-collections/community.vmware | 577 | ansible-collections__community.vmware-577 | [
"576"
] | 48a2799d5c4bef15c59d44d07ec09120989f4231 | diff --git a/plugins/modules/vmware_dvswitch.py b/plugins/modules/vmware_dvswitch.py
--- a/plugins/modules/vmware_dvswitch.py
+++ b/plugins/modules/vmware_dvswitch.py
@@ -277,10 +277,6 @@ def __init__(self, module):
self.switch_name = self.module.params['switch_name']
self.switch_version = self.module.params['switch_version']
- if self.content.about.version == '6.7.0':
- self.vcenter_switch_version = '6.6.0'
- else:
- self.vcenter_switch_version = self.content.about.version
folder = self.params['folder']
if folder:
self.folder_obj = self.content.searchIndex.FindByInventoryPath(folder)
@@ -653,10 +649,8 @@ def update_dvswitch(self):
changed_version = True
spec_product = self.create_product_spec(self.switch_version)
else:
- results['version'] = self.vcenter_switch_version
- if self.dvs.config.productInfo.version != self.vcenter_switch_version:
- changed_version = True
- spec_product = self.create_product_spec(self.vcenter_switch_version)
+ results['version'] = self.dvs.config.productInfo.version
+ changed_version = False
if changed_version:
changed = True
changed_list.append("switch version")
| vmware_dvswitch idempotency broken with vSphere 7.0.1
##### SUMMARY
The second call fails with: `"msg": "Failed to update DVS version : ('A specified parameter was not correct: ProductSpec.version', None)"`
https://67c852f73dcdf0c2dfb3-0ccf87873bf01f37e305b0f143521d81.ssl.cf2.rackcdn.com/568/dc635f81a8c04a631c8f98c9f9a6554822279fea/check/ansible-test-cloud-integration-vcenter7_only-python36/5651c62/job-output.html#l26599
```
TASK [vmware_dvswitch : add distributed vSwitch again] *************************
task path: /home/zuul/.ansible/collections/ansible_collections/community/vmware/tests/integration/targets/vmware_dvswitch/tasks/main.yml:67
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: zuul
<testhost> EXEC /bin/sh -c 'echo ~zuul && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/zuul/.ansible/tmp `"&& mkdir "` echo /home/zuul/.ansible/tmp/ansible-tmp-1608048501.0916786-36001-78830974297798 `" && echo ansible-tmp-1608048501.0916786-36001-78830974297798="` echo /home/zuul/.ansible/tmp/ansible-tmp-1608048501.0916786-36001-78830974297798 `" ) && sleep 0'
Using module file /home/zuul/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_dvswitch.py
<testhost> PUT /home/zuul/.ansible/tmp/ansible-local-35652_aplezpg/tmpyxkceyip TO /home/zuul/.ansible/tmp/ansible-tmp-1608048501.0916786-36001-78830974297798/AnsiballZ_vmware_dvswitch.py
<testhost> EXEC /bin/sh -c 'chmod u+x /home/zuul/.ansible/tmp/ansible-tmp-1608048501.0916786-36001-78830974297798/ /home/zuul/.ansible/tmp/ansible-tmp-1608048501.0916786-36001-78830974297798/AnsiballZ_vmware_dvswitch.py && sleep 0'
<testhost> EXEC /bin/sh -c '/tmp/python-aewn9r7k-ansible/python /home/zuul/.ansible/tmp/ansible-tmp-1608048501.0916786-36001-78830974297798/AnsiballZ_vmware_dvswitch.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /home/zuul/.ansible/tmp/ansible-tmp-1608048501.0916786-36001-78830974297798/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_vmware_dvswitch_payload__g7z_igx/ansible_vmware_dvswitch_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_dvswitch.py", line 685, in update_dvswitch
File "/tmp/ansible_vmware_dvswitch_payload__g7z_igx/ansible_vmware_dvswitch_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py", line 83, in wait_for_task
raise_from(TaskError(error_msg, host_thumbprint), task.info.error)
File "<string>", line 3, in raise_from
fatal: [testhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"contact": null,
"datacenter_name": "DC0",
"description": null,
"discovery_operation": "both",
"discovery_proto": "lldp",
"folder": null,
"health_check": {
"teaming_failover": false,
"teaming_failover_interval": 0,
"vlan_mtu": false,
"vlan_mtu_interval": 0
},
"hostname": "vcenter.test",
"mtu": 9000,
"multicast_filtering_mode": "basic",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"proxy_host": null,
"proxy_port": null,
"state": "present",
"switch_name": "dvswitch_0001",
"switch_version": null,
"uplink_prefix": "Uplink ",
"uplink_quantity": 2,
"username": "[email protected]",
"validate_certs": false
}
},
"msg": "Failed to update DVS version : ('A specified parameter was not correct: ProductSpec.version', None)"
```
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_dvswitch
| 2020-12-15T17:59:27 |
||
ansible-collections/community.vmware | 588 | ansible-collections__community.vmware-588 | [
"539"
] | 7e2b51e4afd94233b5e8a2f755a8b89723635f59 | diff --git a/plugins/modules/vmware_host_ntp.py b/plugins/modules/vmware_host_ntp.py
--- a/plugins/modules/vmware_host_ntp.py
+++ b/plugins/modules/vmware_host_ntp.py
@@ -203,52 +203,56 @@ def check_host_state(self):
changed = False
for host in self.hosts:
self.results[host.name] = dict()
- ntp_servers_configured, ntp_servers_to_change = self.check_ntp_servers(host=host)
- # add/remove NTP servers
- if self.desired_state:
- self.results[host.name]['state'] = self.desired_state
- if ntp_servers_to_change:
- self.results[host.name]['ntp_servers_changed'] = ntp_servers_to_change
- operation = 'add' if self.desired_state == 'present' else 'delete'
- new_ntp_servers = self.update_ntp_servers(
- host=host,
- ntp_servers_configured=ntp_servers_configured,
- ntp_servers_to_change=ntp_servers_to_change,
- operation=operation
- )
- self.results[host.name]['ntp_servers_current'] = new_ntp_servers
- self.results[host.name]['changed'] = True
- change_list.append(True)
+ if host.runtime.connectionState == "connected":
+ ntp_servers_configured, ntp_servers_to_change = self.check_ntp_servers(host=host)
+ # add/remove NTP servers
+ if self.desired_state:
+ self.results[host.name]['state'] = self.desired_state
+ if ntp_servers_to_change:
+ self.results[host.name]['ntp_servers_changed'] = ntp_servers_to_change
+ operation = 'add' if self.desired_state == 'present' else 'delete'
+ new_ntp_servers = self.update_ntp_servers(
+ host=host,
+ ntp_servers_configured=ntp_servers_configured,
+ ntp_servers_to_change=ntp_servers_to_change,
+ operation=operation
+ )
+ self.results[host.name]['ntp_servers_current'] = new_ntp_servers
+ self.results[host.name]['changed'] = True
+ change_list.append(True)
+ else:
+ self.results[host.name]['ntp_servers_current'] = ntp_servers_configured
+ if self.verbose:
+ self.results[host.name]['msg'] = (
+ "NTP servers already added" if self.desired_state == 'present'
+ else "NTP servers already removed"
+ )
+ self.results[host.name]['changed'] = False
+ change_list.append(False)
+ # overwrite NTP servers
else:
- self.results[host.name]['ntp_servers_current'] = ntp_servers_configured
- if self.verbose:
- self.results[host.name]['msg'] = (
- "NTP servers already added" if self.desired_state == 'present'
- else "NTP servers already removed"
+ self.results[host.name]['ntp_servers'] = self.ntp_servers
+ if ntp_servers_to_change:
+ self.results[host.name]['ntp_servers_changed'] = self.get_differt_entries(
+ ntp_servers_configured,
+ ntp_servers_to_change
)
- self.results[host.name]['changed'] = False
- change_list.append(False)
- # overwrite NTP servers
+ self.update_ntp_servers(
+ host=host,
+ ntp_servers_configured=ntp_servers_configured,
+ ntp_servers_to_change=ntp_servers_to_change,
+ operation='overwrite'
+ )
+ self.results[host.name]['changed'] = True
+ change_list.append(True)
+ else:
+ if self.verbose:
+ self.results[host.name]['msg'] = "NTP servers already configured"
+ self.results[host.name]['changed'] = False
+ change_list.append(False)
else:
- self.results[host.name]['ntp_servers'] = self.ntp_servers
- if ntp_servers_to_change:
- self.results[host.name]['ntp_servers_changed'] = self.get_differt_entries(
- ntp_servers_configured,
- ntp_servers_to_change
- )
- self.update_ntp_servers(
- host=host,
- ntp_servers_configured=ntp_servers_configured,
- ntp_servers_to_change=ntp_servers_to_change,
- operation='overwrite'
- )
- self.results[host.name]['changed'] = True
- change_list.append(True)
- else:
- if self.verbose:
- self.results[host.name]['msg'] = "NTP servers already configured"
- self.results[host.name]['changed'] = False
- change_list.append(False)
+ self.results[host.name]['changed'] = False
+ self.results[host.name]['msg'] = "Host %s is disconnected and cannot be changed." % host.name
if any(change_list):
changed = True
| vSphere API breaks vmware_host_ntp module when a host is disconnected
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Running the vmware_host_ntp module against a vSphere 7.0.1+ server results in a failure and traceback.
This module works against vSphere 7.0.0 or if I point directly at the ESXi host instead of going through vSphere.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host_ntp
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/e393260/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
empty
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux release 8.3 (Ootpa)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have tested this with the stock Ansible 2.9.15 module and the Galaxy Community module. Both fail the same way. I'll list the Community method I used.
1. ansible-galaxy collection install community.vmware
2. ansible-playbook -v ./test.yml
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
gather_facts: false
tasks:
- name: Configure NTP
community.vmware.vmware_host_ntp:
hostname: "vcenter.host.com"
username: "[email protected]"
password: "<password>"
cluster_name: "Cluster"
state: present
ntp_servers:
- 1.2.3.4
- 5.6.7.8
- 9.10.11.12
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Configured NTP servers.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
The full traceback is:
Traceback (most recent call last):
File "/home/foouser/.ansible/tmp/ansible-tmp-1606854652.469246-309393-280139637752886/AnsiballZ_vmware_host_ntp.py", line 102, in <module>
_ansiballz_main()
File "/home/foouser/.ansible/tmp/ansible-tmp-1606854652.469246-309393-280139637752886/AnsiballZ_vmware_host_ntp.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/foouser/.ansible/tmp/ansible-tmp-1606854652.469246-309393-280139637752886/AnsiballZ_vmware_host_ntp.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_host_ntp', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_community.vmware.vmware_host_ntp_payload_7v10tmq0/ansible_community.vmware.vmware_host_ntp_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_ntp.py", line 387, in <module>
File "/tmp/ansible_community.vmware.vmware_host_ntp_payload_7v10tmq0/ansible_community.vmware.vmware_host_ntp_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_ntp.py", line 383, in main
File "/tmp/ansible_community.vmware.vmware_host_ntp_payload_7v10tmq0/ansible_community.vmware.vmware_host_ntp_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_ntp.py", line 206, in check_host_state
File "/tmp/ansible_community.vmware.vmware_host_ntp_payload_7v10tmq0/ansible_community.vmware.vmware_host_ntp_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_ntp.py", line 262, in check_ntp_servers
AttributeError: 'NoneType' object has no attribute 'ntpConfig'
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/home/foouser/.ansible/tmp/ansible-tmp-1606854652.469246-309393-280139637752886/AnsiballZ_vmware_host_ntp.py\", line 102, in <module>\n _ansiballz_main()\n File \"/home/foouser/.ansible/tmp/ansible-tmp-1606854652.469246-309393-280139637752886/AnsiballZ_vmware_host_ntp.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/foouser/.ansible/tmp/ansible-tmp-1606854652.469246-309393-280139637752886/AnsiballZ_vmware_host_ntp.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_host_ntp', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_community.vmware.vmware_host_ntp_payload_7v10tmq0/ansible_community.vmware.vmware_host_ntp_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_ntp.py\", line 387, in <module>\n File \"/tmp/ansible_community.vmware.vmware_host_ntp_payload_7v10tmq0/ansible_community.vmware.vmware_host_ntp_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_ntp.py\", line 383, in main\n File \"/tmp/ansible_community.vmware.vmware_host_ntp_payload_7v10tmq0/ansible_community.vmware.vmware_host_ntp_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_ntp.py\", line 206, in check_host_state\n File \"/tmp/ansible_community.vmware.vmware_host_ntp_payload_7v10tmq0/ansible_community.vmware.vmware_host_ntp_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_host_ntp.py\", line 262, in check_ntp_servers\nAttributeError: 'NoneType' object has no attribute 'ntpConfig'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
| Sorry, I can't confirm this. Works for me, both with Ansible 2.9.14 and Ansible 2.10.1 (I think that includes version 1.2.0 of this collection).
Is this a newly installed ESXi/vSphere 7.0.1+? Or did `vmware_host_ntp` work with ESXi/vSphere 7.0.0 and stop working after you updated the host?
Ansible 2.10.4 works for me, too. Both with pyvmomi 7.0 and 7.0.1.
The vSphere host was originally 7.0.0 and working, and was then upgraded to 7.0.1. The bare metal ESXi host is still running 7.0.0.
I'm using pyvmomi 7.0.1 from EPEL.
> The vSphere host was originally 7.0.0 and working, and was then upgraded to 7.0.1. The bare metal ESXi host is still running 7.0.0.
All our vCenters are now on 7.0U1; most of our ESXi hosts are still 6.7U3, although two clusters (managed by two different vCenters) are already running ESXi 7.0U1. I still can't reproduce your issue. Unfortunately, I can't test the combination vCenter 7.0U1 / ESXi 7.0GA at the moment.
> I'm using pyvmomi 7.0.1 from EPEL.
What a pity, I was hoping for a problem with an old pyvmomi version.
@Tomorrow9 @sky-joker Can you please have a look at this? Can you reproduce the issue?
It looks like the `'NoneType' object has no attribute 'ntpConfig'` error is here:
https://github.com/ansible-collections/community.vmware/blob/82eff8a728a82192417b43c6d084b61eaf251543/plugins/modules/vmware_host_ntp.py#L262
Interesting. Although `ntpConfig` is documented as optional ([Need not be set](https://vdc-download.vmware.com/vmwb-repository/dcr-public/a5f4000f-1ea8-48a9-9221-586adff3c557/7ff50256-2cf2-45ea-aacd-87d231ab1ac7/vim.host.DateTimeInfo.html)), I've never seen it unset.
I do need to mention that this vCenter has 2 ESXi hosts and one of the two is offline at the moment (intentionally). The offline host was only made offline after the 7.0.1 upgrade so it may have nothing to do with versions. Could vCenter be sending back empty NTP info for the offline host? I haven't taken a capture of the returned data to confirm.
> I do need to mention that this vCenter has 2 ESXi hosts and one of the two is offline at the moment (intentionally). The offline host was only made offline after the 7.0.1 upgrade so it may have nothing to do with versions. Could vCenter be sending back empty NTP info for the offline host?
That might well be the case. But even if this isn't a problem with 7.0.1, I consider this an issue: the module should skip hosts that are offline or fail, but it shouldn't crash.
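A minimal sketch of such a defensive lookup (illustrative names, not the actual module code), assuming any of `config`, `dateTimeInfo` or `ntpConfig` can be unset for a disconnected host:

```python
import types


def configured_ntp_servers(host):
    """Return the host's configured NTP servers, or None if the host's
    config is unavailable (e.g. the host is disconnected)."""
    config = getattr(host, "config", None)
    date_time_info = getattr(config, "dateTimeInfo", None)
    ntp_config = getattr(date_time_info, "ntpConfig", None)
    if ntp_config is None:
        return None  # caller should skip or report this host, not crash
    return list(ntp_config.server)


# As observed in this issue, a disconnected host can report config as None:
offline_host = types.SimpleNamespace(config=None)
assert configured_ntp_servers(offline_host) is None
```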
> That might well be the case. But even if this isn't a problem with 7.0.1, I consider this an issue: the module should skip hosts that are offline or fail, but it shouldn't crash.
I removed the offline/disconnected host and the module now works. | 2020-12-27T16:08:29 |
|
ansible-collections/community.vmware | 595 | ansible-collections__community.vmware-595 | [
"555"
] | 603a967e4a83b161c9be1882bbb20c29a7f93195 | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -152,6 +152,7 @@
description:
- Valid values are C(buslogic), C(lsilogic), C(lsilogicsas) and C(paravirtual).
- C(paravirtual) is default.
+ choices: [ 'buslogic', 'lsilogic', 'lsilogicsas', 'paravirtual' ]
memory_reservation_lock:
type: bool
description:
@@ -168,6 +169,7 @@
mem_reservation:
type: int
description: The amount of memory resource that is guaranteed available to the virtual machine.
+ aliases: [ 'memory_reservation' ]
cpu_limit:
type: int
description:
@@ -177,7 +179,7 @@
type: int
description: The amount of CPU resource that is guaranteed available to the virtual machine.
version:
- type: int
+ type: str
description:
- The Virtual machine hardware versions.
- Default is 10 (ESXi 5.5 and onwards).
@@ -188,6 +190,11 @@
boot_firmware:
type: str
description: Choose which firmware should be used to boot the virtual machine.
+ choices: [ 'bios', 'efi' ]
+ nested_virt:
+ type: bool
+ description:
+ - Enable nested virtualization.
virt_based_security:
type: bool
description:
@@ -575,13 +582,13 @@
description:
- Auto logon after virtual machine customization.
- Specific to Windows customization.
- default: False
autologoncount:
type: int
description:
- Number of autologon after reboot.
- Specific to Windows customization.
- default: 1
+ - Ignored if C(autologon) is unset or set to C(False).
+ - If unset, 1 will be used.
domainadmin:
type: str
description:
@@ -599,7 +606,7 @@
description:
- Server owner name.
- Specific to Windows customization.
- default: Administrator
+ - If unset, "Administrator" will be used as a fall-back.
joindomain:
type: str
description:
@@ -612,13 +619,13 @@
- Workgroup to join.
- Not compatible with C(joindomain).
- Specific to Windows customization.
- default: WORKGROUP
+ - If unset, "WORKGROUP" will be used as a fall-back.
orgname:
type: str
description:
- Organisation name.
- Specific to Windows customization.
- default: ACME
+ - If unset, "ACME" will be used as a fall-back.
password:
type: str
description:
@@ -1398,51 +1405,33 @@ def configure_resource_alloc_info(self, vm_obj):
memory_allocation = vim.ResourceAllocationInfo()
cpu_allocation = vim.ResourceAllocationInfo()
- if 'hardware' in self.params:
- if 'mem_limit' in self.params['hardware']:
- mem_limit = None
- try:
- mem_limit = int(self.params['hardware'].get('mem_limit'))
- except ValueError:
- self.module.fail_json(msg="hardware.mem_limit attribute should be an integer value.")
- memory_allocation.limit = mem_limit
- if vm_obj is None or memory_allocation.limit != vm_obj.config.memoryAllocation.limit:
- rai_change_detected = True
-
- if 'mem_reservation' in self.params['hardware'] or 'memory_reservation' in self.params['hardware']:
- mem_reservation = self.params['hardware'].get('mem_reservation')
- if mem_reservation is None:
- mem_reservation = self.params['hardware'].get('memory_reservation')
- try:
- mem_reservation = int(mem_reservation)
- except ValueError:
- self.module.fail_json(msg="hardware.mem_reservation or hardware.memory_reservation should be an integer value.")
-
- memory_allocation.reservation = mem_reservation
- if vm_obj is None or \
- memory_allocation.reservation != vm_obj.config.memoryAllocation.reservation:
- rai_change_detected = True
-
- if 'cpu_limit' in self.params['hardware']:
- cpu_limit = None
- try:
- cpu_limit = int(self.params['hardware'].get('cpu_limit'))
- except ValueError:
- self.module.fail_json(msg="hardware.cpu_limit attribute should be an integer value.")
- cpu_allocation.limit = cpu_limit
- if vm_obj is None or cpu_allocation.limit != vm_obj.config.cpuAllocation.limit:
- rai_change_detected = True
-
- if 'cpu_reservation' in self.params['hardware']:
- cpu_reservation = None
- try:
- cpu_reservation = int(self.params['hardware'].get('cpu_reservation'))
- except ValueError:
- self.module.fail_json(msg="hardware.cpu_reservation should be an integer value.")
- cpu_allocation.reservation = cpu_reservation
- if vm_obj is None or \
- cpu_allocation.reservation != vm_obj.config.cpuAllocation.reservation:
- rai_change_detected = True
+ mem_limit = self.params['hardware']['mem_limit']
+ if mem_limit is not None:
+ memory_allocation.limit = mem_limit
+ if vm_obj is None or \
+ memory_allocation.limit != vm_obj.config.memoryAllocation.limit:
+ rai_change_detected = True
+
+ mem_reservation = self.params['hardware']['mem_reservation']
+ if mem_reservation is not None:
+ memory_allocation.reservation = mem_reservation
+ if vm_obj is None or \
+ memory_allocation.reservation != vm_obj.config.memoryAllocation.reservation:
+ rai_change_detected = True
+
+ cpu_limit = self.params['hardware']['cpu_limit']
+ if cpu_limit is not None:
+ cpu_allocation.limit = cpu_limit
+ if vm_obj is None or \
+ cpu_allocation.limit != vm_obj.config.cpuAllocation.limit:
+ rai_change_detected = True
+
+ cpu_reservation = self.params['hardware']['cpu_reservation']
+ if cpu_reservation is not None:
+ cpu_allocation.reservation = cpu_reservation
+ if vm_obj is None or \
+ cpu_allocation.reservation != vm_obj.config.cpuAllocation.reservation:
+ rai_change_detected = True
if rai_change_detected:
self.configspec.memoryAllocation = memory_allocation
@@ -1451,101 +1440,89 @@ def configure_resource_alloc_info(self, vm_obj):
def configure_cpu_and_memory(self, vm_obj, vm_creation=False):
# set cpu/memory/etc
- if 'hardware' in self.params:
- if 'num_cpus' in self.params['hardware']:
- try:
- num_cpus = int(self.params['hardware']['num_cpus'])
- except ValueError:
- self.module.fail_json(msg="hardware.num_cpus attribute should be an integer value.")
- # check VM power state and cpu hot-add/hot-remove state before re-config VM
- if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
- if not vm_obj.config.cpuHotRemoveEnabled and num_cpus < vm_obj.config.hardware.numCPU:
- self.module.fail_json(msg="Configured cpu number is less than the cpu number of the VM, "
- "cpuHotRemove is not enabled")
- if not vm_obj.config.cpuHotAddEnabled and num_cpus > vm_obj.config.hardware.numCPU:
- self.module.fail_json(msg="Configured cpu number is more than the cpu number of the VM, "
- "cpuHotAdd is not enabled")
-
- if 'num_cpu_cores_per_socket' in self.params['hardware']:
- try:
- num_cpu_cores_per_socket = int(self.params['hardware']['num_cpu_cores_per_socket'])
- except ValueError:
- self.module.fail_json(msg="hardware.num_cpu_cores_per_socket attribute "
- "should be an integer value.")
- if num_cpus % num_cpu_cores_per_socket != 0:
- self.module.fail_json(msg="hardware.num_cpus attribute should be a multiple "
- "of hardware.num_cpu_cores_per_socket")
- self.configspec.numCoresPerSocket = num_cpu_cores_per_socket
- if vm_obj is None or self.configspec.numCoresPerSocket != vm_obj.config.hardware.numCoresPerSocket:
- self.change_detected = True
- self.configspec.numCPUs = num_cpus
- if vm_obj is None or self.configspec.numCPUs != vm_obj.config.hardware.numCPU:
- self.change_detected = True
- # num_cpu is mandatory for VM creation
- elif vm_creation and not self.params['template']:
- self.module.fail_json(msg="hardware.num_cpus attribute is mandatory for VM creation")
-
- if 'memory_mb' in self.params['hardware']:
- try:
- memory_mb = int(self.params['hardware']['memory_mb'])
- except ValueError:
- self.module.fail_json(msg="Failed to parse hardware.memory_mb value."
- " Please refer the documentation and provide"
- " correct value.")
- # check VM power state and memory hotadd state before re-config VM
- if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
- if vm_obj.config.memoryHotAddEnabled and memory_mb < vm_obj.config.hardware.memoryMB:
- self.module.fail_json(msg="Configured memory is less than memory size of the VM, "
- "operation is not supported")
- elif not vm_obj.config.memoryHotAddEnabled and memory_mb != vm_obj.config.hardware.memoryMB:
- self.module.fail_json(msg="memoryHotAdd is not enabled")
- self.configspec.memoryMB = memory_mb
- if vm_obj is None or self.configspec.memoryMB != vm_obj.config.hardware.memoryMB:
- self.change_detected = True
- # memory_mb is mandatory for VM creation
- elif vm_creation and not self.params['template']:
- self.module.fail_json(msg="hardware.memory_mb attribute is mandatory for VM creation")
-
- if 'hotadd_memory' in self.params['hardware']:
- if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
- vm_obj.config.memoryHotAddEnabled != bool(self.params['hardware']['hotadd_memory']):
- self.module.fail_json(msg="Configure hotadd memory operation is not supported when VM is power on")
- self.configspec.memoryHotAddEnabled = bool(self.params['hardware']['hotadd_memory'])
- if vm_obj is None or self.configspec.memoryHotAddEnabled != vm_obj.config.memoryHotAddEnabled:
+ num_cpus = self.params['hardware']['num_cpus']
+ if num_cpus is not None:
+ # check VM power state and cpu hot-add/hot-remove state before re-config VM
+ if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
+ if not vm_obj.config.cpuHotRemoveEnabled and num_cpus < vm_obj.config.hardware.numCPU:
+ self.module.fail_json(msg="Configured cpu number is less than the cpu number of the VM, "
+ "cpuHotRemove is not enabled")
+ if not vm_obj.config.cpuHotAddEnabled and num_cpus > vm_obj.config.hardware.numCPU:
+ self.module.fail_json(msg="Configured cpu number is more than the cpu number of the VM, "
+ "cpuHotAdd is not enabled")
+
+ num_cpu_cores_per_socket = self.params['hardware']['num_cpu_cores_per_socket']
+ if num_cpu_cores_per_socket is not None:
+ if num_cpus % num_cpu_cores_per_socket != 0:
+ self.module.fail_json(msg="hardware.num_cpus attribute should be a multiple "
+ "of hardware.num_cpu_cores_per_socket")
+ self.configspec.numCoresPerSocket = num_cpu_cores_per_socket
+ if vm_obj is None or self.configspec.numCoresPerSocket != vm_obj.config.hardware.numCoresPerSocket:
self.change_detected = True
+ self.configspec.numCPUs = num_cpus
+ if vm_obj is None or self.configspec.numCPUs != vm_obj.config.hardware.numCPU:
+ self.change_detected = True
+ # num_cpu is mandatory for VM creation
+ elif vm_creation and not self.params['template']:
+ self.module.fail_json(msg="hardware.num_cpus attribute is mandatory for VM creation")
- if 'hotadd_cpu' in self.params['hardware']:
- if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
- vm_obj.config.cpuHotAddEnabled != bool(self.params['hardware']['hotadd_cpu']):
- self.module.fail_json(msg="Configure hotadd cpu operation is not supported when VM is power on")
- self.configspec.cpuHotAddEnabled = bool(self.params['hardware']['hotadd_cpu'])
- if vm_obj is None or self.configspec.cpuHotAddEnabled != vm_obj.config.cpuHotAddEnabled:
- self.change_detected = True
+ memory_mb = self.params['hardware']['memory_mb']
+ if memory_mb is not None:
+ # check VM power state and memory hotadd state before re-config VM
+ if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
+ if vm_obj.config.memoryHotAddEnabled and memory_mb < vm_obj.config.hardware.memoryMB:
+ self.module.fail_json(msg="Configured memory is less than memory size of the VM, "
+ "operation is not supported")
+ elif not vm_obj.config.memoryHotAddEnabled and memory_mb != vm_obj.config.hardware.memoryMB:
+ self.module.fail_json(msg="memoryHotAdd is not enabled")
+ self.configspec.memoryMB = memory_mb
+ if vm_obj is None or self.configspec.memoryMB != vm_obj.config.hardware.memoryMB:
+ self.change_detected = True
+ # memory_mb is mandatory for VM creation
+ elif vm_creation and not self.params['template']:
+ self.module.fail_json(msg="hardware.memory_mb attribute is mandatory for VM creation")
+
+ hotadd_memory = self.params['hardware']['hotadd_memory']
+ if hotadd_memory is not None:
+ if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
+ vm_obj.config.memoryHotAddEnabled != hotadd_memory:
+ self.module.fail_json(msg="Configure hotadd memory operation is not supported when VM is power on")
+ self.configspec.memoryHotAddEnabled = hotadd_memory
+ if vm_obj is None or self.configspec.memoryHotAddEnabled != vm_obj.config.memoryHotAddEnabled:
+ self.change_detected = True
- if 'hotremove_cpu' in self.params['hardware']:
- if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
- vm_obj.config.cpuHotRemoveEnabled != bool(self.params['hardware']['hotremove_cpu']):
- self.module.fail_json(msg="Configure hotremove cpu operation is not supported when VM is power on")
- self.configspec.cpuHotRemoveEnabled = bool(self.params['hardware']['hotremove_cpu'])
- if vm_obj is None or self.configspec.cpuHotRemoveEnabled != vm_obj.config.cpuHotRemoveEnabled:
- self.change_detected = True
+ hotadd_cpu = self.params['hardware']['hotadd_cpu']
+ if hotadd_cpu is not None:
+ if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
+ vm_obj.config.cpuHotAddEnabled != hotadd_cpu:
+ self.module.fail_json(msg="Configure hotadd cpu operation is not supported when VM is power on")
+ self.configspec.cpuHotAddEnabled = hotadd_cpu
+ if vm_obj is None or self.configspec.cpuHotAddEnabled != vm_obj.config.cpuHotAddEnabled:
+ self.change_detected = True
- if 'memory_reservation_lock' in self.params['hardware']:
- self.configspec.memoryReservationLockedToMax = bool(self.params['hardware']['memory_reservation_lock'])
- if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
- self.change_detected = True
+ hotremove_cpu = self.params['hardware']['hotremove_cpu']
+ if hotremove_cpu is not None:
+ if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
+ vm_obj.config.cpuHotRemoveEnabled != hotremove_cpu:
+ self.module.fail_json(msg="Configure hotremove cpu operation is not supported when VM is power on")
+ self.configspec.cpuHotRemoveEnabled = hotremove_cpu
+ if vm_obj is None or self.configspec.cpuHotRemoveEnabled != vm_obj.config.cpuHotRemoveEnabled:
+ self.change_detected = True
- if 'boot_firmware' in self.params['hardware']:
- # boot firmware re-config can cause boot issue
- if vm_obj is not None:
- return
- boot_firmware = self.params['hardware']['boot_firmware'].lower()
- if boot_firmware not in ('bios', 'efi'):
- self.module.fail_json(msg="hardware.boot_firmware value is invalid [%s]."
- " Need one of ['bios', 'efi']." % boot_firmware)
- self.configspec.firmware = boot_firmware
+ memory_reservation_lock = self.params['hardware']['memory_reservation_lock']
+ if memory_reservation_lock is not None:
+ self.configspec.memoryReservationLockedToMax = memory_reservation_lock
+ if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
self.change_detected = True
+ boot_firmware = self.params['hardware']['boot_firmware']
+ if boot_firmware is not None:
+ # boot firmware re-config can cause boot issue
+ if vm_obj is not None:
+ return
+ self.configspec.firmware = boot_firmware
+ self.change_detected = True
+
def sanitize_cdrom_params(self):
cdrom_specs = []
expected_cdrom_spec = self.params.get('cdrom')
@@ -1750,82 +1727,82 @@ def configure_hardware_params(self, vm_obj):
Args:
vm_obj: virtual machine object
"""
- if 'hardware' in self.params:
- if 'max_connections' in self.params['hardware']:
- # maxMksConnections == max_connections
- self.configspec.maxMksConnections = int(self.params['hardware']['max_connections'])
- if vm_obj is None or self.configspec.maxMksConnections != vm_obj.config.maxMksConnections:
- self.change_detected = True
+ max_connections = self.params['hardware']['max_connections']
+ if max_connections is not None:
+ self.configspec.maxMksConnections = max_connections
+ if vm_obj is None or self.configspec.maxMksConnections != vm_obj.config.maxMksConnections:
+ self.change_detected = True
- if 'nested_virt' in self.params['hardware']:
- self.configspec.nestedHVEnabled = bool(self.params['hardware']['nested_virt'])
- if vm_obj is None or self.configspec.nestedHVEnabled != bool(vm_obj.config.nestedHVEnabled):
- self.change_detected = True
+ nested_virt = self.params['hardware']['nested_virt']
+ if nested_virt is not None:
+ self.configspec.nestedHVEnabled = nested_virt
+ if vm_obj is None or self.configspec.nestedHVEnabled != bool(vm_obj.config.nestedHVEnabled):
+ self.change_detected = True
- if 'version' in self.params['hardware']:
- hw_version_check_failed = False
- temp_version = self.params['hardware'].get('version', 10)
- if isinstance(temp_version, str) and temp_version.lower() == 'latest':
- # Check is to make sure vm_obj is not of type template
- if vm_obj and not vm_obj.config.template:
- try:
- task = vm_obj.UpgradeVM_Task()
- self.wait_for_task(task)
- if task.info.state == 'error':
- return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
- except vim.fault.AlreadyUpgraded:
- # Don't fail if VM is already upgraded.
- pass
- else:
+ temp_version = self.params['hardware']['version']
+ if temp_version is not None:
+ hw_version_check_failed = False
+ if temp_version.lower() == 'latest':
+ # Check is to make sure vm_obj is not of type template
+ if vm_obj and not vm_obj.config.template:
try:
- temp_version = int(temp_version)
- except ValueError:
- hw_version_check_failed = True
-
- if temp_version not in range(3, 18):
- hw_version_check_failed = True
-
- if hw_version_check_failed:
- self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
- " values range from 3 (ESX 2.x) to 17 (ESXi 7.0)." % temp_version)
- # Hardware version is denoted as "vmx-10"
- version = "vmx-%02d" % temp_version
- self.configspec.version = version
- if vm_obj is None or self.configspec.version != vm_obj.config.version:
- self.change_detected = True
- # Check is to make sure vm_obj is not of type template
- if vm_obj and not vm_obj.config.template:
- # VM exists and we need to update the hardware version
- current_version = vm_obj.config.version
- # current_version = "vmx-10"
- version_digit = int(current_version.split("-", 1)[-1])
- if temp_version < version_digit:
- self.module.fail_json(msg="Current hardware version '%d' which is greater than the specified"
- " version '%d'. Downgrading hardware version is"
- " not supported. Please specify version greater"
- " than the current version." % (version_digit,
- temp_version))
- new_version = "vmx-%02d" % temp_version
- try:
- task = vm_obj.UpgradeVM_Task(new_version)
- self.wait_for_task(task)
- if task.info.state == 'error':
- return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
- except vim.fault.AlreadyUpgraded:
- # Don't fail if VM is already upgraded.
- pass
-
- if 'virt_based_security' in self.params['hardware']:
- virt_based_security_set = bool(self.params['hardware']['virt_based_security'])
- if vm_obj is None or vm_obj.config.flags.vbsEnabled != virt_based_security_set:
- self.configspec.flags = vim.vm.FlagInfo()
- self.configspec.flags.vbsEnabled = virt_based_security_set
- if virt_based_security_set:
- self.configspec.flags.vvtdEnabled = True
- self.configspec.nestedHVEnabled = True
- self.configspec.bootOptions = vim.vm.BootOptions()
- self.configspec.bootOptions.efiSecureBootEnabled = True
+ task = vm_obj.UpgradeVM_Task()
+ self.wait_for_task(task)
+ if task.info.state == 'error':
+ return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
+ except vim.fault.AlreadyUpgraded:
+ # Don't fail if VM is already upgraded.
+ pass
+ else:
+ try:
+ temp_version = int(temp_version)
+ except ValueError:
+ hw_version_check_failed = True
+
+ if temp_version not in range(3, 18):
+ hw_version_check_failed = True
+
+ if hw_version_check_failed:
+ self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
+ " values range from 3 (ESX 2.x) to 17 (ESXi 7.0)." % temp_version)
+ # Hardware version is denoted as "vmx-10"
+ version = "vmx-%02d" % temp_version
+ self.configspec.version = version
+ if vm_obj is None or self.configspec.version != vm_obj.config.version:
self.change_detected = True
+ # Check is to make sure vm_obj is not of type template
+ if vm_obj and not vm_obj.config.template:
+ # VM exists and we need to update the hardware version
+ current_version = vm_obj.config.version
+ # current_version = "vmx-10"
+ version_digit = int(current_version.split("-", 1)[-1])
+ if temp_version < version_digit:
+ self.module.fail_json(msg="Current hardware version '%d' which is greater than the specified"
+ " version '%d'. Downgrading hardware version is"
+ " not supported. Please specify version greater"
+ " than the current version." % (version_digit,
+ temp_version))
+ new_version = "vmx-%02d" % temp_version
+ try:
+ task = vm_obj.UpgradeVM_Task(new_version)
+ self.wait_for_task(task)
+ if task.info.state == 'error':
+ return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
+ except vim.fault.AlreadyUpgraded:
+ # Don't fail if VM is already upgraded.
+ pass
+
+ virt_based_security = self.params['hardware']['virt_based_security']
+ if virt_based_security is not None:
+ if vm_obj is None or vm_obj.config.flags.vbsEnabled != virt_based_security:
+ self.configspec.flags = vim.vm.FlagInfo()
+ self.configspec.flags.vbsEnabled = virt_based_security
+ if virt_based_security:
+ self.configspec.flags.vvtdEnabled = True
+ self.configspec.nestedHVEnabled = True
+ self.configspec.bootOptions = vim.vm.BootOptions()
+ self.configspec.bootOptions.efiSecureBootEnabled = True
+ self.change_detected = True
def get_device_by_type(self, vm=None, type=None):
device_list = []
@@ -1937,7 +1914,7 @@ def sanitize_network_params(self):
def configure_network(self, vm_obj):
# Ignore empty networks, this permits to keep networks when deploying a template/cloning a VM
- if len(self.params['networks']) == 0:
+ if not self.params['networks']:
return
network_devices = self.sanitize_network_params()
@@ -2096,7 +2073,7 @@ def set_vapp_properties(self, property_spec):
return property_info
def configure_vapp_properties(self, vm_obj):
- if len(self.params['vapp_properties']) == 0:
+ if not self.params['vapp_properties']:
return
for x in self.params['vapp_properties']:
@@ -2193,7 +2170,7 @@ def configure_vapp_properties(self, vm_obj):
self.change_detected = True
def customize_customvalues(self, vm_obj, config_spec):
- if len(self.params['customvalues']) == 0:
+ if not self.params['customvalues']:
return
vm_custom_spec = config_spec
@@ -2251,38 +2228,37 @@ def customize_vm(self, vm_obj):
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.IPSettings.html
if 'domain' in network:
guest_map.adapter.dnsDomain = network['domain']
- elif 'domain' in self.params['customization']:
+ elif self.params['customization']['domain'] is not None:
guest_map.adapter.dnsDomain = self.params['customization']['domain']
if 'dns_servers' in network:
guest_map.adapter.dnsServerList = network['dns_servers']
- elif 'dns_servers' in self.params['customization']:
+ elif self.params['customization']['dns_servers'] is not None:
guest_map.adapter.dnsServerList = self.params['customization']['dns_servers']
adaptermaps.append(guest_map)
# Global DNS settings
globalip = vim.vm.customization.GlobalIPSettings()
- if 'dns_servers' in self.params['customization']:
+ if self.params['customization']['dns_servers'] is not None:
globalip.dnsServerList = self.params['customization']['dns_servers']
# TODO: Maybe list the different domains from the interfaces here by default ?
dns_suffixes = []
- if 'dns_suffix' in self.params['customization']:
- dns_suffix = self.params['customization']['dns_suffix']
- if dns_suffix:
- if isinstance(dns_suffix, list):
- dns_suffixes += dns_suffix
- else:
- dns_suffixes.append(dns_suffix)
+ dns_suffix = self.params['customization']['dns_suffix']
+ if dns_suffix:
+ if isinstance(dns_suffix, list):
+ dns_suffixes += dns_suffix
+ else:
+ dns_suffixes.append(dns_suffix)
- globalip.dnsSuffixList = dns_suffixes
+ globalip.dnsSuffixList = dns_suffixes
- if 'domain' in self.params['customization']:
+ if self.params['customization']['domain'] is not None:
dns_suffixes.insert(0, self.params['customization']['domain'])
globalip.dnsSuffixList = dns_suffixes
- if self.params['guest_id']:
+ if self.params['guest_id'] is not None:
guest_id = self.params['guest_id']
else:
guest_id = vm_obj.summary.config.guestId
@@ -2308,16 +2284,16 @@ def customize_vm(self, vm_obj):
ident.userData.fullName = str(self.params['customization'].get('fullname', 'Administrator'))
ident.userData.orgName = str(self.params['customization'].get('orgname', 'ACME'))
- if 'productid' in self.params['customization']:
+ if self.params['customization']['productid'] is not None:
ident.userData.productId = str(self.params['customization']['productid'])
ident.guiUnattended = vim.vm.customization.GuiUnattended()
- if 'autologon' in self.params['customization']:
+ if self.params['customization']['autologon'] is not None:
ident.guiUnattended.autoLogon = self.params['customization']['autologon']
ident.guiUnattended.autoLogonCount = self.params['customization'].get('autologoncount', 1)
- if 'timezone' in self.params['customization']:
+ if self.params['customization']['timezone'] is not None:
# Check if timezone value is a int before proceeding.
ident.guiUnattended.timeZone = self.device_helper.integer_value(
self.params['customization']['timezone'],
@@ -2330,21 +2306,21 @@ def customize_vm(self, vm_obj):
ident.guiUnattended.password.value = str(self.params['customization']['password'])
ident.guiUnattended.password.plainText = True
- if 'joindomain' in self.params['customization']:
- if 'domainadmin' not in self.params['customization'] or 'domainadminpassword' not in self.params['customization']:
+ if self.params['customization']['joindomain'] is not None:
+ if self.params['customization']['domainadmin'] is None or self.params['customization']['domainadminpassword'] is None:
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use "
"joindomain feature")
- ident.identification.domainAdmin = str(self.params['customization']['domainadmin'])
- ident.identification.joinDomain = str(self.params['customization']['joindomain'])
+ ident.identification.domainAdmin = self.params['customization']['domainadmin']
+ ident.identification.joinDomain = self.params['customization']['joindomain']
ident.identification.domainAdminPassword = vim.vm.customization.Password()
- ident.identification.domainAdminPassword.value = str(self.params['customization']['domainadminpassword'])
+ ident.identification.domainAdminPassword.value = self.params['customization']['domainadminpassword']
ident.identification.domainAdminPassword.plainText = True
- elif 'joinworkgroup' in self.params['customization']:
- ident.identification.joinWorkgroup = str(self.params['customization']['joinworkgroup'])
+ elif self.params['customization']['joinworkgroup'] is not None:
+ ident.identification.joinWorkgroup = self.params['customization']['joinworkgroup']
- if 'runonce' in self.params['customization']:
+ if self.params['customization']['runonce'] is not None:
ident.guiRunOnce = vim.vm.customization.GuiRunOnce()
ident.guiRunOnce.commandList = self.params['customization']['runonce']
@@ -2356,8 +2332,8 @@ def customize_vm(self, vm_obj):
ident = vim.vm.customization.LinuxPrep()
# TODO: Maybe add domain from interface if missing ?
- if 'domain' in self.params['customization']:
- ident.domain = str(self.params['customization']['domain'])
+ if self.params['customization']['domain'] is not None:
+ ident.domain = self.params['customization']['domain']
ident.hostName = vim.vm.customization.FixedName()
default_name = ""
@@ -2372,9 +2348,9 @@ def customize_vm(self, vm_obj):
# List of supported time zones for different vSphere versions in Linux/Unix systems
# https://kb.vmware.com/s/article/2145518
- if 'timezone' in self.params['customization']:
- ident.timeZone = str(self.params['customization']['timezone'])
- if 'hwclockUTC' in self.params['customization']:
+ if self.params['customization']['timezone'] is not None:
+ ident.timeZone = self.params['customization']['timezone']
+ if self.params['customization']['hwclockUTC'] is not None:
ident.hwClockUTC = self.params['customization']['hwclockUTC']
self.customspec = vim.vm.customization.Specification()
@@ -2599,7 +2575,7 @@ def configure_multiple_controllers_disks(self, vm_obj):
def configure_disks(self, vm_obj):
# Ignore empty disk list, this permits to keep disks when deploying a template/cloning a VM
- if len(self.params['disk']) == 0:
+ if not self.params['disk']:
return
# if one of 'controller_type', 'controller_number', 'unit_number' parameters set in one of disks' configuration
@@ -2773,7 +2749,7 @@ def select_datastore(self, vm_obj=None):
datastore = None
datastore_name = None
- if len(self.params['disk']) != 0:
+ if self.params['disk']:
# TODO: really use the datastore for newly created disks
if 'autoselect_datastore' in self.params['disk'][0] and self.params['disk'][0]['autoselect_datastore']:
datastores = []
@@ -2865,15 +2841,10 @@ def obj_has_parent(self, obj, parent):
return False
def get_scsi_type(self):
- disk_controller_type = "paravirtual"
- # set cpu/memory/etc
- if 'hardware' in self.params:
- if 'scsi' in self.params['hardware']:
- if self.params['hardware']['scsi'] in ['buslogic', 'paravirtual', 'lsilogic', 'lsilogicsas']:
- disk_controller_type = self.params['hardware']['scsi']
- else:
- self.module.fail_json(msg="hardware.scsi attribute should be 'paravirtual' or 'lsilogic'")
- return disk_controller_type
+ disk_controller_type = self.params['hardware']['scsi']
+ if disk_controller_type is not None:
+ return disk_controller_type
+ return "paravirtual"
def find_folder(self, searchpath):
""" Walk inventory objects one position of the searchpath at a time """
@@ -3050,7 +3021,7 @@ def deploy_vm(self):
network_changes = True
break
- if len(self.params['customization']) > 0 or network_changes or self.params.get('customization_spec') is not None:
+ if any(v is not None for v in self.params['customization'].values()) or network_changes or self.params.get('customization_spec') is not None:
self.customize_vm(vm_obj=vm_obj)
clonespec = None
@@ -3300,7 +3271,7 @@ def reconfigure_vm(self):
self.change_detected = True
# add customize existing VM after VM re-configure
- if 'existing_vm' in self.params['customization'] and self.params['customization']['existing_vm']:
+ if self.params['customization']['existing_vm']:
if self.current_vm_obj.config.template:
self.module.fail_json(msg="VM is template, not support guest OS customization.")
if self.current_vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
@@ -3322,7 +3293,7 @@ def customize_exist_vm(self):
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected', 'dvswitch_name'):
network_changes = True
break
- if len(self.params['customization']) > 1 or network_changes or self.params.get('customization_spec'):
+ if any(v is not None for v in self.params['customization'].values()) or network_changes or self.params.get('customization_spec'):
self.customize_vm(vm_obj=self.current_vm_obj)
try:
task = self.current_vm_obj.CustomizeVM_Task(self.customspec)
@@ -3421,7 +3392,28 @@ def main():
guest_id=dict(type='str'),
disk=dict(type='list', default=[], elements='dict'),
cdrom=dict(type='raw', default=[]),
- hardware=dict(type='dict', default={}),
+ hardware=dict(
+ type='dict',
+ default={},
+ options=dict(
+ boot_firmware=dict(type='str', choices=['bios', 'efi']),
+ cpu_limit=dict(type='int'),
+ cpu_reservation=dict(type='int'),
+ hotadd_cpu=dict(type='bool'),
+ hotadd_memory=dict(type='bool'),
+ hotremove_cpu=dict(type='bool'),
+ max_connections=dict(type='int'),
+ mem_limit=dict(type='int'),
+ mem_reservation=dict(type='int', aliases=['memory_reservation']),
+ memory_mb=dict(type='int'),
+ memory_reservation_lock=dict(type='bool'),
+ nested_virt=dict(type='bool'),
+ num_cpu_cores_per_socket=dict(type='int'),
+ num_cpus=dict(type='int'),
+ scsi=dict(type='str', choices=['buslogic', 'lsilogic', 'lsilogicsas', 'paravirtual']),
+ version=dict(type='str'),
+ virt_based_security=dict(type='bool')
+ )),
force=dict(type='bool', default=False),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
@@ -3433,7 +3425,29 @@ def main():
linked_clone=dict(type='bool', default=False),
networks=dict(type='list', default=[], elements='dict'),
resource_pool=dict(type='str'),
- customization=dict(type='dict', default={}, no_log=True),
+ customization=dict(
+ type='dict',
+ default={},
+ options=dict(
+ autologon=dict(type='bool'),
+ autologoncount=dict(type='int'),
+ dns_servers=dict(type='list', elements='str'),
+ dns_suffix=dict(type='list', elements='str'),
+ domain=dict(type='str'),
+ domainadmin=dict(type='str'),
+ domainadminpassword=dict(type='str', no_log=True),
+ existing_vm=dict(type='bool'),
+ fullname=dict(type='str'),
+ hostname=dict(type='str'),
+ hwclockUTC=dict(type='bool'),
+ joindomain=dict(type='str'),
+ joinworkgroup=dict(type='str'),
+ orgname=dict(type='str'),
+ password=dict(type='str', no_log=True),
+ productid=dict(type='str'),
+ runonce=dict(type='list', elements='str'),
+ timezone=dict(type='str')
+ )),
customization_spec=dict(type='str', default=None),
wait_for_customization=dict(type='bool', default=False),
wait_for_customization_timeout=dict(type='int', default=3600),
| vmware_guest: OS Customization options are treated as no_log strings
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Specifying OS customization options can affect task return values.
The `customization` argument of vmware_guest is [defined](https://github.com/ansible-collections/community.vmware/blob/6f025b3f452e5cdc85cd0e6537b6880537062e64/plugins/modules/vmware_guest.py#L3426) as `no_log=True`.
This causes all values supplied via this dictionary to be treated as no_log strings, which affects the return values of vmware_guest as well as any subsequent modules; e.g. supplying `autologoncount: 2` will cause all instances of the character `2` to be replaced with `********`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = None
configured module search path = ['/home/radu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
No output
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
N/A
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Deploy a VM from a template while specifying guest customization options.
The task works fine; however, the return value will sometimes be altered.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
tasks:
- community.vmware.vmware_guest:
# Other arguments
customization:
fullname: "{{ vm_local_username }}"
password: "{{ vm_local_password }}"
autologon: true
autologoncount: 2
register: vm_deployment
- debug: var=vm_deployment
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Get back the regular return values e.g.
```
ok: [localhost] => {
"vm_deployment": {
"changed": true,
"instance": {
(...)
"hw_product_uuid": "422a67a3-7b2b-5e4b-3163-ca0cb6873a14",
"hw_processor_count": "2",
(...)
"instance_uuid": "502a9ed4-de9e-46aa-b060-f7a63d262ce4",
(...)
"annotation": "Created by ansible - 2020/12/07 12:58:11",
(...)
"moid": "vm-142438",
"vimref": "vim.VirtualMachine:vm-142438",
(...)
"hw_eth0": {
(...)
"macaddress": "00:50:56:aa:f4:b4",
(...)
"summary": "DVSwitch: 96 ad 2a 50 aa 38 93 da-9f 91 91 88 86 ec c6 ff",
"portgroup_portkey": "2078",
"portgroup_key": "dvportgroup-286"
}
},
"_ansible_no_log": false,
"_ansible_delegated_vars": {}
}
}
```
##### ACTUAL RESULTS
Each of the values in the `customization` dict is now replaced with asterisks, as if they were all sensitive pieces of information.
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ok: [localhost] => {
"vm_deployment": {
"changed": true,
"instance": {
(...)
"hw_product_uuid": "4****************a67a3-7b********b-5e4b-3163-ca0cb6873a14",
"hw_processor_count": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
(...)
"instance_uuid": "50********a9ed4-de9e-46aa-b060-f7a63d********6********ce4",
(...)
"annotation": "Created by ansible - ********0********0/1********/07 1********:58:11",
(...)
"moid": "vm-14********438",
"vimref": "vim.VirtualMachine:vm-14********438",
(...)
"hw_eth0": {
(...)
"macaddress": "00:50:56:aa:f4:b4",
(...)
"summary": "DVSwitch: 96 ad ********a 50 aa 38 93 da-9f 91 91 88 86 ec c6 ff",
"portgroup_portkey": "********078",
"portgroup_key": "dvportgroup-********86"
}
},
"_ansible_no_log": false,
"_ansible_delegated_vars": {}
}
}
```
##### ADDITIONAL INFORMATION
The same behaviour can be observed with this setup
```
$ tree
.
├── library
│ └── bug_repro.py
└── playbook.yml
1 directory, 2 files
```
```
$ cat library/bug_repro.py
#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule
def main():
module = AnsibleModule(
argument_spec=dict(
customization=dict(type='dict', default={}, no_log=True)
)
)
result = {
'failed': False,
'changed': False,
'alphabet': "abcdefghijklmnopqrstuvwxyz"
}
module.exit_json(**result)
if __name__ == '__main__':
main()
```
```
$ cat playbook.yml
---
- hosts: localhost
connection: local
tasks:
- bug_repro:
customization:
this_triggers_it: "c"
this_also_triggers_it: "pqr"
register: bug_repro_result
- debug: var=bug_repro_result
```
Notice how each of the values supplied within the customization dictionary is replaced by asterisks
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ********************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************
ok: [localhost]
TASK [bug_repro] ********************************************************************************************************************
ok: [localhost]
TASK [debug] ************************************************************************************************************************
ok: [localhost] => {
"bug_repro_result": {
"alphabet": "ab********defghijklmno********stuvwxyz",
"changed": false,
"failed": false
}
}
PLAY RECAP **************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
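For contrast, a sketch of the same repro module with the inner structure of `customization` declared explicitly; the `secret` key is hypothetical and the only one marked `no_log`, so the `alphabet` value comes back uncensored:
```python
#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule


def main():
    # Only 'secret' is treated as sensitive; the other sub-options are
    # ordinary values and no longer censor unrelated output.
    module = AnsibleModule(
        argument_spec=dict(
            customization=dict(
                type='dict',
                default={},
                options=dict(
                    this_triggers_it=dict(type='str'),
                    this_also_triggers_it=dict(type='str'),
                    secret=dict(type='str', no_log=True),
                ),
            ),
        ),
    )
    module.exit_json(failed=False, changed=False,
                     alphabet="abcdefghijklmnopqrstuvwxyz")


if __name__ == '__main__':
    main()
```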
| @radugrecu Thanks for reporting this issue. Do asterisks affect the customization?
Asterisks in the output are a known issue in the Ansible Core Engine.
Thanks.
@Akasurde it doesn't affect the OS customization but it can make the output unusable.
In my particular use case I was trying to submit the details of the new VM to the CMDB; however, the `moid`, `hw_processor_count` and `macaddress` were all unusable because parts of them had been replaced with asterisks.
I believe the fix is to explicitly declare the inner structure of the `customization` argument while only applying `no_log=True` to the relevant keys
i.e. move from this
```python
customization=dict(type='dict', default={}, no_log=True),
```
to something along these lines
```python
customization=dict(
type='dict',
default={},
options=dict(
autologon=dict(type='bool', default=False),
autologoncount=dict(type='int', default=1),
        dns_servers=dict(type='list', default=[], elements='str'),
# other args
password=dict(type='str', no_log=True),
# other args
domainadminpassword=dict(type='str', no_log=True)
# other args
)),
```
where no_log is only applied to the password fields as opposed to the entire customization argument.
@radugrecu Ok. I can understand. As a workaround you can use the `customization_spec` parameter in `vmware_guest`, which will do the same.
> I believe the fix is to explicitly declare the inner structure of the `customization` argument while only applying `no_log=True` to the relevant keys
>
> i.e. move from this
>
> ```python
> customization=dict(type='dict', default={}, no_log=True),
> ```
>
> to something along these lines
>
> ```python
> customization=dict(
> type='dict',
> default={},
> options=dict(
> autologon=dict(type='bool', default=False),
> autologoncount=dict(type='int', default=1),
>         dns_servers=dict(type='list', default=[], elements='str'),
> # other args
> password=dict(type='str', no_log=True),
> # other args
> domainadminpassword=dict(type='str', no_log=True)
> # other args
> )),
> ```
>
> where no_log is only applied to the password fields as opposed to the entire customization argument.
I agree that it would be better to explicitly declare the inner structure of the `customization` argument, not only for your specific use case but generally. I'll assign this issue to myself and will try to do something about it. | 2021-01-05T17:22:48
|
ansible-collections/community.vmware | 627 | ansible-collections__community.vmware-627 | [
"623"
] | 425ef66a754846c3bdc356238d54cff2782964a0 | diff --git a/plugins/modules/vmware_datastore_cluster_manager.py b/plugins/modules/vmware_datastore_cluster_manager.py
--- a/plugins/modules/vmware_datastore_cluster_manager.py
+++ b/plugins/modules/vmware_datastore_cluster_manager.py
@@ -161,7 +161,7 @@ def ensure(self):
changed_list = [ds.name for ds in datastore_obj_list]
temp_result['current_datastores'] = temp_result['previous_datastores'].extend(changed_list)
temp_result['changed_datastores'] = changed_list
- results['changed'] = True
+ results['changed'] = len(datastore_obj_list) > 0
results['datastore_cluster_info'] = temp_result
self.module.exit_json(**results)
@@ -191,7 +191,7 @@ def ensure(self):
for ds in changed_list:
temp_result['current_datastores'].pop(ds)
temp_result['changed_datastores'] = changed_list
- results['changed'] = True
+ results['changed'] = len(datastore_obj_list) > 0
results['datastore_cluster_info'] = temp_result
self.module.exit_json(**results)
| vmware_datastore_cluster_manager is always changed in check mode
##### SUMMARY
When running in check mode, the module always reports changed, even when it does not detect any changes.
I had a look at the code, and I think an if statement is missing: in check mode, `changed` should be set depending on whether any changed datastores were actually found.
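Reduced to a predicate, the intended check-mode behaviour looks roughly like this (a sketch with illustrative names; the patch above implements it by deriving `changed` from the length of the to-be-changed datastore list):

```python
def simulated_result(requested, current):
    # Only report "changed" when there is actually something to add.
    to_add = [ds for ds in requested if ds not in current]
    return {"changed": len(to_add) > 0, "changed_datastores": to_add}


assert simulated_result(["DS01", "DS02"], ["DS01", "DS02"])["changed"] is False
assert simulated_result(["DS03"], ["DS01", "DS02"])["changed"] is True
```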
You can work around this issue by defining a changed_when clause:
```yaml
# appended to the community.vmware.vmware_datastore_cluster_manager task:
register: storage_cluster_datastore_result
changed_when: storage_cluster_datastore_result.datastore_cluster_info.changed_datastores|length > 0
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_datastore_cluster_manager
##### ANSIBLE VERSION
```paste below
ansible 2.10.2
config file = None
configured module search path = ['/Users/cneugum/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Oct 12 2020, 11:36:53) [Clang 12.0.0 (clang-1200.0.32.2)]
```
##### CONFIGURATION
Default
##### OS / ENVIRONMENT
Ansible running on: OSX 10.15.7
vCenter running on: vSphere 6.7
##### STEPS TO REPRODUCE
1. Create a play that uses the module and run it so datastores are added to the datastore cluster
2. Run the same play again
3. Run the same play in check mode
```yaml
- name: Test Datastore Cluster
hosts: localhost
gather_facts: no
vars:
vcenter:
hostname: 'vcsa.lab.local'
username: '[email protected]'
password: 'VMware1!'
datacenter_name: 'DC01'
datastore_cluster_name: 'StorageCluster01'
datastores:
- "DS01"
- "DS02"
- "DS03"
tasks:
- name: Create datastore cluster
community.vmware.vmware_datastore_cluster:
hostname: '{{ vcenter.hostname }}'
username: '{{ vcenter.username }}'
password: '{{ vcenter.password }}'
validate_certs: no
datacenter_name: '{{ datacenter_name }}'
datastore_cluster_name: '{{ datastore_cluster_name }}'
- name: Add Datastores to datastore cluster
community.vmware.vmware_datastore_cluster_manager:
hostname: '{{ vcenter.hostname }}'
username: '{{ vcenter.username }}'
password: '{{ vcenter.password }}'
validate_certs: no
datacenter_name: '{{ datacenter_name }}'
datastore_cluster_name: '{{ datastore_cluster_name }}'
datastores: '{{ datastores }}'
```
##### EXPECTED RESULTS
1. Play Recap should report changed tasks
2. Play Recap should report no changed tasks
3. Play Recap should report no changed tasks
##### ACTUAL RESULTS
1. Play Recap reports changed tasks
2. Play Recap reports no changed tasks
3. **_Play Recap reports changed tasks_**
| Files identified in the description:
* [`plugins/modules/vmware_datastore_cluster_manager.py`](https://github.com/ansible-collections/community.vmware/blob/main/plugins/modules/vmware_datastore_cluster_manager.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
I think I know where the code goes wrong. I'll try to do something about it. | 2021-01-24T16:20:36 |
|
ansible-collections/community.vmware | 639 | ansible-collections__community.vmware-639 | [
"636"
] | 0aeff852c8d62c9738ac33607ecd564a31503fc2 | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -1741,7 +1741,6 @@ def configure_hardware_params(self, vm_obj):
temp_version = self.params['hardware']['version']
if temp_version is not None:
- hw_version_check_failed = False
if temp_version.lower() == 'latest':
# Check is to make sure vm_obj is not of type template
if vm_obj and not vm_obj.config.template:
@@ -1757,14 +1756,10 @@ def configure_hardware_params(self, vm_obj):
try:
temp_version = int(temp_version)
except ValueError:
- hw_version_check_failed = True
-
- if temp_version not in range(3, 18):
- hw_version_check_failed = True
-
- if hw_version_check_failed:
self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
- " values range from 3 (ESX 2.x) to 17 (ESXi 7.0)." % temp_version)
+ " values are either 'latest' or a number."
+ " Please check VMware documentation for valid VM hardware versions." % temp_version)
+
# Hardware version is denoted as "vmx-10"
version = "vmx-%02d" % temp_version
self.configspec.version = version
| vmware_guest: remove hardware version number check to avoid updating the restriction everytime
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Can we remove the hardware version number check in the vmware_guest module, or find a better way to do this? Otherwise we need to update this restriction every time a new vSphere release introduces a new hardware version.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```python
try:
temp_version = int(temp_version)
except ValueError:
hw_version_check_failed = True
if temp_version not in range(3, 18):
hw_version_check_failed = True
if hw_version_check_failed:
self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
" values range from 3 (ESX 2.x) to 17 (ESXi 7.0)." % temp_version)
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
We should not need to change this `range(3, 18)` every time a new vSphere release introduces a new hardware version, e.g., hardware version 18, 19, etc.
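A sketch of a relaxed validation (illustrative only; the patch above takes the same approach, accepting either 'latest' or any integer and leaving hardware-version support to vSphere itself):

```python
def parse_hw_version(value):
    # Accept 'latest' or any number; let vSphere reject unsupported versions.
    if value.lower() == "latest":
        return "latest"
    try:
        return "vmx-%02d" % int(value)
    except ValueError:
        raise ValueError(
            "hardware.version must be 'latest' or a number, got %r" % value)


assert parse_hw_version("18") == "vmx-18"
assert parse_hw_version("latest") == "latest"
```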
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
| Files identified in the description:
* [`plugins/modules/vmware_guest.py`](https://github.com/ansible-collections/community.vmware/blob/main/plugins/modules/vmware_guest.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @goneri @lparkes @nerzhul @pdellaert @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
I agree, this check doesn't make much sense. Even if the version is theoretically valid, it doesn't mean the module will work, for example when using version 18 against a vCenter or ESXi running 6.5. I think we should just remove this check.
I'll do a PR and request your review. | 2021-02-02T17:02:19 |
|
ansible-collections/community.vmware | 679 | ansible-collections__community.vmware-679 | [
"676"
] | f6ed64f139addcabde07786b91478f3cacfcf258 | diff --git a/plugins/modules/vmware_cluster_ha.py b/plugins/modules/vmware_cluster_ha.py
--- a/plugins/modules/vmware_cluster_ha.py
+++ b/plugins/modules/vmware_cluster_ha.py
@@ -395,6 +395,10 @@ def configure_ha(self):
das_vm_config.vmComponentProtectionSettings = vim.cluster.VmComponentProtectionSettings()
das_vm_config.vmComponentProtectionSettings.vmStorageProtectionForAPD = self.params.get('apd_response')
das_vm_config.vmComponentProtectionSettings.vmStorageProtectionForPDL = self.params.get('pdl_response')
+ if (self.params['apd_response'] != "disabled" or self.params['pdl_response'] != "disabled"):
+ cluster_config_spec.dasConfig.vmComponentProtecting = 'enabled'
+ else:
+ cluster_config_spec.dasConfig.vmComponentProtecting = 'disabled'
cluster_config_spec.dasConfig.defaultVmSettings = das_vm_config
| vmware_cluster_ha and APD/PDL Warning not fully applied
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
module vmware_cluster_ha setting APD or PDL to Warning (default value) when the current cluster setting is Disabled does not fully show the setting change in HTML5 web interface.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_cluster_ha
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.16
config file = /home/username/AWX/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/username/AWX/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
COLLECTIONS_PATHS(/home/username/AWX/ansible.cfg) = ['/home/username/AWX/collections']
DEFAULT_FILTER_PLUGIN_PATH(/home/username/AWX/ansible.cfg) = ['/home/username/AWX/custom_filters']
DEFAULT_ROLES_PATH(/home/username/AWX/ansible.cfg) = ['/home/username/AWX/roles']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
vCenter 6.7 U3
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
HA currently is set for APD and PDL Response to "Disabled" in vCenter
module vmware_cluster_ha defaults these settings to "warning"
When applied to cluster in vCenter 6.7 Ansible reports changing the cluster, and an event is logged in vCenter that some settings are changing, but using the HTML5 gui to view the HA settings still shows both items as Disabled.
Changing the values in the GUI to some other value to Power Off and rerunning the playbook will result in Ansible showing a Change and the GUI properly reflects "Issue events"
Seems to only apply when existing setting is Disabled and Ansible is changing to "warning" that the GUI does not fully reflect it. Unsure if the setting is actually applied or not.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Cluster HA Setting
community.vmware.vmware_cluster_ha:
hostname: "{{ source_vcenter }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
datacenter_name: "{{ datacenter }}"
cluster_name: "{{ inventory_hostname }}"
enable_ha: "{{ ha_settings.enable_ha }}"
ha_host_monitoring: "{{ ha_settings.ha_host_monitoring | default(omit) }}"
ha_restart_priority: "{{ ha_settings.ha_restart_priority | default(omit) }}"
host_isolation_response: "{{ ha_settings.host_isolation_response | default(omit) }}"
pdl_response: "{{ ha_settings.pdl_response | default(omit) }}"
apd_response: "{{ ha_settings.apd_response | default(omit) }}"
ha_vm_monitoring: "{{ ha_settings.ha_vm_monitoring | default(omit) }}"
reservation_based_admission_control: "{{ ha_settings.reservation_based_admission_control | default(omit) }}"
validate_certs: "{{ validate_certs }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
PDL and APD settings are changed to "Issue Events" in HTML5 web client view
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Events are logged in vCenter about changes, but PDL and APD settings in HA Settings still shows Disabled
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Cluster HA Setting] ***************************************************************************************************************************************************************************
task path: /home/username/AWX/VMware/cluster_configuration.yml:141
Trying secret FileVaultSecret(filename='/home/username/vault.txt') for vault_id=default
Trying secret FileVaultSecret(filename='/home/username/vault.txt') for vault_id=default
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: username
<localhost> EXEC /bin/sh -c 'echo ~username && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/username/.ansible/tmp `"&& mkdir "` echo /home/username/.ansible/tmp/ansible-tmp-1613748144.8424242-2535585-251483034568659 `" && echo ansible-tmp-1613748144.8424242-2535585-251483034568659="` echo /home/username/.ansible/tmp/ansible-tmp-1613748144.8424242-2535585-251483034568659 `" ) && sleep 0'
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: username
<localhost> EXEC /bin/sh -c 'echo ~username && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/username/.ansible/tmp `"&& mkdir "` echo /home/username/.ansible/tmp/ansible-tmp-1613748144.865427-2535586-231000193206471 `" && echo ansible-tmp-1613748144.865427-2535586-231000193206471="` echo /home/username/.ansible/tmp/ansible-tmp-1613748144.865427-2535586-231000193206471 `" ) && sleep 0'
Using module file /home/username/AWX/collections/ansible_collections/community/vmware/plugins/modules/vmware_cluster_ha.py
Using module file /home/username/AWX/collections/ansible_collections/community/vmware/plugins/modules/vmware_cluster_ha.py
<localhost> PUT /home/username/.ansible/tmp/ansible-local-2535478p1et0vol/tmpc_xids7a TO /home/username/.ansible/tmp/ansible-tmp-1613748144.8424242-2535585-251483034568659/AnsiballZ_vmware_cluster_ha.py
<localhost> PUT /home/username/.ansible/tmp/ansible-local-2535478p1et0vol/tmpdoq7h_ap TO /home/username/.ansible/tmp/ansible-tmp-1613748144.865427-2535586-231000193206471/AnsiballZ_vmware_cluster_ha.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/username/.ansible/tmp/ansible-tmp-1613748144.8424242-2535585-251483034568659/ /home/username/.ansible/tmp/ansible-tmp-1613748144.8424242-2535585-251483034568659/AnsiballZ_vmware_cluster_ha.py && sleep 0'
<localhost> EXEC /bin/sh -c 'chmod u+x /home/username/.ansible/tmp/ansible-tmp-1613748144.865427-2535586-231000193206471/ /home/username/.ansible/tmp/ansible-tmp-1613748144.865427-2535586-231000193206471/AnsiballZ_vmware_cluster_ha.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /home/username/.ansible/tmp/ansible-tmp-1613748144.865427-2535586-231000193206471/AnsiballZ_vmware_cluster_ha.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /home/username/.ansible/tmp/ansible-tmp-1613748144.8424242-2535585-251483034568659/AnsiballZ_vmware_cluster_ha.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/username/.ansible/tmp/ansible-tmp-1613748144.8424242-2535585-251483034568659/ > /dev/null 2>&1 && sleep 0'
ok: [New Cluster] => {
"changed": false,
"invocation": {
"module_args": {
"advanced_settings": {},
"apd_response": "warning",
"cluster_name": "New Cluster",
"datacenter": "LabDC",
"datacenter_name": "LabDC",
"enable_ha": true,
"failover_host_admission_control": null,
"ha_host_monitoring": "enabled",
"ha_restart_priority": "medium",
"ha_vm_failure_interval": 30,
"ha_vm_max_failure_window": -1,
"ha_vm_max_failures": 3,
"ha_vm_min_up_time": 120,
"ha_vm_monitoring": "vmMonitoringDisabled",
"host_isolation_response": "none",
"hostname": "stl-avclabt01.smrcy.com",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"pdl_response": "warning",
"port": 443,
"proxy_host": null,
"proxy_port": null,
"reservation_based_admission_control": {
"auto_compute_percentages": true,
"cpu_failover_resources_percent": 50,
"failover_level": 1,
"memory_failover_resources_percent": 50
},
"slot_based_admission_control": null,
"username": "AnsibleTest",
"validate_certs": false
}
},
"result": null
}
<localhost> EXEC /bin/sh -c 'rm -f -r /home/username/.ansible/tmp/ansible-tmp-1613748144.865427-2535586-231000193206471/ > /dev/null 2>&1 && sleep 0'
changed: [Lab01] => {
"changed": true,
"invocation": {
"module_args": {
"advanced_settings": {},
"apd_response": "warning",
"cluster_name": "Lab01",
"datacenter": "LabDC",
"datacenter_name": "LabDC",
"enable_ha": true,
"failover_host_admission_control": null,
"ha_host_monitoring": "enabled",
"ha_restart_priority": "medium",
"ha_vm_failure_interval": 30,
"ha_vm_max_failure_window": -1,
"ha_vm_max_failures": 3,
"ha_vm_min_up_time": 120,
"ha_vm_monitoring": "vmMonitoringDisabled",
"host_isolation_response": "none",
"hostname": "stl-avclabt01.smrcy.com",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"pdl_response": "warning",
"port": 443,
"proxy_host": null,
"proxy_port": null,
"reservation_based_admission_control": {
"auto_compute_percentages": true,
"cpu_failover_resources_percent": 50,
"failover_level": 1,
"memory_failover_resources_percent": 50
},
"slot_based_admission_control": null,
"username": "AnsibleTest",
"validate_certs": false
}
},
"result": null
}
```
vCenter logs show:
Reconfigured cluster Lab01 in datacenter LabDC Modified: configurationEx.dasConfig.defaultVmSettings.vmComponentProtectionSettings.vmStorageProtectionForAPD: "disabled" -> "warning"; configurationEx.dasConfig.defaultVmSettings.vmComponentProtectionSettings.vmStorageProtectionForPDL: "disabled" -> "warning"; Added: Deleted:
But t he HTML5 GUI still shows both as Disabled
| Files identified in the description:
* [`plugins/modules/vmware_cluster_ha.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_cluster_ha.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
I've tested this with Ansible 3 and vSphere 7 and get the same result. What the module does to manage `apd_response` and `pdl_response` is setting `vmStorageProtectionForAPD` and `vmStorageProtectionForPDL` in [configurationEx.dasConfig.defaultVmSettings.vmComponentProtectionSettings](https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.cluster.VmComponentProtectionSettings.html).
Now when I change APD or PDL settings in the UI, these two are updated. But the other way round doesn't work. It looks like there is an additional place where these are set. But I don't know where :-(
I think I've found the problem. If either `vmStorageProtectionForAPD ` or `vmStorageProtectionForPDL` isn't "disabled", I also need to set `vmComponentProtecting` in [configurationEx.dasConfig](https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.cluster.DasConfigInfo.html) to "enabled". | 2021-02-22T14:39:14 |
|
ansible-collections/community.vmware | 680 | ansible-collections__community.vmware-680 | [
"673"
] | f6ed64f139addcabde07786b91478f3cacfcf258 | diff --git a/plugins/modules/vmware_cluster_info.py b/plugins/modules/vmware_cluster_info.py
--- a/plugins/modules/vmware_cluster_info.py
+++ b/plugins/modules/vmware_cluster_info.py
@@ -280,8 +280,8 @@ def gather_cluster_info(self):
drs_config = cluster.configurationEx.drsConfig
# VSAN
- if hasattr(cluster.configurationEx, 'vsanConfig'):
- vsan_config = cluster.configurationEx.vsanConfig
+ if hasattr(cluster.configurationEx, 'vsanConfigInfo'):
+ vsan_config = cluster.configurationEx.vsanConfigInfo
enabled_vsan = vsan_config.enabled,
vsan_auto_claim_storage = vsan_config.defaultConfig.autoClaimStorage,
| vmware_cluster_info does not provide correct data for enabled_vsan
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The data returned from vmware_cluster_info includes a property named "enabled_vsan" but it is not accurate and instead appears to always return false, even when vSAN is enabled in a cluster
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_cluster_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.16
config file = /home/username/AWX/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/username/AWX/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
COLLECTIONS_PATHS(/home/username/AWX/ansible.cfg) = ['/home/username/AWX/collections']
DEFAULT_FILTER_PLUGIN_PATH(/home/username/AWX/ansible.cfg) = ['/home/username/AWX/custom_filters']
DEFAULT_ROLES_PATH(/home/username/AWX/ansible.cfg) = ['/home/username/AWX/roles']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
vCenter 6.7 U3
ESXi Hosts 6.7 U3
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run playbook below and debug the registered output. All clusters show "enabled_vsan: false" even on clusters that are known for having vSAN enabled and in use.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Gather cluster information from Datacenters
vmware_cluster_info:
hostname: "{{ source_vcenter }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
validate_certs: "{{ validate_certs }}"
delegate_to: localhost
register: cluster_info
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
enabled_vsan property accurately reflect true when vSAN is enabled on a cluster
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Gather cluster information from Datacenters] **************************************************************************************************************************************************
task path: /home/username/AWX/VMware/cluster_configuration.yml:78
Trying secret FileVaultSecret(filename='/home/username/vault.txt') for vault_id=default
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: username
<localhost> EXEC /bin/sh -c 'echo ~username && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/username/.ansible/tmp `"&& mkdir "` echo /home/username/.ansible/tmp/ansible-tmp-1613688264.6778567-2298045-177199244433732 `" && echo ansible-tmp-1613688264.6778567-2298045-177199244433732="` echo /home/username/.ansible/tmp/ansible-tmp-1613688264.6778567-2298045-177199244433732 `" ) && sleep 0'
Using module file /home/username/AWX/collections/ansible_collections/community/vmware/plugins/modules/vmware_cluster_info.py
<localhost> PUT /home/username/.ansible/tmp/ansible-local-2298009jfqcszdk/tmp4jhv7ihk TO /home/username/.ansible/tmp/ansible-tmp-1613688264.6778567-2298045-177199244433732/AnsiballZ_vmware_cluster_info.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/username/.ansible/tmp/ansible-tmp-1613688264.6778567-2298045-177199244433732/ /home/username/.ansible/tmp/ansible-tmp-1613688264.6778567-2298045-177199244433732/AnsiballZ_vmware_cluster_info.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /home/username/.ansible/tmp/ansible-tmp-1613688264.6778567-2298045-177199244433732/AnsiballZ_vmware_cluster_info.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/username/.ansible/tmp/ansible-tmp-1613688264.6778567-2298045-177199244433732/ > /dev/null 2>&1 && sleep 0'
ok: [datacenter_LabDC] => {
"changed": false,
"clusters": {
"Lab01": {
"datacenter": "LabDC",
"drs_default_vm_behavior": "fullyAutomated",
"drs_enable_vm_behavior_overrides": true,
"drs_vmotion_rate": 3,
"enable_ha": true,
"enabled_drs": true,
"enabled_vsan": false,
"ha_admission_control_enabled": true,
"ha_failover_level": 1,
"ha_host_monitoring": "enabled",
"ha_restart_priority": [
"medium"
],
"ha_vm_failure_interval": [
30
],
"ha_vm_max_failure_window": [
-1
],
"ha_vm_max_failures": [
3
],
"ha_vm_min_up_time": [
120
],
"ha_vm_monitoring": "vmMonitoringDisabled",
"ha_vm_tools_monitoring": [
"vmMonitoringDisabled"
],```
| Files identified in the description:
* [`plugins/modules/vmware_cluster_info.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_cluster_info.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @digifuchsi @goneri @lparkes @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
I think this:
https://github.com/ansible-collections/community.vmware/blob/f6ed64f139addcabde07786b91478f3cacfcf258/plugins/modules/vmware_cluster_info.py#L283-L284
should use [vsanConfigInfo](https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.cluster.ConfigInfoEx.html#field_detail) instead of just vsanConfig. | 2021-02-22T17:35:01 |
|
ansible-collections/community.vmware | 687 | ansible-collections__community.vmware-687 | [
"670"
] | 43fe8514c6b336eb3fe3ead8339502c807f000d7 | diff --git a/plugins/module_utils/vmware.py b/plugins/module_utils/vmware.py
--- a/plugins/module_utils/vmware.py
+++ b/plugins/module_utils/vmware.py
@@ -1483,7 +1483,7 @@ def get_recommended_datastore(self, datastore_cluster_obj=None):
return None
# Resource pool
- def find_resource_pool_by_name(self, resource_pool_name, folder=None):
+ def find_resource_pool_by_name(self, resource_pool_name='Resources', folder=None):
"""
Get resource pool managed object by name
Args:
diff --git a/plugins/modules/vmware_deploy_ovf.py b/plugins/modules/vmware_deploy_ovf.py
--- a/plugins/modules/vmware_deploy_ovf.py
+++ b/plugins/modules/vmware_deploy_ovf.py
@@ -30,6 +30,17 @@
cluster:
description:
- Cluster to deploy to.
+ - This is a required parameter, if C(esxi_hostname) is not set and C(hostname) is set to the vCenter server.
+ - C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
+ - This parameter is case sensitive.
+ type: str
+ esxi_hostname:
+ description:
+ - The ESXi hostname where the virtual machine will run.
+ - This is a required parameter, if C(cluster) is not set and C(hostname) is set to the vCenter server.
+ - C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
+ - This parameter is case sensitive.
+ version_added: '1.9.0'
type: str
datastore:
default: datastore1
@@ -152,6 +163,16 @@
power_on: no
ovf: /absolute/path/to/template/mytemplate.ova
delegate_to: localhost
+
+- community.vmware.vmware_deploy_ovf:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ datacenter: Datacenter1
+ esxi_hostname: test-server
+ datastore: test-datastore
+ ovf: /path/to/ubuntu-16.04-amd64.ovf
+ delegate_to: localhost
'''
@@ -321,12 +342,31 @@ def __init__(self, module):
self.entity = None
def get_objects(self):
+ # Get datacenter firstly
self.datacenter = self.find_datacenter_by_name(self.params['datacenter'])
- if not self.datacenter:
- self.module.fail_json(msg='%(datacenter)s could not be located' % self.params)
+ if self.datacenter is None:
+ self.module.fail_json(msg="Datacenter '%(datacenter)s' could not be located" % self.params)
+
+ # Get cluster in datacenter if cluster configured
+ if self.params['cluster']:
+ cluster = self.find_cluster_by_name(self.params['cluster'], datacenter_name=self.datacenter)
+ if cluster is None:
+ self.module.fail_json(msg="Unable to find cluster '%(cluster)s'" % self.params)
+ self.resource_pool = self.find_resource_pool_by_cluster(self.params['resource_pool'], cluster=cluster)
+ # Or get ESXi host in datacenter if ESXi host configured
+ elif self.params['esxi_hostname']:
+ host = self.find_hostsystem_by_name(self.params['esxi_hostname'])
+ if host is None:
+ self.module.fail_json(msg="Unable to find host '%(esxi_hostname)s'" % self.params)
+ self.resource_pool = self.find_resource_pool_by_name(self.params['resource_pool'], folder=host.parent)
+ else:
+ self.resource_pool = self.find_resource_pool_by_name(self.params['resource_pool'])
+
+ if not self.resource_pool:
+ self.module.fail_json(msg="Resource pool '%(resource_pool)s' could not be located" % self.params)
self.datastore = None
- datastore_cluster_obj = self.find_datastore_cluster_by_name(self.params['datastore'])
+ datastore_cluster_obj = self.find_datastore_cluster_by_name(self.params['datastore'], datacenter=self.datacenter)
if datastore_cluster_obj:
datastore = None
datastore_freespace = 0
@@ -340,21 +380,10 @@ def get_objects(self):
if datastore:
self.datastore = datastore
else:
- self.datastore = self.find_datastore_by_name(self.params['datastore'], self.datacenter)
-
- if not self.datastore:
- self.module.fail_json(msg='%(datastore)s could not be located' % self.params)
-
- if self.params['cluster']:
- cluster = self.find_cluster_by_name(self.params['cluster'], datacenter_name=self.datacenter)
- if cluster is None:
- self.module.fail_json(msg="Unable to find cluster '%(cluster)s'" % self.params)
- self.resource_pool = self.find_resource_pool_by_cluster(self.params['resource_pool'], cluster=cluster)
- else:
- self.resource_pool = self.find_resource_pool_by_name(self.params['resource_pool'])
+ self.datastore = self.find_datastore_by_name(self.params['datastore'], datacenter_name=self.datacenter)
- if not self.resource_pool:
- self.module.fail_json(msg='%(resource_pool)s could not be located' % self.params)
+ if self.datastore is None:
+ self.module.fail_json(msg="Datastore '%(datastore)s' could not be located on specified ESXi host or datacenter" % self.params)
for key, value in self.params['networks'].items():
network = find_network_by_name(self.content, value, datacenter_name=self.datacenter)
@@ -628,6 +657,9 @@ def main():
'cluster': {
'default': None,
},
+ 'esxi_hostname': {
+ 'default': None,
+ },
'deployment_option': {
'default': None,
},
@@ -693,6 +725,9 @@ def main():
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
+ mutually_exclusive=[
+ ['cluster', 'esxi_hostname'],
+ ],
)
deploy_ovf = VMwareDeployOvf(module)
| vmware_deploy_ovf: The virtual machine is not supported on the target datastore
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_deploy_ovf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.5 (default, Apr 1 2018, 05:46:30) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```
(1) one datacenter in VC,
(2) two hosts added in this datacenter, no cluster,
(3) "datastore2" is the datastore of host1, "datastore2 (1)" is the datastore of host2, "test-ds" is the shared datastore of these two hosts,
(4) deploy ovf template in "datastore2" and "test-ds" succeed, deploy ovf template in "datastore2 (1)" failed.
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: Deploy VM from ovf template
vmware_deploy_ovf:
hostname: "xx.xx.xx.xx"
username: "[email protected]"
password: "xxxxxx"
validate_certs: "False"
datacenter: "vcDC"
datastore: "datastore2 (1)"
networks: "{{ ovf_networks | default({'VM Network': 'VM Network'}) }}"
ovf: "./openwrt_19.07.2_x86.ova"
name: test_ovf_deploy
allow_duplicates: False
disk_provisioning: "thin"
power_on: False
wait_for_ip_address: False
register: ovf_deploy
- debug: var=ovf_deploy
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
OVF can be deployed successfully
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [localhost] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************
ok: [localhost]
TASK [Deploy VM from ovf template] ***********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failure validating OVF import spec: The virtual machine is not supported on the target datastore."}
PLAY RECAP ***********************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
| Files identified in the description:
* [`plugins/modules/vmware_deploy_ovf.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_deploy_ovf.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @goneri @lparkes @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
I assume if add "esxi_hostname" to specify the exact ESXi host can resolve this issue in this scenario?
```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (vmware_deploy_ovf) module: esxi_hostname
Supported parameters include: allow_duplicates, cluster, datacenter, datastore, deployment_option, disk_provisioning, fail_on_spec_warnings, folder, hostname, inject_ovf_env, name, networks, ovf, password, port, power_on, properties, proxy_host, proxy_port, resource_pool, username, validate_certs, wait, wait_for_ip_address"}
```
Like what defined in `vmware_guest` module:
```
cluster:
description:
- The cluster name where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
type: str
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
type: str
``` | 2021-02-25T04:09:51 |
|
ansible-collections/community.vmware | 705 | ansible-collections__community.vmware-705 | [
"703"
] | f913056612c776b6527aa3c373fe7cb7358d2762 | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -2477,7 +2477,7 @@ def sanitize_disk_parameters(self, vm_obj):
"""
controllers = []
for disk_spec in self.params.get('disk'):
- if not disk_spec['controller_type'] or not disk_spec['controller_number'] or not disk_spec['unit_number']:
+ if disk_spec['controller_type'] is None or disk_spec['controller_number'] is None or disk_spec['unit_number'] is None:
self.module.fail_json(msg="'disk.controller_type', 'disk.controller_number' and 'disk.unit_number' are"
" mandatory parameters when configure multiple disk controllers and disks.")
| diff --git a/tests/integration/targets/vmware_guest/defaults/main.yml b/tests/integration/targets/vmware_guest/defaults/main.yml
--- a/tests/integration/targets/vmware_guest/defaults/main.yml
+++ b/tests/integration/targets/vmware_guest/defaults/main.yml
@@ -19,6 +19,7 @@ vmware_guest_test_playbooks:
- mac_address_d1_c1_f0.yml
- max_connections.yml
- mem_reservation.yml
+ - multiple_disk_controllers_d1_c1_f0.yml
- network_negative_test.yml
- network_with_device.yml
# Currently, VCSIM doesn't support DVPG (as portkeys are not available) so commenting this test
diff --git a/tests/integration/targets/vmware_guest/tasks/multiple_disk_controllers_d1_c1_f0.yml b/tests/integration/targets/vmware_guest/tasks/multiple_disk_controllers_d1_c1_f0.yml
--- a/tests/integration/targets/vmware_guest/tasks/multiple_disk_controllers_d1_c1_f0.yml
+++ b/tests/integration/targets/vmware_guest/tasks/multiple_disk_controllers_d1_c1_f0.yml
@@ -1,215 +1,217 @@
# Test code for the vmware_guest module.
# Copyright: (c) 2020, Diane Wang <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-- name: create a new VM with multiple scsi controllers
- vmware_guest:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- datacenter: "{{ dc1 }}"
- folder: vm
- cluster: "{{ ccr1 }}"
- resource_pool: Resources
- name: test_vm1
- guest_id: centos64Guest
- datastore: "{{ rw_datastore }}"
- hardware:
- memory_mb: 512
- num_cpus: 1
- disk:
- - controller_type: lsilogicsas
- controller_number: 0
- unit_number: 0
- size_mb: 512
- type: thin
- - controller_type: paravirtual
- controller_number: 1
- unit_number: 0
- size_mb: 256
- type: eagerzeroedthick
- register: multi_scsi_disk_vm
-
-- debug: var=multi_scsi_disk_vm
-
-- name: assert that VM was deployed
- assert:
- that:
- - "multi_scsi_disk_vm.changed == true"
-
-- name: reconfigure created VM with multiple scsi controllers
- vmware_guest:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- datacenter: "{{ dc1 }}"
- folder: vm
- cluster: "{{ ccr1 }}"
- resource_pool: Resources
- name: test_vm1
- datastore: "{{ rw_datastore }}"
- state: present
- disk:
- - controller_type: lsilogicsas
- controller_number: 0
- unit_number: 0
- disk_mode: independent_persistent
- - controller_type: paravirtual
- controller_number: 1
- unit_number: 0
- size_mb: 512
- register: multi_scsi_disk_vm
-
-- debug: var=multi_scsi_disk_vm
-
-- name: assert that VM was configured
- assert:
- that:
- - "multi_scsi_disk_vm.changed == true"
-
-- name: create a new VM with multiple sata controllers
- vmware_guest:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- folder: vm
- datacenter: "{{ dc1 }}"
- cluster: "{{ ccr1 }}"
- resource_pool: Resources
- name: test_vm2
- guest_id: centos64Guest
- datastore: "{{ rw_datastore }}"
- hardware:
- memory_mb: 512
- num_cpus: 1
- disk:
- - controller_type: sata
- controller_number: 0
- unit_number: 0
- size_mb: 512
- disk_mode: independent_persistent
- - controller_type: sata
- controller_number: 1
- unit_number: 0
- size_mb: 256
- - controller_type: sata
- controller_number: 2
- unit_number: 0
- size_mb: 256
- register: multi_sata_disk_vm
-
-- debug: var=multi_sata_disk_vm
-
-- name: assert that VM was deployed
- assert:
- that:
- - "multi_sata_disk_vm.changed == true"
-
-- name: reconfigure created new VM with multiple sata controllers
- vmware_guest:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- folder: vm
- datacenter: "{{ dc1 }}"
- cluster: "{{ ccr1 }}"
- resource_pool: Resources
- name: test_vm2
- state: present
- datastore: "{{ rw_datastore }}"
- disk:
- - controller_type: sata
- controller_number: 0
- unit_number: 0
- size_mb: 512
- disk_mode: persistent
- - controller_type: sata
- controller_number: 1
- unit_number: 0
- size_mb: 512
- register: multi_sata_disk_vm
-
-- debug: var=multi_sata_disk_vm
-
-- name: assert that VM was deployed
- assert:
- that:
- - "multi_sata_disk_vm.changed == true"
-
-- name: create a new VM with multiple nvme controllers
- vmware_guest:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- folder: vm
- datacenter: "{{ dc1 }}"
- cluster: "{{ ccr1 }}"
- resource_pool: Resources
- name: test_vm3
- guest_id: centos64Guest
- datastore: "{{ rw_datastore }}"
- hardware:
- memory_mb: 512
- num_cpus: 1
- version: 14
- disk:
- - controller_type: nvme
- controller_number: 0
- unit_number: 0
- size_mb: 512
- - controller_type: nvme
- controller_number: 1
- unit_number: 0
- size_mb: 256
- type: thin
- - controller_type: nvme
- controller_number: 2
- unit_number: 0
- size_mb: 256
- register: multi_nvme_disk_vm
-
-- debug: var=multi_nvme_disk_vm
-
-- name: assert that VM was deployed
- assert:
- that:
- - "multi_nvme_disk_vm.changed == true"
-
-- name: reconfigure created new VM with multiple types of controllers
- vmware_guest:
- validate_certs: false
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- folder: vm
- datacenter: "{{ dc1 }}"
- cluster: "{{ ccr1 }}"
- resource_pool: Resources
- name: test_vm3
- state: present
- datastore: "{{ rw_datastore }}"
- disk:
- - controller_type: nvme
- controller_number: 0
- unit_number: 1
- size_mb: 512
- - controller_type: sata
- controller_number: 1
- unit_number: 0
- size_mb: 256
- - controller_type: paravirtual
- controller_number: 0
- unit_number: 0
- size_mb: 256
- register: multi_nvme_disk_vm
-
-- debug: var=multi_nvme_disk_vm
-
-- name: assert that VM was deployed
- assert:
- that:
- - "multi_nvme_disk_vm.changed == true"
+- when: vcsim is not defined
+ block:
+ - name: create a new VM with multiple scsi controllers
+ vmware_guest:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ folder: vm
+ cluster: "{{ ccr1 }}"
+ resource_pool: Resources
+ name: test_vm1
+ guest_id: centos64Guest
+ datastore: "{{ rw_datastore }}"
+ hardware:
+ memory_mb: 512
+ num_cpus: 1
+ disk:
+ - controller_type: lsilogicsas
+ controller_number: 0
+ unit_number: 0
+ size_mb: 512
+ type: thin
+ - controller_type: paravirtual
+ controller_number: 1
+ unit_number: 0
+ size_mb: 256
+ type: eagerzeroedthick
+ register: multi_scsi_disk_vm
+
+ - debug: var=multi_scsi_disk_vm
+
+ - name: assert that VM was deployed
+ assert:
+ that:
+ - "multi_scsi_disk_vm.changed == true"
+
+ - name: reconfigure created VM with multiple scsi controllers
+ vmware_guest:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ folder: vm
+ cluster: "{{ ccr1 }}"
+ resource_pool: Resources
+ name: test_vm1
+ datastore: "{{ rw_datastore }}"
+ state: present
+ disk:
+ - controller_type: lsilogicsas
+ controller_number: 0
+ unit_number: 0
+ disk_mode: independent_persistent
+ - controller_type: paravirtual
+ controller_number: 1
+ unit_number: 0
+ size_mb: 512
+ register: multi_scsi_disk_vm
+
+ - debug: var=multi_scsi_disk_vm
+
+ - name: assert that VM was configured
+ assert:
+ that:
+ - "multi_scsi_disk_vm.changed == true"
+
+ - name: create a new VM with multiple sata controllers
+ vmware_guest:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ folder: vm
+ datacenter: "{{ dc1 }}"
+ cluster: "{{ ccr1 }}"
+ resource_pool: Resources
+ name: test_vm2
+ guest_id: centos64Guest
+ datastore: "{{ rw_datastore }}"
+ hardware:
+ memory_mb: 512
+ num_cpus: 1
+ disk:
+ - controller_type: sata
+ controller_number: 0
+ unit_number: 0
+ size_mb: 512
+ disk_mode: independent_persistent
+ - controller_type: sata
+ controller_number: 1
+ unit_number: 0
+ size_mb: 256
+ - controller_type: sata
+ controller_number: 2
+ unit_number: 0
+ size_mb: 256
+ register: multi_sata_disk_vm
+
+ - debug: var=multi_sata_disk_vm
+
+ - name: assert that VM was deployed
+ assert:
+ that:
+ - "multi_sata_disk_vm.changed == true"
+
+ - name: reconfigure created new VM with multiple sata controllers
+ vmware_guest:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ folder: vm
+ datacenter: "{{ dc1 }}"
+ cluster: "{{ ccr1 }}"
+ resource_pool: Resources
+ name: test_vm2
+ state: present
+ datastore: "{{ rw_datastore }}"
+ disk:
+ - controller_type: sata
+ controller_number: 0
+ unit_number: 0
+ size_mb: 512
+ disk_mode: persistent
+ - controller_type: sata
+ controller_number: 1
+ unit_number: 0
+ size_mb: 512
+ register: multi_sata_disk_vm
+
+ - debug: var=multi_sata_disk_vm
+
+ - name: assert that VM was deployed
+ assert:
+ that:
+ - "multi_sata_disk_vm.changed == true"
+
+ - name: create a new VM with multiple nvme controllers
+ vmware_guest:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ folder: vm
+ datacenter: "{{ dc1 }}"
+ cluster: "{{ ccr1 }}"
+ resource_pool: Resources
+ name: test_vm3
+ guest_id: centos64Guest
+ datastore: "{{ rw_datastore }}"
+ hardware:
+ memory_mb: 512
+ num_cpus: 1
+ version: 14
+ disk:
+ - controller_type: nvme
+ controller_number: 0
+ unit_number: 0
+ size_mb: 512
+ - controller_type: nvme
+ controller_number: 1
+ unit_number: 0
+ size_mb: 256
+ type: thin
+ - controller_type: nvme
+ controller_number: 2
+ unit_number: 0
+ size_mb: 256
+ register: multi_nvme_disk_vm
+
+ - debug: var=multi_nvme_disk_vm
+
+ - name: assert that VM was deployed
+ assert:
+ that:
+ - "multi_nvme_disk_vm.changed == true"
+
+ - name: reconfigure created new VM with multiple types of controllers
+ vmware_guest:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ folder: vm
+ datacenter: "{{ dc1 }}"
+ cluster: "{{ ccr1 }}"
+ resource_pool: Resources
+ name: test_vm3
+ state: present
+ datastore: "{{ rw_datastore }}"
+ disk:
+ - controller_type: nvme
+ controller_number: 0
+ unit_number: 1
+ size_mb: 512
+ - controller_type: sata
+ controller_number: 1
+ unit_number: 0
+ size_mb: 256
+ - controller_type: paravirtual
+ controller_number: 0
+ unit_number: 0
+ size_mb: 256
+ register: multi_nvme_disk_vm
+
+ - debug: var=multi_nvme_disk_vm
+
+ - name: assert that VM was deployed
+ assert:
+ that:
+ - "multi_nvme_disk_vm.changed == true"
| disk.controller_number and disk.unit_number set to 0 results in validation error because 0 == false in v1.8
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Since v1.8 My playbook stopped working with the error
`disk.controller_type', 'disk.controller_number' and 'disk.unit_number' are mandatory parameters when configure multiple disk controllers and disks.
`
Checking my playbook it does have all the parameters
reverting to version 1.7 did fix the issue. I think this might be due to #640 but can"t pinpoint the issue
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
community.vmware/plugins/modules/vmware_guest.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/awx/awx_venvs/vmware_community/lib64/python3.6/site-packages/ansible
executable location = /data/awx/awx_venvs/vmware_community/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
empty
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```yaml
- name: Alter VM template
community.vmware.vmware_guest:
validate_certs: False
hostname: '{{ lookup("env", "VMWARE_HOST") }}'
username: '{{ lookup("env", "VMWARE_USER") }}'
password: '{{ lookup("env", "VMWARE_PASSWORD") }}'
host: "{{ set_esxi_hostname | default(omit) }}"
cluster: "{{ set_cluster | default(omit) }}"
datacenter: "{{ datacenter }}"
folder: "{{ folder }}"
name: "{{ vmname }}"
customization:
existing_vm: True
hostname: "{{ vm_hostname | default(omit)}}"
disk:
- size_gb: "{{ disk_size }}"
type: "{{ disktype |lower }}"
controller_type : 'paravirtual'
datastore: "{{ datastore_recommended }}"
controller_number: 0
unit_number: 0
hardware:
hotadd_memory: True
memory_mb: "{{ vm_memory }}"
num_cpus: "{{ cpu }}"
boot_firmware: "{{ boot_firmware }}"
scsi: "{{ scsi }}"
networks:
- name: "{{ port_grp }}"
device_type: "vmxnet3"
start_connected: True
type: "{{ net_type }}"
ip: "{{ server_ip }}"
netmask: "{{ netmask }}"
gateway: "{{ gateway }}"
domain: "{{ searchdomain }}"
dns_servers:
- "{{ dns1 }}"
- "{{ dns2 }}"
state: present
```
output (some info censored)
```
"msg": "'disk.controller_type', 'disk.controller_number' and 'disk.unit_number' are mandatory parameters when configure multiple disk controllers and disks.",
"invocation": {
"module_args": {
"validate_certs": false,
"hostname": "censored",
"username": "censored",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"cluster": "censored",
"datacenter": "censored",
"folder": "vm",
"name": "censored",
"customization": {
"existing_vm": true,
"autologon": null,
"autologoncount": null,
"dns_servers": null,
"dns_suffix": null,
"domain": null,
"domainadmin": null,
"domainadminpassword": null,
"fullname": null,
"hostname": null,
"hwclockUTC": null,
"joindomain": null,
"joinworkgroup": null,
"orgname": null,
"password": null,
"productid": null,
"runonce": null,
"timezone": null
},
"disk": [
{
"size_gb": 45,
"type": "eagerzeroedthick",
"controller_type": "paravirtual",
"datastore": "censored",
"controller_number": 0,
"unit_number": 0,
"autoselect_datastore": null,
"disk_mode": null,
"filename": null,
"size": null,
"size_kb": null,
"size_mb": null,
"size_tb": null
}
],
"hardware": {
"hotadd_memory": true,
"memory_mb": 8096,
"num_cpus": 2,
"boot_firmware": "efi",
"scsi": "paravirtual",
"cpu_limit": null,
"cpu_reservation": null,
"hotadd_cpu": null,
"hotremove_cpu": null,
"max_connections": null,
"mem_limit": null,
"mem_reservation": null,
"memory_reservation_lock": null,
"nested_virt": null,
"num_cpu_cores_per_socket": null,
"version": null,
"virt_based_security": null
},
"networks": [
{
"name": "censored",
"device_type": "vmxnet3",
"start_connected": true,
"type": "static",
"ip": "censored",
"netmask": "censored",
"gateway": "censored",
"domain": "censored",
"dns_servers": [
"censored",
"censored"
]
}
],
"state": "present",
"port": 443,
"is_template": false,
"customvalues": [],
"advanced_settings": [],
"name_match": "first",
"use_instance_uuid": false,
"cdrom": [],
"force": false,
"wait_for_ip_address": false,
"wait_for_ip_address_timeout": 300,
"state_change_timeout": 0,
"linked_clone": false,
"wait_for_customization": false,
"wait_for_customization_timeout": 3600,
"vapp_properties": [],
"delete_from_inventory": false,
"proxy_host": null,
"proxy_port": null,
"template": null,
"annotation": null,
"uuid": null,
"guest_id": null,
"esxi_hostname": null,
"snapshot_src": null,
"resource_pool": null,
"customization_spec": null,
"datastore": null,
"convert": null
}
},
"_ansible_no_log": false,
"changed": false
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
| Files identified in the description:
None
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
Files identified in the description:
* [`plugins/modules/vmware_guest.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_guest.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @nerzhul @pdellaert @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
Hi
https://github.com/ansible-collections/community.vmware/blob/f913056612c776b6527aa3c373fe7cb7358d2762/plugins/modules/vmware_guest.py#L2480
should be changed to
```
if not disk_spec['controller_type'] or (not disk_spec['controller_number'] and disk_spec['controller_number'] != 0) or (not disk_spec['unit_number'] and disk_spec['controller_number'] != 0):
```
Because it errors if the number is 0 | 2021-03-09T13:44:59 |
ansible-collections/community.vmware | 734 | ansible-collections__community.vmware-734 | [
"728"
] | 804fc67d330dba1739dc45f5c8261bcbf0d248d5 | diff --git a/plugins/modules/vmware_cluster_vsan.py b/plugins/modules/vmware_cluster_vsan.py
--- a/plugins/modules/vmware_cluster_vsan.py
+++ b/plugins/modules/vmware_cluster_vsan.py
@@ -162,11 +162,12 @@ def __init__(self, module):
if module.params['advanced_options'] is not None:
self.advanced_options = module.params['advanced_options']
- client_stub = self.si._GetStub()
- ssl_context = client_stub.schemeArgs.get('context')
- apiVersion = vsanapiutils.GetLatestVmodlVersion(module.params['hostname'])
- vcMos = vsanapiutils.GetVsanVcMos(client_stub, context=ssl_context, version=apiVersion)
- self.vsanClusterConfigSystem = vcMos['vsan-cluster-config-system']
+
+ client_stub = self.si._GetStub()
+ ssl_context = client_stub.schemeArgs.get('context')
+ apiVersion = vsanapiutils.GetLatestVmodlVersion(module.params['hostname'])
+ vcMos = vsanapiutils.GetVsanVcMos(client_stub, context=ssl_context, version=apiVersion)
+ self.vsanClusterConfigSystem = vcMos['vsan-cluster-config-system']
def check_vsan_config_diff(self):
"""
| community.vmware.vmware_cluster_vsan module fails with a newly created vSphere 7.0U2 cluster
##### SUMMARY
After deploying a new vSphere 7.0 Update 2 cluster, I try to use the vmware_cluster_vsan module to enable vSAN and automatically claim the storage, and the module fails. This failure also occurs even when attempt to enable vSAN by itself. Latest vSAN SDK v7.0U1 (Latest) is downloaded, and vsanmgmtObjects.py and vsanapiutils.py files were placed in the Python include path.
The following two configurations were tested, and both failed with the same message:
1) Cache Disk: Flash, CapacityDisk: Flash
2) Cache Disk: Flash, CapacityDisk: HDD
I have not tried running this against any other version of vSphere.
vSAN was not enabled on the cluster before the playbook was executed, as it's expected that the module would enable vsan, and when "vsan_auto_claim_storage" is specified, automatically claim the drives into a storage group.
The SDDC.Lab environment is a nested environment, but vSAN works without issues when manually configured.
One final comment. It would be really nice if this module had the ability to specify the vSAN Datastore name via an optional "datastore_name" variable, so that we could have the newly created datastore be named something other than "vsanDatastore". Of course, if "datastore_name" is not included, it would default to "vsanDatastore".
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
community.vmware.vmware_cluster_vsan
##### ANSIBLE VERSION
```
adminuser@NetLab-Ansible:~/git/SDDC.Lab$ ansible --version
ansible 2.10.7
config file = /home/adminuser/git/SDDC.Lab/ansible.cfg
configured module search path = ['/home/adminuser/git/SDDC.Lab/library']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
```
##### CONFIGURATION
```
adminuser@NetLab-Ansible:~/git/SDDC.Lab$ ansible-config dump --only-changed
DEFAULT_CALLBACK_WHITELIST(/home/adminuser/git/SDDC.Lab/ansible.cfg) = ['profile_tasks']
DEFAULT_GATHERING(/home/adminuser/git/SDDC.Lab/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/home/adminuser/git/SDDC.Lab/ansible.cfg) = ['/home/adminuser/git/SDDC.Lab/hosts/hosts']
DEFAULT_JINJA2_EXTENSIONS(/home/adminuser/git/SDDC.Lab/ansible.cfg) = jinja2.ext.do,jinja2.ext.i18n
DEFAULT_MODULE_PATH(/home/adminuser/git/SDDC.Lab/ansible.cfg) = ['/home/adminuser/git/SDDC.Lab/library']
DEFAULT_MODULE_UTILS_PATH(/home/adminuser/git/SDDC.Lab/ansible.cfg) = ['/home/adminuser/git/SDDC.Lab/module_utils']
INTERPRETER_PYTHON(/home/adminuser/git/SDDC.Lab/ansible.cfg) = /usr/bin/python3
LOCALHOST_WARNING(/home/adminuser/git/SDDC.Lab/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Ansible:
Linux Ubuntu 20.04.2 Desktop
vCenter Server:
v7.02 Update 2
ESXi:
```
[root@Pod-200-ComputeA-1:~] esxcli system version get
Product: VMware ESXi
Version: 7.0.2
Build: Releasebuild-17630552
Update: 2
Patch: 0
```
##### STEPS TO REPRODUCE
Either of the two following plays will demonstrate the same failure condition:
```yaml
- name: Enable vSAN
community.vmware.vmware_cluster_vsan:
hostname: pod-200-vcenter.sddc.lab
username: [email protected]
password: VMware1!
validate_certs: false
datacenter_name: Pod-200-DataCenter
cluster_name: Compute-A
enable_vsan: True
delegate_to: localhost
- name: Enable vSAN and claim storage automatically
community.vmware.vmware_cluster_vsan:
hostname: pod-200-vcenter.sddc.lab
username: [email protected]
password: VMware1!
validate_certs: false
datacenter_name: Pod-200-DataCenter
cluster_name: Compute-A
enable_vsan: True
vsan_auto_claim_storage: True
delegate_to: localhost
```
##### EXPECTED RESULTS
I was expecting the vSAN cluster to have the vSAN Service enabled, configured with default values, and in the case of the 2nd play, have all of the drives automatically claimed into a storage group. The module should be able to handle both Flash/HDD and Flash/Flash configurations.
##### ACTUAL RESULTS
When the plays are executed, they fail with the following exception error:
**"msg": "Failed to update cluster due to generic exception 'VMwareCluster' object has no attribute 'vsanClusterConfigSystem'"**
```paste below
adminuser@NetLab-Ansible:~/git/SDDC.Lab$
adminuser@NetLab-Ansible:~/git/SDDC.Lab$ ansible-playbook playbooks/configureVsan-test.yml -vvvv
ansible-playbook 2.10.7
config file = /home/adminuser/git/SDDC.Lab/ansible.cfg
configured module search path = ['/home/adminuser/git/SDDC.Lab/library']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
Using /home/adminuser/git/SDDC.Lab/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/adminuser/git/SDDC.Lab/hosts/hosts as it did not pass its verify_file() method
script declined parsing /home/adminuser/git/SDDC.Lab/hosts/hosts as it did not pass its verify_file() method
auto declined parsing /home/adminuser/git/SDDC.Lab/hosts/hosts as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /home/adminuser/git/SDDC.Lab/hosts/hosts inventory source with ini plugin
Loading collection community.vmware from /home/adminuser/.ansible/collections/ansible_collections/community/vmware
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.8/dist-packages/ansible/plugins/callback/default.py
redirecting (type: callback) ansible.builtin.profile_tasks to ansible.posix.profile_tasks
Loading collection ansible.posix from /usr/local/lib/python3.8/dist-packages/ansible_collections/ansible/posix
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Loading callback plugin ansible.posix.profile_tasks of type aggregate, v2.0 from /usr/local/lib/python3.8/dist-packages/ansible_collections/ansible/posix/plugins/callback/profile_tasks.py
PLAYBOOK: configureVsan-test.yml **********************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbooks/configureVsan-test.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/adminuser/git/SDDC.Lab/hosts/hosts',)
forks: 5
1 plays in playbooks/configureVsan-test.yml
PLAY [localhost] **************************************************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Enable vSAN] ************************************************************************************************************************************************************************************************************************************************************************
task path: /home/adminuser/git/SDDC.Lab/playbooks/configureVsan-test.yml:10
Friday 19 March 2021 23:33:31 -0700 (0:00:00.047) 0:00:00.047 **********
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: adminuser
<localhost> EXEC /bin/sh -c 'echo ~adminuser && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/adminuser/.ansible/tmp `"&& mkdir "` echo /home/adminuser/.ansible/tmp/ansible-tmp-1616222011.4441807-127817-80833664684210 `" && echo ansible-tmp-1616222011.4441807-127817-80833664684210="` echo /home/adminuser/.ansible/tmp/ansible-tmp-1616222011.4441807-127817-80833664684210 `" ) && sleep 0'
Using module file /home/adminuser/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_cluster_vsan.py
<localhost> PUT /home/adminuser/.ansible/tmp/ansible-local-127813rstjmplb/tmpxbmpjd5j TO /home/adminuser/.ansible/tmp/ansible-tmp-1616222011.4441807-127817-80833664684210/AnsiballZ_vmware_cluster_vsan.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/adminuser/.ansible/tmp/ansible-tmp-1616222011.4441807-127817-80833664684210/ /home/adminuser/.ansible/tmp/ansible-tmp-1616222011.4441807-127817-80833664684210/AnsiballZ_vmware_cluster_vsan.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /home/adminuser/.ansible/tmp/ansible-tmp-1616222011.4441807-127817-80833664684210/AnsiballZ_vmware_cluster_vsan.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/adminuser/.ansible/tmp/ansible-tmp-1616222011.4441807-127817-80833664684210/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_community.vmware.vmware_cluster_vsan_payload_u9hb59h0/ansible_community.vmware.vmware_cluster_vsan_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_cluster_vsan.py", line 236, in configure_vsan
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"advanced_options": null,
"cluster_name": "Compute-A",
"datacenter": "Pod-200-DataCenter",
"datacenter_name": "Pod-200-DataCenter",
"enable_vsan": true,
"hostname": "pod-200-vcenter.sddc.lab",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"proxy_host": null,
"proxy_port": null,
"username": "[email protected]",
"validate_certs": false,
"vsan_auto_claim_storage": false
}
},
"msg": "Failed to update cluster due to generic exception 'VMwareCluster' object has no attribute 'vsanClusterConfigSystem'"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Friday 19 March 2021 23:33:32 -0700 (0:00:00.747) 0:00:00.795 **********
===============================================================================
Enable vSAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.75s
/home/adminuser/git/SDDC.Lab/playbooks/configureVsan-test.yml:10 -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
adminuser@NetLab-Ansible:~/git/SDDC.Lab$
```
| Files identified in the description:
* [`plugins/modules/vmware_cluster_vsan.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_cluster_vsan.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @mariolenz @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
Files identified in the description:
None
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
@luischanu Thanks for reporting this issue.
@mariolenz I think this is due to https://github.com/ansible-collections/community.vmware/pull/289/files#diff-b0727b9b194a50c77d9e7fe10435a35a072a90f0934c612b9fbaec111d4f0a7cR237. Would you be able to tackle this issue? Thanks in advance. | 2021-03-22T12:56:36 |
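For context, a minimal, hypothetical sketch of the kind of guard that would turn the AttributeError above into a clear module failure — the attribute name comes straight from the traceback, but where the vSAN API client should actually be initialized is an assumption, not something confirmed by this issue:
```python
# Hypothetical guard inside the cluster-configure path: fail cleanly if the
# vSAN API client attribute was never initialized, instead of letting a bare
# AttributeError bubble up as a "generic exception".
vsan_config_system = getattr(self, 'vsanClusterConfigSystem', None)
if vsan_config_system is None:
    self.module.fail_json(msg="vSAN cluster config system is not initialized; "
                              "cannot configure vSAN on this cluster")
```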
|
ansible-collections/community.vmware | 743 | ansible-collections__community.vmware-743 | [
"709"
] | 385021590f7593c6c67693aca8ee924281d22dcb | diff --git a/plugins/modules/vmware_dvs_portgroup.py b/plugins/modules/vmware_dvs_portgroup.py
--- a/plugins/modules/vmware_dvs_portgroup.py
+++ b/plugins/modules/vmware_dvs_portgroup.py
@@ -19,7 +19,7 @@
- Joseph Callen (@jcpowermac)
- Philippe Dellaert (@pdellaert) <[email protected]>
notes:
- - Tested on vSphere 5.5, 6.5
+ - Tested on vSphere 7.0
requirements:
- "python >= 2.6"
- PyVmomi
@@ -45,17 +45,35 @@
num_ports:
description:
- The number of ports the portgroup should contain.
- required: True
type: int
portgroup_type:
description:
- See VMware KB 1022312 regarding portgroup types.
- required: True
+ - Deprecated. Will be removed 2021-12-01.
choices:
- 'earlyBinding'
- 'lateBinding'
- 'ephemeral'
type: str
+ port_binding:
+ description:
+ - The type of port binding determines when ports in a port group are assigned to virtual machines.
+ - See VMware KB 1022312 U(https://kb.vmware.com/s/article/1022312) for more details.
+ type: str
+ choices:
+ - 'static'
+ - 'ephemeral'
+ version_added: '1.10.0'
+ port_allocation:
+ description:
+ - Elastic port groups automatically increase or decrease the number of ports as needed.
+ - Only valid if I(port_binding) is set to C(static).
+ - Will be C(elastic) if not specified and I(port_binding) is set to C(static).
+ type: str
+ choices:
+ - 'elastic'
+ - 'fixed'
+ version_added: '1.10.0'
state:
description:
- Determines if the portgroup should be present or not.
@@ -78,6 +96,31 @@
required: False
default: False
type: bool
+ mac_learning:
+ description:
+ - Dictionary which configures MAC learning for portgroup.
+ suboptions:
+ allow_unicast_flooding:
+ type: bool
+ description: The flag to allow flooding of unlearned MAC for ingress traffic.
+ required: False
+ enabled:
+ type: bool
+ description: The flag to indicate if source MAC address learning is allowed.
+ required: False
+ limit:
+ type: int
+ description: The maximum number of MAC addresses that can be learned.
+ required: False
+ limit_policy:
+ type: str
+ description: The default switching policy after MAC limit is exceeded.
+ required: False
+ choices:
+ - 'allow'
+ - 'drop'
+ type: dict
+ version_added: '1.10.0'
network_policy:
description:
- Dictionary which configures the different security values for portgroup.
@@ -126,16 +169,26 @@
description:
- Indicate whether or not the teaming policy is applied to inbound frames as well.
type: bool
- default: False
rolling_order:
description:
- Indicate whether or not to use a rolling policy when restoring links.
default: False
type: bool
+ active_uplinks:
+ description:
+ - List of active uplinks used for load balancing.
+ type: list
+ elements: str
+ version_added: '1.10.0'
+ standby_uplinks:
+ description:
+ - List of standby uplinks used for failover.
+ type: list
+ elements: str
+ version_added: '1.10.0'
default: {
'notify_switches': True,
'load_balance_policy': 'loadbalance_srcid',
- 'inbound_policy': False,
'rolling_order': False
}
type: dict
@@ -228,7 +281,7 @@
switch_name: dvSwitch
vlan_id: 123
num_ports: 120
- portgroup_type: earlyBinding
+ port_binding: static
state: present
delegate_to: localhost
@@ -242,7 +295,7 @@
vlan_id: 1-1000, 1005, 1100-1200
vlan_trunk: True
num_ports: 120
- portgroup_type: earlyBinding
+ port_binding: static
state: present
delegate_to: localhost
@@ -256,7 +309,7 @@
vlan_id: 1001
vlan_private: True
num_ports: 120
- portgroup_type: earlyBinding
+ port_binding: static
state: present
delegate_to: localhost
@@ -269,7 +322,7 @@
switch_name: dvSwitch
vlan_id: 0
num_ports: 120
- portgroup_type: earlyBinding
+ port_binding: static
state: present
delegate_to: localhost
@@ -282,7 +335,7 @@
switch_name: dvSwitch
vlan_id: 123
num_ports: 120
- portgroup_type: earlyBinding
+ port_binding: static
state: present
network_policy:
promiscuous: true
@@ -323,6 +376,21 @@ def __init__(self, module):
self.dvs_portgroup = None
self.dv_switch = None
+ # Some sanity checks
+ if self.module.params['port_allocation'] == 'elastic':
+ if self.module.params['port_binding'] == 'ephemeral':
+ self.module.fail_json(
+ msg="'elastic' port allocation is not supported on an 'ephemeral' portgroup."
+ )
+
+ if self.module.params['num_ports']:
+ self.module.fail_json(
+ msg="The number of ports cannot be configured when port allocation is set to 'elastic'."
+ )
+
+ def supports_mac_learning(self):
+ return hasattr(self.dv_switch.capability.featuresSupported, 'macLearningSupported') and self.dv_switch.capability.featuresSupported.macLearningSupported
+
def create_vlan_list(self):
vlan_id_list = []
for vlan_id_splitted in self.module.params['vlan_id'].split(','):
@@ -357,7 +425,9 @@ def build_config(self):
# Basic config
config.name = self.module.params['portgroup_name']
- config.numPorts = self.module.params['num_ports']
+
+ if self.module.params['port_allocation'] != 'elastic' and self.module.params['port_binding'] != 'ephemeral':
+ config.numPorts = self.module.params['num_ports']
# Default port config
config.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
@@ -377,18 +447,49 @@ def build_config(self):
else:
config.defaultPortConfig.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec()
config.defaultPortConfig.vlan.vlanId = int(self.module.params['vlan_id'])
+
config.defaultPortConfig.vlan.inherited = False
- config.defaultPortConfig.securityPolicy = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy()
- config.defaultPortConfig.securityPolicy.allowPromiscuous = vim.BoolPolicy(value=self.module.params['network_policy']['promiscuous'])
- config.defaultPortConfig.securityPolicy.forgedTransmits = vim.BoolPolicy(value=self.module.params['network_policy']['forged_transmits'])
- config.defaultPortConfig.securityPolicy.macChanges = vim.BoolPolicy(value=self.module.params['network_policy']['mac_changes'])
+
+ # If the dvSwitch supports MAC learning, it's a version where securityPolicy is deprecated
+ if self.supports_mac_learning():
+ config.defaultPortConfig.macManagementPolicy = vim.dvs.VmwareDistributedVirtualSwitch.MacManagementPolicy()
+ config.defaultPortConfig.macManagementPolicy.allowPromiscuous = self.module.params['network_policy']['promiscuous']
+ config.defaultPortConfig.macManagementPolicy.forgedTransmits = self.module.params['network_policy']['forged_transmits']
+ config.defaultPortConfig.macManagementPolicy.macChanges = self.module.params['network_policy']['mac_changes']
+
+ macLearning = self.module.params['mac_learning']
+ if macLearning:
+ macLearningPolicy = vim.dvs.VmwareDistributedVirtualSwitch.MacLearningPolicy()
+ if macLearning['allow_unicast_flooding'] is not None:
+ macLearningPolicy.allowUnicastFlooding = macLearning['allow_unicast_flooding']
+ if macLearning['enabled'] is not None:
+ macLearningPolicy.enabled = macLearning['enabled']
+ if macLearning['limit'] is not None:
+ macLearningPolicy.limit = macLearning['limit']
+ if macLearning['limit_policy']:
+ macLearningPolicy.limitPolicy = macLearning['limit_policy']
+ config.defaultPortConfig.macManagementPolicy.macLearningPolicy = macLearningPolicy
+ else:
+ config.defaultPortConfig.securityPolicy = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy()
+ config.defaultPortConfig.securityPolicy.allowPromiscuous = vim.BoolPolicy(value=self.module.params['network_policy']['promiscuous'])
+ config.defaultPortConfig.securityPolicy.forgedTransmits = vim.BoolPolicy(value=self.module.params['network_policy']['forged_transmits'])
+ config.defaultPortConfig.securityPolicy.macChanges = vim.BoolPolicy(value=self.module.params['network_policy']['mac_changes'])
# Teaming Policy
teamingPolicy = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teamingPolicy.policy = vim.StringPolicy(value=self.module.params['teaming_policy']['load_balance_policy'])
- teamingPolicy.reversePolicy = vim.BoolPolicy(value=self.module.params['teaming_policy']['inbound_policy'])
+ if self.module.params['teaming_policy']['inbound_policy'] is not None:
+ teamingPolicy.reversePolicy = vim.BoolPolicy(value=self.module.params['teaming_policy']['inbound_policy'])
teamingPolicy.notifySwitches = vim.BoolPolicy(value=self.module.params['teaming_policy']['notify_switches'])
teamingPolicy.rollingOrder = vim.BoolPolicy(value=self.module.params['teaming_policy']['rolling_order'])
+
+ if self.module.params['teaming_policy']['active_uplinks'] or self.module.params['teaming_policy']['standby_uplinks']:
+ teamingPolicy.uplinkPortOrder = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy()
+ if self.module.params['teaming_policy']['active_uplinks']:
+ teamingPolicy.uplinkPortOrder.activeUplinkPort = self.module.params['teaming_policy']['active_uplinks']
+ if self.module.params['teaming_policy']['standby_uplinks']:
+ teamingPolicy.uplinkPortOrder.standbyUplinkPort = self.module.params['teaming_policy']['standby_uplinks']
+
config.defaultPortConfig.uplinkTeamingPolicy = teamingPolicy
# PG policy (advanced_policy)
@@ -406,7 +507,19 @@ def build_config(self):
config.policy.vlanOverrideAllowed = self.module.params['port_policy']['vlan_override']
# PG Type
- config.type = self.module.params['portgroup_type']
+ # NOTE: 'portgroup_type' is deprecated.
+ if self.module.params['portgroup_type']:
+ config.type = self.module.params['portgroup_type']
+ elif self.module.params['port_binding'] == 'ephemeral':
+ config.type = 'ephemeral'
+ else:
+ config.type = 'earlyBinding'
+
+ if self.module.params['port_allocation']:
+ if self.module.params['port_allocation'] == 'elastic':
+ config.autoExpand = True
+ else:
+ config.autoExpand = False
return config
@@ -491,9 +604,9 @@ def check_dvspg_state(self):
return 'absent'
# Check config
- # Basic config
- if self.dvs_portgroup.config.numPorts != self.module.params['num_ports']:
- return 'update'
+ if self.module.params['port_allocation'] != 'elastic' and self.module.params['port_binding'] != 'ephemeral':
+ if self.dvs_portgroup.config.numPorts != self.module.params['num_ports']:
+ return 'update'
# Default port config
defaultPortConfig = self.dvs_portgroup.config.defaultPortConfig
@@ -513,19 +626,50 @@ def check_dvspg_state(self):
if defaultPortConfig.vlan.vlanId != int(self.module.params['vlan_id']):
return 'update'
- if defaultPortConfig.securityPolicy.allowPromiscuous.value != self.module.params['network_policy']['promiscuous'] or \
- defaultPortConfig.securityPolicy.forgedTransmits.value != self.module.params['network_policy']['forged_transmits'] or \
- defaultPortConfig.securityPolicy.macChanges.value != self.module.params['network_policy']['mac_changes']:
- return 'update'
+ # If the dvSwitch supports MAC learning, it's a version where securityPolicy is deprecated
+ if self.supports_mac_learning():
+ if defaultPortConfig.macManagementPolicy.allowPromiscuous != self.module.params['network_policy']['promiscuous'] or \
+ defaultPortConfig.macManagementPolicy.forgedTransmits != self.module.params['network_policy']['forged_transmits'] or \
+ defaultPortConfig.macManagementPolicy.macChanges != self.module.params['network_policy']['mac_changes']:
+ return 'update'
+
+ macLearning = self.module.params['mac_learning']
+ if macLearning:
+ macLearningPolicy = defaultPortConfig.macManagementPolicy.macLearningPolicy
+ if macLearning['allow_unicast_flooding'] is not None and macLearningPolicy.allowUnicastFlooding != macLearning['allow_unicast_flooding']:
+ return 'update'
+ if macLearning['enabled'] is not None and macLearningPolicy.enabled != macLearning['enabled']:
+ return 'update'
+ if macLearning['limit'] is not None and macLearningPolicy.limit != macLearning['limit']:
+ return 'update'
+ if macLearning['limit_policy'] and macLearningPolicy.limitPolicy != macLearning['limit_policy']:
+ return 'update'
+ else:
+ if defaultPortConfig.securityPolicy.allowPromiscuous.value != self.module.params['network_policy']['promiscuous'] or \
+ defaultPortConfig.securityPolicy.forgedTransmits.value != self.module.params['network_policy']['forged_transmits'] or \
+ defaultPortConfig.securityPolicy.macChanges.value != self.module.params['network_policy']['mac_changes']:
+ return 'update'
# Teaming Policy
teamingPolicy = self.dvs_portgroup.config.defaultPortConfig.uplinkTeamingPolicy
+
+ if self.module.params['teaming_policy']['inbound_policy'] is not None and \
+ teamingPolicy.reversePolicy.value != self.module.params['teaming_policy']['inbound_policy']:
+ return 'update'
+
if teamingPolicy.policy.value != self.module.params['teaming_policy']['load_balance_policy'] or \
- teamingPolicy.reversePolicy.value != self.module.params['teaming_policy']['inbound_policy'] or \
teamingPolicy.notifySwitches.value != self.module.params['teaming_policy']['notify_switches'] or \
teamingPolicy.rollingOrder.value != self.module.params['teaming_policy']['rolling_order']:
return 'update'
+ if self.module.params['teaming_policy']['active_uplinks'] and \
+ teamingPolicy.uplinkPortOrder.activeUplinkPort != self.module.params['teaming_policy']['active_uplinks']:
+ return 'update'
+
+ if self.module.params['teaming_policy']['standby_uplinks'] and \
+ teamingPolicy.uplinkPortOrder.standbyUplinkPort != self.module.params['teaming_policy']['standby_uplinks']:
+ return 'update'
+
# PG policy (advanced_policy)
policy = self.dvs_portgroup.config.policy
if policy.blockOverrideAllowed != self.module.params['port_policy']['block_override'] or \
@@ -542,9 +686,23 @@ def check_dvspg_state(self):
return 'update'
# PG Type
- if self.dvs_portgroup.config.type != self.module.params['portgroup_type']:
+ # NOTE: 'portgroup_type' is deprecated.
+ if self.module.params['portgroup_type']:
+ if self.dvs_portgroup.config.type != self.module.params['portgroup_type']:
+ return 'update'
+ elif self.module.params['port_binding'] == 'ephemeral':
+ if self.dvs_portgroup.config.type != 'ephemeral':
+ return 'update'
+ elif self.dvs_portgroup.config.type != 'earlyBinding':
return 'update'
+ # Check port allocation
+ if self.module.params['port_allocation']:
+ if self.module.params['port_allocation'] == 'elastic' and self.dvs_portgroup.config.autoExpand is False:
+ return 'update'
+ elif self.module.params['port_allocation'] == 'fixed' and self.dvs_portgroup.config.autoExpand is True:
+ return 'update'
+
return 'present'
@@ -555,8 +713,15 @@ def main():
portgroup_name=dict(required=True, type='str'),
switch_name=dict(required=True, type='str'),
vlan_id=dict(required=True, type='str'),
- num_ports=dict(required=True, type='int'),
- portgroup_type=dict(required=True, choices=['earlyBinding', 'lateBinding', 'ephemeral'], type='str'),
+ num_ports=dict(type='int'),
+ portgroup_type=dict(
+ type='str',
+ choices=['earlyBinding', 'lateBinding', 'ephemeral'],
+ removed_at_date='2021-12-01',
+ removed_from_collection='community.vmware',
+ ),
+ port_binding=dict(type='str', choices=['static', 'ephemeral']),
+ port_allocation=dict(type='str', choices=['fixed', 'elastic']),
state=dict(required=True, choices=['present', 'absent'], type='str'),
vlan_trunk=dict(type='bool', default=False),
vlan_private=dict(type='bool', default=False),
@@ -576,7 +741,7 @@ def main():
teaming_policy=dict(
type='dict',
options=dict(
- inbound_policy=dict(type='bool', default=False),
+ inbound_policy=dict(type='bool'),
notify_switches=dict(type='bool', default=True),
rolling_order=dict(type='bool', default=False),
load_balance_policy=dict(type='str',
@@ -588,10 +753,11 @@ def main():
'loadbalance_loadbased',
'failover_explicit',
],
- )
+ ),
+ active_uplinks=dict(type='list', elements='str'),
+ standby_uplinks=dict(type='list', elements='str'),
),
default=dict(
- inbound_policy=False,
notify_switches=True,
rolling_order=False,
load_balance_policy='loadbalance_srcid',
@@ -624,14 +790,28 @@ def main():
uplink_teaming_override=False,
vendor_config_override=False,
vlan_override=False
- )
+ ),
+ ),
+ mac_learning=dict(
+ type='dict',
+ options=dict(
+ allow_unicast_flooding=dict(type='bool'),
+ enabled=dict(type='bool'),
+ limit=dict(type='int'),
+ limit_policy=dict(type='str', choices=['allow', 'drop']),
+ ),
)
)
)
module = AnsibleModule(argument_spec=argument_spec,
+ required_one_of=[
+ ['portgroup_type', 'port_binding'],
+ ],
mutually_exclusive=[
- ['vlan_trunk', 'vlan_private']
+ ['portgroup_type', 'port_binding'],
+ ['portgroup_type', 'port_allocation'],
+ ['vlan_trunk', 'vlan_private'],
],
supports_check_mode=True)
diff --git a/plugins/modules/vmware_dvs_portgroup_info.py b/plugins/modules/vmware_dvs_portgroup_info.py
--- a/plugins/modules/vmware_dvs_portgroup_info.py
+++ b/plugins/modules/vmware_dvs_portgroup_info.py
@@ -18,7 +18,7 @@
author:
- Abhijeet Kasurde (@Akasurde)
notes:
-- Tested on vSphere 6.5
+- Tested on vSphere 7.0
requirements:
- python >= 2.6
- PyVmomi
@@ -33,6 +33,12 @@
- Name of a dvswitch to look for.
required: false
type: str
+ show_mac_learning:
+ description:
+ - Show or hide MAC learning information of the DVS portgroup.
+ type: bool
+ default: True
+ version_added: '1.10.0'
show_network_policy:
description:
- Show or hide network policies of DVS portgroup.
@@ -48,6 +54,12 @@
- Show or hide teaming policies of DVS portgroup.
type: bool
default: True
+ show_uplinks:
+ description:
+ - Show or hide uplinks of DVS portgroup.
+ type: bool
+ default: True
+ version_added: '1.10.0'
show_vlan_info:
description:
- Show or hide vlan information of the DVS portgroup.
@@ -153,6 +165,9 @@ def __init__(self, module):
# default behaviour, gather information about all dvswitches
self.dvsls = get_all_objs(self.content, [vim.DistributedVirtualSwitch], folder=datacenter.networkFolder)
+ def supports_mac_learning(self, dvs):
+ return hasattr(dvs.capability.featuresSupported, 'macLearningSupported') and dvs.capability.featuresSupported.macLearningSupported
+
def get_vlan_info(self, vlan_obj=None):
"""
Return vlan information from given object
@@ -184,19 +199,51 @@ def gather_dvs_portgroup_info(self):
dvs_lists = self.dvsls
result = dict()
for dvs in dvs_lists:
+ switch_supports_mac_learning = self.supports_mac_learning(dvs)
result[dvs.name] = list()
for dvs_pg in dvs.portgroup:
+ mac_learning = dict()
network_policy = dict()
teaming_policy = dict()
port_policy = dict()
vlan_info = dict()
+ active_uplinks = list()
+ standby_uplinks = list()
+
+ if dvs_pg.config.type == 'ephemeral':
+ port_binding = 'ephemeral'
+ else:
+ port_binding = 'static'
+
+ if dvs_pg.config.autoExpand is True:
+ port_allocation = 'elastic'
+ else:
+ port_allocation = 'fixed'
+
+ # If the dvSwitch supports MAC learning, it's a version where securityPolicy is deprecated
+ if self.module.params['show_network_policy']:
+ if switch_supports_mac_learning and dvs_pg.config.defaultPortConfig.macManagementPolicy:
+ network_policy = dict(
+ forged_transmits=dvs_pg.config.defaultPortConfig.macManagementPolicy.forgedTransmits,
+ promiscuous=dvs_pg.config.defaultPortConfig.macManagementPolicy.allowPromiscuous,
+ mac_changes=dvs_pg.config.defaultPortConfig.macManagementPolicy.macChanges
+ )
+ elif dvs_pg.config.defaultPortConfig.securityPolicy:
+ network_policy = dict(
+ forged_transmits=dvs_pg.config.defaultPortConfig.securityPolicy.forgedTransmits.value,
+ promiscuous=dvs_pg.config.defaultPortConfig.securityPolicy.allowPromiscuous.value,
+ mac_changes=dvs_pg.config.defaultPortConfig.securityPolicy.macChanges.value
+ )
- if self.module.params['show_network_policy'] and dvs_pg.config.defaultPortConfig.securityPolicy:
- network_policy = dict(
- forged_transmits=dvs_pg.config.defaultPortConfig.securityPolicy.forgedTransmits.value,
- promiscuous=dvs_pg.config.defaultPortConfig.securityPolicy.allowPromiscuous.value,
- mac_changes=dvs_pg.config.defaultPortConfig.securityPolicy.macChanges.value
+ if self.module.params['show_mac_learning'] and switch_supports_mac_learning:
+ macLearningPolicy = dvs_pg.config.defaultPortConfig.macManagementPolicy.macLearningPolicy
+ mac_learning = dict(
+ allow_unicast_flooding=macLearningPolicy.allowUnicastFlooding,
+ enabled=macLearningPolicy.enabled,
+ limit=macLearningPolicy.limit,
+ limit_policy=macLearningPolicy.limitPolicy
)
+
if self.module.params['show_teaming_policy']:
# govcsim does not have uplinkTeamingPolicy, remove this check once
# PR https://github.com/vmware/govmomi/pull/1524 merged.
@@ -208,6 +255,12 @@ def gather_dvs_portgroup_info(self):
rolling_order=dvs_pg.config.defaultPortConfig.uplinkTeamingPolicy.rollingOrder.value,
)
+ if self.module.params['show_uplinks'] and \
+ dvs_pg.config.defaultPortConfig.uplinkTeamingPolicy and \
+ dvs_pg.config.defaultPortConfig.uplinkTeamingPolicy.uplinkPortOrder:
+ active_uplinks = dvs_pg.config.defaultPortConfig.uplinkTeamingPolicy.uplinkPortOrder.activeUplinkPort
+ standby_uplinks = dvs_pg.config.defaultPortConfig.uplinkTeamingPolicy.uplinkPortOrder.standbyUplinkPort
+
if self.params['show_port_policy']:
# govcsim does not have port policy
if dvs_pg.config.policy:
@@ -234,11 +287,16 @@ def gather_dvs_portgroup_info(self):
dvswitch_name=dvs_pg.config.distributedVirtualSwitch.name,
description=dvs_pg.config.description,
type=dvs_pg.config.type,
+ port_binding=port_binding,
+ port_allocation=port_allocation,
teaming_policy=teaming_policy,
port_policy=port_policy,
+ mac_learning=mac_learning,
network_policy=network_policy,
vlan_info=vlan_info,
key=dvs_pg.key,
+ active_uplinks=active_uplinks,
+ standby_uplinks=standby_uplinks,
)
result[dvs.name].append(dvpg_details)
@@ -249,8 +307,10 @@ def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
datacenter=dict(type='str', required=True),
+ show_mac_learning=dict(type='bool', default=True),
show_network_policy=dict(type='bool', default=True),
show_teaming_policy=dict(type='bool', default=True),
+ show_uplinks=dict(type='bool', default=True),
show_port_policy=dict(type='bool', default=True),
dvswitch=dict(),
show_vlan_info=dict(type='bool', default=False),
| diff --git a/tests/integration/targets/prepare_vmware_tests/tasks/setup_dvswitch.yml b/tests/integration/targets/prepare_vmware_tests/tasks/setup_dvswitch.yml
--- a/tests/integration/targets/prepare_vmware_tests/tasks/setup_dvswitch.yml
+++ b/tests/integration/targets/prepare_vmware_tests/tasks/setup_dvswitch.yml
@@ -3,7 +3,7 @@
vmware_dvswitch:
datacenter_name: '{{ dc1 }}'
switch_name: '{{ dvswitch1 }}'
- switch_version: 6.5.0
+ switch_version: 6.6.0
mtu: 9000
uplink_quantity: 2
discovery_proto: lldp
diff --git a/tests/integration/targets/vmware_dvs_portgroup/tasks/main.yml b/tests/integration/targets/vmware_dvs_portgroup/tasks/main.yml
--- a/tests/integration/targets/vmware_dvs_portgroup/tasks/main.yml
+++ b/tests/integration/targets/vmware_dvs_portgroup/tasks/main.yml
@@ -17,7 +17,7 @@
portgroup_name: "basic"
vlan_id: 0
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: dvs_pg_result_0001
@@ -29,6 +29,104 @@
that:
- dvs_pg_result_0001.changed
+- when: vcsim is not defined
+ block:
+ - name: enable MAC learning
+ vmware_dvs_portgroup:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ switch_name: "{{ dvswitch1 }}"
+ portgroup_name: "basic"
+ vlan_id: 0
+ num_ports: 32
+ port_binding: static
+ mac_learning:
+ enabled: true
+ state: present
+ register: enable_mac_learning_result
+
+ - debug:
+ var: enable_mac_learning_result
+
+ - name: ensure MAC learning is enabled
+ assert:
+ that:
+ - enable_mac_learning_result.changed
+
+ - name: enable MAC learning again (idempotency)
+ vmware_dvs_portgroup:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ switch_name: "{{ dvswitch1 }}"
+ portgroup_name: "basic"
+ vlan_id: 0
+ num_ports: 32
+ port_binding: static
+ mac_learning:
+ enabled: true
+ state: present
+ register: enable_mac_learning_again_result
+
+ - debug:
+ var: enable_mac_learning_again_result
+
+ - name: ensure MAC learning is not enabled again
+ assert:
+ that:
+ - not enable_mac_learning_again_result.changed
+
+ - name: disable MAC learning
+ vmware_dvs_portgroup:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ switch_name: "{{ dvswitch1 }}"
+ portgroup_name: "basic"
+ vlan_id: 0
+ num_ports: 32
+ port_binding: static
+ mac_learning:
+ enabled: false
+ state: present
+ register: disable_mac_learning_result
+
+ - debug:
+ var: disable_mac_learning_result
+
+ - name: ensure MAC learning is disabled
+ assert:
+ that:
+ - disable_mac_learning_result.changed
+
+ - name: disable MAC learning again (idempotency)
+ vmware_dvs_portgroup:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ switch_name: "{{ dvswitch1 }}"
+ portgroup_name: "basic"
+ vlan_id: 0
+ num_ports: 32
+ port_binding: static
+ mac_learning:
+ enabled: false
+ state: present
+ register: disable_mac_learning_again_result
+
+ - debug:
+ var: disable_mac_learning_again_result
+
+ - name: ensure MAC learning is not disabled again
+ assert:
+ that:
+ - not disable_mac_learning_again_result.changed
+
- name: create basic VLAN portgroup
vmware_dvs_portgroup:
validate_certs: false
@@ -39,7 +137,7 @@
portgroup_name: "basic-vlan10"
vlan_id: 10
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: dvs_pg_result_0002
@@ -59,7 +157,7 @@
vlan_id: 1-4094
vlan_trunk: true
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: dvs_pg_result_0003
@@ -78,7 +176,7 @@
portgroup_name: "basic"
vlan_id: 0
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: dvs_pg_result_0004
@@ -97,7 +195,7 @@
portgroup_name: "basic-all-enabled"
vlan_id: 0
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
network_policy:
promiscuous: true
@@ -132,7 +230,7 @@
portgroup_name: "basic-some-enabled"
vlan_id: 0
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
network_policy:
promiscuous: true
@@ -157,7 +255,7 @@
portgroup_name: "basic-some-enabled"
vlan_id: 0
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
network_policy:
promiscuous: true
@@ -182,7 +280,7 @@
portgroup_name: "basic-some-enabled"
vlan_id: 0
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
network_policy:
promiscuous: true
@@ -207,7 +305,7 @@
portgroup_name: "basic-some-enabled"
vlan_id: 0
num_ports: 16
- portgroup_type: earlyBinding
+ port_binding: static
state: present
network_policy:
promiscuous: true
@@ -232,7 +330,7 @@
portgroup_name: "basic-some-enabled"
vlan_id: 0
num_ports: 16
- portgroup_type: ephemeral
+ port_binding: ephemeral
state: present
network_policy:
promiscuous: true
@@ -257,7 +355,7 @@
portgroup_name: "basic"
vlan_id: 0
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: absent
register: dvs_pg_result_0011
@@ -276,7 +374,7 @@
portgroup_name: "basic"
vlan_id: 0
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: absent
register: dvs_pg_result_0012
@@ -296,7 +394,7 @@
vlan_id: 1-4096
vlan_trunk: true
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: dvs_pg_result_0013
ignore_errors: true
@@ -317,7 +415,7 @@
portgroup_name: "basic-vlan10"
vlan_id: 20
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: dvs_pg_result_0014
@@ -337,7 +435,7 @@
vlan_id: 1000-2000
vlan_trunk: true
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: dvs_pg_result_0015
@@ -357,7 +455,7 @@
vlan_id: 1-1000, 1005, 1100-1200
vlan_trunk: true
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: dvs_pg_result_0016
@@ -377,7 +475,7 @@
vlan_id: 1-1000, 1006, 1100-1200
vlan_trunk: true
num_ports: 32
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: dvs_pg_result_0017
@@ -396,7 +494,7 @@
vlan_id: 10
vlan_private: true
num_ports: 12
- portgroup_type: earlyBinding
+ port_binding: static
state: present
validate_certs: false
register: dvs_pg_result_0018
@@ -460,7 +558,7 @@
vlan_id: 1
vlan_private: true
num_ports: 12
- portgroup_type: earlyBinding
+ port_binding: static
state: present
validate_certs: false
register: dvs_pg_result_0019
@@ -480,7 +578,7 @@
vlan_id: 2
vlan_private: true
num_ports: 12
- portgroup_type: earlyBinding
+ port_binding: static
state: present
validate_certs: false
register: dvs_pg_result_0020
@@ -499,7 +597,7 @@
switch_name: dvswitch_0001
vlan_id: 5
num_ports: 12
- portgroup_type: earlyBinding
+ port_binding: static
state: present
validate_certs: false
register: dvs_pg_result_0021
@@ -550,7 +648,7 @@
portgroup_name: 'dvportgroup\/%'
vlan_id: 1
num_ports: 8
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: create_dvportgroup_with_special_characters_result
@@ -568,7 +666,7 @@
portgroup_name: 'dvportgroup\/%'
vlan_id: 1
num_ports: 8
- portgroup_type: earlyBinding
+ port_binding: static
state: present
register: create_dvportgroup_with_special_characters_idempotency_check_result
@@ -586,7 +684,7 @@
portgroup_name: 'dvportgroup\/%'
vlan_id: 1
num_ports: 8
- portgroup_type: earlyBinding
+ port_binding: static
state: absent
register: delete_dvportgroup_with_special_characters_result
@@ -604,7 +702,7 @@
portgroup_name: 'dvportgroup\/%'
vlan_id: 1
num_ports: 8
- portgroup_type: earlyBinding
+ port_binding: static
state: absent
register: delete_dvportgroup_with_special_characters_idempotency_check_result
@@ -643,7 +741,7 @@
vlan_id: "1500,1501"
vlan_trunk: true
num_ports: 120
- portgroup_type: earlyBinding
+ port_binding: static
network_policy:
promiscuous: false
forged_transmits: true
@@ -666,7 +764,7 @@
vlan_id: "1500,1501"
vlan_trunk: true
num_ports: 120
- portgroup_type: earlyBinding
+ port_binding: static
network_policy:
promiscuous: false
forged_transmits: true
@@ -689,7 +787,7 @@
vlan_id: "1500,1501"
vlan_trunk: true
num_ports: 120
- portgroup_type: earlyBinding
+ port_binding: static
network_policy:
promiscuous: false
forged_transmits: true
| vmware_dvs_portgroup add possibility to override the failover
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Currently it is not possible to override the NIC Teaming failover order of a distributed port group.
As far as I know, the failover order can be specified using
`nicTeaming.nicOrder.activeNic` and `nicTeaming.nicOrder.standbyNic`.
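For reference, a minimal pyVmomi sketch of the objects involved — the type and attribute names match what the merged patch above ends up using; actually applying this policy to a portgroup's defaultPortConfig is omitted:
```python
from pyVmomi import vim

# Teaming policy with an explicit failover order: LAG1 active, nothing on standby,
# so all remaining uplinks are treated as unused.
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.uplinkPortOrder = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy()
teaming.uplinkPortOrder.activeUplinkPort = ['LAG1']   # uplinks used for traffic
teaming.uplinkPortOrder.standbyUplinkPort = []        # nothing on standby
```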
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_dvs_portgroup
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
With the new "activeNic" setting, only LAG1 in the failover order should be set to active; with an empty "standbyNic" value, all other interfaces should be set to "unused".
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create vlan portgroup
community.vmware.vmware_dvs_portgroup:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
portgroup_name: vlan-123-portrgoup
switch_name: dvSwitch
vlan_id: 123
num_ports: 120
portgroup_type: earlyBinding
state: present
teaming_policy:
activeNic: ['LAG1']
standbyNic: []
delegate_to: localhost
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
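Note: in the merged patch above, these options were ultimately implemented as `active_uplinks` and `standby_uplinks` under `teaming_policy`, so the shipped syntax differs slightly from the proposal:
```yaml
teaming_policy:
  load_balance_policy: loadbalance_srcid
  active_uplinks: ['LAG1']
  standby_uplinks: []
```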
| Files identified in the description:
* [`plugins/modules/vmware_dvs_portgroup.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_dvs_portgroup.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @pdellaert @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify ---> | 2021-03-25T17:20:27 |
ansible-collections/community.vmware | 804 | ansible-collections__community.vmware-804 | [
"803"
] | 2782d08e0fece820c70a90b8b0a9c4b5ea412868 | diff --git a/plugins/modules/vmware_vmotion.py b/plugins/modules/vmware_vmotion.py
--- a/plugins/modules/vmware_vmotion.py
+++ b/plugins/modules/vmware_vmotion.py
@@ -166,6 +166,8 @@ def __init__(self, module):
if dest_host_name is not None:
self.host_object = find_hostsystem_by_name(content=self.content,
hostname=dest_host_name)
+ if self.host_object is None:
+ self.module.fail_json(msg="Unable to find destination host %s" % dest_host_name)
# Get Destination Datastore if specified by user
dest_datastore = self.params.get('destination_datastore', None)
@@ -189,7 +191,7 @@ def __init__(self, module):
self.resourcepool_object = self.host_object.parent.resourcePool
# Fail if resourcePool object is not found
if self.resourcepool_object is None:
- self.module.fail_json(msg="Unable to destination resource pool object which is required")
+ self.module.fail_json(msg="Unable to find destination resource pool object which is required")
# Check if datastore is required, this check is required if destination
# and source host system does not share same datastore.
| Error message with bad destination host in vmotion module
##### SUMMARY
The error message we get when we provide a wrong ESXi node name as the `destination_host` attribute to the `vmware_vmotion` module is not helpful. I assume we should have a clean check with a clear error message instead of the raw traceback.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- module `vmware_vmotion`
##### ANSIBLE VERSION
```paste below
ansible 2.10.7
config file = /home/xenlo/Projects/619/etc/ansible.cfg
configured module search path = ['/home/xenlo/Projects/619/library']
ansible python module location = /home/xenlo/.virtualenvs/619/lib/python3.6/site-packages/ansible
executable location = /home/xenlo/.virtualenvs/619/bin/ansible
python version = 3.6.12 (default, Dec 02 2020, 09:44:23) [GCC]
```
##### CONFIGURATION
```paste below
(619) ahgff@bootstrap:~/Projects/619> ansible-config dump --only-changed | sed 's/ahgff/xenlo/'
ANSIBLE_SSH_ARGS(/home/xenlo/Projects/619/etc/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=60 -o ConnectionAttempts=2
CACHE_PLUGIN(/home/xenlo/Projects/619/etc/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/xenlo/Projects/619/etc/ansible.cfg) = $HOME/.ansible/facts/
CACHE_PLUGIN_TIMEOUT(/home/xenlo/Projects/619/etc/ansible.cfg) = 3600
DEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = ['/home/xenlo/Projects/619/plugins/action', '/home/ahgff/.virtualenvs/619/lib/python3.6/site-packages/ara/plugins/action']
DEFAULT_CALLBACK_PLUGIN_PATH(env: ANSIBLE_CALLBACK_PLUGINS) = ['/home/xenlo/.virtualenvs/619/lib/python3.6/site-packages/ara/plugins/callback']
DEFAULT_CALLBACK_WHITELIST(/home/xenlo/Projects/619/etc/ansible.cfg) = ['profile_tasks']
DEFAULT_FILTER_PLUGIN_PATH(env: ANSIBLE_FILTER_PLUGINS) = ['/home/xenlo/Projects/619/plugins/filter']
DEFAULT_GATHERING(/home/xenlo/Projects/619/etc/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/xenlo/Projects/619/etc/ansible.cfg) = ['/home/ahgff/Projects/619/inventories/dev', '/home/ahgff/Projects/619/inventories/setupenv']
DEFAULT_LOG_PATH(/home/xenlo/Projects/619/etc/ansible.cfg) = /home/ahgff/.ansible/log/ansible.log
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = ['/home/xenlo/Projects/619/library']
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = ['/home/xenlo/Projects/619/roles.galaxy', '/home/ahgff/Projects/619/roles']
DEFAULT_STDOUT_CALLBACK(/home/xenlo/Projects/619/etc/ansible.cfg) = yaml
GALAXY_ROLE_SKELETON(env: ANSIBLE_GALAXY_ROLE_SKELETON) = /home/xenlo/Projects/619/etc/skel/default
HOST_KEY_CHECKING(/home/xenlo/Projects/619/etc/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target infra is vSphere 6.7
##### STEPS TO REPRODUCE
Play a `vmware_vmotion` task providing a non-existent ESXi node name as `destination_host` (in my case I provided the short DNS name, while vCenter knows the hosts by their FQDN).
```yaml
- name: Ensure that my VM is running on the right node and disk images are stored on SAN
community.vmware.vmware_vmotion:
hostname: "{{ hostvars['vcenter'].fqdn }}"
username: "Administrator@{{ hostvars['vcenter'].dns_domain }}"
password: "{{ vcenter_pass }}"
validate_certs: false
vm_name: "my-vm"
destination_datastore: "{{ vcenter_datastore }}"
destination_host: "this-is-NOT-an-esxi-node"
delegate_to: localhost
```
##### EXPECTED RESULTS
I would expect an error telling me that the "`this-is-NOT-an-esxi-node`" destination host does not exist in the cluster.
##### ACTUAL RESULTS
I get the following Python error, which is not really helpful for understanding the problem.
```
AttributeError: 'NoneType' object has no attribute 'parent'
```
Here is the output I get in one of my playbooks:
```
TASK [set_vmware_cluster : Ensure that VSP and vCenter are running on the right node and disk images are stored on SAN] ****************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv.
failed: [node01] (item=vcenter) => changed=false
ansible_loop_var: vmotion_host
module_stderr: |-
Traceback (most recent call last):
File "/home/xenlo/.ansible/tmp/ansible-tmp-1618828944.0270574-24089-200331838688996/AnsiballZ_vmware_vmotion.py", line 102, in <module>
_ansiballz_main()
File "/home/xenlo/.ansible/tmp/ansible-tmp-1618828944.0270574-24089-200331838688996/AnsiballZ_vmware_vmotion.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/xenlo/.ansible/tmp/ansible-tmp-1618828944.0270574-24089-200331838688996/AnsiballZ_vmware_vmotion.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_vmotion', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_community.vmware.vmware_vmotion_payload_xsxp1s95/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py", line 375, in <module>
File "/tmp/ansible_community.vmware.vmware_vmotion_payload_xsxp1s95/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py", line 371, in main
File "/tmp/ansible_community.vmware.vmware_vmotion_payload_xsxp1s95/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py", line 200, in __init__
AttributeError: 'NoneType' object has no attribute 'parent'
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
vmotion_host: vcenter
```
| The error was generated on [line 189 in vmware_vmotion.py](https://github.com/ansible-collections/community.vmware/blob/main/plugins/modules/vmware_vmotion.py#L189), but in fact the None value comes from the [`host_object` assigned by `find_hostsystem_by_name`](https://github.com/ansible-collections/community.vmware/blob/main/plugins/modules/vmware_vmotion.py#L167).
I assume that either inside the `find_hostsystem_by_name` function, or just after the assignment to `host_object`, there should be a check for None that raises a helpful error like "destination_host %s not found in the cluster!".
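In other words, roughly the guard the merged patch above adds right after the lookup:
```python
self.host_object = find_hostsystem_by_name(content=self.content,
                                           hostname=dest_host_name)
if self.host_object is None:
    self.module.fail_json(msg="Unable to find destination host %s" % dest_host_name)
```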
cc @Akasurde @Tomorrow9 @bedecarroll @goneri @lparkes @oboukili @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
Hi @xenlo, it looks like you've already done the investigation :-). Would you like to push a PR that improves the error message? | 2021-04-19T16:28:10 |
|
ansible-collections/community.vmware | 806 | ansible-collections__community.vmware-806 | [
"805"
] | 2782d08e0fece820c70a90b8b0a9c4b5ea412868 | diff --git a/plugins/modules/vmware_cluster_info.py b/plugins/modules/vmware_cluster_info.py
--- a/plugins/modules/vmware_cluster_info.py
+++ b/plugins/modules/vmware_cluster_info.py
@@ -282,8 +282,8 @@ def gather_cluster_info(self):
# VSAN
if hasattr(cluster.configurationEx, 'vsanConfigInfo'):
vsan_config = cluster.configurationEx.vsanConfigInfo
- enabled_vsan = vsan_config.enabled,
- vsan_auto_claim_storage = vsan_config.defaultConfig.autoClaimStorage,
+ enabled_vsan = vsan_config.enabled
+ vsan_auto_claim_storage = vsan_config.defaultConfig.autoClaimStorage
tag_info = []
if self.params.get('show_tag'):
| vmware_cluster_info returns enabled_vsan as a list and not string
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Prior to 1.8.0, the vmware_cluster_info module always returned false for enabled_vsan, even when vSAN was enabled. In 1.8.0 this was fixed to accurately reflect the vSAN status, but now the value is a list containing true, rather than a plain boolean like enable_ha and enabled_drs currently return. When checking this value you have to specify
when: enabled_vsan[0] == true
rather than
when: enabled_vsan == true
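The root cause is visible in the patch above and is a classic Python gotcha: a trailing comma turns the assignment into a one-element tuple, which the module output then renders as a list:
```python
vsan_enabled = True
enabled_vsan = vsan_enabled,   # trailing comma: builds the tuple (True,), serialized as [true]
enabled_vsan = vsan_enabled    # no comma: plain boolean True, serialized as true
```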
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_cluster_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.19
config file = /home/postj1/AWX/ansible.cfg
configured module search path = ['/home/postj1/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/username/AWX/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
COLLECTIONS_PATHS(/home/username/AWX/ansible.cfg) = ['/home/username/AWX/collections']
DEFAULT_FILTER_PLUGIN_PATH(/home/username/AWX/ansible.cfg) = ['/home/username/AWX/custom_filters']
DEFAULT_ROLES_PATH(/home/username/AWX/ansible.cfg) = ['/home/username/AWX/roles']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run vmware_cluster_info against a cluster that has vSAN enabled and debug the registered output, or run another task using
when: enabled_vsan == true
to only run the task on vSAN clusters.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
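For illustration, a minimal reproducer along those lines — the hostname, credentials, and cluster name are placeholders, and the return structure (a `clusters` dict keyed by cluster name) follows my reading of the module's documented output:
```yaml
- community.vmware.vmware_cluster_info:
    hostname: vcenter.example.com
    username: administrator@vsphere.local
    password: '{{ vcenter_password }}'
    validate_certs: false
    cluster_name: vsan-cluster
  register: cluster_info
  delegate_to: localhost

- debug:
    msg: "vSAN is enabled"
  when: cluster_info.clusters['vsan-cluster'].enabled_vsan == true
```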
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
"enable_ha": true,
"enabled_drs": true,
"enabled_vsan": true,
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
"enable_ha": true,
"enabled_drs": true,
"enabled_vsan": [
true
],
```
| 2021-04-20T07:07:45 |
||
ansible-collections/community.vmware | 831 | ansible-collections__community.vmware-831 | [
"655"
] | eb500141f29212ea1ebe4aa91f334bf8f2d0beba | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -2324,7 +2324,12 @@ def customize_vm(self, vm_obj):
default_name = vm_obj.name.replace(' ', '')
punctuation = string.punctuation.replace('-', '')
default_name = ''.join([c for c in default_name if c not in punctuation])
- ident.userData.computerName.name = str(self.params['customization'].get('hostname', default_name[0:15]))
+
+ if self.params['customization']['hostname'] is not None:
+ ident.userData.computerName.name = self.params['customization']['hostname'][0:15]
+ else:
+ ident.userData.computerName.name = default_name[0:15]
+
ident.userData.fullName = str(self.params['customization'].get('fullname', 'Administrator'))
ident.userData.orgName = str(self.params['customization'].get('orgname', 'ACME'))
@@ -2385,7 +2390,12 @@ def customize_vm(self, vm_obj):
default_name = self.params['name']
elif vm_obj:
default_name = vm_obj.name
- hostname = str(self.params['customization'].get('hostname', default_name.split('.')[0]))
+
+ if self.params['customization']['hostname'] is not None:
+ hostname = self.params['customization']['hostname'].split('.')[0]
+ else:
+ hostname = default_name.split('.')[0]
+
# Remove all characters except alphanumeric and minus which is allowed by RFC 952
valid_hostname = re.sub(r"[^a-zA-Z0-9\-]", "", hostname)
ident.hostName.name = valid_hostname
| vmware_guest create vm with hostname None
##### SUMMARY
Since ansible 2.10.7, the vmware_guest module can't set the guest OS hostname. The hostname ends up as None (e.g. the prompt reads 'root@None'), so it seems something became None in the module.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest
##### ANSIBLE VERSION
ansible --version is also broken. Now we can't run --version without passing some other args!
##### OS / ENVIRONMENT
os x 11.1
ansible 2.10.7 (brew)
python 3.9
##### STEPS TO REPRODUCE
Create a VM with the vmware_guest module.
##### EXPECTED RESULTS
The guest hostname should match the name set in the inventory.
##### ACTUAL RESULTS
The hostname inside the VM is None.
| Files identified in the description:
* [`plugins/modules/vmware_guest.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_guest.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @nerzhul @pdellaert @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
I am also seeing the same issue with a playbook like this:
```yaml
- hosts: localhost
tasks:
- name: Test vm provision
community.vmware.vmware_guest:
hostname: "{{ vmware_orchestration_login.hostname }}"
username: "{{ vmware_orchestration_login.username }}"
password: "{{ vmware_orchestration_login.password }}"
validate_certs: "{{ vmware_orchestration_login.validate_certs }}"
folder: "{{ vmware_folder }}"
name: "test-host-name"
state: poweredon
template: "{{ vmware_template }}"
cluster: "{{ vmware_orchestration_env.cluster }}"
datacenter: "{{ vmware_orchestration_env.datacenter }}"
disk:
- size_gb: 50
hardware:
memory_mb: 32768
num_cpus: 8
scsi: paravirtual
networks:
- name: "{{ vmware_network_name }}"
device_type: vmxnet3
datastore: vsanDatastore
```
ansible --version
```
$ ansible --version
ansible 2.10.7rc1
config file = ./ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = ~/.local/lib/python3.6/site-packages/ansible
executable location = ~/.local/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```
It looks like the issue is https://github.com/ansible-collections/community.vmware/blob/902710c6490f075d53e5cb532ad98d4034f13227/plugins/modules/vmware_guest.py#L2379
```python
hostname = str(self.params['customization'].get('hostname', default_name.split('.')[0]))
```
It seems that ansible now returns "None" from this rather than falling back to the default_name variable.
As a workaround I can set the customization.hostname variable in my playbook and everything works fine, but the module isn't using the "name" variable as the default hostname as it should according to the docs.
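The underlying gotcha is that `dict.get(key, default)` only falls back to the default when the key is absent, not when it is present with a None value — and Ansible's argument spec fills declared suboptions with None when unset, so `customization['hostname']` exists with value None. That appears to be why the merged patch above replaces the `.get()` default with an explicit `is not None` check. A minimal illustration:
```python
params = {'hostname': None}           # key exists, value is None
params.get('hostname', 'my-vm')       # -> None; the default is never used
str(params.get('hostname', 'my-vm'))  # -> 'None', the broken guest hostname
```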
@ollie1 Would you be willing to submit a PR for this? | 2021-05-05T23:57:59 |
|
ansible-collections/community.vmware | 840 | ansible-collections__community.vmware-840 | [
"536"
] | bb9220deb9df7963e29d7f154ba8fd971feced7d | diff --git a/plugins/modules/vmware_vswitch.py b/plugins/modules/vmware_vswitch.py
--- a/plugins/modules/vmware_vswitch.py
+++ b/plugins/modules/vmware_vswitch.py
@@ -292,17 +292,24 @@ def state_update_vswitch(self):
"""
results = dict(changed=False, result="No change in vSwitch '%s'" % self.switch)
vswitch_pnic_info = self.available_vswitches[self.switch]
- remain_pnic = []
+ pnic_add = []
for desired_pnic in self.nics:
if desired_pnic not in vswitch_pnic_info['pnic']:
- remain_pnic.append(desired_pnic)
-
+ pnic_add.append(desired_pnic)
+ pnic_remove = []
+ for configured_pnic in vswitch_pnic_info['pnic']:
+ if configured_pnic not in self.nics:
+ pnic_remove.append(configured_pnic)
diff = False
# Update all nics
all_nics = vswitch_pnic_info['pnic']
- if remain_pnic:
- all_nics += remain_pnic
+ if pnic_add or pnic_remove:
diff = True
+ if pnic_add:
+ all_nics += pnic_add
+ if pnic_remove:
+ for pnic in pnic_remove:
+ all_nics.remove(pnic)
if vswitch_pnic_info['mtu'] != self.mtu or \
vswitch_pnic_info['num_ports'] != self.number_of_ports:
| When using vmware_vswitch, allow removing NICs
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Same request as described [here](https://github.com/ansible/ansible/issues/44589).
The request appears to have been closed due to the repo move, but it would still be nice to have.
(Copying the steps to reproduce from the linked issue that was closed):
```
- name: Add host to a standard vSwitch
delegate_to: localhost
vmware_vswitch:
hostname: '{{ ansible_nodename }}'
mtu: '{{ vswitch.mtu }}'
nics: '{{ vswitch.vmnics }}'
password: '{{ esx_password }}'
state: '{{ vswitch.state }}'
switch: '{{ vswitch.name }}'
username: '{{ esx_user }}'
validate_certs: '{{ validate_certs }}'
```
1. Run the above where vswitch.vmnics is this:
```
vswitch:
vmnics:
- vmnic0
- vmnic1
- vmnic2
```
2. Run it again, but remove one of the nics to look like this:
```
vswitch:
vmnics:
- vmnic0
- vmnic2
```
You'll see success / no change.
##### EXPECTED RESULTS
I'd expect vmnic1 to be removed from the switch.
##### ACTUAL RESULTS
Nothing changes. This is because the vmware_vswitch module only adds NICs that aren't already on the switch. It doesn't remove a NIC from the switch.
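One way to see the drift is to compare the desired NIC list with what is actually attached, e.g. via `vmware_vswitch_info` (a sketch only, reusing the variables from the reproduction playbook above):
```yaml
- name: Gather standard vSwitch facts from the host
  community.vmware.vmware_vswitch_info:
    hostname: '{{ ansible_nodename }}'
    username: '{{ esx_user }}'
    password: '{{ esx_password }}'
    esxi_hostname: '{{ ansible_nodename }}'
    validate_certs: '{{ validate_certs }}'
  delegate_to: localhost
  register: vswitch_info

- name: Show which pnics each vSwitch still carries
  ansible.builtin.debug:
    var: vswitch_info.hosts_vswitch_info
```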
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_vswitch
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| Files identified in the description:
* [`plugins/modules/vmware_vswitch.py`](https://github.com/['ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_vswitch.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
Files identified in the description:
* [`plugins/modules/vmware_vswitch.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_vswitch.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
Hello,
I confirm this issue with this env:
- ansible 2.9.20
- python version = 3.6.9
- community.vmware 1.9 (release)
It would be a nice feature because there is no warning about this behavior!
Hello, I confirm it could be useful | 2021-05-11T20:36:32 |
|
ansible-collections/community.vmware | 841 | ansible-collections__community.vmware-841 | [
"819"
] | c6912d3fb1dcaaa37dbe765dbc1ebd522776a6df | diff --git a/plugins/modules/vmware_vcenter_settings.py b/plugins/modules/vmware_vcenter_settings.py
--- a/plugins/modules/vmware_vcenter_settings.py
+++ b/plugins/modules/vmware_vcenter_settings.py
@@ -227,6 +227,12 @@
type: str
choices: ['none', 'error', 'warning', 'info', 'verbose', 'trivia']
default: 'info'
+ advanced_settings:
+ description:
+ - A dictionary of advanced settings.
+ default: {}
+ type: dict
+ version_added: '1.11.0'
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -267,6 +273,15 @@
long_operations: 120
logging_options: info
delegate_to: localhost
+
+- name: Enable Retreat Mode for cluster with MOID domain-c8 (https://kb.vmware.com/kb/80472)
+ community.vmware.vmware_vcenter_settings:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ advanced_settings:
+ 'config.vcls.clusters.domain-c8.enabled': 'false'
+ delegate_to: localhost
'''
RETURN = r'''
@@ -388,7 +403,7 @@
pass
from ansible.module_utils.basic import AnsibleModule
-from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec
+from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, option_diff, vmware_argument_spec
from ansible.module_utils._text import to_native
@@ -455,6 +470,7 @@ def ensure(self):
timeout_normal_operations = self.params['timeout_settings'].get('normal_operations')
timeout_long_operations = self.params['timeout_settings'].get('long_operations')
logging_options = self.params.get('logging_options')
+
changed = False
changed_list = []
@@ -499,6 +515,18 @@ def ensure(self):
exec("diff_config['after']['snmp_receiver_%s_community'] = snmp_receiver_%s_community" % (n, n))
result['diff'] = {}
+ advanced_settings = self.params['advanced_settings']
+ changed_advanced_settings = option_diff(advanced_settings, self.option_manager.setting, False)
+
+ if changed_advanced_settings:
+ changed = True
+ change_option_list += changed_advanced_settings
+
+ for advanced_setting in advanced_settings:
+ result[advanced_setting] = advanced_settings[advanced_setting]
+ diff_config['before'][advanced_setting] = result[advanced_setting]
+ diff_config['after'][advanced_setting] = result[advanced_setting]
+
for setting in self.option_manager.setting:
# Database
if setting.key == 'VirtualCenter.MaxDBConnection' and setting.value != db_max_connections:
@@ -783,6 +811,19 @@ def ensure(self):
)
diff_config['before']['logging_options'] = setting.value
+ # Advanced settings
+ for advanced_setting in changed_advanced_settings:
+ if setting.key == advanced_setting.key and setting.value != advanced_setting.value:
+ changed_list.append(advanced_setting.key)
+ result[advanced_setting.key + '_previous'] = advanced_setting.value
+ diff_config['before'][advanced_setting.key] = advanced_setting.value
+
+ for advanced_setting in changed_advanced_settings:
+ if advanced_setting.key not in changed_list:
+ changed_list.append(advanced_setting.key)
+ result[advanced_setting.key + '_previous'] = "N/A"
+ diff_config['before'][advanced_setting.key] = "N/A"
+
if changed:
if self.module.check_mode:
changed_suffix = ' would be changed'
@@ -927,6 +968,7 @@ def main():
),
),
logging_options=dict(default='info', choices=['none', 'error', 'warning', 'info', 'verbose', 'trivia']),
+ advanced_settings=dict(type='dict', default=dict(), required=False),
)
module = AnsibleModule(
| diff --git a/tests/integration/targets/vmware_vcenter_settings/tasks/main.yml b/tests/integration/targets/vmware_vcenter_settings/tasks/main.yml
--- a/tests/integration/targets/vmware_vcenter_settings/tasks/main.yml
+++ b/tests/integration/targets/vmware_vcenter_settings/tasks/main.yml
@@ -6,7 +6,7 @@
name: prepare_vmware_tests
- name: Configure general settings in check mode
- vmware_vcenter_settings:
+ community.vmware.vmware_vcenter_settings:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
@@ -45,7 +45,7 @@
- debug: var=all_settings_results_check_mode
- name: Configure general settings
- vmware_vcenter_settings:
+ community.vmware.vmware_vcenter_settings:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
@@ -83,7 +83,7 @@
- debug: var=all_settings_results
- name: Configure settings with out runtime_settings parameter
- vmware_vcenter_settings:
+ community.vmware.vmware_vcenter_settings:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
@@ -96,7 +96,7 @@
- debug: var=without_runtime_settings_results
- name: Configure settings with check mode and diff mode
- vmware_vcenter_settings:
+ community.vmware.vmware_vcenter_settings:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
@@ -121,7 +121,7 @@
- configure_settings_check_diff_mode_result.diff is defined
- name: Configure settings without check mode and diff mode
- vmware_vcenter_settings:
+ community.vmware.vmware_vcenter_settings:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
@@ -143,7 +143,7 @@
- configure_settings_result.changed is sameas true
- name: Configure settings without check mode and diff mode(idempotency check)
- vmware_vcenter_settings:
+ community.vmware.vmware_vcenter_settings:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
@@ -163,3 +163,53 @@
assert:
that:
- configure_settings_idempotency_check_result.changed is sameas false
+
+- when: vcsim is not defined
+ block:
+ - name: Configure advanced settings
+ community.vmware.vmware_vcenter_settings:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ database:
+ max_connections: 50
+ task_cleanup: true
+ task_retention: 180
+ event_cleanup: true
+ event_retention: 180
+ mail:
+ server: mail.example.local
+ sender: vcenter@{{ inventory_hostname }}
+ advanced_settings:
+ 'config.vcls.clusters.domain-c8.enabled': 'false'
+ register: configure_advanced_settings
+
+ - name: Make sure that advanced settings are configured
+ assert:
+ that:
+ - configure_advanced_settings.changed
+
+ - name: Configure advanced settings again (idempotency)
+ community.vmware.vmware_vcenter_settings:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ database:
+ max_connections: 50
+ task_cleanup: true
+ task_retention: 180
+ event_cleanup: true
+ event_retention: 180
+ mail:
+ server: mail.example.local
+ sender: vcenter@{{ inventory_hostname }}
+ advanced_settings:
+ 'config.vcls.clusters.domain-c8.enabled': 'false'
+ register: configure_advanced_settings_again
+
+ - name: Make sure that advanced settings are not configured (idempotency)
+ assert:
+ that:
+ - not configure_advanced_settings_again.changed
| Add option to disable vCLS for a given cluster
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
With vCenter 7, VMware introduced the vSphere Cluster Services concept that uses "vCLS" VMs. Unfortunately, these spin up automatically, but it is not possible to remove them easily (they spin back up automatically as well), while they count as actual VMs when trying to remove hosts/clusters. This makes it fairly impossible to remove a cluster automatically purely via ansible at the moment.
This [page](https://communities.vmware.com/t5/VMware-vCenter-Discussions/disable-vCLS-need-to-delete-vCenter-advanced-option-name/td-p/2305547) states that setting some entry in a config file and restarting vpxd would clean the VM's up, but it would be way nicer to have an Ansible module that implements management for this for a given cluster.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Could be part of community.vmware.vmware_cluster, but a new community.vmware.vmware_cluster_vcls is also an option.
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The feature should probably be on by default, but a switch to disable it could be usable as follows:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Disable cluster vCLS
community.vmware.vmware_cluster:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: datacenter
cluster_name: cluster
enable_vcls: false
delegate_to: localhost
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| Files identified in the description:
None
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
@lvlie Thanks for reporting this feature.
@lvlie For now, you can use this playbook to enable retreat mode -
```yaml
---
- hosts: localhost
vars_files:
- vcenter_vars.yml
vars:
c1: "Asia-Cluster1"
tasks:
- name: Get Cluster Managed Object ID
vmware_cluster_info:
cluster_name: "{{ c1 }}"
schema: vsphere
properties:
- name
- _moId
register: r
- set_fact:
key: "config.vcls.clusters.{{ r.clusters[c1]['moid'] }}.enabled"
value: "false"
- name: Enable retreat mode on cluster {{ c1 }}
vmware_guest:
datacenter: "{{ dc1 }}"
cluster: "{{ c1 }}"
name: "{{ vcsa }}"
folder: /Asia-Datacenter1/vm/
advanced_settings:
- key: "{{ key }}"
value: "{{ value }}"
state: present
```
Target your VCSA virtual machine. Let me know if it works for you.
Thanks.
needs_info
Oh, cool, didn't know you could do that. Will try it later today or tomorrow
Applied the above to the VM that is the VCSA instance where the cluster is configured. This results in the following extra config parameters on the VCSA VM:

Which seems to be correct. However, the vCLS VM's do not disappear in the vcsa instance. As the page says to restart vpxd I restarted the vcsa VM (that should do it right? :) ) but to no avail.
As per https://kb.vmware.com/s/article/80472, point 7 in 'Using the vSphere Client',
```
vCLS monitoring service will initiate the clean-up of vCLS VMs and user will start noticing the tasks with the VM deletion.
```
Do you get the same behavior when you do it manually?
btw, here is the crude patch (untested) -
```diff
diff --git a/plugins/modules/vmware_vcenter_settings.py b/plugins/modules/vmware_vcenter_settings.py
index e31c43f..af084ce 100644
--- a/plugins/modules/vmware_vcenter_settings.py
+++ b/plugins/modules/vmware_vcenter_settings.py
@@ -783,6 +783,34 @@ class VmwareVcenterSettings(PyVmomi):
)
diff_config['before']['logging_options'] = setting.value
+ for option in self.module.params.get('options'):
+ option_name = option.get('name', '')
+ option_value = option.get('value')
+ if option_name in [setting.key for setting in self.option_manager.setting]:
+ try:
+ option_obj = self.option_manager.QueryView(name=option_name)
+ except vim.fault.InvalidName as invalid_name:
+ self.module.fail_json(
+ msg="Failed to query option(s) as one or more OptionValue objects refers to a "
+ "non-existent option : %s" % to_native(invalid_name.msg)
+ )
+ if option_obj[0].value != option_value:
+ changed = True
+ changed_list.append(option_name)
+ result[option_name] = option_value
+ change_option_list.append(
+ vim.option.OptionValue(key=option_name, value=option_value)
+ )
+ diff_config['before'][option_name] = option_value
+ else:
+ changed = True
+ changed_list.append(option_name)
+ result[option_name] = option_value
+ change_option_list.append(
+ vim.option.OptionValue(key=option_name, value=option_value)
+ )
+ diff_config['before'][option_name] = option_value
+
if changed:
if self.module.check_mode:
changed_suffix = ' would be changed'
@@ -927,6 +955,7 @@ def main():
),
),
logging_options=dict(default='info', choices=['none', 'error', 'warning', 'info', 'verbose', 'trivia']),
+ options=dict(type='list', elements='dict'),
)
module = AnsibleModule(
```
Which will work with the following playbook -
```yaml
---
- hosts: localhost
vars_files:
- vcenter_vars.yml
tasks:
- name: Set values
community.vmware.vmware_vcenter_settings:
options:
- name: 'config.vcls.clusters.domain-c8.enabled'
value: 'false'
- name: Get values
community.vmware.vmware_vcenter_settings_info:
```
[vSphere Cluster Services (vCLS)](http://www.yellow-bricks.com/2020/10/09/vmware-vsphere-clustering-services-vcls-considerations-questions-and-answers/) VMs are managed by vCenter and are an important part of how DRS works. Over time, they will be the backbone for all clustering services. I should say that it's generally a good idea to **not** mess around with them manually. Having an option to do this in `vmware_cluster` sounds to me like an accident waiting to happen. Before we implement this, we should really understand the possible implications. And, quite frankly, at the moment _I_ don't. vCLS VMs are just too new.
> This makes it fairly impossible to remove a cluster automatically purely via ansible at the moment.
I don't think so. Duncan Epping:
> If I need to power off my cluster, what do I do with these VMs?
>
> These VMs are migrated by DRS to the next host until the last host needs to go into maintenance mode and then they are automatically powered off by EAM.
So it looks like you just have to place all the hosts in the cluster in maintenance mode (there is a module for this, `vmware_maintenancemode`) and the vCLS VMs will be powered off automatically.
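A rough sketch of that approach, assuming a hypothetical `cluster_esxi_hosts` list holding the cluster's hosts:
```yaml
- name: Put every host in the cluster into maintenance mode
  community.vmware.vmware_maintenancemode:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    esxi_hostname: "{{ item }}"
    evacuate: true
    timeout: 3600
  loop: "{{ cluster_esxi_hosts }}"
  delegate_to: localhost
```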
Thanks for your answer. I have disabled drs and ha on the specific cluster, which is a nested lab where I would like to frequently redeploy most of the resources, but not always vcsa itself.
When putting the nodes into maintenance mode one by one, the vCLS VMs often hang, making the maintenance mode task hang and eventually fail. The task continues when I power off the vCLS VMs manually when this happens, which is why I ventured into this avenue. I can test further, but it seems like the vCLS VMs get relocated and this starts to fail after 2 or 3 nodes. Could be some other issue, but since I'm tearing everything down I don't want this to fail...
If there is some other way to *force* the maintenance mode of all hosts, or forcefully remove all content of a vdc without properly tearing down services one by one, I'd be happy to hear if you know of other options.
Thanks
@mariolenz I totally agree with your opinion. But since @lvlie is facing issues, I think we should provide the facility to modify these configurations.
My proposed solutions allow changing existing configurations (not just vCLS enabling retreat mode). Let me know if you feel otherwise.
Ah, I did not think about why it started to fail because of the ongoing relocations (I was focused on preventing them), but that is probably because the vCLS VMs are running on a vSAN datastore and I'm using
```yaml
- name: Enter maintenance mode
community.vmware.vmware_maintenancemode:
vsan: noAction
evacuate: false
```
This would make the datastore inaccessible after 2 nodes are down in my setup. However, I cannot remove the datastore or shut down vSAN before starting maintenance mode because of the vCLS VMs... which I think brings me back to having to go into "retreat mode".
> This would make the datastore inaccessible after 2 nodes are down in my setup. However, I cannot remove the datastore or shutdown vsan before starting maintenance mode because of the vCLS vm's... Which I think brings me back to having to go into "retreat mode".
Yes. I can see the problem there. We also had issues with those vCLS VMs in our CI pipeline. I think they were fixed with PR #568 but we don't have VSAN which makes it easier for us.
> My proposed solutions allow changing existing configurations (not just vCLS enabling retreat mode). Let me know if you feel otherwise.
If I understand your solution correctly, you propose a new parameter `options` for `vmware_vcenter_settings` to configure advanced settings on the vCenter itself. Personally I would prefer to call the parameter `advanced_settings`, but apart from that: it's probably a very good idea to have a way to manage them on the vCenter. I wonder why this hasn't been implemented yet.
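For reference, with the parameter named `advanced_settings` (the name it was eventually merged under, see the patch above), enabling retreat mode would look like this:
```yaml
- name: Enable retreat mode for cluster with MOID domain-c8
  community.vmware.vmware_vcenter_settings:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    advanced_settings:
      'config.vcls.clusters.domain-c8.enabled': 'false'
  delegate_to: localhost
```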
However, I'm afk from now on for a couple of days and can't work on it. But feel free to open a PR implementing this yourself.
@lvlie
> However, I'm afk from now on for a couple of days and can't work on it. But feel free to open a PR implementing this yourself.
I'm back :-) Did you work on a PR to implement your solution? If not, I'll do it if that's OK with you.
@mariolenz Not yet. Please go ahead. Thanks for asking.
Ah, if that question was aimed at me: sorry, I'm not that good at Python, unfortunately, and I didn't catch that it was directed at me. I'd be happy to test anything in my env though. I have a week off but can scrape some time together :-)
@Akasurde @lvlie I've assigned this issue to myself and will work on it. Please give me a couple of days. | 2021-05-12T19:58:40 |
ansible-collections/community.vmware | 855 | ansible-collections__community.vmware-855 | [
"854"
] | bbd98e5e182b6b965f52eedc4a112370872d55eb | diff --git a/plugins/inventory/vmware_host_inventory.py b/plugins/inventory/vmware_host_inventory.py
--- a/plugins/inventory/vmware_host_inventory.py
+++ b/plugins/inventory/vmware_host_inventory.py
@@ -35,13 +35,17 @@
- name: VMWARE_HOST
- name: VMWARE_SERVER
username:
- description: Name of vSphere user.
+ description:
+ - Name of vSphere user.
+ - Accepts vault encrypted variable.
required: True
env:
- name: VMWARE_USER
- name: VMWARE_USERNAME
password:
- description: Password of vSphere user.
+ description:
+ - Password of vSphere user.
+ - Accepts vault encrypted variable.
required: True
env:
- name: VMWARE_PASSWORD
@@ -202,14 +206,18 @@ def parse(self, inventory, loader, path, cache=True):
# set _options from config data
self._consume_options(config_data)
+ username = self.get_option("username")
password = self.get_option("password")
+ if isinstance(username, AnsibleVaultEncryptedUnicode):
+ username = username.data
+
if isinstance(password, AnsibleVaultEncryptedUnicode):
password = password.data
self.pyv = BaseVMwareInventory(
hostname=self.get_option("hostname"),
- username=self.get_option("username"),
+ username=username,
password=password,
port=self.get_option("port"),
with_tags=self.get_option("with_tags"),
diff --git a/plugins/inventory/vmware_vm_inventory.py b/plugins/inventory/vmware_vm_inventory.py
--- a/plugins/inventory/vmware_vm_inventory.py
+++ b/plugins/inventory/vmware_vm_inventory.py
@@ -33,13 +33,17 @@
- name: VMWARE_HOST
- name: VMWARE_SERVER
username:
- description: Name of vSphere user.
+ description:
+ - Name of vSphere user.
+ - Accepts vault encrypted variable.
required: True
env:
- name: VMWARE_USER
- name: VMWARE_USERNAME
password:
- description: Password of vSphere user.
+ description:
+ - Password of vSphere user.
+ - Accepts vault encrypted variable.
required: True
env:
- name: VMWARE_PASSWORD
@@ -716,14 +720,18 @@ def parse(self, inventory, loader, path, cache=True):
# set _options from config data
self._consume_options(config_data)
+ username = self.get_option('username')
password = self.get_option('password')
+ if isinstance(username, AnsibleVaultEncryptedUnicode):
+ username = username.data
+
if isinstance(password, AnsibleVaultEncryptedUnicode):
password = password.data
self.pyv = BaseVMwareInventory(
hostname=self.get_option('hostname'),
- username=self.get_option('username'),
+ username=username,
password=password,
port=self.get_option('port'),
with_tags=self.get_option('with_tags'),
| Allow encrypted variables in dynamic inventory configuraiton
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Support ansible vault variables for username and password
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_vm_inventory
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Currently, you cannot use encrypted variables in the inventory file; you receive the following error.
I was attempting to encrypt just the username and password for the inventory, but that did not work. If you encrypt the entire contents of the file, it does work okay.
```
vmwaretest ansible -m ansible.windows.win_whoami _domain_controllers -l dtw
[WARNING]: * Failed to parse /vmwaretest/inventories/production/{{ vcenter }}.vmware.yml with
ansible_collections.community.vmware.plugins.inventory.vmware_vm_inventory plugin: Unknown error while connecting to vCenter or ESXi API at {{ vcenter }}:443: For "userName" expected type str, but got AnsibleVaultEncryptedUnicode
[WARNING]: Unable to parse /vmwaretest/inventories/production/{{ vcenter }}.vmware.yml as an inventory source
[WARNING]: Could not match supplied host pattern, ignoring: dtw
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
-- NA --
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
@usscarter Thanks for reporting this issue. `password` is already supported as a vault-encrypted variable: https://github.com/ansible-collections/community.vmware/blob/bbd98e5e182b6b965f52eedc4a112370872d55eb/plugins/inventory/vmware_vm_inventory.py#L721
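For example, a vaulted password in a `vmware_vm_inventory` source might look like this (hostname, username and vault payload are hypothetical, and the vault string is shortened):
```yaml
# hypothetical inventory source, e.g. vcenter.vmware.yml
plugin: community.vmware.vmware_vm_inventory
hostname: vcenter.example.com
username: inventory-svc@vsphere.local
password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365396662343061393464336163383764373764613633653634306231386433626436623361
  ...
validate_certs: false
with_tags: false
```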
Well, that is good to know at least.
We also prefer to encrypt the username field, since we use a service account, so supporting that as well would be great 👍
ok. On it.
cc @Tomorrow9 @goneri @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify ---> | 2021-05-24T16:53:49 |
|
ansible-collections/community.vmware | 870 | ansible-collections__community.vmware-870 | [
"869",
"869"
] | 33a15f851b0a50da5920713018172f1b2daf1ebb | diff --git a/plugins/modules/vmware_guest_tools_wait.py b/plugins/modules/vmware_guest_tools_wait.py
--- a/plugins/modules/vmware_guest_tools_wait.py
+++ b/plugins/modules/vmware_guest_tools_wait.py
@@ -69,6 +69,12 @@
- Max duration of the waiting period (seconds).
default: 500
type: int
+ datacenter:
+ description:
+ - Name of the datacenter.
+ - The datacenter to search for a virtual machine.
+ type: str
+ version_added: '1.15.0'
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -112,6 +118,7 @@
password: "{{ vcenter_password }}"
name: test-vm
folder: "/{{datacenter}}/vm"
+ datacenter: "{{ datacenter }}"
delegate_to: localhost
register: facts
'''
@@ -165,6 +172,7 @@ def main():
moid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
timeout=dict(type='int', default=500),
+ datacenter=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
| vmware_guest_tools_wait KeyError: 'datacenter'
##### SUMMARY
vmware_guest_tools_wait throwing a KeyError: 'datacenter' because multiple vms with the same name are returned but datacenter is not being passed in from the vmware_guest_tools_wait module.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest_tools_wait
##### ANSIBLE VERSION
```
ansible 2.10.10
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/core/tfp-env/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Dec 5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
```
None
```
##### OS / ENVIRONMENT
```
Red Hat Enterprise Linux release 8.2 (Ootpa)
```
##### STEPS TO REPRODUCE
1. Create multiple vms with the same name in different folders
2. Run community.vmware.vmware_guest_tools_wait module using the vm name with folder like below.
```
- name: Wait for {{ node }}
community.vmware.vmware_guest_tools_wait:
hostname: "{{ esxi.ipaddress }}"
username: "{{ esxi.username }}"
password: "{{ esxi.password }}"
name: "{{ node }}"
folder: "{{ folder }}"
validate_certs: no
timeout: 900
register: vm_facts
```
##### EXPECTED RESULTS
That it doesn't blow up because of the KeyError.
##### ACTUAL RESULTS
```
Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 94, in _ansiballz_main
File "<stdin>", line 40, in invoke_module
File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_community.vmware.vmware_guest_tools_wait_payload_sovyp9io/ansible_community.vmware.vmware_guest_tools_wait_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_tools_wait.py", line 205, in <module>
File "/tmp/ansible_community.vmware.vmware_guest_tools_wait_payload_sovyp9io/ansible_community.vmware.vmware_guest_tools_wait_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_tools_wait.py", line 183, in main
File "/tmp/ansible_community.vmware.vmware_guest_tools_wait_payload_sovyp9io/ansible_community.vmware.vmware_guest_tools_wait_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py", line 1078, in get_vm
KeyError: 'datacenter'
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
```
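With the `datacenter` option added in the patch above, the lookup can be disambiguated; a sketch based on the reproduction task (variable names as in the playbook above, plus an assumed `datacenter` variable):
```yaml
- name: Wait for {{ node }}
  community.vmware.vmware_guest_tools_wait:
    hostname: "{{ esxi.ipaddress }}"
    username: "{{ esxi.username }}"
    password: "{{ esxi.password }}"
    name: "{{ node }}"
    folder: "{{ folder }}"
    datacenter: "{{ datacenter }}"  # new option; narrows the search to one datacenter
    validate_certs: no
    timeout: 900
  register: vm_facts
```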
| Files identified in the description:
None
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
Files identified in the description:
* [`plugins/modules/vmware_guest_tools_wait.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_guest_tools_wait.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @pdellaert @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify ---> | 2021-06-02T00:10:28 |
|
ansible-collections/community.vmware | 892 | ansible-collections__community.vmware-892 | [
"891"
] | 99eef38130f23be53d0d8577c9e8b62c223371f2 | diff --git a/plugins/modules/vmware_vmkernel.py b/plugins/modules/vmware_vmkernel.py
--- a/plugins/modules/vmware_vmkernel.py
+++ b/plugins/modules/vmware_vmkernel.py
@@ -751,12 +751,7 @@ def host_vmk_update(self):
if changed_service_vsan:
if self.vnic.device in service_type_vmks['vsan']:
services_previous.append('VSAN')
- if self.enable_vsan:
- results['vsan'] = self.set_vsan_service_type()
- else:
- self.set_service_type(
- vnic_manager=vnic_manager, vmk=self.vnic, service_type='vsan', operation=operation
- )
+ results['vsan'] = self.set_vsan_service_type(self.enable_vsan)
results['services_previous'] = ', '.join(services_previous)
else:
@@ -792,7 +787,7 @@ def find_dvspg_by_key(self, dv_switch, portgroup_key):
return None
- def set_vsan_service_type(self):
+ def set_vsan_service_type(self, enable_vsan):
"""
Set VSAN service type
Returns: result of UpdateVsan_Task
@@ -801,20 +796,43 @@ def set_vsan_service_type(self):
result = None
vsan_system = self.esxi_host_obj.configManager.vsanSystem
- vsan_port_config = vim.vsan.host.ConfigInfo.NetworkInfo.PortConfig()
- vsan_port_config.device = self.vnic.device
-
+ vsan_system_config = vsan_system.config
vsan_config = vim.vsan.host.ConfigInfo()
- vsan_config.networkInfo = vim.vsan.host.ConfigInfo.NetworkInfo()
- vsan_config.networkInfo.port = [vsan_port_config]
- if not self.module.check_mode:
+
+ vsan_config.networkInfo = vsan_system_config.networkInfo
+ current_vsan_vnics = [portConfig.device for portConfig in vsan_system_config.networkInfo.port]
+ changed = False
+ result = "%s NIC %s (currently enabled NICs: %s) : " % ("Enable" if enable_vsan else "Disable", self.vnic.device, current_vsan_vnics)
+ if not enable_vsan:
+ if self.vnic.device in current_vsan_vnics:
+ vsan_config.networkInfo.port = list(filter(lambda portConfig: portConfig.device != self.vnic.device, vsan_config.networkInfo.port))
+ changed = True
+ else:
+ if self.vnic.device not in current_vsan_vnics:
+ vsan_port_config = vim.vsan.host.ConfigInfo.NetworkInfo.PortConfig()
+ vsan_port_config.device = self.vnic.device
+
+ if vsan_config.networkInfo is None:
+ vsan_config.networkInfo = vim.vsan.host.ConfigInfo.NetworkInfo()
+ vsan_config.networkInfo.port = [vsan_port_config]
+ else:
+ vsan_config.networkInfo.port.append(vsan_port_config)
+ changed = True
+
+ if not self.module.check_mode and changed:
try:
vsan_task = vsan_system.UpdateVsan_Task(vsan_config)
- wait_for_task(vsan_task)
+ task_result = wait_for_task(vsan_task)
+ if task_result[0]:
+ result += "Success"
+ else:
+ result += "Failed"
except TaskError as task_err:
self.module.fail_json(
msg="Failed to set service type to vsan for %s : %s" % (self.vnic.device, to_native(task_err))
)
+ if self.module.check_mode:
+ result += "Dry-run"
return result
def host_vmk_create(self):
@@ -909,7 +927,7 @@ def host_vmk_create(self):
# VSAN
if self.enable_vsan:
- results['vsan'] = self.set_vsan_service_type()
+ results['vsan'] = self.set_vsan_service_type(self.enable_vsan)
# Other service type
host_vnic_manager = self.esxi_host_obj.configManager.virtualNicManager
| vmware_vmkernel has inconsistent behavior with multiple vSAN interfaces
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using vmware_vmkernel to configure several vmk interfaces in a loop, it is extremely hard to keep a consistent configuration across runs:
- It is not possible to define two interfaces (say, vmk0 and vmk3) with `enable_vsan: True`; only the last one defined (vmk3) is reflected
- Not specifying the enable_vsan argument results in `enable_vsan: False` being processed, which means that on the next run, even if vmk0 was manually set up as a vSAN interface, the flag will be removed.
This can be extremely problematic for migrations where one would need to have multiple vSAN interfaces active at the same time.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_vmkernel
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/users/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python/3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.3 (default, Dec 5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/users/ansible/.vault_pass
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux release 8.2 (Ootpa)
Attempted on an ESXi 7.0.2 host
I am in an air-gapped environment and stuck with this specific version of Ansible, unable to use collections.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Configure vmkernel port {{ item.device }}"
vmware_vmkernel:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
esxi_hostname: "{{ esxi_hostname }}"
password: "{{ esxi_password }}"
vswitch_name: "{{ item.vswitch_name }}"
portgroup_name: "{{ item.portgroup_name }}"
network: "{{ item.network }}"
device: "{{ item.device }}"
state: present
enable_mgmt: "{{ item.enable_mgmt|default(omit) }}"
enable_vsan: "{{ item.enable_vsan|default(omit) }}"
delegate_to: localhost
with_items:
- { vswitch_name: "vSwitch0", portgroup_name: "Management Network", network: { type: "static", ip_address: "10.0.0.1", subnet_mask: "255.255.255.0" }, device: "vmk0", enable_mgmt: True, enable_vsan: True }
- { vswitch_name: "vSwitch1", portgroup_name: "vSAN", network: { type: "static", ip_address: "192.168.0.1", subnet_mask: "255.255.255.0" }, device: "vmk3", enable_vsan: True }
tags:
- esxi_vmk_interfaces
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
After running the playbook I would expect the following command on the host to return this result, having both interfaces enabled as vSAN.
```
[root@esxi_host:~] esxcli vsan network list | grep VmkNic
VmkNic Name: vmk0
VmkNic Name: vmk3
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
The module itself executes successfully (no Ansible error), but :
- With no interfaces configured
```
$ ansible-playbook -i development ./provision.yml -v -t esxi_vmk_interfaces -l esxi_host
PLAY [vsan]
TASK [esxi_vmk_intrefaces : Configure vmkernel port vmk0]
changed: [esxi_host -> localhost] => {"changed": true, "device": "vmk0", "ipv4": "static", "ipv4_gw": "No override", "ipv4_ip": "10.0.0.1", "ipv4_sm": "255.255.255.0", "msg": "VMkernel Adapter services updated", "mtu": 1500, "portgroup": "Management Network", "services": "Mgmt, vSAN", "services_previous": "Mgmt", "switch": "vSwitch0", "tcpip_stack": "default", "vsan": null}
TASK [esxi_vmk_intrefaces : Configure vmkernel port vmk3]
changed: [esxi_host -> localhost] => {"changed": false, "device": "vmk3", "ipv4": "static", "ipv4_gw": "No override", "ipv4_ip": "192.168.0.1", "ipv4_sm": "255.255.255.0", "msg": "VMkernel Adapter already configured properly", "mtu": 1500, "portgroup": "vSAN", "services": "VSAN", "switch": "vSwitch1", "tcpip_stack": "default", "vsan": null}
```
- Running again (with vmk0 configured as per above)
```
$ ansible-playbook -i development ./provision.yml -v -t esxi_vmk_interfaces -l esxi_host
PLAY [vsan]
TASK [esxi_vmk_intrefaces : Configure vmkernel port vmk0]
changed: [esxi_host -> localhost] => {"changed": false, "device": "vmk0", "ipv4": "static", "ipv4_gw": "No override", "ipv4_ip": "10.0.0.1", "ipv4_sm": "255.255.255.0", "msg": "VMkernel Adapter already configured properly", "mtu": 1500, "portgroup": "Management Network", "services": "Mgmt,VSAN", "switch": "vSwitch0", "tcpip_stack": "default", "vsan": null}
TASK [esxi_vmk_intrefaces : Configure vmkernel port vmk3]
changed: [esxi_host -> localhost] => {"changed": true, "device": "vmk3", "ipv4": "static", "ipv4_gw": "No override", "ipv4_ip": "192.168.0.1", "ipv4_sm": "255.255.255.0", "msg": "VMkernel Adapter services updated", "mtu": 1500, "portgroup": "vSAN", "services": "VSAN", "services_previous": "", "switch": "vSwitch1", "tcpip_stack": "default", "vsan": null}
```
- Running again (with vmk3 configured as per above)
```
$ ansible-playbook -i development ./provision.yml -v -t esxi_vmk_interfaces -l esxi_host
PLAY [vsan]
TASK [esxi_vmk_intrefaces : Configure vmkernel port vmk0]
changed: [esxi_host -> localhost] => {"changed": true, "device": "vmk0", "ipv4": "static", "ipv4_gw": "No override", "ipv4_ip": "10.0.0.1", "ipv4_sm": "255.255.255.0", "msg": "VMkernel Adapter services updated", "mtu": 1500, "portgroup": "Management Network", "services": "Mgmt, VSAN", "services_previous": "Mgmt", "switch": "vSwitch0", "tcpip_stack": "default", "vsan": null}
TASK [esxi_vmk_intrefaces : Configure vmkernel port vmk3]
changed: [esxi_host -> localhost] => {"changed": false, "device": "vmk3", "ipv4": "static", "ipv4_gw": "No override", "ipv4_ip": "192.168.0.1", "ipv4_sm": "255.255.255.0", "msg": "VMkernel Adapter already configured properly", "mtu": 1500, "portgroup": "vSAN", "services": "VSAN", "switch": "vSwitch1", "tcpip_stack": "default", "vsan": null}
```
- Running again (with vmk0 configured as per above)
```
$ ansible-playbook -i development ./provision.yml -v -t esxi_vmk_interfaces -l esxi_host
PLAY [vsan]
TASK [esxi_vmk_intrefaces : Configure vmkernel port vmk0]
changed: [esxi_host -> localhost] => {"changed": false, "device": "vmk0", "ipv4": "static", "ipv4_gw": "No override", "ipv4_ip": "10.0.0.1", "ipv4_sm": "255.255.255.0", "msg": "VMkernel Adapter already configured properly", "mtu": 1500, "portgroup": "Management Network", "services": "Mgmt, VSAN", "switch": "vSwitch0", "tcpip_stack": "default", "vsan": null}
TASK [esxi_vmk_intrefaces : Configure vmkernel port vmk3]
changed: [esxi_host -> localhost] => {"changed": true, "device": "vmk3", "ipv4": "static", "ipv4_gw": "No override", "ipv4_ip": "192.168.0.1", "ipv4_sm": "255.255.255.0", "msg": "VMkernel Adapter services updated", "mtu": 1500, "portgroup": "vSAN", "services": "VSAN", "services_previous": "", "switch": "vSwitch1", "tcpip_stack": "default", "vsan": null}
```
When confirming the result on the ESXi host itself, it becomes obvious that enabling vSAN on one interface disables vSAN on the others:
```
[root@esxi_host:~] esxcli vsan network list | grep VmkNic
VmkNic Name: vmk3
```
In a live vSAN cluster, this has the result of splitting the cluster since esxi_host's vSAN service is now unable to talk to other hosts. I have to manually recover by running `esxcli vsan network ip add -i vmk0`.
It should also be noted that execution is not consistent across runs of an identical configuration, because the module only checks the "current" status of the service once in the loop before applying it to all interfaces.
I am trying to work around this by setting a default dominant interface, which I expect the module to set up for me, and then running commands over SSH via the shell module to enforce the vSAN flag on the other interfaces I need (see the sketch below). However, this leads to another bug (line 758, where the local variable operation is referenced before assignment because that bit of code never initializes it), making it impossible to manually override the behavior of the module for vSAN services, since it will insist on touching the interface.
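A sketch of that out-of-band workaround, assuming SSH access to an `esxi_host` inventory entry (note it is not idempotent: `esxcli` may error if the interface is already in the vSAN network list, hence the best-effort error handling):
```yaml
- name: Force vmk0 back into the vSAN network list (workaround)
  ansible.builtin.raw: esxcli vsan network ip add -i vmk0
  delegate_to: esxi_host
  failed_when: false  # tolerate "already added" errors
```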
It would be nice if:
- the behavior could be made consistent
- the module could be used merely for creating a vmk interface, letting another task configure the services (as of now, I understand it works in an all-in-one-fell-swoop fashion)
Looking at the code, I assume this is because the `set_vsan_service_type` method crafts a new port configuration while ignoring any previously existing configuration in the vSAN system, which conveniently also "removes" the other interfaces.
The code on line 758 seems to be pasted from the other services, but does not do proper removal: the "operation" variable is never defined, so when execution reaches this line (should it need to remove the flag from an interface that had it), it fails with an "unassigned variable" error. Replacing "operation=operation" with "operation='deselect'" does not work either; it fails with a "Cannot change the host configuration" error, indicating this is not the proper way to proceed.
The proper way to proceed would likely be to make set_vsan_service_type generic, capable of handling both the addition and removal of interfaces.
| Files identified in the description:
* [`plugins/modules/vmware_vmkernel.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_vmkernel.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify ---> | 2021-06-12T04:52:28 |
|
ansible-collections/community.vmware | 903 | ansible-collections__community.vmware-903 | [
"112"
] | 628efca84638eb9c4128962168587823f2deb3cc | diff --git a/plugins/modules/vmware_dvs_host.py b/plugins/modules/vmware_dvs_host.py
--- a/plugins/modules/vmware_dvs_host.py
+++ b/plugins/modules/vmware_dvs_host.py
@@ -43,6 +43,25 @@
required: False
type: list
elements: str
+ lag_uplinks:
+ version_added: '1.12.0'
+ required: False
+ type: list
+ elements: dict
+ description:
+ - The ESXi hosts vmnics to use with specific LAGs.
+ suboptions:
+ lag:
+ description:
+ - Name of the LAG.
+ type: str
+ required: True
+ vmnics:
+ description:
+ - The ESXi hosts vmnics to use with the LAG.
+ required: False
+ type: list
+ elements: str
state:
description:
- If the host should be present or absent attached to the vSwitch.
@@ -85,6 +104,25 @@
state: present
delegate_to: localhost
+- name: Add vmnics to LAGs
+ community.vmware.vmware_dvs_host:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ esxi_hostname: '{{ esxi_hostname }}'
+ switch_name: dvSwitch
+ lag_uplinks:
+ - lag: lag1
+ vmnics:
+ - vmnic0
+ - vmnic1
+ - lag: lag2
+ vmnics:
+ - vmnic2
+ - vmnic3
+ state: present
+ delegate_to: localhost
+
- name: Add Host to dVS/enable learnswitch (https://labs.vmware.com/flings/learnswitch)
community.vmware.vmware_dvs_host:
hostname: '{{ vcenter_hostname }}'
@@ -102,12 +140,6 @@
delegate_to: localhost
'''
-try:
- from collections import Counter
- HAS_COLLECTIONS_COUNTER = True
-except ImportError:
- HAS_COLLECTIONS_COUNTER = False
-
try:
from pyVmomi import vim, vmodl
except ImportError:
@@ -126,17 +158,32 @@
class VMwareDvsHost(PyVmomi):
def __init__(self, module):
super(VMwareDvsHost, self).__init__(module)
- self.dv_switch = None
self.uplink_portgroup = None
self.host = None
self.dv_switch = None
- self.nic = None
+ self.desired_state = {}
+
self.state = self.module.params['state']
self.switch_name = self.module.params['switch_name']
self.esxi_hostname = self.module.params['esxi_hostname']
self.vmnics = self.module.params['vmnics']
+ self.lag_uplinks = self.module.params['lag_uplinks']
self.vendor_specific_config = self.module.params['vendor_specific_config']
+ self.dv_switch = find_dvs_by_name(self.content, self.switch_name)
+
+ if self.dv_switch is None:
+ self.module.fail_json(msg="A distributed virtual switch %s "
+ "does not exist" % self.switch_name)
+
+ self.lags = {}
+ for lag in self.dv_switch.config.lacpGroupConfig:
+ self.lags[lag.name] = lag
+
+ for lag_uplink in self.lag_uplinks:
+ if lag_uplink['lag'] not in self.lags:
+ self.module.fail_json(msg="LAG %s not found" % lag_uplink['lag'])
+
def process_state(self):
dvs_host_states = {
'absent': {
@@ -179,15 +226,15 @@ def modify_dvs_host(self, operation):
config.append(vim.dvs.KeyedOpaqueBlob(key=item['key'], opaqueData=item['value']))
spec.host[0].vendorSpecificConfig = config
- if operation in ("edit", "add"):
+ if operation == "edit":
spec.host[0].backing = vim.dvs.HostMember.PnicBacking()
- count = 0
- for nic in self.vmnics:
- spec.host[0].backing.pnicSpec.append(vim.dvs.HostMember.PnicSpec())
- spec.host[0].backing.pnicSpec[count].pnicDevice = nic
- spec.host[0].backing.pnicSpec[count].uplinkPortgroupKey = self.uplink_portgroup.key
- count += 1
+ for nic, uplinkPortKey in self.desired_state.items():
+ pnicSpec = vim.dvs.HostMember.PnicSpec()
+ pnicSpec.pnicDevice = nic
+ pnicSpec.uplinkPortgroupKey = self.uplink_portgroup.key
+ pnicSpec.uplinkPortKey = uplinkPortKey
+ spec.host[0].backing.pnicSpec.append(pnicSpec)
try:
task = self.dv_switch.ReconfigureDvs_Task(spec)
@@ -220,6 +267,12 @@ def state_create_dvs_host(self):
if not self.module.check_mode:
changed, result = self.modify_dvs_host(operation)
+ if changed:
+ self.set_desired_state()
+ changed, result = self.modify_dvs_host("edit")
+ else:
+ self.module.exit_json(changed=changed, result=to_native(result))
+
self.module.exit_json(changed=changed, result=to_native(result))
def find_host_attached_dvs(self):
@@ -229,23 +282,62 @@ def find_host_attached_dvs(self):
return None
+ def set_desired_state(self):
+ lag_uplinks = []
+ switch_uplink_ports = {'non_lag': []}
+
+ portCriteria = vim.dvs.PortCriteria()
+ portCriteria.host = [self.host]
+ portCriteria.portgroupKey = self.uplink_portgroup.key
+ portCriteria.uplinkPort = True
+ ports = self.dv_switch.FetchDVPorts(portCriteria)
+
+ for name, lag in self.lags.items():
+ switch_uplink_ports[name] = []
+ for uplinkName in lag.uplinkName:
+ for port in ports:
+ if port.config.name == uplinkName:
+ switch_uplink_ports[name].append(port.key)
+ lag_uplinks.append(port.key)
+
+ for port in ports:
+ if port.key in self.uplink_portgroup.portKeys and port.key not in lag_uplinks:
+ switch_uplink_ports['non_lag'].append(port.key)
+
+ count = 0
+ for vmnic in self.vmnics:
+ self.desired_state[vmnic] = switch_uplink_ports['non_lag'][count]
+ count += 1
+
+ for lag in self.lag_uplinks:
+ count = 0
+ for vmnic in lag['vmnics']:
+ self.desired_state[vmnic] = switch_uplink_ports[lag['lag']][count]
+ count += 1
+
def check_uplinks(self):
pnic_device = []
+ self.set_desired_state()
+
for dvs_host_member in self.dv_switch.config.host:
- if dvs_host_member.config.host == self.host:
- for pnicSpec in dvs_host_member.config.backing.pnicSpec:
- pnic_device.append(pnicSpec.pnicDevice)
+ if dvs_host_member.config.host.name == self.esxi_hostname:
+ break
- return Counter(pnic_device) == Counter(self.vmnics)
+ for pnicSpec in dvs_host_member.config.backing.pnicSpec:
+ pnic_device.append(pnicSpec.pnicDevice)
+ if pnicSpec.pnicDevice not in self.desired_state:
+ return False
+ if pnicSpec.uplinkPortKey != self.desired_state[pnicSpec.pnicDevice]:
+ return False
- def check_dvs_host_state(self):
- self.dv_switch = find_dvs_by_name(self.content, self.switch_name)
+ for vmnic in self.desired_state:
+ if vmnic not in pnic_device:
+ return False
- if self.dv_switch is None:
- self.module.fail_json(msg="A distributed virtual switch %s "
- "does not exist" % self.switch_name)
+ return True
+ def check_dvs_host_state(self):
self.uplink_portgroup = self.find_dvs_uplink_pg()
if self.uplink_portgroup is None:
@@ -262,6 +354,9 @@ def check_dvs_host_state(self):
self.module.fail_json(msg="The esxi_hostname %s does not exist "
"in vCenter" % self.esxi_hostname)
return 'absent'
+ # Skip checking uplinks if the host should be absent, anyway
+ elif self.state == 'absent':
+ return 'present'
else:
if self.check_uplinks():
return 'present'
@@ -286,15 +381,30 @@ def main():
value=dict(type='str', required=True),
),
),
+ lag_uplinks=dict(
+ type='list',
+ default=[],
+ required=False,
+ elements='dict',
+ options=dict(
+ lag=dict(
+ type='str',
+ required=True,
+ ),
+ vmnics=dict(
+ type='list',
+ required=False,
+ elements='str',
+ default=[],
+ ),
+ ),
+ ),
)
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
- if not HAS_COLLECTIONS_COUNTER:
- module.fail_json(msg='collections.Counter from Python-2.7 is required for this module')
-
vmware_dvs_host = VMwareDvsHost(module)
vmware_dvs_host.process_state()
| diff --git a/tests/integration/targets/vmware_dvs_host/tasks/main.yml b/tests/integration/targets/vmware_dvs_host/tasks/main.yml
--- a/tests/integration/targets/vmware_dvs_host/tasks/main.yml
+++ b/tests/integration/targets/vmware_dvs_host/tasks/main.yml
@@ -3,8 +3,8 @@
name: prepare_vmware_tests
vars:
setup_attach_host: true
-- name: 'Attach the hosts to the DVSwitch without specifying any vnics (Issue #185)'
- when: vcsim is not defined
+
+- when: vcsim is not defined
block:
- name: Create the DVSwitch
vmware_dvswitch:
@@ -16,7 +16,17 @@
discovery_operation: both
state: present
- - name: 'Attach the hosts to the DVSwitch without specifying any vnics'
+ - name: Create LAG
+ vmware_dvswitch_lacp:
+ switch: 'dvs_host_test_switch'
+ support_mode: enhanced
+ link_aggregation_groups:
+ - name: lag1
+ mode: active
+ uplink_number: 2
+ load_balancing_mode: srcDestIpTcpUdpPortVlan
+
+ - name: 'Attach host to the DVSwitch without specifying any vmnics (Issue #185)'
vmware_dvs_host:
esxi_hostname: '{{ esxi1 }}'
switch_name: 'dvs_host_test_switch'
@@ -26,9 +36,71 @@
that:
- host_dvs_attachment is changed
+ - name: 'Attach hosts to the DVSwitch without specifying any vmnics again (idempotency)'
+ vmware_dvs_host:
+ esxi_hostname: '{{ esxi1 }}'
+ switch_name: 'dvs_host_test_switch'
+ state: present
+ register: host_dvs_attachment_again
+ - assert:
+ that:
+ - host_dvs_attachment_again is not changed
+
+ - name: 'Add vmnic to uplink'
+ vmware_dvs_host:
+ esxi_hostname: '{{ esxi1 }}'
+ switch_name: 'dvs_host_test_switch'
+ vmnics:
+ - vmnic1
+ state: present
+ register: host_dvs_uplink
+ - assert:
+ that:
+ - host_dvs_uplink is changed
+
+ - name: 'Add vmnic to uplink again (idempotency)'
+ vmware_dvs_host:
+ esxi_hostname: '{{ esxi1 }}'
+ switch_name: 'dvs_host_test_switch'
+ vmnics:
+ - vmnic1
+ state: present
+ register: host_dvs_uplink_again
+ - assert:
+ that:
+ - host_dvs_uplink_again is not changed
+
+ - name: 'Add vmnic to LAG uplink'
+ vmware_dvs_host:
+ esxi_hostname: '{{ esxi1 }}'
+ switch_name: 'dvs_host_test_switch'
+ lag_uplinks:
+ - lag: lag1
+ vmnics:
+ - vmnic1
+ state: present
+ register: host_dvs_lag_uplink
+ - assert:
+ that:
+ - host_dvs_lag_uplink is changed
+
+ - name: 'Add vmnic to LAG uplink again (idempotency)'
+ vmware_dvs_host:
+ esxi_hostname: '{{ esxi1 }}'
+ switch_name: 'dvs_host_test_switch'
+ lag_uplinks:
+ - lag: lag1
+ vmnics:
+ - vmnic1
+ state: present
+ register: host_dvs_lag_uplink_again
+ - assert:
+ that:
+ - host_dvs_lag_uplink_again is not changed
+
# Cleanup
always:
- - name: 'Cleanup: detaching hosts from dvs'
+ - name: 'Cleanup: detaching host from dvs'
vmware_dvs_host:
esxi_hostname: "{{ esxi1 }}"
switch_name: 'dvs_host_test_switch'
| vmware_dvswitch_lacp, map vmnic to lag
_From @Aglidic on Apr 09, 2020 20:26_
SUMMARY
I would really like to be able to map a vmnic to a LAG member during creation of the dvswitch.
ISSUE TYPE
Feature Idea
COMPONENT NAME
vmware_dvswitch_lacp
ADDITIONAL INFORMATION
If we enable enhanced vMotion and create a LAG, we have no possibility to map our vmnic to this LAG, so we need to do it manually. It would be helpful to add an option to the module for that.
_Copied from original issue: ansible/ansible#68824_
| Files identified in the description:
* [`plugins/modules/vmware_dvswitch_lacp.py`](https://github.com/ansible-collections/vmware/blob/main/plugins/modules/vmware_dvswitch_lacp.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
cc @Tomorrow9 @goneri @lparkes @pgbidkar @warthog9
I would like to have this feature as well.
But shouldn't it be handled by vmware_dvs_host? Just add an uplink to the vmnic (lag-0, lag-1) and, if nothing is defined, just use the default (Uplink 1, Uplink 2).
@Aglidic I'm not sure I understand this issue. Could you please explain what exactly you have to do manually?
@noesberger
> But shouldn't it be handled by vmware_dvs_host?

I'm not sure since I don't understand the problem here. But I agree with you: assigning the physical NICs of an ESXi host to LAG uplinks looks like a job for `vmware_dvs_host` to me. For example, you might want (or have) to use vmnic0 and vmnic2 for lag-0 and lag-1 on one host, but vmnic1 and vmnic3 on another. So that's clearly something that's host specific and can't be done on the LAG level.
> Just add an uplink to the vmnic (lag-0, lag-1) and, if nothing is defined, just use the default (Uplink 1, Uplink 2)

I'm not sure if a lot of people are doing this, but as far as I know you can have more than just one LAG on a virtual switch. If I understand your proposal correctly, we couldn't support this. I haven't put much thought into this, but what do you think about something like this:
```
vmnic:
- vmnic0
- vmnic1
lag_uplinks:
- lag: foo
vmnic:
- vmnic2
- vmnic3
- lag: bar
vmnic:
- vmnic4
- vmnic5
```
What do you think? I'm not really sure myself, I've just thought that up on the spur of the moment.
Hi, I am also interested in this. It is exactly as @mariolenz explained. When using the 'community.vmware.vmware_dvs_host' module to add host vmnics to a DVSwitch, they are added as plain uplink ports. There is no possibility to add them to an already created LAG uplink port. This would be perfect:
```
vmnic:
- vmnic0
- vmnic1
lag_uplinks:
- lag: foo
vmnic:
- vmnic2
- vmnic3
- lag: bar
vmnic:
- vmnic4
- vmnic5
```
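For reference, mapping a vmnic onto a LAG uplink port through pyVmomi boils down to roughly the following. This is only a minimal sketch: the function name, the port-selection logic and all identifiers (`dvs`, `host`, the LAG/vmnic names) are assumptions for illustration, not the module's final implementation.
```python
from pyVmomi import vim

def attach_vmnic_to_lag(dvs, host, lag_name, vmnic):
    # Uplink ports belonging to a LAG typically show up as '<lag>-0', '<lag>-1', ...
    criteria = vim.dvs.PortCriteria(uplinkPort=True)
    lag_port_keys = [p.key for p in dvs.FetchDVPorts(criteria)
                     if p.config.name.startswith(lag_name)]

    pnic_spec = vim.dvs.HostMember.PnicSpec(
        pnicDevice=vmnic,
        uplinkPortKey=lag_port_keys[0],  # naive: a real implementation must pick a free port
    )
    host_member = vim.dvs.HostMember.ConfigSpec(
        operation='edit',
        host=host,
        backing=vim.dvs.HostMember.PnicBacking(pnicSpec=[pnic_spec]),
    )
    dvs_spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion,
        host=[host_member],
    )
    return dvs.ReconfigureDvs_Task(dvs_spec)
```
Handling idempotency on top of this (which vmnics are already bound to which LAG ports) is where most of the complexity lives.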
Sorry @miguelgmedina but this turned out to be trickier to implement than I originally expected. I think I'm making some progress, but only slowly. | 2021-06-18T11:29:34 |
ansible-collections/community.vmware | 960 | ansible-collections__community.vmware-960 | [
"853"
] | a4909c96ffda20161fb7aac35fb064ce6b01f765 | diff --git a/plugins/module_utils/vm_device_helper.py b/plugins/module_utils/vm_device_helper.py
--- a/plugins/module_utils/vm_device_helper.py
+++ b/plugins/module_utils/vm_device_helper.py
@@ -313,3 +313,60 @@ def integer_value(self, input_value, name):
else:
self.module.fail_json(msg='"%s" attribute should be an'
' integer value.' % name)
+
+ def create_nvdimm_controller(self):
+ nvdimm_ctl = vim.vm.device.VirtualDeviceSpec()
+ nvdimm_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
+ nvdimm_ctl.device = vim.vm.device.VirtualNVDIMMController()
+ nvdimm_ctl.device.deviceInfo = vim.Description()
+ nvdimm_ctl.device.key = -randint(27000, 27999)
+
+ return nvdimm_ctl
+
+ @staticmethod
+ def is_nvdimm_controller(device):
+ return isinstance(device, vim.vm.device.VirtualNVDIMMController)
+
+ def create_nvdimm_device(self, nvdimm_ctl_dev_key, pmem_profile_id, nvdimm_dev_size_mb=1024):
+ nvdimm_dev_spec = vim.vm.device.VirtualDeviceSpec()
+ nvdimm_dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
+ nvdimm_dev_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
+ nvdimm_dev_spec.device = vim.vm.device.VirtualNVDIMM()
+ nvdimm_dev_spec.device.controllerKey = nvdimm_ctl_dev_key
+ nvdimm_dev_spec.device.key = -randint(28000, 28999)
+ nvdimm_dev_spec.device.capacityInMB = nvdimm_dev_size_mb
+ nvdimm_dev_spec.device.deviceInfo = vim.Description()
+ nvdimm_dev_spec.device.backing = vim.vm.device.VirtualNVDIMM.BackingInfo()
+ profile = vim.vm.DefinedProfileSpec()
+ profile.profileId = pmem_profile_id
+ nvdimm_dev_spec.profile = [profile]
+
+ return nvdimm_dev_spec
+
+ @staticmethod
+ def is_nvdimm_device(device):
+ return isinstance(device, vim.vm.device.VirtualNVDIMM)
+
+ def find_nvdimm_by_label(self, nvdimm_label, nvdimm_devices):
+ nvdimm_dev = None
+ for nvdimm in nvdimm_devices:
+ if nvdimm.deviceInfo.label == nvdimm_label:
+ nvdimm_dev = nvdimm
+
+ return nvdimm_dev
+
+ def remove_nvdimm(self, nvdimm_device):
+ nvdimm_spec = vim.vm.device.VirtualDeviceSpec()
+ nvdimm_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
+ nvdimm_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.destroy
+ nvdimm_spec.device = nvdimm_device
+
+ return nvdimm_spec
+
+ def update_nvdimm_config(self, nvdimm_device, nvdimm_size):
+ nvdimm_spec = vim.vm.device.VirtualDeviceSpec()
+ nvdimm_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
+ nvdimm_spec.device = nvdimm_device
+ nvdimm_device.capacityInMB = nvdimm_size
+
+ return nvdimm_spec
diff --git a/plugins/module_utils/vmware_spbm.py b/plugins/module_utils/vmware_spbm.py
--- a/plugins/module_utils/vmware_spbm.py
+++ b/plugins/module_utils/vmware_spbm.py
@@ -41,3 +41,19 @@ def get_spbm_connection(self):
self.spbm_si = pbm.ServiceInstance("ServiceInstance", stub)
self.spbm_content = self.spbm_si.PbmRetrieveServiceContent()
+
+ def find_storage_profile_by_name(self, profile_name):
+ storage_profile = None
+ self.get_spbm_connection()
+ pm = self.spbm_content.profileManager
+ profile_ids = pm.PbmQueryProfile(resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
+ profileCategory="REQUIREMENT")
+ if len(profile_ids) > 0:
+ storage_profiles = pm.PbmRetrieveContent(profileIds=profile_ids)
+ for profile in storage_profiles:
+ if profile.name == profile_name:
+ storage_profile = profile
+ else:
+ self.module.warn("Unable to get storage profile IDs with STORAGE resource type and REQUIREMENT profile category.")
+
+ return storage_profile
diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -314,6 +314,33 @@
- C(controller_type), C(controller_number) and C(unit_number) are required when creating or reconfiguring VMs
with multiple types of disk controllers and disks.
- When creating new VM, the first configured disk in the C(disk) list will be "Hard Disk 1".
+ nvdimm:
+ description:
+ - Add or remove a virtual NVDIMM device to the virtual machine.
+ - VM virtual hardware version must be 14 or higher on vSphere 6.7 or later.
+ - Verify that guest OS of the virtual machine supports PMem before adding virtual NVDIMM device.
+ - Verify that you have the I(Datastore.Allocate) space privilege on the virtual machine.
+ - Make sure that the host or the cluster on which the virtual machine resides has available PMem resources.
+ - To add or remove virtual NVDIMM device to the existing virtual machine, it must be in power off state.
+ type: dict
+ version_added: '1.13.0'
+ suboptions:
+ state:
+ type: str
+ description:
+ - Valid value is C(present) or C(absent).
+ - If set to C(absent), then the NVDIMM device with specified C(label) will be removed.
+ choices: ['present', 'absent']
+ size_mb:
+ type: int
+ description: Virtual NVDIMM device size in MB.
+ default: 1024
+ label:
+ type: str
+ description:
+ - The label of the virtual NVDIMM device to be removed or configured, e.g., "NVDIMM 1".
+ - 'This parameter is required when C(state) is set to C(absent), or C(present) to reconfigure NVDIMM device
+ size. When add a new device, please do not set C(label).'
cdrom:
description:
- A list of CD-ROM configurations for the virtual machine. Added in version 2.9.
@@ -967,6 +994,29 @@
device_type: vmxnet3
delegate_to: localhost
register: deploy_vm
+
+- name: Create a VM with NVDIMM device
+ community.vmware.vmware_guest:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ folder: /DC1/vm/
+ name: test_vm_nvdimm
+ state: poweredoff
+ guest_id: centos7_64Guest
+ datastore: datastore1
+ hardware:
+ memory_mb: 512
+ num_cpus: 4
+ version: 14
+ networks:
+ - name: VM Network
+ device_type: vmxnet3
+ nvdimm:
+ state: present
+ size_mb: 2048
+ delegate_to: localhost
+ register: deploy_vm
'''
RETURN = r'''
@@ -1006,6 +1056,7 @@
quote_obj_name,
)
from ansible_collections.community.vmware.plugins.module_utils.vm_device_helper import PyVmomiDeviceHelper
+from ansible_collections.community.vmware.plugins.module_utils.vmware_spbm import SPBM
class PyVmomiCache(object):
@@ -1571,6 +1622,84 @@ def get_vm_ide_devices(self, vm=None):
def get_vm_sata_devices(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualAHCIController)
+ def get_vm_nvdimm_ctl_device(self, vm=None):
+ return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualNVDIMMController)
+
+ def get_vm_nvdimm_devices(self, vm=None):
+ return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualNVDIMM)
+
+ def configure_nvdimm(self, vm_obj):
+ """
+ Manage virtual NVDIMM device to the virtual machine
+ Args:
+ vm_obj: virtual machine object
+ """
+ if self.params['nvdimm']['state']:
+ # Label is required when remove device
+ if self.params['nvdimm']['state'] == 'absent' and not self.params['nvdimm']['label']:
+ self.module.fail_json(msg="Please specify the label of virtual NVDIMM device using 'label' parameter"
+ " when state is set to 'absent'.")
+ # Reconfigure device requires VM in power off state
+ if vm_obj and not vm_obj.config.template:
+ if vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
+ self.module.fail_json(msg="VM is not in power off state, can not do virtual NVDIMM configuration.")
+
+ nvdimm_ctl_exists = False
+ if vm_obj and not vm_obj.config.template:
+ # Get existing NVDIMM controller
+ nvdimm_ctl = self.get_vm_nvdimm_ctl_device(vm=vm_obj)
+ if len(nvdimm_ctl) != 0:
+ nvdimm_ctl_exists = True
+ nvdimm_ctl_key = nvdimm_ctl[0].key
+ if self.params['nvdimm']['label'] is not None:
+ nvdimm_devices = self.get_vm_nvdimm_devices(vm=vm_obj)
+ if len(nvdimm_devices) != 0:
+ existing_nvdimm_dev = self.device_helper.find_nvdimm_by_label(
+ nvdimm_label=self.params['nvdimm']['label'],
+ nvdimm_devices=nvdimm_devices
+ )
+ if existing_nvdimm_dev is not None:
+ if self.params['nvdimm']['state'] == 'absent':
+ nvdimm_remove_spec = self.device_helper.remove_nvdimm(
+ nvdimm_device=existing_nvdimm_dev
+ )
+ self.change_detected = True
+ self.configspec.deviceChange.append(nvdimm_remove_spec)
+ else:
+ if existing_nvdimm_dev.capacityInMB < self.params['nvdimm']['size_mb']:
+ nvdimm_config_spec = self.device_helper.update_nvdimm_config(
+ nvdimm_device=existing_nvdimm_dev,
+ nvdimm_size=self.params['nvdimm']['size_mb']
+ )
+ self.change_detected = True
+ self.configspec.deviceChange.append(nvdimm_config_spec)
+ elif existing_nvdimm_dev.capacityInMB > self.params['nvdimm']['size_mb']:
+ self.module.fail_json(msg="Can not change NVDIMM device size to %s MB, which is"
+ " smaller than the current size %s MB."
+ % (self.params['nvdimm']['size_mb'],
+ existing_nvdimm_dev.capacityInMB))
+ # New VM or existing VM without label specified, add new NVDIMM device
+ if vm_obj is None or (vm_obj and not vm_obj.config.template and self.params['nvdimm']['label'] is None):
+ if self.params['nvdimm']['state'] == 'present':
+ # Get host default PMem storage policy
+ storage_profile_name = "Host-local PMem Default Storage Policy"
+ spbm = SPBM(self.module)
+ pmem_profile = spbm.find_storage_profile_by_name(profile_name=storage_profile_name)
+ if pmem_profile is None:
+ self.module.fail_json(msg="Can not find PMem storage policy with name '%s'." % storage_profile_name)
+ if not nvdimm_ctl_exists:
+ nvdimm_ctl_spec = self.device_helper.create_nvdimm_controller()
+ self.configspec.deviceChange.append(nvdimm_ctl_spec)
+ nvdimm_ctl_key = nvdimm_ctl_spec.device.key
+
+ nvdimm_dev_spec = self.device_helper.create_nvdimm_device(
+ nvdimm_ctl_dev_key=nvdimm_ctl_key,
+ pmem_profile_id=pmem_profile.profileId.uniqueId,
+ nvdimm_dev_size_mb=self.params['nvdimm']['size_mb']
+ )
+ self.change_detected = True
+ self.configspec.deviceChange.append(nvdimm_dev_spec)
+
def get_vm_network_interfaces(self, vm=None):
device_list = []
if vm is None:
@@ -2741,6 +2870,7 @@ def deploy_vm(self):
self.configure_disks(vm_obj=vm_obj)
self.configure_network(vm_obj=vm_obj)
self.configure_cdrom(vm_obj=vm_obj)
+ self.configure_nvdimm(vm_obj=vm_obj)
# Find if we need network customizations (find keys in dictionary that requires customizations)
network_changes = False
@@ -2917,6 +3047,7 @@ def reconfigure_vm(self):
self.configure_disks(vm_obj=self.current_vm_obj)
self.configure_network(vm_obj=self.current_vm_obj)
self.configure_cdrom(vm_obj=self.current_vm_obj)
+ self.configure_nvdimm(vm_obj=self.current_vm_obj)
self.customize_advanced_settings(vm_obj=self.current_vm_obj, config_spec=self.configspec)
self.customize_customvalues(vm_obj=self.current_vm_obj)
self.configure_resource_alloc_info(vm_obj=self.current_vm_obj)
@@ -3145,6 +3276,15 @@ def main():
unit_number=dict(type='int'),
)
),
+ nvdimm=dict(
+ type='dict',
+ default={},
+ options=dict(
+ state=dict(type='str', choices=['present', 'absent']),
+ label=dict(type='str'),
+ size_mb=dict(type='int', default=1024),
+ )
+ ),
cdrom=dict(type='raw', default=[]),
hardware=dict(
type='dict',
@@ -3222,9 +3362,7 @@ def main():
['name', 'uuid'],
],
)
-
result = {'failed': False, 'changed': False}
-
pyv = PyVmomiHelper(module)
# Check requirements for virtualization based security
| [New Feature] Add support for virtual NVDIMM device of VM
##### SUMMARY
PMem was introduced in ESXi 6.7; refer to this page for more info: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-EB72D358-9C2C-4FBD-81A9-A145E155CE31.html
So this is a request to add support for this feature in the community.vmware collection: adding a new vPMem disk and adding a new NVDIMM device to a VM.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_guest_disk?
| Files identified in the description:
* [`plugins/modules/vmware_guest_disk.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_guest_disk.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
cc @Akasurde @goneri @lparkes @pgbidkar @warthog9
Now that #909 and #914 are merged, I'd like to work on making vmware_guest_disk use module_utils.vm_device_helper to create disks. Then we could implement vPMem disk / NVDIMM devices there, thus making it available to both vmware_guest_disk and vmware_guest.
Btw: Do you want to implement this because you have a real use case for this, or just because it's possible? When I hear [NVDIMM](https://en.wikipedia.org/wiki/NVDIMM), I think [3D XPoint](https://en.wikipedia.org/wiki/3D_XPoint). And this looks pretty dead to me. Micron doesn't work on this any more, and personally I think Intel keeps producing Optane because they just don't want to officially acknowledge they've invested a lot of money into a product nobody wants.
I don't have data about how many customers are using vPMem disks or NVDIMM devices on vSphere in the real world, and there are some limitations for VMs with NVDIMM devices or vPMem disks, which might be one of the reasons users don't use them.
This request comes from a project where we started using Ansible to automate GOS certification testing [here](https://github.com/vmware/ansible-vsphere-gos-validation). Some functionalities or VM virtual device types on vSphere are not implemented in this collection, and we need them in our test suite.
@Tomorrow9 I'd like to move some more code from `vmware_guest_disk` to `vm_device_helper` first.
However, what would we need to implement for vPMem disks / NVDIMM devices? I guess we need a function in `vm_device_helper` to create a [VirtualNVDIMMController](https://vdc-download.vmware.com/vmwb-repository/dcr-public/8946c1b6-2861-4c12-a45f-f14ae0d3b1b9/a5b8094c-c222-4307-9399-3b606a04af55/vim.vm.device.VirtualNVDIMMController.html), right? And for the PMem disks / NVDIMM devices themselves we would use [VirtualDiskLocalPMemBackingInfo](https://vdc-download.vmware.com/vmwb-repository/dcr-public/8946c1b6-2861-4c12-a45f-f14ae0d3b1b9/a5b8094c-c222-4307-9399-3b606a04af55/vim.vm.device.VirtualDisk.LocalPMemBackingInfo.html) / [VirtualNVDIMMBackingInfo](https://vdc-download.vmware.com/vmwb-repository/dcr-public/8946c1b6-2861-4c12-a45f-f14ae0d3b1b9/a5b8094c-c222-4307-9399-3b606a04af55/vim.vm.device.VirtualNVDIMM.BackingInfo.html) as `backing`?
> However, what would we need to implement for vPMem disks / NVDIMM devices? I guess we need a function in `vm_device_helper` to create a [VirtualNVDIMMController](https://vdc-download.vmware.com/vmwb-repository/dcr-public/8946c1b6-2861-4c12-a45f-f14ae0d3b1b9/a5b8094c-c222-4307-9399-3b606a04af55/vim.vm.device.VirtualNVDIMMController.html), right? And for the PMem disks / NVDIMM devices themselves we would use [VirtualDiskLocalPMemBackingInfo](https://vdc-download.vmware.com/vmwb-repository/dcr-public/8946c1b6-2861-4c12-a45f-f14ae0d3b1b9/a5b8094c-c222-4307-9399-3b606a04af55/vim.vm.device.VirtualDisk.LocalPMemBackingInfo.html) / [VirtualNVDIMMBackingInfo](https://vdc-download.vmware.com/vmwb-repository/dcr-public/8946c1b6-2861-4c12-a45f-f14ae0d3b1b9/a5b8094c-c222-4307-9399-3b606a04af55/vim.vm.device.VirtualNVDIMM.BackingInfo.html) as `backing`?
Yeah, you are right: we need to add support for creating the two new device types, "VirtualNVDIMMController" and "VirtualNVDIMM", and a new "VirtualDisk" with "VirtualDiskLocalPMemBackingInfo" as its backing.
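For reference, the device spec that the merged helper builds looks roughly like this (simplified from the patch above; a vPMem *disk*, by contrast, would be a `VirtualDisk` whose backing is `VirtualDiskLocalPMemBackingInfo`):
```python
from random import randint
from pyVmomi import vim

def nvdimm_device_spec(controller_key, pmem_profile_id, size_mb=1024):
    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = vim.vm.device.VirtualNVDIMM()
    spec.device.controllerKey = controller_key   # key of the VirtualNVDIMMController
    spec.device.key = -randint(28000, 28999)     # temporary negative key
    spec.device.capacityInMB = size_mb
    spec.device.deviceInfo = vim.Description()
    spec.device.backing = vim.vm.device.VirtualNVDIMM.BackingInfo()
    profile = vim.vm.DefinedProfileSpec()
    profile.profileId = pmem_profile_id          # host-local PMem default policy, via SPBM
    spec.profile = [profile]
    return spec
```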
@Tomorrow9
I'm sorry, but I don't understand how to implement vPMem disks / NVDIMM devices. If you do, feel free to give it a try. But please wait until #926 is merged so we don't run into any merge conflicts.
Btw: It would be great if you implemented the things that could be useful for several modules (like `vmware_guest` and `vmware_guest_disk`) in `module_utils/vm_device_helper.py`.
@mariolenz sure, I'm working on adding/removing a new NVDIMM device in the `vmware_guest` module first. After that, I will consider adding vPMem disk support in the `vmware_guest` and `vmware_guest_disk` modules. Common virtual device operations will be added in `module_utils/vm_device_helper.py`. Thanks. | 2021-07-14T06:21:28 |
|
ansible-collections/community.vmware | 977 | ansible-collections__community.vmware-977 | [
"955"
] | 634ec3b9862e07cf7b390a6c1680d15348af0c33 | diff --git a/plugins/modules/vmware_portgroup.py b/plugins/modules/vmware_portgroup.py
--- a/plugins/modules/vmware_portgroup.py
+++ b/plugins/modules/vmware_portgroup.py
@@ -296,12 +296,13 @@ def __init__(self, module):
self.sec_mac_changes = None
if self.params['traffic_shaping']:
self.ts_enabled = self.params['traffic_shaping'].get('enabled')
- for value in ['average_bandwidth', 'peak_bandwidth', 'burst_size']:
- if not self.params['traffic_shaping'].get(value):
- self.module.fail_json(msg="traffic_shaping.%s is a required parameter if traffic_shaping is enabled." % value)
- self.ts_average_bandwidth = self.params['traffic_shaping'].get('average_bandwidth')
- self.ts_peak_bandwidth = self.params['traffic_shaping'].get('peak_bandwidth')
- self.ts_burst_size = self.params['traffic_shaping'].get('burst_size')
+ if self.ts_enabled is True:
+ for value in ['average_bandwidth', 'peak_bandwidth', 'burst_size']:
+ if not self.params['traffic_shaping'].get(value):
+ self.module.fail_json(msg="traffic_shaping.%s is a required parameter if traffic_shaping is enabled." % value)
+ self.ts_average_bandwidth = self.params['traffic_shaping'].get('average_bandwidth')
+ self.ts_peak_bandwidth = self.params['traffic_shaping'].get('peak_bandwidth')
+ self.ts_burst_size = self.params['traffic_shaping'].get('burst_size')
else:
self.ts_enabled = None
self.ts_average_bandwidth = None
@@ -641,11 +642,6 @@ def update_host_port_group(self, host_system, portgroup_object):
changed = True
changed_list.append("Traffic shaping")
host_results['traffic_shaping_previous'] = False
- elif self.ts_enabled is False:
- changed = True
- changed_list.append("Traffic shaping")
- host_results['traffic_shaping_previous'] = True
- spec.policy.shapingPolicy.enabled = False
elif self.ts_enabled is None:
spec.policy.shapingPolicy = None
changed = True
| diff --git a/tests/integration/targets/vmware_portgroup/tasks/main.yml b/tests/integration/targets/vmware_portgroup/tasks/main.yml
--- a/tests/integration/targets/vmware_portgroup/tasks/main.yml
+++ b/tests/integration/targets/vmware_portgroup/tasks/main.yml
@@ -72,6 +72,46 @@
that:
- pg_info.changed
+ - name: Create portgroup with traffic shaping
+ vmware_portgroup:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ vswitch: '{{ switch1 }}'
+ portgroup: 'pg_ts'
+ cluster_name: "{{ ccr1 }}"
+ traffic_shaping:
+ enabled: true
+ average_bandwidth: 100000
+ peak_bandwidth: 100000
+ burst_size: 102400
+ validate_certs: false
+ state: present
+ register: create_portgroup_with_traffic_shaping_result
+
+ - assert:
+ that:
+ - create_portgroup_with_traffic_shaping_result.changed
+
+ # Issue 955
+ - name: Disable traffic shaping
+ vmware_portgroup:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ vswitch: '{{ switch1 }}'
+ portgroup: 'pg_ts'
+ cluster_name: "{{ ccr1 }}"
+ traffic_shaping:
+ enabled: false
+ validate_certs: false
+ state: present
+ register: disable_traffic_shaping_result
+
+ - assert:
+ that:
+ - disable_traffic_shaping_result.changed
+
- name: Integration test a PortGroup name with special characters
block:
- name: Create Switch with special characters
| vmware_portgroup throws an error when disabling traffic_shaping
##### SUMMARY
If I use the vmware_portgroup plugin to disable traffic_shaping like so:
```
- name: Disable traffic shaping
community.vmware.vmware_portgroup:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
esxi_hostname: "{{ inventory_hostname }}"
switch: "{{ vswitch_name }}"
portgroup: "{{ portgroup_name }}"
traffic_shaping:
enabled: False
```
The plugin throws the error:
```
fatal: [myhostname]: FAILED! => {"changed": false, "msg": "traffic_shaping.average_bandwidth is a required parameter if traffic_shaping is enabled."}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_portgroup
##### ANSIBLE VERSION
```
ansible 2.10.2
config file = /Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg
configured module search path = ['/Users/eruby/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/eruby/.pyenv/versions/3.7.4/lib/python3.7/site-packages/ansible
executable location = /Users/eruby/.pyenv/versions/3.7.4/bin/ansible
python version = 3.7.4 (default, Oct 30 2019, 17:03:08) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### COLLECTION VERSION
```
# /Users/eruby/.ansible/collections/ansible_collections
Collection Version
---------------- -------
community.vmware 1.12.0
```
##### CONFIGURATION
```
ANSIBLE_PIPELINING(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = -F ansible_ssh_config
CACHE_PLUGIN(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = /tmp/$USER/
CACHE_PLUGIN_TIMEOUT(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = 300
DEFAULT_BECOME(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = True
DEFAULT_BECOME_USER(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = root
DEFAULT_FORKS(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = 60
DEFAULT_HOST_LIST(env: ANSIBLE_INVENTORY) = ['/Users/eruby/Projects/bitfusion/bitfusionio/infrastructure/ansible/inventory']
DEFAULT_PRIVATE_KEY_FILE(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = /Users/eruby/.ssh/id_rsa_ansible_bitfusion
DEFAULT_REMOTE_USER(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = ansible
HOST_KEY_CHECKING(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = /usr/bin/python3
INVENTORY_ENABLED(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml', 'amaz
LOCALHOST_WARNING(/Users/eruby/Projects/bitfusion/infrastructure/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target OS:
VMware ESXi, 7.0.1, 17551050
vCenter 7.0.2, 17958471
##### STEPS TO REPRODUCE
Add a play to disable traffic shaping:
```yaml
- name: Disable traffic shaping
community.vmware.vmware_portgroup:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
esxi_hostname: "{{ inventory_hostname }}"
switch: "{{ vswitch_name }}"
portgroup: "{{ portgroup_name }}"
traffic_shaping:
enabled: False
```
##### EXPECTED RESULTS
Traffic shaping disabled.
##### ACTUAL RESULTS
The plugin throws the error:
```
fatal: [myhostname]: FAILED! => {"changed": false, "msg": "traffic_shaping.average_bandwidth is a required parameter if traffic_shaping is enabled."}
```
Parameters to set traffic shaping should not be required if traffic shaping is disabled.
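For reference, the fix that landed (see the patch above) moves the required-parameter check behind the `enabled` flag; in outline (`params` and `module` stand in for the module's own attributes):
```python
ts = params.get('traffic_shaping')
if ts:
    # Only require the bandwidth parameters when shaping is actually enabled.
    if ts.get('enabled') is True:
        for key in ('average_bandwidth', 'peak_bandwidth', 'burst_size'):
            if not ts.get(key):
                module.fail_json(msg="traffic_shaping.%s is a required parameter"
                                     " if traffic_shaping is enabled." % key)
```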
| 2021-07-25T15:31:25 |
|
ansible-collections/community.vmware | 998 | ansible-collections__community.vmware-998 | [
"993"
] | a79dbc75c847cba2811b660f3b3e6a0d81abf508 | diff --git a/plugins/module_utils/vm_device_helper.py b/plugins/module_utils/vm_device_helper.py
--- a/plugins/module_utils/vm_device_helper.py
+++ b/plugins/module_utils/vm_device_helper.py
@@ -35,13 +35,21 @@ def __init__(self, module):
'lsilogic': vim.vm.device.VirtualLsiLogicController,
'paravirtual': vim.vm.device.ParaVirtualSCSIController,
'buslogic': vim.vm.device.VirtualBusLogicController,
- 'lsilogicsas': vim.vm.device.VirtualLsiLogicSASController,
+ 'lsilogicsas': vim.vm.device.VirtualLsiLogicSASController
}
self.sata_device_type = vim.vm.device.VirtualAHCIController
self.nvme_device_type = vim.vm.device.VirtualNVMEController
self.usb_device_type = {
'usb2': vim.vm.device.VirtualUSBController,
- 'usb3': vim.vm.device.VirtualUSBXHCIController,
+ 'usb3': vim.vm.device.VirtualUSBXHCIController
+ }
+ self.nic_device_type = {
+ 'pcnet32': vim.vm.device.VirtualPCNet32,
+ 'vmxnet2': vim.vm.device.VirtualVmxnet2,
+ 'vmxnet3': vim.vm.device.VirtualVmxnet3,
+ 'e1000': vim.vm.device.VirtualE1000,
+ 'e1000e': vim.vm.device.VirtualE1000e,
+ 'sriov': vim.vm.device.VirtualSriovEthernetCard
}
def create_scsi_controller(self, scsi_type, bus_number, bus_sharing='noSharing'):
@@ -49,7 +57,7 @@ def create_scsi_controller(self, scsi_type, bus_number, bus_sharing='noSharing')
Create SCSI Controller with given SCSI Type and SCSI Bus Number
Args:
scsi_type: Type of SCSI
- scsi_bus_number: SCSI Bus number to be assigned
+ bus_number: SCSI Bus number to be assigned
bus_sharing: noSharing, virtualSharing, physicalSharing
Returns: Virtual device spec for SCSI Controller
@@ -271,23 +279,10 @@ def create_hard_disk(self, disk_ctl, disk_index=None):
return diskspec
- def get_device(self, device_type, name):
- nic_dict = dict(pcnet32=vim.vm.device.VirtualPCNet32(),
- vmxnet2=vim.vm.device.VirtualVmxnet2(),
- vmxnet3=vim.vm.device.VirtualVmxnet3(),
- e1000=vim.vm.device.VirtualE1000(),
- e1000e=vim.vm.device.VirtualE1000e(),
- sriov=vim.vm.device.VirtualSriovEthernetCard(),
- )
- if device_type in nic_dict:
- return nic_dict[device_type]
- else:
- self.module.fail_json(msg='Invalid device_type "%s"'
- ' for network "%s"' % (device_type, name))
-
def create_nic(self, device_type, device_label, device_infos):
nic = vim.vm.device.VirtualDeviceSpec()
- nic.device = self.get_device(device_type, device_infos['name'])
+ nic_device = self.nic_device_type.get(device_type)
+ nic.device = nic_device()
nic.device.key = -randint(25000, 29999)
nic.device.wakeOnLanEnabled = bool(device_infos.get('wake_on_lan', True))
nic.device.deviceInfo = vim.Description()
diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -1705,12 +1705,10 @@ def get_vm_network_interfaces(self, vm=None):
if vm is None:
return device_list
- nw_device_types = (vim.vm.device.VirtualPCNet32, vim.vm.device.VirtualVmxnet2,
- vim.vm.device.VirtualVmxnet3, vim.vm.device.VirtualE1000,
- vim.vm.device.VirtualE1000e, vim.vm.device.VirtualSriovEthernetCard)
for device in vm.config.hardware.device:
- if isinstance(device, nw_device_types):
- device_list.append(device)
+ for device_type in self.device_helper.nic_device_type.values():
+ if isinstance(device, device_type):
+ device_list.append(device)
return device_list
@@ -1774,12 +1772,10 @@ def sanitize_network_params(self):
self.module.fail_json(msg="'ip' is required if 'netmask' is"
" specified under VM network list.")
- validate_device_types = ['pcnet32', 'vmxnet2', 'vmxnet3', 'e1000', 'e1000e', 'sriov']
- if 'device_type' in network and network['device_type'] not in validate_device_types:
- self.module.fail_json(msg="Device type specified '%s' is not valid."
- " Please specify correct device"
- " type from ['%s']." % (network['device_type'],
- "', '".join(validate_device_types)))
+ if 'device_type' in network and network['device_type'] not in self.device_helper.nic_device_type.keys():
+ self.module.fail_json(msg="Device type specified '%s' is not valid. Please specify correct device type"
+ " from ['%s']." % (network['device_type'],
+ "', '".join(self.device_helper.nic_device_type.keys())))
if 'mac' in network and not is_mac(network['mac']):
self.module.fail_json(msg="Device MAC address '%s' is invalid."
@@ -1834,11 +1830,11 @@ def configure_network(self, vm_obj):
nic.device.deviceInfo.summary = network_name
nic_change_detected = True
if 'device_type' in network_devices[key]:
- device = self.device_helper.get_device(network_devices[key]['device_type'], network_name)
- device_class = type(device)
- if not isinstance(nic.device, device_class):
- self.module.fail_json(msg="Changing the device type is not possible when interface is already present. "
- "The failing device type is %s" % network_devices[key]['device_type'])
+ device = self.device_helper.nic_device_type.get(network_devices[key]['device_type'])
+ if not isinstance(nic.device, device):
+ self.module.fail_json(msg="Changing the device type is not possible when interface is already"
+ " present. The failing device type is %s"
+ % network_devices[key]['device_type'])
# Changing mac address has no effect when editing interface
if 'mac' in network_devices[key] and nic.device.macAddress != current_net_devices[key].macAddress:
self.module.fail_json(msg="Changing MAC address has not effect when interface is already present. "
diff --git a/plugins/modules/vmware_vm_config_option.py b/plugins/modules/vmware_vm_config_option.py
new file mode 100644
--- /dev/null
+++ b/plugins/modules/vmware_vm_config_option.py
@@ -0,0 +1,302 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2021, Ansible Project
+# Copyright: (c) 2021, VMware, Inc. All Rights Reserved.
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: vmware_vm_config_option
+short_description: Return supported guest ID list and VM recommended config option for specific guest OS
+description: >
+ This module is used for getting the hardware versions supported for creation, the guest ID list supported by ESXi
+ host for the most recent virtual hardware supported or specified hardware version, the VM recommended config options
+ for specified guest OS ID.
+author:
+- Diane Wang (@Tomorrow9) <[email protected]>
+version_added: '1.15.0'
+notes:
+- Tested on vSphere 6.5
+- Tested on vSphere 6.7
+- Known issue on vSphere 7.0 (https://github.com/vmware/pyvmomi/issues/915)
+requirements:
+- python >= 2.6
+- PyVmomi
+- System.View privilege
+options:
+ datacenter:
+ description:
+ - The datacenter name used to get specified cluster or host.
+ - This parameter is case sensitive.
+ default: ha-datacenter
+ type: str
+ cluster_name:
+ description:
+ - Name of the cluster.
+ - If C(esxi_hostname) is not given, this parameter is required.
+ type: str
+ esxi_hostname:
+ description:
+ - ESXi hostname.
+ - Obtain VM configure options on this ESXi host.
+ - If C(cluster_name) is not given, this parameter is required.
+ type: str
+ get_hardware_versions:
+ description:
+ - Return the list of VM hardware versions supported for creation and the default hardware version on the
+ specified entity.
+ type: bool
+ default: false
+ get_guest_os_ids:
+ description:
+ - Return the list of guest OS IDs supported on the specified entity.
+ - If C(hardware_version) is set, will return the corresponding guest OS ID list supported, or will return the
+ guest OS ID list for the default hardware version.
+ type: bool
+ default: false
+ get_config_options:
+ description:
+ - Return the dict of VM recommended config options for guest ID specified by C(guest_id) with hardware version
+ specified by C(hardware_version) or the default hardware version.
+ - When set to True, C(guest_id) must be set.
+ type: bool
+ default: false
+ guest_id:
+ description:
+ - The guest OS ID from the returned list when C(get_guest_os_ids) is set to C(True), e.g., 'rhel8_64Guest'.
+ - This parameter must be set when C(get_config_options) is set to C(True).
+ type: str
+ hardware_version:
+ description:
+ - The hardware version from the returned list when C(get_hardware_versions) is set to C(True), e.g., 'vmx-19'.
+ type: str
+extends_documentation_fragment:
+- community.vmware.vmware.documentation
+
+'''
+
+EXAMPLES = r'''
+- name: Get supported guest ID list on given ESXi host for with default hardware version
+ community.vmware.vmware_vm_config_option:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ esxi_hostname: "{{ esxi_hostname }}"
+ get_guest_os_ids: True
+ delegate_to: localhost
+
+- name: Get VM recommended config option for Windows 10 guest OS on given ESXi host
+ community.vmware.vmware_vm_config_option:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ esxi_hostname: "{{ esxi_hostname }}"
+ get_config_options: True
+ guest_id: "windows9_64Guest"
+ delegate_to: localhost
+'''
+
+RETURN = r'''
+instance:
+ description: metadata about the VM recommended configuration
+ returned: always
+ type: dict
+ sample: None
+'''
+
+HAS_PYVMOMI = False
+try:
+ from pyVmomi import vim, vmodl
+ HAS_PYVMOMI = True
+except ImportError:
+ pass
+
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils._text import to_native
+from ansible_collections.community.vmware.plugins.module_utils.vmware import find_obj, vmware_argument_spec, PyVmomi
+from ansible_collections.community.vmware.plugins.module_utils.vm_device_helper import PyVmomiDeviceHelper
+
+
+class VmConfigOption(PyVmomi):
+ def __init__(self, module):
+ super(VmConfigOption, self).__init__(module)
+ self.device_helper = PyVmomiDeviceHelper(self.module)
+ self.ctl_device_type = self.device_helper.scsi_device_type.copy()
+ self.ctl_device_type.update({'sata': self.device_helper.sata_device_type,
+ 'nvme': self.device_helper.nvme_device_type}
+ )
+ self.ctl_device_type.update(self.device_helper.usb_device_type)
+ self.ctl_device_type.update(self.device_helper.nic_device_type)
+
+ def get_hardware_versions(self, env_browser):
+ support_create = []
+ default_config = ''
+ try:
+ desc = env_browser.QueryConfigOptionDescriptor()
+ except vmodl.RuntimeFault as runtime_fault:
+ self.module.fail_json(msg=to_native(runtime_fault.msg))
+ except Exception as generic_fault:
+ self.module.fail_json(msg="Failed to obtain VM config option descriptor due to fault: %s" % generic_fault)
+ if desc:
+ for option_desc in desc:
+ if option_desc.createSupported:
+ support_create = support_create + [option_desc.key]
+ if option_desc.defaultConfigOption:
+ default_config = option_desc.key
+
+ return support_create, default_config
+
+ def get_config_option_by_spec(self, env_browser, guest_id=None, host=None, key=''):
+ vm_config_option = None
+ if guest_id is None:
+ guest_id = []
+ config_query_spec = vim.EnvironmentBrowser.ConfigOptionQuerySpec(guestId=guest_id, host=host, key=key)
+ try:
+ vm_config_option = env_browser.QueryConfigOptionEx(spec=config_query_spec)
+ except vmodl.RuntimeFault as runtime_fault:
+ self.module.fail_json(msg=to_native(runtime_fault.msg))
+ except Exception as generic_fault:
+ self.module.fail_json(msg="Failed to obtain VM config options due to fault: %s" % generic_fault)
+
+ return vm_config_option
+
+ def get_config_option_recommended(self, guest_os_desc, hwv_version=''):
+ guest_os_option_dict = {}
+ if guest_os_desc and len(guest_os_desc) != 0:
+ default_disk_ctl = default_ethernet = default_cdrom_ctl = default_usb_ctl = ''
+ for name, type in self.ctl_device_type.items():
+ if type == guest_os_desc[0].recommendedDiskController:
+ default_disk_ctl = name
+ if type == guest_os_desc[0].recommendedEthernetCard:
+ default_ethernet = name
+ if type == guest_os_desc[0].recommendedCdromController:
+ default_cdrom_ctl = name
+ if type == guest_os_desc[0].recommendedUSBController:
+ default_usb_ctl = name
+ guest_os_option_dict = {
+ 'Hardware version': hwv_version,
+ 'Guest ID': guest_os_desc[0].id,
+ 'Guest fullname': guest_os_desc[0].fullName,
+ 'Default CPU cores per socket': guest_os_desc[0].numRecommendedCoresPerSocket,
+ 'Default CPU socket': guest_os_desc[0].numRecommendedPhysicalSockets,
+ 'Default memory in MB': guest_os_desc[0].recommendedMemMB,
+ 'Default firmware': guest_os_desc[0].recommendedFirmware,
+ 'Default secure boot': guest_os_desc[0].defaultSecureBoot,
+ 'Support secure boot': guest_os_desc[0].supportsSecureBoot,
+ 'Default disk controller': default_disk_ctl,
+ 'Default disk size in MB': guest_os_desc[0].recommendedDiskSizeMB,
+ 'Default network adapter': default_ethernet,
+ 'Default CDROM controller': default_cdrom_ctl,
+ 'Default USB controller': default_usb_ctl
+ }
+
+ return guest_os_option_dict
+
+ def get_guest_id_list(self, guest_os_desc):
+ gos_id_list = []
+ if guest_os_desc:
+ for gos_desc in guest_os_desc.guestOSDescriptor:
+ gos_id_list = gos_id_list + [gos_desc.id]
+
+ return gos_id_list
+
+ def get_config_option_for_guest(self):
+ results = {}
+ guest_id = []
+ host = None
+ datacenter_name = self.params.get('datacenter')
+ cluster_name = self.params.get('cluster_name')
+ esxi_host_name = self.params.get('esxi_hostname')
+ if self.params.get('guest_id'):
+ guest_id = [self.params.get('guest_id')]
+
+ if not self.params.get('get_hardware_versions') and not self.params.get('get_guest_os_ids') \
+ and not self.params.get('get_config_options'):
+ self.module.exit_json(msg="Please set at least one of these parameters 'get_hardware_versions',"
+ " 'get_guest_os_ids', 'get_config_options' to True to get the desired info.")
+ if self.params.get('get_config_options') and len(guest_id) == 0:
+ self.module.fail_json(msg="Please set 'guest_id' when 'get_config_options' is set to True,"
+ " to get the VM recommended config option for specific guest OS.")
+
+ # Get the datacenter object
+ datacenter = find_obj(self.content, [vim.Datacenter], datacenter_name)
+ if not datacenter:
+ self.module.fail_json(msg='Unable to find datacenter "%s"' % datacenter_name)
+ # Get the cluster object
+ if cluster_name:
+ cluster = find_obj(self.content, [vim.ComputeResource], cluster_name, folder=datacenter)
+ if not cluster:
+ self.module.fail_json(msg='Unable to find cluster "%s"' % cluster_name)
+ # If host is given, get the cluster object using the host
+ elif esxi_host_name:
+ host = find_obj(self.content, [vim.HostSystem], esxi_host_name, folder=datacenter)
+ if not host:
+ self.module.fail_json(msg='Unable to find host "%s"' % esxi_host_name)
+ cluster = host.parent
+ # Define the environment browser object the ComputeResource presents
+ env_browser = cluster.environmentBrowser
+ if env_browser is None:
+ self.module.fail_json(msg="The environmentBrowser of the ComputeResource is None, so can not get the"
+ " desired config option info, please check your vSphere environment.")
+ # Get supported hardware versions list
+ support_create_list, default_config = self.get_hardware_versions(env_browser=env_browser)
+ if self.params.get('get_hardware_versions'):
+ results.update({'Supported hardware versions': support_create_list,
+ 'Default hardware version': default_config})
+
+ if self.params.get('get_guest_os_ids') or self.params.get('get_config_options'):
+ # Get supported guest ID list
+ hardware_version = self.params.get('hardware_version', '')
+ if hardware_version and len(support_create_list) != 0 and hardware_version not in support_create_list:
+ self.module.fail_json(msg="Specified hardware version '%s' is not in the supported create list: %s"
+ % (hardware_version, support_create_list))
+ vm_config_option_all = self.get_config_option_by_spec(env_browser=env_browser, host=host,
+ key=hardware_version)
+ supported_gos_list = self.get_guest_id_list(guest_os_desc=vm_config_option_all)
+ if self.params.get('get_guest_os_ids'):
+ info_key = 'Supported guest IDs for %s' % vm_config_option_all.version
+ results.update({info_key: supported_gos_list})
+
+ if self.params.get('get_config_options') and len(guest_id) != 0:
+ if supported_gos_list and guest_id[0] not in supported_gos_list:
+ self.module.fail_json(msg="Specified guest ID '%s' is not in the supported guest ID list: '%s'"
+ % (guest_id[0], supported_gos_list))
+ vm_config_option_guest = self.get_config_option_by_spec(env_browser=env_browser, host=host,
+ guest_id=guest_id, key=hardware_version)
+ guest_os_options = vm_config_option_guest.guestOSDescriptor
+ guest_os_option_dict = self.get_config_option_recommended(guest_os_desc=guest_os_options,
+ hwv_version=vm_config_option_guest.version)
+ results.update({'Recommended config options': guest_os_option_dict})
+
+ self.module.exit_json(changed=False, failed=False, instance=results)
+
+
+def main():
+ argument_spec = vmware_argument_spec()
+ argument_spec.update(
+ datacenter=dict(type='str', default='ha-datacenter'),
+ cluster_name=dict(type='str'),
+ esxi_hostname=dict(type='str'),
+ get_hardware_versions=dict(type='bool', default=False),
+ get_guest_os_ids=dict(type='bool', default=False),
+ get_config_options=dict(type='bool', default=False),
+ guest_id=dict(type='str'),
+ hardware_version=dict(type='str'),
+ )
+ module = AnsibleModule(
+ argument_spec=argument_spec,
+ supports_check_mode=True,
+ required_one_of=[
+ ['cluster_name', 'esxi_hostname'],
+ ]
+ )
+ vm_config_option_guest = VmConfigOption(module)
+ vm_config_option_guest.get_config_option_for_guest()
+
+
+if __name__ == "__main__":
+ main()
| diff --git a/tests/integration/targets/vmware_vm_config_option/aliases b/tests/integration/targets/vmware_vm_config_option/aliases
new file mode 100644
--- /dev/null
+++ b/tests/integration/targets/vmware_vm_config_option/aliases
@@ -0,0 +1,4 @@
+cloud/vcenter
+needs/target/prepare_vmware_tests
+zuul/vmware/vcenter_1esxi
+zuul/vmware/govcsim
diff --git a/tests/integration/targets/vmware_vm_config_option/tasks/main.yml b/tests/integration/targets/vmware_vm_config_option/tasks/main.yml
new file mode 100644
--- /dev/null
+++ b/tests/integration/targets/vmware_vm_config_option/tasks/main.yml
@@ -0,0 +1,117 @@
+# Test code for the vmware_vm_config_option module.
+# Copyright: (c) 2021, VMware, Inc. All Rights Reserved.
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+- import_role:
+ name: prepare_vmware_tests
+ vars:
+ setup_attach_host: true
+
+- when: vcsim is not defined
+ block:
+ - name: get list of config option keys from ESXi host
+ community.vmware.vmware_vm_config_option:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ esxi_hostname: "{{ esxi_hosts[0] }}"
+ get_hardware_versions: true
+ register: config_option_keys_1
+
+ - debug:
+ var: config_option_keys_1
+
+ - name: get list of config option keys from cluster
+ community.vmware.vmware_vm_config_option:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ cluster_name: "{{ ccr1 }}"
+ get_hardware_versions: true
+ register: config_option_keys_2
+
+ - debug:
+ var: config_option_keys_2
+
+ # Ignore errors due to there is known issue on vSphere 7.0.x
+ # https://github.com/vmware/pyvmomi/issues/915
+ - name: get list of supported guest IDs from ESXi host
+ community.vmware.vmware_vm_config_option:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ esxi_hostname: "{{ esxi_hosts[0] }}"
+ get_guest_os_ids: true
+ register: guest_ids_1
+ ignore_errors: true
+
+ - debug:
+ var: guest_ids_1
+
+ - name: check guest ID list returned when task not failed
+ assert:
+ that:
+ - "'instance' in guest_ids_1"
+ when:
+ - "'failed' in guest_ids_1"
+ - not guest_ids_1.failed
+
+ - block:
+ # Ignore errors due to there is known issue on vSphere 7.0.x
+ # https://github.com/vmware/pyvmomi/issues/915
+ - name: get list of supported guest IDs from ESXi host with config option key
+ community.vmware.vmware_vm_config_option:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ esxi_hostname: "{{ esxi_hosts[0] }}"
+ get_guest_os_ids: true
+ hardware_version: "{{ config_option_keys_1['instance']['Supported hardware versions'][0] }}"
+ register: guest_ids_2
+ ignore_errors: true
+
+ - debug:
+ var: guest_ids_2
+ - name: check guest ID list returned when task not failed
+ assert:
+ that:
+ - "'instance' in guest_ids_2"
+ when:
+ - "'failed' in guest_ids_2"
+ - not guest_ids_2.failed
+ when:
+ - "'instance' in config_option_keys_1"
+ - "'Supported hardware versions' in config_option_keys_1['instance']"
+
+ # Ignore errors due to there is known issue on vSphere 7.0.x
+ # https://github.com/vmware/pyvmomi/issues/915
+ - name: get dict of recommended configs for specified guest ID from ESXi host
+ community.vmware.vmware_vm_config_option:
+ validate_certs: false
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ dc1 }}"
+ esxi_hostname: "{{ esxi_hosts[0] }}"
+ get_config_options: true
+ guest_id: "rhel6_64Guest"
+ register: recommended_configs
+ ignore_errors: true
+
+ - debug:
+ var: recommended_configs
+ - name: check guest ID list returned when task not failed
+ assert:
+ that:
+ - "'instance' in recommended_configs"
+ when:
+ - "'failed' in recommended_configs"
+ - not recommended_configs.failed
| Add a new module for retrieving supported GOS list and recommended VM config
##### SUMMARY
A new module for the two cases below (a rough sketch of the underlying API calls follows the list):
1. Get the supported guest OS ID list on an ESXi host for the most recent supported hardware version.
2. Get the recommended VM configuration for a specified guest ID, e.g., CPU number, memory size, boot disk controller type, firmware, secure boot support, etc.
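Both cases map onto the vSphere EnvironmentBrowser API; a rough sketch, assuming a connected pyVmomi session where `env_browser` is a ComputeResource's `environmentBrowser` (the guest ID is just an example):
```python
from pyVmomi import vim

# env_browser = cluster.environmentBrowser  # cluster: a connected vim.ComputeResource
descriptors = env_browser.QueryConfigOptionDescriptor()          # case 1
hw_versions = [d.key for d in descriptors if d.createSupported]  # e.g. ['vmx-13', ..., 'vmx-19']

spec = vim.EnvironmentBrowser.ConfigOptionQuerySpec(guestId=['windows9_64Guest'])
config_option = env_browser.QueryConfigOptionEx(spec=spec)       # case 2
for gos in config_option.guestOSDescriptor:
    print(gos.id, gos.recommendedMemMB, gos.recommendedFirmware)
```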
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_vm_config_option
##### ADDITIONAL INFORMATION
```yaml
community.vmware.vmware_vm_config_option:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
esxi_hostname: "{{ esxi_hostname }}"
guest_id: "windows9_64Guest"
```
| Files identified in the description:
None
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
ansible-collections/community.vmware | 1,001 | ansible-collections__community.vmware-1001 | [
"860"
] | 9e3d7e79246c1d82e317493d2a3b7d820c5a3836 | diff --git a/plugins/modules/vmware_guest_network.py b/plugins/modules/vmware_guest_network.py
--- a/plugins/modules/vmware_guest_network.py
+++ b/plugins/modules/vmware_guest_network.py
@@ -467,7 +467,7 @@ def _get_nics_from_vm(self, vm_obj):
nic_info_lst.append(d_item)
- nic_info_lst = sorted(nic_info_lst, key=lambda d: d['mac_address'])
+ nic_info_lst = sorted(nic_info_lst, key=lambda d: d['mac_address'] if (d['mac_address'] is not None) else '00:00:00:00:00:00')
return nic_info_lst, nics
def _get_compute_resource_by_name(self, recurse=True):
| vmware_guest_network - Adding NICs to VM fails if VM has never been powered on
##### SUMMARY
https://github.com/ansible-collections/community.vmware/blob/70d752bc4419e6f1c74d9faee21793376f9899b8/plugins/modules/vmware_guest_network.py#L470
This fails if the VM has never been powered on before (i.e you deployed a VM from an OVA and want to add nics before powering on for the first time)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- vmware_guest_network
##### ANSIBLE VERSION
```
ansible 2.9.19
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/server.local/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
No changes from default
##### OS / ENVIRONMENT
RHEL 8.3
Tower 3.7.1
##### STEPS TO REPRODUCE
- deploy any OVA without powering on
- use `community.vmware.vmware_guest_network` to add nics
##### EXPECTED RESULTS
NICs added to VM successfully
##### ACTUAL RESULTS
```
{
"module_stdout": "Traceback (most recent call last):\r\n File \"/root/.ansible/tmp/ansible-tmp-1622083361.5219777-46-219301060242623/AnsiballZ_vmware_guest_network.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/root/.ansible/tmp/ansible-tmp-1622083361.5219777-46-219301060242623/AnsiballZ_vmware_guest_network.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/root/.ansible/tmp/ansible-tmp-1622083361.5219777-46-219301060242623/AnsiballZ_vmware_guest_network.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_guest_network', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\r\n mod_name, mod_spec, pkg_name, script_name)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_community.vmware.vmware_guest_network_payload_lgz6j7e9/ansible_community.vmware.vmware_guest_network_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_network.py\", line 890, in <module>\r\n File \"/tmp/ansible_community.vmware.vmware_guest_network_payload_lgz6j7e9/ansible_community.vmware.vmware_guest_network_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_network.py\", line 879, in main\r\n File \"/tmp/ansible_community.vmware.vmware_guest_network_payload_lgz6j7e9/ansible_community.vmware.vmware_guest_network_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_network.py\", line 711, in _nic_present\r\n File \"/tmp/ansible_community.vmware.vmware_guest_network_payload_lgz6j7e9/ansible_community.vmware.vmware_guest_network_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_network.py\", line 470, in _get_nics_from_vm\r\nTypeError: '<' not supported between instances of 'NoneType' and 'NoneType'\r\n",
"module_stderr": "Shared connection to 10.60.253.13 closed.\r\n",
"exception": "Traceback (most recent call last):\r\n File \"/root/.ansible/tmp/ansible-tmp-1622083361.5219777-46-219301060242623/AnsiballZ_vmware_guest_network.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/root/.ansible/tmp/ansible-tmp-1622083361.5219777-46-219301060242623/AnsiballZ_vmware_guest_network.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/root/.ansible/tmp/ansible-tmp-1622083361.5219777-46-219301060242623/AnsiballZ_vmware_guest_network.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_guest_network', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\r\n mod_name, mod_spec, pkg_name, script_name)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_community.vmware.vmware_guest_network_payload_lgz6j7e9/ansible_community.vmware.vmware_guest_network_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_network.py\", line 890, in <module>\r\n File \"/tmp/ansible_community.vmware.vmware_guest_network_payload_lgz6j7e9/ansible_community.vmware.vmware_guest_network_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_network.py\", line 879, in main\r\n File \"/tmp/ansible_community.vmware.vmware_guest_network_payload_lgz6j7e9/ansible_community.vmware.vmware_guest_network_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_network.py\", line 711, in _nic_present\r\n File \"/tmp/ansible_community.vmware.vmware_guest_network_payload_lgz6j7e9/ansible_community.vmware.vmware_guest_network_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest_network.py\", line 470, in _get_nics_from_vm\r\nTypeError: '<' not supported between instances of 'NoneType' and 'NoneType'\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1,
"_ansible_no_log": false,
"changed": false,
"item": "PACS-RTR-Trunk",
"ansible_loop_var": "item",
"_ansible_item_label": "PACS-RTR-Trunk"
}
```
 | 
> https://github.com/ansible-collections/community.vmware/blob/70d752bc4419e6f1c74d9faee21793376f9899b8/plugins/modules/vmware_guest_network.py#L470
It looks like this has been introduced with PR #29. @acalm do you remember why you've sorted the NICs on the MAC address? Did you just want to have a nicely sorted list somewhere or is this needed for the module to work?
> > https://github.com/ansible-collections/community.vmware/blob/70d752bc4419e6f1c74d9faee21793376f9899b8/plugins/modules/vmware_guest_network.py#L470
>
> It looks like this has been introduced with PR #29. @acalm do you remember why you've sorted the NICs on the MAC address? Did you just want to have a nicely sorted list somewhere or is this needed for the module to work?
@mariolenz I have some vague memory it was done due to the diff looking messed up, but I can't really remember; it may just have been me imagining some weird edge case where order mattered. From the exception it looks as if `nic.macAddress` is `None` in a deployed, not-yet-powered-on OVA(?). I guess it would be possible to do something like `getattr(nic, 'macAddress', 'not_initialized')` [here](https://github.com/ansible-collections/community.vmware/blob/70d752bc4419e6f1c74d9faee21793376f9899b8/plugins/modules/vmware_guest_network.py#L433) or just throw away line 470 and observe what happens further down the line; `mac_address` may be used for comparisons later in the code and break there instead, and if not, :tada: fixed forever :tada: :)
Thanks @teamosceola for reporting this. This issue is indeed pretty annoying. Should we revert #29 until we've got this fixed?
It looks like it was possible to compare `None` in Python 2, but [Python 3.0 changed the rules for ordering comparisons](https://docs.python.org/3/whatsnew/3.0.html#ordering-comparisons).
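A minimal reproduction of the resulting error, matching the traceback above (two NICs whose `macAddress` is still `None`):
```python
# sorted() compares the key values pairwise; with two uninitialized NICs
# both keys are None, and Python 3 refuses to order None against None.
sorted([None, None])
# TypeError: '<' not supported between instances of 'NoneType' and 'NoneType'
```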
> Should we revert #29 until we've got this fixed?
I thought we could fix this by changing the lambda function to:
```python
sorted(nic_info_lst, key=lambda d: d['mac_address'] if (d['mac_address'] is not None) else '00:00:00:00:00:00')
```
but I'm not sure if we don't run into another problem somewhere else if we have more than one new NIC without a MAC address. We could give it a try, though. I don't think it'll make things worse. | 2021-08-13T17:33:45 |
|
ansible-collections/community.vmware | 1,007 | ansible-collections__community.vmware-1007 | [
"916"
] | 3ebddaa69737f37a2627f0322ffd1ea410f40002 | diff --git a/plugins/modules/vmware_host_service_manager.py b/plugins/modules/vmware_host_service_manager.py
--- a/plugins/modules/vmware_host_service_manager.py
+++ b/plugins/modules/vmware_host_service_manager.py
@@ -41,7 +41,8 @@
- Desired state of service.
- "State value 'start' and 'present' has same effect."
- "State value 'stop' and 'absent' has same effect."
- choices: [ absent, present, restart, start, stop ]
+ - State value C(unchanged) is added in version 1.14.0 to allow defining startup policy without defining or changing service state.
+ choices: [ absent, present, restart, start, stop, unchanged ]
type: str
default: 'start'
service_policy:
@@ -201,7 +202,7 @@ def main():
argument_spec.update(
cluster_name=dict(type='str', required=False),
esxi_hostname=dict(type='str', required=False),
- state=dict(type='str', default='start', choices=['absent', 'present', 'restart', 'start', 'stop']),
+ state=dict(type='str', default='start', choices=['absent', 'present', 'restart', 'start', 'stop', 'unchanged']),
service_name=dict(type='str', required=True),
service_policy=dict(type='str', choices=['automatic', 'off', 'on']),
)
| Change service startup policy with vmware_host_service_manager without defining service state
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
When setting up a service in the vSphere Client, the actions to set the startup policy and the service state are independent. However, when setting the service startup policy using the `vmware_host_service_manager` module, you have to specify the service state explicitly if you don't want the service to be started, since starting it is the default behavior.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`vmware_host_service_manager` in community.vmware v1.11.0
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Using the Ansible module should probably match the vSphere Client management behavior. It is not necessary to know the service state before changing the startup policy. The workaround is to use the `vmware_host_service_info` module to first gather the state and then use it (which is a somewhat complicated approach, as the output values of the variable `running` from `vmware_host_service_info` don't match the input values of the variable `state` in `vmware_host_service_manager`).
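For reference, a sketch of that workaround; the exact return structure of `vmware_host_service_info` is assumed here (a per-host list of services with `key` and `running` fields), and mapping the boolean `running` back onto the module's `state` values is the awkward part:
```yaml
- name: Gather current service states
  community.vmware.vmware_host_service_info:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    esxi_hostname: '{{ esxi_hostname }}'
  register: service_info

- name: Change the startup policy without changing the service state
  community.vmware.vmware_host_service_manager:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    esxi_hostname: '{{ esxi_hostname }}'
    service_name: 'TSM-SSH'
    service_policy: 'off'
    # Translate the boolean 'running' back into the module's state values.
    state: "{{ 'start' if (service_info.host_service_info[esxi_hostname]
           | selectattr('key', 'equalto', 'TSM-SSH')
           | map(attribute='running') | first) else 'stop' }}"
```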
The `state` value of `unchanged` could be added (and set as default?). The current default is `start`.
Example playbook with the changes implemented:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
gather_facts: false
tasks:
- name: Disabling SSH service
community.vmware.vmware_host_service_manager:
esxi_hostname: <esxi_hostname>
service_name: 'TSM-SSH'
service_policy: 'off'
state: 'unchanged'
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
 | 
I agree that it's a bit weird that you need to know the current state just to change the startup policy. But I don't like your example playbook:
> ```yaml
> - hosts: localhost
> gather_facts: false
> tasks:
> - name: Disabling SSH service
> community.vmware.vmware_host_service_manager:
> esxi_hostname: <esxi_hostname>
> service_name: 'TSM-SSH'
> service_policy: 'off'
> state: 'unchanged'
> ```
Personally, I don't like `unchanged` as a new `state`. I think it would be better to make `state` optional. This way, if you don't care about the current state you could just drop this parameter instead of explicitly stating that you don't want to change it.
Unfortunately, removing a default for a parameter is a breaking change. So we won't do this within 1.x, maybe with 2.0. I'll try to prepare the module for this.
Thanks for the response. You are right, I don't like it either, but exactly as you say, I was considering the breaking change.
I don't know how far 2.0 is. If it's not near, it would be great if `unchanged` could be first introduced as an option and later changed to default.
EDIT: Probably not a good idea, as it would mean changing the code twice. I hadn't taken into account that your proposal makes the parameter optional.
Good point. Maybe we should introduce a new state `unchanged` and, at the same time, announce that this state is deprecated and that the `state` parameter itself won't have a default in a future release. | 2021-08-19T15:44:55 |
|
ansible-collections/community.vmware | 1,020 | ansible-collections__community.vmware-1020 | [
"164"
] | db79a3774e61b04226cad9418832637c38e3282f | diff --git a/plugins/module_utils/vmware.py b/plugins/module_utils/vmware.py
--- a/plugins/module_utils/vmware.py
+++ b/plugins/module_utils/vmware.py
@@ -216,7 +216,6 @@ def find_object_by_name(content, name, obj_type, folder=None, recurse=True):
def find_cluster_by_name(content, cluster_name, datacenter=None):
-
if datacenter and hasattr(datacenter, 'hostFolder'):
folder = datacenter.hostFolder
else:
@@ -303,8 +302,12 @@ def find_dvs_by_name(content, switch_name, folder=None):
return find_object_by_name(content, switch_name, [vim.DistributedVirtualSwitch], folder=folder)
-def find_hostsystem_by_name(content, hostname):
- return find_object_by_name(content, hostname, [vim.HostSystem])
+def find_hostsystem_by_name(content, hostname, datacenter=None):
+ if datacenter and hasattr(datacenter, 'hostFolder'):
+ folder = datacenter.hostFolder
+ else:
+ folder = content.rootFolder
+ return find_object_by_name(content, hostname, [vim.HostSystem], folder=folder)
def find_resource_pool_by_name(content, resource_pool_name):
@@ -1384,16 +1387,17 @@ def get_all_hosts_by_cluster(self, cluster_name):
return []
# Hosts related functions
- def find_hostsystem_by_name(self, host_name):
+ def find_hostsystem_by_name(self, host_name, datacenter=None):
"""
Find Host by name
Args:
host_name: Name of ESXi host
+ datacenter: (optional) Datacenter of ESXi resides
Returns: True if found
"""
- return find_hostsystem_by_name(self.content, hostname=host_name)
+ return find_hostsystem_by_name(self.content, hostname=host_name, datacenter=datacenter)
def get_all_host_objs(self, cluster_name=None, esxi_host_name=None):
"""
diff --git a/plugins/modules/vmware_deploy_ovf.py b/plugins/modules/vmware_deploy_ovf.py
--- a/plugins/modules/vmware_deploy_ovf.py
+++ b/plugins/modules/vmware_deploy_ovf.py
@@ -355,12 +355,13 @@ def get_objects(self):
self.resource_pool = self.find_resource_pool_by_cluster(self.params['resource_pool'], cluster=cluster)
# Or get ESXi host in datacenter if ESXi host configured
elif self.params['esxi_hostname']:
- host = self.find_hostsystem_by_name(self.params['esxi_hostname'])
+ host = self.find_hostsystem_by_name(self.params['esxi_hostname'], datacenter=self.datacenter)
if host is None:
- self.module.fail_json(msg="Unable to find host '%(esxi_hostname)s'" % self.params)
+ self.module.fail_json(msg="Unable to find host '%(esxi_hostname)s' in datacenter '%(datacenter)s'" % self.params)
self.resource_pool = self.find_resource_pool_by_name(self.params['resource_pool'], folder=host.parent)
else:
- self.resource_pool = self.find_resource_pool_by_name(self.params['resource_pool'])
+ # For more than one datacenter env, specify 'folder' to datacenter hostFolder
+ self.resource_pool = self.find_resource_pool_by_name(self.params['resource_pool'], folder=self.datacenter.hostFolder)
if not self.resource_pool:
self.module.fail_json(msg="Resource pool '%(resource_pool)s' could not be located" % self.params)
@@ -383,7 +384,8 @@ def get_objects(self):
self.datastore = self.find_datastore_by_name(self.params['datastore'], datacenter_name=self.datacenter)
if self.datastore is None:
- self.module.fail_json(msg="Datastore '%(datastore)s' could not be located on specified ESXi host or datacenter" % self.params)
+ self.module.fail_json(msg="Datastore '%(datastore)s' could not be located on specified ESXi host or"
+ " datacenter" % self.params)
for key, value in self.params['networks'].items():
network = find_network_by_name(self.content, value, datacenter_name=self.datacenter)
| vmware_deploy_ovf - Failure validating OVF import spec: The provided network mapping between OVF networks and the system network is not supported by any host
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I have a vCenter with 3 clusters configured and NSX-T on the networking layer. The network is visible in all three clusters, and deploying an OVF template in vCenter works just fine. But when using the vmware_deploy_ovf module, the error "Failure validating OVF import spec: The provided network mapping between OVF networks and the system network is not supported by any host" is thrown.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_deploy_ovf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Apr 15 2020, 17:20:14) [GCC 7.5.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = False
DEFAULT_LOCAL_TMP(/etc/ansible/ansible.cfg) = /home/ubuntu/.ansible/tmp/ansible-local-5327JAdofg
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
vSphere / ESXi: 6.7
NSX-T: 2.5
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Upload OVA to vSphere"
vmware_deploy_ovf:
hostname: '{{ vcenter.hostname }}'
username: '{{ vcenter.username }}'
password: '{{ vcenter.password }}'
allow_duplicates: no
fail_on_spec_warnings: no
datacenter: '{{ vcenter.datacenter }}'
cluster: '{{ vcenter.cluster }}'
folder: '{{ vcenter.folder }}'
datastore: '{{ vcenter.datastore }}'
name: '{{ vcenter.vm_name }}'
networks: "{u'VM Network':u'{{ vcenter.provisioning_network_label }}'}"
validate_certs: no
power_on: yes
wait: yes
wait_for_ip_address: yes
ovf: "{{ playbook_dir }}/files/{{ ovf.file_name }}"
properties:
public-keys: "{{ ssh_public }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
VM created
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [vm_deployment : Upload OVA to vSphere] *******************************************************************************************************
task path: /home/ubuntu/ansible-test/roles/vm_deployment/tasks/main.yml:114
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<localhost> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp `"&& mkdir /home/ubuntu/.ansible/tmp/ansible-tmp-1588694515.02-7541-22608241707231 && echo ansible-tmp-1588694515.02-7541-22608241707231="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1588694515.02-7541-22608241707231 `" ) && sleep 0'
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/_text.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/vmware.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/six/__init__.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/urls.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/basic.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/_collections_compat.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/__init__.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/text/formatters.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/validation.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/text/converters.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/pycompat24.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/text/__init__.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/process.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/parsing/convert_bool.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/_utils.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/parameters.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/parsing/__init__.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/_json_compat.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/sys_info.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/file.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/common/collections.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/distro/__init__.py
Using module_utils file /usr/lib/python2.7/dist-packages/ansible/module_utils/distro/_distro.py
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/vmware_deploy_ovf.py
<localhost> PUT /home/ubuntu/.ansible/tmp/ansible-local-7277QiTLT4/tmpYuJhyg TO /home/ubuntu/.ansible/tmp/ansible-tmp-1588694515.02-7541-22608241707231/AnsiballZ_vmware_deploy_ovf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1588694515.02-7541-22608241707231/ /home/ubuntu/.ansible/tmp/ansible-tmp-1588694515.02-7541-22608241707231/AnsiballZ_vmware_deploy_ovf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1588694515.02-7541-22608241707231/AnsiballZ_vmware_deploy_ovf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1588694515.02-7541-22608241707231/ > /dev/null 2>&1 && sleep 0'
fatal: [testhost -> localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"allow_duplicates": false,
"cluster": "CLUSTER_01",
"datacenter": "DC_01",
"datastore": "Datastore_01,
"deployment_option": null,
"disk_provisioning": "thin",
"fail_on_spec_warnings": false,
"folder": "/DC_01/vm/TEST_VMS",
"hostname": "xxx",
"inject_ovf_env": false,
"name": "testvm",
"networks": {
"VM Network": "INFRA-01"
},
"ovf": "/home/ubuntu/ansible-test/files/ubuntu-server-cloudimg-amd64-18.04.ova",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"power_on": true,
"proxy_host": null,
"proxy_port": null,
"resource_pool": "Resources",
"username": "XXX",
"validate_certs": false,
"wait": true,
"wait_for_ip_address": true
}
},
"msg": "Failure validating OVF import spec: The provided network mapping between OVF networks and the system network is not supported by any host."
}
```
I played with the REST API; here is the result for /rest/vcenter/network:
```
{
"name": "INFRA-01",
"type": "OPAQUE_NETWORK",
"network": "network-o2950"
}
{
"name": "INFRA-01",
"type": "OPAQUE_NETWORK",
"network": "network-o2951"
}
{
"name": "INFRA-01",
"type": "OPAQUE_NETWORK",
"network": "network-o2952"
}
```
From what I saw, the module uses network-o2950.
 | Any update on this? I am also facing the same issue.
@mehmoodkha I wasn't able to resolve the issue with the vmware_deploy_ovf module. But I guess it's because we don't use a resource pool.
As a workaround, I switched to [govc](https://github.com/vmware/govmomi/tree/master/govc) using an explicit host. Now, the deployment of OVA/OVF works again.
Thanks @Onke.
My issue was on VMC.
I was able to get the proper details from VMC. Once I made the modification, it worked.
```
networks: "{u'nat':u'mynet'}"
```
I also hit this issue when there are more than one datacenter in vCenter server, no issue if there is only one datacenter. I'll take a look at this later if there is no one looking at this issue. Thanks.
> I also hit this issue when there are more than one datacenter in vCenter server, no issue if there is only one datacenter. I'll take a look at this later if there is no one looking at this issue. Thanks.
So, I'm having the same issue under the same scenario, more than one datacenter in the vCenter Server. Besides the govc workaround, is there any other option?
I hit this issue on vCenter Server with two clusters. Are there any updates?
This module calls the `find_network_by_name` function at L389, but I think this function cannot find the right network object when there are identical network names.
This issue may be resolved if this function could find the network object according to the cluster, host, or vSwitch name, I think.
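A rough sketch of that idea, mirroring the `find_hostsystem_by_name` change in the patch above (`find_object_by_name` is the existing helper in the collection's `module_utils/vmware.py`; scoping by cluster, host, or vSwitch would need additional filtering beyond this):
```python
from pyVmomi import vim

def find_network_by_name_scoped(content, network_name, datacenter=None):
    # Search below the datacenter's network folder when one is given, so
    # identically named (e.g. NSX-T opaque) networks in other datacenters
    # are not matched by accident.
    if datacenter and hasattr(datacenter, 'networkFolder'):
        folder = datacenter.networkFolder
    else:
        folder = content.rootFolder
    return find_object_by_name(content, network_name, [vim.Network], folder=folder)
```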
Sorry for the late update, I'll take a look at this. Thanks. | 2021-09-02T03:45:36 |
|
ansible-collections/community.vmware | 1,030 | ansible-collections__community.vmware-1030 | [
"1029"
] | d87dbc92c4b9706512e08a2cd8fccd01a3a19854 | diff --git a/plugins/modules/vsphere_file.py b/plugins/modules/vsphere_file.py
--- a/plugins/modules/vsphere_file.py
+++ b/plugins/modules/vsphere_file.py
@@ -105,7 +105,7 @@
datacenter: DC1 Someplace
datastore: datastore1
path: some/remote/file
- state: touch
+ state: file
delegate_to: localhost
ignore_errors: true
| Documentation fix needed in community.vmware.vsphere_file module
##### SUMMARY
There is a module called **community.vmware.vsphere_file**. Its documentation contains a task, _Query a file on a datastore_, for getting information about a file that already exists on vSphere. But the documentation uses **state: touch**, and state: touch is used to create a new blank file on vSphere, not to get information about an existing file. In order to query a file, the state attribute value should be `file`, not `touch`:
**state: file**
Correct code:
```yaml
- name: Query a file on a datastore
  community.vmware.vsphere_file:
    host: '{{ vhost }}'
    username: '{{ vuser }}'
    password: '{{ vpass }}'
    datacenter: DC1 Someplace
    datastore: datastore1
    path: some/remote/file
    state: file
  delegate_to: localhost
  ignore_errors: true
```
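For contrast, `state: touch` would be the right value when the goal is to create a new blank file (same connection parameters as above):
```yaml
- name: Create an empty file on a datastore
  community.vmware.vsphere_file:
    host: '{{ vhost }}'
    username: '{{ vuser }}'
    password: '{{ vpass }}'
    datacenter: DC1 Someplace
    datastore: datastore1
    path: some/remote/file
    state: touch
  delegate_to: localhost
```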
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
community.vmware.vsphere_file
##### ANSIBLE VERSION
```
ansible 2.10.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 13 2020, 07:46:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
| 2021-09-09T05:53:39 |
||
ansible-collections/community.vmware | 1,075 | ansible-collections__community.vmware-1075 | [
"1053"
] | 0acc5b70e0eb98f0eec666e56a16815c9e8894d6 | diff --git a/plugins/module_utils/vm_device_helper.py b/plugins/module_utils/vm_device_helper.py
--- a/plugins/module_utils/vm_device_helper.py
+++ b/plugins/module_utils/vm_device_helper.py
@@ -372,3 +372,21 @@ def update_nvdimm_config(self, nvdimm_device, nvdimm_size):
nvdimm_device.capacityInMB = nvdimm_size
return nvdimm_spec
+
+ def is_tpm_device(self, device):
+ return isinstance(device, vim.vm.device.VirtualTPM)
+
+ def create_tpm(self):
+ vtpm_device_spec = vim.vm.device.VirtualDeviceSpec()
+ vtpm_device_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
+ vtpm_device_spec.device = vim.vm.device.VirtualTPM()
+ vtpm_device_spec.device.deviceInfo = vim.Description()
+
+ return vtpm_device_spec
+
+ def remove_tpm(self, vtpm_device):
+ vtpm_device_spec = vim.vm.device.VirtualDeviceSpec()
+ vtpm_device_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
+ vtpm_device_spec.device = vtpm_device
+
+ return vtpm_device_spec
diff --git a/plugins/modules/vmware_guest_tpm.py b/plugins/modules/vmware_guest_tpm.py
new file mode 100644
--- /dev/null
+++ b/plugins/modules/vmware_guest_tpm.py
@@ -0,0 +1,232 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2021, Ansible Project
+# Copyright: (c) 2021, VMware, Inc. All Rights Reserved.
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+DOCUMENTATION = r'''
+---
+module: vmware_guest_tpm
+short_description: Add or remove vTPM device for specified VM.
+description: >
+ This module is used for adding or removing Virtual Trusted Platform Module(vTPM) device for an existing
+ Virtual Machine. You must create a key provider on vCenter before you can add a vTPM. The ESXi hosts
+ running in your environment must be ESXi 6.7 or later (Windows guest OS), or 7.0 Update 2 (Linux guest OS).
+author:
+- Diane Wang (@Tomorrow9) <[email protected]>
+version_added: '1.16.0'
+notes:
+- Tested on vSphere 6.7
+- Tested on vSphere 7.0
+requirements:
+- python >= 2.6
+- PyVmomi
+options:
+ name:
+ description:
+ - Name of the virtual machine.
+ - This is required if parameter C(uuid) or C(moid) is not supplied.
+ type: str
+ uuid:
+ description:
+ - UUID of the instance to manage if known, this is VMware's unique identifier.
+ - This is required if parameter C(name) or C(moid) is not supplied.
+ type: str
+ moid:
+ description:
+ - Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.
+ - This is required if C(name) or C(uuid) is not supplied.
+ type: str
+ folder:
+ description:
+ - VM folder, absolute or relative path to find an existing VM.
+ - This parameter is not required, only when multiple VMs are found with the same name.
+ - The folder should include the datacenter name.
+ - 'Examples:'
+ - ' folder: /datacenter1/vm'
+ - ' folder: datacenter1/vm'
+ - ' folder: /datacenter1/vm/folder1'
+ - ' folder: datacenter1/vm/folder1'
+ - ' folder: /folder1/datacenter1/vm'
+ - ' folder: folder1/datacenter1/vm'
+ - ' folder: /folder1/datacenter1/vm/folder2'
+ type: str
+ datacenter:
+ description:
+ - The vCenter datacenter name used to get specified cluster or host.
+ - This parameter is case sensitive.
+ type: str
+ required: true
+ state:
+ description:
+ - State of vTPM device.
+ - If set to 'absent', vTPM device will be removed from VM.
+ - If set to 'present', vTPM device will be added if not present.
+ - Virtual machine should be turned off before add or remove vTPM device.
+ - Virtual machine should not contain snapshots before add vTPM device.
+ type: str
+ choices: ['present', 'absent']
+ default: 'present'
+extends_documentation_fragment:
+- community.vmware.vmware.documentation
+'''
+
+EXAMPLES = r'''
+- name: Add vTPM to specified VM
+ community.vmware.vmware_guest_tpm:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ datacenter }}"
+ name: "Test_VM"
+ state: present
+ delegate_to: localhost
+
+- name: Remove vTPM from specified VM
+ community.vmware.vmware_guest_tpm:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ datacenter: "{{ datacenter }}"
+ name: "Test_VM"
+ state: absent
+ delegate_to: localhost
+'''
+
+RETURN = r'''
+instance:
+ description: metadata about the VM vTPM device
+ returned: always
+ type: dict
+ sample: None
+'''
+
+HAS_PYVMOMI = False
+try:
+ from pyVmomi import vim
+ HAS_PYVMOMI = True
+except ImportError:
+ pass
+
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils._text import to_native
+from ansible_collections.community.vmware.plugins.module_utils.vmware import vmware_argument_spec, PyVmomi, wait_for_task
+from ansible_collections.community.vmware.plugins.module_utils.vm_device_helper import PyVmomiDeviceHelper
+
+
+class PyVmomiHelper(PyVmomi):
+ def __init__(self, module):
+ super(PyVmomiHelper, self).__init__(module)
+ self.device_helper = PyVmomiDeviceHelper(self.module)
+ self.config_spec = vim.vm.ConfigSpec()
+ self.config_spec.deviceChange = []
+ self.vm = None
+ self.vtpm_device = None
+
+ def get_vtpm_info(self, vm_obj=None, vtpm_device=None):
+ vtpm_info = dict()
+ if vm_obj:
+ for device in vm_obj.config.hardware.device:
+ if self.device_helper.is_tpm_device(device):
+ vtpm_device = device
+ if vtpm_device:
+ vtpm_info = dict(
+ key=vtpm_device.key,
+ label=vtpm_device.deviceInfo.label,
+ summary=vtpm_device.deviceInfo.summary,
+ )
+
+ return vtpm_info
+
+ def vtpm_operation(self, vm_obj=None):
+ results = {'failed': False, 'changed': False}
+ if not self.is_vcenter():
+ self.module.fail_json(msg="Please connect to vCenter Server to configure vTPM device of virtual machine.")
+
+ self.vm = vm_obj
+ if self.vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
+ self.module.fail_json(msg="Please make sure VM is powered off before configuring vTPM device,"
+ " current state is '%s'" % self.vm.runtime.powerState)
+
+ for device in self.vm.config.hardware.device:
+ if self.device_helper.is_tpm_device(device):
+ self.vtpm_device = device
+
+ if self.module.params['state'] == 'present':
+ if self.module.check_mode:
+ results['desired_operation'] = "add vTPM"
+ else:
+ results['vtpm_operation'] = "add vTPM"
+ if self.vtpm_device:
+ results['vtpm_info'] = self.get_vtpm_info(vtpm_device=self.vtpm_device)
+ results['msg'] = "vTPM device already exist on VM"
+ self.module.exit_json(**results)
+ else:
+ if self.module.check_mode:
+ results['changed'] = True
+ self.module.exit_json(**results)
+ vtpm_device_spec = self.device_helper.create_tpm()
+ if self.module.params['state'] == 'absent':
+ if self.module.check_mode:
+ results['desired_operation'] = "remove vTPM"
+ else:
+ results['vtpm_operation'] = "remove vTPM"
+ if self.vtpm_device is None:
+ results['msg'] = "No vTPM device found on VM"
+ self.module.exit_json(**results)
+ else:
+ if self.module.check_mode:
+ results['changed'] = True
+ self.module.exit_json(**results)
+ vtpm_device_spec = self.device_helper.remove_tpm(self.vtpm_device)
+ self.config_spec.deviceChange.append(vtpm_device_spec)
+
+ try:
+ task = self.vm.ReconfigVM_Task(spec=self.config_spec)
+ wait_for_task(task)
+ except Exception as e:
+ self.module.fail_json(msg="Failed to configure vTPM device on virtual machine due to '%s'" % to_native(e))
+ if task.info.state == 'error':
+ self.module.fail_json(msg='Failed to reconfigure VM with vTPM device', detail=task.info.error.msg)
+ results['changed'] = True
+ results['vtpm_info'] = self.get_vtpm_info(vm_obj=self.vm)
+ self.module.exit_json(**results)
+
+
+def main():
+ argument_spec = vmware_argument_spec()
+ argument_spec.update(
+ name=dict(type='str'),
+ uuid=dict(type='str'),
+ moid=dict(type='str'),
+ folder=dict(type='str'),
+ datacenter=dict(type='str', required=True),
+ state=dict(type='str', default='present', choices=['present', 'absent']),
+ )
+ module = AnsibleModule(
+ argument_spec=argument_spec,
+ supports_check_mode=True,
+ required_one_of=[['name', 'uuid', 'moid']],
+ )
+ if module.params['folder']:
+ # FindByInventoryPath() does not require an absolute path
+ # so we should leave the input folder path unmodified
+ module.params['folder'] = module.params['folder'].rstrip('/')
+
+ vm_config_vtpm = PyVmomiHelper(module)
+ vm = vm_config_vtpm.get_vm()
+ if not vm:
+ vm_id = (module.params.get('name') or module.params.get('uuid') or module.params.get('moid'))
+ module.fail_json(msg="Unable to configure vTPM device for non-existing virtual machine '%s'." % vm_id)
+ try:
+ vm_config_vtpm.vtpm_operation(vm_obj=vm)
+ except Exception as e:
+ module.fail_json(msg="Failed to configure vTPM device of virtual machine '%s' with exception : %s"
+ % (vm.name, to_native(e)))
+
+
+if __name__ == "__main__":
+ main()
| Add a new module for adding vTPM to existing VM
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
The vmware_guest module is too heavy for configuring a new or an existing VM, so add a new module for adding a vTPM device to a VM: if the guest OS requires a TPM device, we can create a new VM using the vmware_guest module and then add a vTPM device to that VM using the vmware_guest_tpm module.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest_tpm
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
For adding vTPM device, please refer to this page:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-4DBF65A4-4BA0-4667-9725-AE9F047DE00A.html
For removing vTPM device, please refer to this page:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-137ACCB4-8229-4ACE-90F2-EC5EEBE244BC.html
Before adding a vTPM device to a VM, you need to ensure your vSphere environment is configured for a key provider. This request is filed in #1052
<!--- Paste example playbooks or commands between quotes below -->
```yaml
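# Usage example taken from the EXAMPLES section of the new module in the
# patch above (connection values are placeholders):
- name: Add vTPM to specified VM
  community.vmware.vmware_guest_tpm:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter }}"
    name: "Test_VM"
    state: present
  delegate_to: localhost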
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
 | | 2021-10-12T07:02:34 |
|
ansible-collections/community.vmware | 1,084 | ansible-collections__community.vmware-1084 | [
"1083"
] | f418bdaa6a678c09b6fb9115d927d8c44d50060f | diff --git a/plugins/modules/vmware_host_lockdown.py b/plugins/modules/vmware_host_lockdown.py
--- a/plugins/modules/vmware_host_lockdown.py
+++ b/plugins/modules/vmware_host_lockdown.py
@@ -121,9 +121,10 @@
'''
try:
- from pyvmomi import vim
+ from pyVmomi import vim
+ HAS_PYVMOMI = True
except ImportError:
- pass
+ HAS_PYVMOMI = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.vmware.plugins.module_utils.vmware import vmware_argument_spec, PyVmomi
@@ -207,6 +208,9 @@ def main():
]
)
+ if not HAS_PYVMOMI:
+ module.fail_json(msg='pyvmomi required for this module')
+
vmware_lockdown_mgr = VmwareLockdownManager(module)
vmware_lockdown_mgr.ensure()
| vmware_host_lockdown crashes on failure
##### SUMMARY
Today, I wanted to enable lockdown mode on a host. This failed, although I haven't found out why yet. But that's not important. The bug is that the module imports `vim` from `pyvmomi` instead of `pyVmomi` and doesn't check that this works:
https://github.com/ansible-collections/community.vmware/blob/f418bdaa6a678c09b6fb9115d927d8c44d50060f/plugins/modules/vmware_host_lockdown.py#L123-L126
I think nobody ran into this issue yet because enabling or disabling lockdown seldom fails (in my experience) and `vim` is only used in this case:
https://github.com/ansible-collections/community.vmware/blob/f418bdaa6a678c09b6fb9115d927d8c44d50060f/plugins/modules/vmware_host_lockdown.py#L176-L182
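A minimal, self-contained reproduction of that failure mode (it mimics the module's import handling rather than importing the module itself):
```python
try:
    from pyvmomi import vim  # wrong casing: the package is 'pyVmomi', so this
                             # fails on case-sensitive filesystems
except ImportError:
    pass  # swallowed silently, so nothing notices the missing import...

# ...until the one code path that references `vim` actually runs:
isinstance(object(), vim.HostSystem)
# NameError: name 'vim' is not defined
```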
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_host_lockdown
##### ANSIBLE VERSION
```
ansible [core 2.11.6]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.1 (default, Aug 19 2021, 02:58:42) [GCC 10.2.0]
jinja version = 3.0.2
libyaml = True
```
##### COLLECTION VERSION
```
# /usr/lib/python3.9/site-packages/ansible_collections
Collection Version
---------------- -------
community.vmware 1.15.0
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
VMware Photon OS 4.0 and vSphere 7.0U2, but this is irrelevant.
##### STEPS TO REPRODUCE
Tricky. As I've said, enabling / disabling lockdown usually works.
##### EXPECTED RESULTS
A failure.
##### ACTUAL RESULTS
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NameError: name 'vim' is not defined
```
| 2021-10-22T14:49:17 |
||
ansible-collections/community.vmware | 1,103 | ansible-collections__community.vmware-1103 | [
"1044"
] | d8b881a501c60abffeab48b6c34af9469b9bc8f3 | diff --git a/plugins/modules/vmware_host_snmp.py b/plugins/modules/vmware_host_snmp.py
--- a/plugins/modules/vmware_host_snmp.py
+++ b/plugins/modules/vmware_host_snmp.py
@@ -73,6 +73,16 @@
type: str
choices: [ debug, info, warning, error ]
default: info
+ sys_contact:
+ description:
+ - System contact who manages the system.
+ type: str
+ version_added: '1.17.0'
+ sys_location:
+ description:
+ - System location.
+ type: str
+ version_added: '1.17.0'
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -107,6 +117,16 @@
state: enabled
delegate_to: localhost
+- name: Enable and configure SNMP system contact and location
+ community.vmware.vmware_host_snmp:
+ hostname: '{{ esxi_hostname }}'
+ username: '{{ esxi_username }}'
+ password: '{{ esxi_password }}'
+ sys_contact: "[email protected]"
+ sys_location: "Austin, USA"
+ state: enabled
+ delegate_to: localhost
+
- name: Disable SNMP
community.vmware.vmware_host_snmp:
hostname: '{{ esxi_hostname }}'
@@ -171,6 +191,8 @@ def ensure(self):
log_level = self.params.get("log_level")
send_trap = self.params.get("send_trap")
trap_filter = self.params.get("trap_filter")
+ sys_contact = self.params.get("sys_contact")
+ sys_location = self.params.get("sys_location")
event_filter = None
if trap_filter:
event_filter = ';'.join(trap_filter)
@@ -207,6 +229,10 @@ def ensure(self):
results['log_level_previous'] = option.value
if option.key == 'EventFilter' and option.value != hw_source:
results['trap_filter_previous'] = option.value.split(';')
+ if option.key == 'syscontact' and option.value != hw_source:
+ results['syscontact_previous'] = option.value
+ if option.key == 'syslocation' and option.value != hw_source:
+ results['syslocation_previous'] = option.value
# Build factory default config
destination = vim.host.SnmpSystem.SnmpConfigSpec.Destination()
destination.hostName = ""
@@ -315,6 +341,8 @@ def ensure(self):
results['log_level'] = log_level
results['trap_filter'] = trap_filter
event_filter_found = False
+ sys_contact_found = False
+ sys_location_found = False
if snmp_config_spec.option:
for option in snmp_config_spec.option:
if option.key == 'EnvEventSource' and option.value != hw_source:
@@ -334,6 +362,20 @@ def ensure(self):
changed_list.append("trap filter")
results['trap_filter_previous'] = option.value.split(';')
option.value = event_filter
+ if option.key == 'syscontact':
+ sys_contact_found = True
+ if sys_contact is not None and option.value != sys_contact:
+ changed = True
+ changed_list.append("sys contact")
+ results['sys_contact_previous'] = option.value
+ option.value = sys_contact
+ if option.key == 'syslocation':
+ sys_location_found = True
+ if sys_location is not None and option.value != sys_location:
+ changed = True
+ changed_list.append("sys location")
+ results['sys_location_previous'] = option.value
+ option.value = sys_location
if trap_filter and not event_filter_found:
changed = True
changed_list.append("trap filter")
@@ -351,7 +393,16 @@ def ensure(self):
# Doesn't work. Need to reset config instead
# snmp_config_spec.option = options
reset_hint = True
-
+ if sys_contact and not sys_contact_found:
+ changed = True
+ changed_list.append("sys contact")
+ results['sys_contact_previous'] = ''
+ snmp_config_spec.option.append(self.create_option('syscontact', sys_contact))
+ if sys_location and not sys_location_found:
+ changed = True
+ changed_list.append("sys location")
+ results['sys_location_previous'] = ''
+ snmp_config_spec.option.append(self.create_option('syslocation', sys_location))
if changed:
if snmp_state == 'reset':
if self.module.check_mode:
@@ -472,6 +523,8 @@ def main():
hw_source=dict(type='str', default='indications', choices=['indications', 'sensors']),
log_level=dict(type='str', default='info', choices=['debug', 'info', 'warning', 'error']),
send_trap=dict(type='bool', default=False),
+ sys_contact=dict(type='str'),
+ sys_location=dict(type='str')
)
module = AnsibleModule(
| diff --git a/tests/integration/targets/vmware_host_snmp/tasks/main.yml b/tests/integration/targets/vmware_host_snmp/tasks/main.yml
--- a/tests/integration/targets/vmware_host_snmp/tasks/main.yml
+++ b/tests/integration/targets/vmware_host_snmp/tasks/main.yml
@@ -41,6 +41,22 @@
- snmp_enabled is defined
- snmp_enabled.changed
+ - name: Enable and configure SNMP system contact and location
+ vmware_host_snmp:
+ hostname: '{{ esxi1 }}'
+ username: '{{ esxi_user }}'
+ password: '{{ esxi_password }}'
+ sys_contact: "[email protected]"
+ sys_location: "Austin, USA"
+ state: enabled
+ validate_certs: false
+ register: snmp_enabled_sys_options
+ - debug: var=snmp_enabled_sys_options
+ - assert:
+ that:
+ - snmp_enabled_sys_options is defined
+ - snmp_enabled_sys_options.changed
+
- name: Disable SNMP
vmware_host_snmp:
hostname: '{{ esxi1 }}'
@@ -49,8 +65,8 @@
state: disabled
validate_certs: false
register: snmp_disabled
- - debug: var=snmp_enabled
+ - debug: var=snmp_disabled
- assert:
that:
- - snmp_enabled is defined
- - snmp_enabled.changed
+ - snmp_disabled is defined
+ - snmp_disabled.changed
| Adding support for SNMP syslocation and syscontact configuration
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add support for SNMP syscontact and syslocation configuration
##### ISSUE TYPE
- Module improvement
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
community.vmware.vmware_host_snmp
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Currently it is not possible to configure the SNMP syscontact and syslocation via the mentioned module. The module should provide parameters to do so.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Enable and configure SNMP
community.vmware.vmware_host_snmp:
hostname: '{{ esxi_host }}'
username: '{{ esxi_user }}'
password: '{{ esxi_host_pass }}'
snmp_port: '{{ esxi_snmp_port }}'
community: [ '{{ esxi_snmp_community }}' ]
contact: '{{ esxi_snmp_contact }}'
location: '{{ esxi_snmp_location }}'
state: enabled
delegate_to: localhost
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
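For the record, the patch above implements this with parameters named `sys_contact` and `sys_location` rather than `contact` and `location`; the usage from the new EXAMPLES section is:
```yaml
- name: Enable and configure SNMP system contact and location
  community.vmware.vmware_host_snmp:
    hostname: '{{ esxi_hostname }}'
    username: '{{ esxi_username }}'
    password: '{{ esxi_password }}'
    sys_contact: "[email protected]"
    sys_location: "Austin, USA"
    state: enabled
  delegate_to: localhost
```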
 | | 2021-11-10T07:32:34 |
ansible-collections/community.vmware | 1,113 | ansible-collections__community.vmware-1113 | [
"1112"
] | 401f9e1e634dc55f48c0f72436b684d3bd21d65e | diff --git a/plugins/modules/vcenter_folder.py b/plugins/modules/vcenter_folder.py
--- a/plugins/modules/vcenter_folder.py
+++ b/plugins/modules/vcenter_folder.py
@@ -167,7 +167,6 @@ def ensure(self):
Manage internal state management
"""
state = self.module.params.get('state')
- datacenter_name = self.module.params.get('datacenter')
folder_type = self.module.params.get('folder_type')
folder_name = self.module.params.get('folder_name')
parent_folder = self.module.params.get('parent_folder', None)
@@ -180,15 +179,13 @@ def ensure(self):
parent_folder_parts = parent_folder.strip('/').split('/')
p_folder_obj = None
for part in parent_folder_parts:
- part_folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=part,
+ part_folder_obj = self.get_folder(folder_name=part,
folder_type=folder_type,
parent_folder=p_folder_obj)
if not part_folder_obj:
self.module.fail_json(msg="Could not find folder %s" % part)
p_folder_obj = part_folder_obj
- child_folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=folder_name,
+ child_folder_obj = self.get_folder(folder_name=folder_name,
folder_type=folder_type,
parent_folder=p_folder_obj)
if child_folder_obj:
@@ -196,16 +193,14 @@ def ensure(self):
" parent folder %s" % (folder_name, parent_folder)
self.module.exit_json(**results)
else:
- p_folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=parent_folder,
+ p_folder_obj = self.get_folder(folder_name=parent_folder,
folder_type=folder_type)
if not p_folder_obj:
self.module.fail_json(msg="Parent folder %s does not exist" % parent_folder)
# Check if folder exists under parent folder
- child_folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=folder_name,
+ child_folder_obj = self.get_folder(folder_name=folder_name,
folder_type=folder_type,
parent_folder=p_folder_obj)
if child_folder_obj:
@@ -214,9 +209,9 @@ def ensure(self):
" parent folder %s" % (folder_name, parent_folder)
self.module.exit_json(**results)
else:
- folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=folder_name,
- folder_type=folder_type)
+ folder_obj = self.get_folder(folder_name=folder_name,
+ folder_type=folder_type,
+ recurse=True)
if folder_obj:
results['result']['path'] = self.get_folder_path(folder_obj)
@@ -266,34 +261,30 @@ def ensure(self):
parent_folder_parts = parent_folder.strip('/').split('/')
p_folder_obj = None
for part in parent_folder_parts:
- part_folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=part,
+ part_folder_obj = self.get_folder(folder_name=part,
folder_type=folder_type,
parent_folder=p_folder_obj)
if not part_folder_obj:
self.module.fail_json(msg="Could not find folder %s" % part)
p_folder_obj = part_folder_obj
- folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=folder_name,
+ folder_obj = self.get_folder(folder_name=folder_name,
folder_type=folder_type,
parent_folder=p_folder_obj)
else:
- p_folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=parent_folder,
+ p_folder_obj = self.get_folder(folder_name=parent_folder,
folder_type=folder_type)
if not p_folder_obj:
self.module.fail_json(msg="Parent folder %s does not exist" % parent_folder)
# Check if folder exists under parent folder
- folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=folder_name,
+ folder_obj = self.get_folder(folder_name=folder_name,
folder_type=folder_type,
parent_folder=p_folder_obj)
else:
- folder_obj = self.get_folder(datacenter_name=datacenter_name,
- folder_name=folder_name,
- folder_type=folder_type)
+ folder_obj = self.get_folder(folder_name=folder_name,
+ folder_type=folder_type,
+ recurse=True)
if folder_obj:
try:
if parent_folder:
@@ -328,23 +319,18 @@ def ensure(self):
" exception %s " % to_native(gen_exec))
self.module.exit_json(**results)
- def get_folder(self, datacenter_name, folder_name, folder_type, parent_folder=None):
+ def get_folder(self, folder_name, folder_type, parent_folder=None, recurse=False):
"""
Get managed object of folder by name
Returns: Managed object of folder by name
"""
- folder_objs = get_all_objs(self.content, [vim.Folder], parent_folder)
+ parent_folder = parent_folder or self.datacenter_folder_type[folder_type]
+
+ folder_objs = get_all_objs(self.content, [vim.Folder], parent_folder, recurse=recurse)
for folder in folder_objs:
- if parent_folder:
- if folder.name == folder_name and \
- self.datacenter_folder_type[folder_type].childType == folder.childType:
- return folder
- else:
- if folder.name == folder_name and \
- self.datacenter_folder_type[folder_type].childType == folder.childType and \
- folder.parent.parent.name == datacenter_name: # e.g. folder.parent.parent.name == /DC01/host/folder
- return folder
+ if folder.name == folder_name:
+ return folder
return None
| diff --git a/tests/integration/targets/vcenter_folder/tasks/main.yml b/tests/integration/targets/vcenter_folder/tasks/main.yml
--- a/tests/integration/targets/vcenter_folder/tasks/main.yml
+++ b/tests/integration/targets/vcenter_folder/tasks/main.yml
@@ -214,3 +214,6 @@
assert:
that:
- not all_folder_results.changed
+
+ - block:
+ - include_tasks: regression_folder_collision.yml
diff --git a/tests/integration/targets/vcenter_folder/tasks/regression_folder_collision.yml b/tests/integration/targets/vcenter_folder/tasks/regression_folder_collision.yml
new file mode 100644
--- /dev/null
+++ b/tests/integration/targets/vcenter_folder/tasks/regression_folder_collision.yml
@@ -0,0 +1,135 @@
+# Test code for the vcenter_folder module.
+# Copyright: (c) 2021, Victor Dvornikov <[email protected]>
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+- name: Create first lvl vm folder
+ vcenter_folder:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ datacenter: "{{ dc1 }}"
+ folder_name: "foo"
+ folder_type: "vm"
+ state: present
+ register: creation_result
+
+- debug:
+ msg: "{{ creation_result }}"
+
+- name: Create second lvl vm folder
+ vcenter_folder:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ datacenter: "{{ dc1 }}"
+ folder_name: "bar"
+ parent_folder: "foo"
+ folder_type: "vm"
+ state: present
+ register: creation_result
+
+- debug:
+ msg: "{{ creation_result }}"
+
+- name: Create collision third lvl vm folder
+ vcenter_folder:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ datacenter: "{{ dc1 }}"
+ folder_name: "collision_folder"
+ parent_folder: "foo/bar"
+ folder_type: "vm"
+ state: present
+ register: creation_result
+
+- debug:
+ msg: "{{ creation_result }}"
+
+- name: Delete missed second lvl vm folder using name collision
+ vcenter_folder:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ datacenter: "{{ dc1 }}"
+ folder_name: "collision_folder"
+ parent_folder: "foo"
+ folder_type: "vm"
+ state: absent
+ register: deletion_result
+
+- debug:
+ msg: "{{ deletion_result }}"
+
+- name: Assert collision folder wasn't deleted
+ assert:
+ that:
+ - not deletion_result.changed
+
+- name: Check present collision folder
+ vcenter_folder:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ datacenter: "{{ dc1 }}"
+ folder_name: "collision_folder"
+ parent_folder: "foo/bar"
+ folder_type: "vm"
+ state: present
+ register: check_present_result
+ check_mode: true
+
+- debug:
+ msg: "{{ check_present_result }}"
+
+- name: Assert collision folder is still present
+ assert:
+ that:
+ - not deletion_result.changed
+
+- name: Delete third lvl folder first time
+ vcenter_folder:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ datacenter: "{{ dc1 }}"
+ folder_name: "collision_folder"
+ parent_folder: "foo/bar"
+ folder_type: "vm"
+ state: absent
+ register: deletion_result
+
+- debug:
+ msg: "{{ deletion_result }}"
+
+- name: Assert third lvl folder has been successfully deleted
+ assert:
+ that:
+ - deletion_result.changed
+
+- name: Delete third lvl folder again
+ vcenter_folder:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ datacenter: "{{ dc1 }}"
+ folder_name: "collision_folder"
+ parent_folder: "foo/bar"
+ folder_type: "vm"
+ state: absent
+ register: deletion_result
+
+- debug:
+ msg: "{{ deletion_result }}"
+
+- name: Assert third lvl folder has already been deleted earlier
+ assert:
+ that:
+ - not deletion_result.changed
| Folders search collision in vcenter_folder module
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Hi, thanks for the great collection.
I have a collision problem when I try to delete a folder via the `vcenter_folder` module. The collision causes deletion of the wrong folder.
I think the problem is the recursive behavior in the get_folder function:
https://github.com/ansible-collections/community.vmware/blob/401f9e1e634dc55f48c0f72436b684d3bd21d65e/plugins/modules/vcenter_folder.py#L276
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vcenter_folder module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
4.5.0
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
1.15.0
```
latest version is still affected.
##### OS / ENVIRONMENT
Ubuntu 20.04
##### STEPS TO REPRODUCE
I have this folder structure:
`foo/bar/collision_folder/`
and these params for the `vcenter_folder` module:
```yaml
folder_name: "collision_folder"
parent_folder: "foo"
folder_type: "vm"
```
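Assembled into a full task (connection values are placeholders; the shape follows the regression test added in the patch above):
```yaml
- name: Delete folder 'collision_folder' directly under 'foo'
  community.vmware.vcenter_folder:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter }}"
    folder_name: "collision_folder"
    parent_folder: "foo"
    folder_type: "vm"
    state: absent
```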
##### EXPECTED RESULTS
The execution returns `ok` and the 3rd-level `collision_folder` is still present.
##### ACTUAL RESULTS
The execution returns `changed` and deletes the 3rd-level `collision_folder`.
| 2021-11-11T14:18:10 |
|
ansible-collections/community.vmware | 1,145 | ansible-collections__community.vmware-1145 | [
"739"
] | c9c5bae2ecb616bb3c4e83cdded5be65ff0461e8 | diff --git a/plugins/modules/vmware_cluster_drs.py b/plugins/modules/vmware_cluster_drs.py
--- a/plugins/modules/vmware_cluster_drs.py
+++ b/plugins/modules/vmware_cluster_drs.py
@@ -39,8 +39,7 @@
description:
- Whether to enable DRS.
type: bool
- default: false
- aliases: [ enable_drs ]
+ default: true
drs_enable_vm_behavior_overrides:
description:
- Whether DRS Behavior overrides for individual virtual machines are enabled.
@@ -153,9 +152,6 @@ def __init__(self, module):
else:
self.changed_advanced_settings = None
- self.module.warn("The default for enable will change from false to true in a future version to make the behavior more consistent with other modules."
- "Please make sure your playbooks don't rely on the default being false so you don't run into problems.")
-
def check_drs_config_diff(self):
"""
Check DRS configuration diff
@@ -218,7 +214,7 @@ def main():
cluster_name=dict(type='str', required=True),
datacenter=dict(type='str', required=True, aliases=['datacenter_name']),
# DRS
- enable=dict(type='bool', default=False, aliases=['enable_drs']),
+ enable=dict(type='bool', default=True),
drs_enable_vm_behavior_overrides=dict(type='bool', default=True),
drs_default_vm_behavior=dict(type='str',
choices=['fullyAutomated', 'manual', 'partiallyAutomated'],
diff --git a/plugins/modules/vmware_cluster_ha.py b/plugins/modules/vmware_cluster_ha.py
--- a/plugins/modules/vmware_cluster_ha.py
+++ b/plugins/modules/vmware_cluster_ha.py
@@ -40,8 +40,7 @@
description:
- Whether to enable HA.
type: bool
- default: false
- aliases: [ enable_ha ]
+ default: true
ha_host_monitoring:
description:
- Whether HA restarts virtual machines after a host fails.
@@ -284,9 +283,6 @@ def __init__(self, module):
else:
self.changed_advanced_settings = None
- self.module.warn("The default for enable will change from false to true in a future version to make the behavior more consistent with other modules."
- "Please make sure your playbooks don't rely on the default being false so you don't run into problems.")
-
def get_failover_hosts(self):
"""
Get failover hosts for failover_host_admission_control policy
@@ -459,7 +455,7 @@ def main():
cluster_name=dict(type='str', required=True),
datacenter=dict(type='str', required=True, aliases=['datacenter_name']),
# HA
- enable=dict(type='bool', default=False, aliases=['enable_ha']),
+ enable=dict(type='bool', default=True),
ha_host_monitoring=dict(type='str',
default='enabled',
choices=['enabled', 'disabled']),
diff --git a/plugins/modules/vmware_cluster_vsan.py b/plugins/modules/vmware_cluster_vsan.py
--- a/plugins/modules/vmware_cluster_vsan.py
+++ b/plugins/modules/vmware_cluster_vsan.py
@@ -41,8 +41,7 @@
description:
- Whether to enable vSAN.
type: bool
- default: false
- aliases: [ enable_vsan ]
+ default: true
vsan_auto_claim_storage:
description:
- Whether the VSAN service is configured to automatically claim local storage
@@ -170,9 +169,6 @@ def __init__(self, module):
vcMos = vsanapiutils.GetVsanVcMos(client_stub, context=ssl_context, version=apiVersion)
self.vsanClusterConfigSystem = vcMos['vsan-cluster-config-system']
- self.module.warn("The default for enable will change from false to true in a future version to make the behavior more consistent with other modules."
- "Please make sure your playbooks don't rely on the default being false so you don't run into problems.")
-
def check_vsan_config_diff(self):
"""
Check VSAN configuration diff
@@ -261,7 +257,7 @@ def main():
cluster_name=dict(type='str', required=True),
datacenter=dict(type='str', required=True, aliases=['datacenter_name']),
# VSAN
- enable=dict(type='bool', default=False, aliases=['enable_vsan']),
+ enable=dict(type='bool', default=True),
vsan_auto_claim_storage=dict(type='bool', default=False),
advanced_options=dict(type='dict', options=dict(
automatic_rebalance=dict(type='bool', required=False),
| diff --git a/tests/integration/targets/vmware_cluster_drs/tasks/main.yml b/tests/integration/targets/vmware_cluster_drs/tasks/main.yml
--- a/tests/integration/targets/vmware_cluster_drs/tasks/main.yml
+++ b/tests/integration/targets/vmware_cluster_drs/tasks/main.yml
@@ -42,7 +42,7 @@
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_drs
- enable_drs: false
+ enable: false
register: cluster_drs_result_0002
- name: Ensure DRS is disabled
diff --git a/tests/integration/targets/vmware_cluster_ha/tasks/main.yml b/tests/integration/targets/vmware_cluster_ha/tasks/main.yml
--- a/tests/integration/targets/vmware_cluster_ha/tasks/main.yml
+++ b/tests/integration/targets/vmware_cluster_ha/tasks/main.yml
@@ -40,7 +40,7 @@
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
- enable_ha: false
+ enable: false
register: disable_ha_result
- name: Ensure HA is disabled
diff --git a/tests/integration/targets/vmware_cluster_vsan/tasks/main.yml b/tests/integration/targets/vmware_cluster_vsan/tasks/main.yml
--- a/tests/integration/targets/vmware_cluster_vsan/tasks/main.yml
+++ b/tests/integration/targets/vmware_cluster_vsan/tasks/main.yml
@@ -44,7 +44,7 @@
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_vsan
- enable_vsan: true
+ enable: true
register: cluster_vsan_result_0002
- name: Ensure vSAN is not enabled again
| default for enable_drs in vmware_cluster_drs module
##### SUMMARY
The default choice for the enable_drs parameter in vmware_cluster_drs needs to change. By default, this parameter is No (False) - if the user does not explicitly set it to Yes (True), the consequences for the cluster can be destructive, because disabling DRS destroys the entire resource pool tree. For some environments (e.g. VCD, vRA) the consequences would be disastrous.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
[plugins/modules/vmware_cluster_drs.py](https://github.com/ansible-collections/community.vmware/blob/main/plugins/modules/vmware_cluster_drs.py)
| Files identified in the description:
* [`plugins/modules/vmware_cluster_drs.py`](https://github.com/['ansible-collections/amazon.aws', 'ansible-collections/community.aws', 'ansible-collections/community.vmware']/blob/main/plugins/modules/vmware_cluster_drs.py)
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
cc @Akasurde @Tomorrow9 @goneri @lparkes @pgbidkar @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
@ihumster I understand the problem but changing the default to `True` might affect people in the opposite way it affects you: They think that not specifying `enable_drs` means it's disabled, but suddenly their playbooks would enable DRS when not setting this parameter.
Maybe it would be better to _not_ have a default and make this a required parameter. This would force people to explicitly state what they want to achieve.
@Akasurde @goneri @sky-joker
What's your opinion? Change the default to `True` or don't have a default for `enable_drs` and make it a required parameter? Or keep it as is... after all, the default value of `False` is documented. Personally, I tend to make it a required parameter. Maybe the [The Zen of Python](https://www.python.org/dev/peps/pep-0020/) _is_ right and explicit is better than implicit.
`vmware_cluster_ha` and `vmware_cluster_vsan` may have a similar problem since enabling these also defaults to `False`.
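For illustration, here is roughly what the alternatives under discussion would look like in the module's argument spec; a minimal sketch, where the variable names are mine and only the `enable` entries mirror the module:
```python
# Option 1: status quo - documented default of False.
enable_default_false = dict(type='bool', default=False, aliases=['enable_drs'])

# Option 2: flip the default to True - a breaking change for playbooks
# that silently relied on omitting the parameter meaning "disabled".
enable_default_true = dict(type='bool', default=True)

# Option 3: no default at all - ansible-core rejects a task that omits
# the parameter, forcing an explicit choice either way.
enable_required = dict(type='bool', required=True)
```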
> Maybe it would be better to not have a default and make this a required parameter.
I see.
If it modifies the option, I think that way is better.
> > Maybe it would be better to not have a default and make this a required parameter.
>
> I see.
> If it modifies the option, I think that way is better.
@sky-joker Thanks for sharing your opinion on this. I'll prepare a PR to make the parameter required.
I understand it's a desirable improvement but it's also a breaking change. I suggest we start a (short) deprecation cycle: for now we show a warning if the key is not set, and we flip the switch for Ansible 5.0.0.
@ihumster As it turned out, this is considered a breaking change. So we won't do it right now. But I've changed my PR to announce that we will change the default in the future.
@ihumster
> I understand it's a desirable improvement but it's also a breaking change. I suggest we start a (short) deprecation cycle. For now we show up a warning if the key is not set and we slip the switch for Ansible 5.0.0.
A breaking change means we're talking about version 2 of this collection, which won't be part of Ansible 5 (see #1056).
So you'll have to wait for Ansible 6. Or, alternatively, use this collection directly in version 2 via ansible-galaxy.
cc @goneri | 2021-12-07T16:00:51 |
ansible-collections/community.vmware | 1,169 | ansible-collections__community.vmware-1169 | [
"25"
] | 801361b81d9952e17485dd7a340d5de99715ef62 | diff --git a/plugins/modules/vmware_guest_network.py b/plugins/modules/vmware_guest_network.py
--- a/plugins/modules/vmware_guest_network.py
+++ b/plugins/modules/vmware_guest_network.py
@@ -184,6 +184,7 @@
- Manual specified MAC address of the network adapter when creating, or reconfiguring.
- If not specified when creating new network adapter, mac address will be generated automatically.
- When reconfigure MAC address, VM should be in powered off state.
+ - There are restrictions on the MAC addresses you can set. Consult the documentation of your vSphere version as to allowed MAC addresses.
connected:
type: bool
description:
@@ -330,8 +331,9 @@
pass
import copy
+from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule
-from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec, wait_for_task
+from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, TaskError, vmware_argument_spec, wait_for_task
class PyVmomiHelper(PyVmomi):
@@ -792,6 +794,8 @@ def _nic_present(self, network_params=None):
wait_for_task(task)
except (vim.fault.InvalidDeviceSpec, vim.fault.RestrictedVersion) as e:
self.module.fail_json(msg='failed to reconfigure guest', detail=e.msg)
+ except TaskError as task_e:
+ self.module.fail_json(msg=to_native(task_e))
if task.info.state == 'error':
self.module.fail_json(msg='failed to reconfigure guest', detail=task.info.error.msg)
| vmware_guest_network: fails with a backtrace if static MAC starts with '00:50:56'
##### SUMMARY
<!--- Explain the problem briefly below -->
Hi @pgbidkar and @Tomorrow9.
If I use the following task:
```yaml
- name: add new network adapters to virtual machine
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- name: "VM Network"
state: new
device_type: e1000e
manual_mac: "00:50:56:58:59:60"
- name: "VM Network"
state: new
device_type: vmxnet3
manual_mac: "00:50:56:58:59:61"
register: add_netadapter
```
I get the following error: [add_nic.txt](https://github.com/ansible/ansible/files/3709181/add_nic.txt)
If I remove the `manual_mac` keys, everything works fine.
I create the Fedora 30 VM with:
```yaml
- name: Create VMs
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ dc1 }}"
validate_certs: no
folder: '/DC0/vm/F0'
name: test_vm1
state: poweredon
guest_id: centos7_64Guest
disk:
- size_gb: 1
type: thin
datastore: '{{ ds2 }}'
hardware:
version: latest
memory_mb: 1024
num_cpus: 1
scsi: paravirtual
cdrom:
type: iso
iso_path: "[{{ ds1 }}] fedora.iso"
networks:
- name: VM Network
```
I've tried several hardware.version/guest_id combinations, without any difference. My lab is set up as described here: https://docs.ansible.com/ansible/devel/dev_guide/platforms/vmware_guidelines.html
My vcenter set-up:
```shell
$ govc ls '/**/**/**'
/DC0/vm/F0/test_vm1
/DC0/network/VM Network
/DC0/host/DC0_C0/Resources
/DC0/host/DC0_C0/esxi1.test
/DC0/host/DC0_C0/esxi2.test
/DC0/datastore/LocalDS_0
/DC0/datastore/LocalDS_1
/DC0/datastore/datastore1 (1)
/DC0/datastore/datastore1
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
@goneri said - <br> It's not a problem with the module. I get exactly the same error if I do the same operation manually using the vCenter web UI.
@goneri said - <br> Ah! The problem comes from the 00:50:56 prefix (VMware). If I use another one, everything just works as expected!
@goneri said - <br> https://github.com/ansible/ansible/issues/51221 seems to be the same problem. The MAC starts with 00:50:56 too.
Migrated from https://github.com/ansible/ansible/issues/63302 to this repo
cc @goneri
@Akasurde, yeah, I can reproduce the issue using the "vmware_guest_network" module with the MAC address prefix "00:50:56", but I cannot reproduce it manually in the web UI on vSphere 6.7U3, even when setting the same MAC address as the existing network adapter. I'm not quite sure if the VMware Organizationally Unique Identifier (OUI) 00:50:56 can be set manually, but I'll confirm that later. Thanks.
cc @lparkes @warthog9
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: notify --->
For me this behavior is a bit different. I am using the `vmware_guest` Ansible module to deploy multiple VMs from a template, and in the playbook I have kept the random MAC calculation logic. Out of two VMs, one is working fine but the other one is failing with the following error.

My ansible role is here:
```
---
- name: Deploying vm from '{{ win_temp }}'
vmware_guest:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter: '{{ vsphere_datacenter }}'
cluster: '{{ vsphere_cluster }}'
datastore: '{{ vsphere_datastore }}'
name: '{{ inventory_hostname }}'
template: '{{ win_temp }}'
folder: '{{ folder }}'
validate_certs: 'no'
networks:
- name: '{{ Mgmt_network }}'
ip: "{{ Mgmt_network_ipv4 }}"
netmask: '{{ Mgmt_network_nmv4 }}'
mac: "{{ '00:50:56' | community.general.random_mac(seed=inventory_hostname) }}"
gateway: '{{ Mgmt_network_gwv4 }}'
dns_servers:
- '{{ dns_server1 }}'
- '{{ dns_server2 }}'
state: poweredon
wait_for_ip_address: yes
customization:
hostname: "{{ vsphere_vm_hostname }}"
dns_suffix: '{{ ad_domain }}'
domainadmin: '{{ ad_domain_admin }}'
domainadminpassword: '{{ ad_domain_password }}'
joindomain: '{{ ad_domain }}'
timezone: '{{ timezone }}'
wait_for_customization: yes
delegate_to: localhost
```
I am not sure who is using that MAC address as mentioned in the error message.
As per VMware documentation, it's reserved for some ESXi functions. Refer to https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.troubleshooting.doc/GUID-7F723748-E7B8-48B9-A773-3822C514684B.html
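For what it's worth, the vSphere documentation (as far as I know) restricts manually assigned MAC addresses with the VMware OUI to the range 00:50:56:00:00:00 through 00:50:56:3F:FF:FF, i.e. the fourth octet must not exceed 0x3F. That would explain both reports above: 00:50:56:58:59:60 has a fourth octet of 0x58, and `random_mac('00:50:56')` randomizes the fourth octet over the full 0x00-0xFF range, so only some generated addresses land inside the allowed range - which would be why one VM worked and the other failed. A minimal Python sketch for generating an address inside the documented range (the function name is mine):
```python
import random

def random_vmware_static_mac(seed=None):
    """Random MAC inside 00:50:56:00:00:00 - 00:50:56:3F:FF:FF, the range
    VMware documents as statically assignable with its OUI."""
    rng = random.Random(seed)
    octets = [0x00, 0x50, 0x56,
              rng.randint(0x00, 0x3F),  # fourth octet capped at 0x3F
              rng.randint(0x00, 0xFF),
              rng.randint(0x00, 0xFF)]
    return ':'.join('%02x' % o for o in octets)

print(random_vmware_static_mac(seed='my-vm-01'))  # deterministic per seed
```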
|
ansible-collections/community.vmware | 1,243 | ansible-collections__community.vmware-1243 | [
"1070"
] | e31483fcd11e1a05bfc12cb1913bf2d26483903b | diff --git a/plugins/modules/vmware_dvs_host.py b/plugins/modules/vmware_dvs_host.py
--- a/plugins/modules/vmware_dvs_host.py
+++ b/plugins/modules/vmware_dvs_host.py
@@ -300,7 +300,7 @@ def set_desired_state(self):
switch_uplink_ports[name].append(port.key)
lag_uplinks.append(port.key)
- for port in ports:
+ for port in sorted(ports, key=lambda port: port.config.name):
if port.key in self.uplink_portgroup.portKeys and port.key not in lag_uplinks:
switch_uplink_ports['non_lag'].append(port.key)
| community.vmware.vmware_dvs_host adding pNIC inverted (starting at Uplink 4)
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using community.vmware.vmware_dvs_host to add a host and its uplinks to a Distributed vSwitch (4 uplinks), the pNICs are added inverted: Uplink 4 for vmnic0, Uplink 3 for vmnic1, and so on.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
community.vmware.vmware_dvs_host
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.10.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 13 2020, 07:46:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```
# /root/.ansible/collections/ansible_collections
Collection Version
--------------------- -------
ansible.netcommon 2.3.0
ansible.posix 1.2.0
ansible.utils 2.3.1
community.docker 1.3.0
community.general 1.3.1
community.kubernetes 1.1.1
community.vmware 1.15.0
dellemc.openmanage 2.1.4
f5networks.f5_modules 1.11.0
google.cloud 1.0.1
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.9 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Add Host to dVS "{{outside.esxi_hostname}}"
community.vmware.vmware_dvs_host:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
esxi_hostname: "{{ outside.esxi_hostname }}"
validate_certs: false
switch_name: "{{ vcenter_dvs_sw }}"
vmnics:
- "{{migrated_adapter}}"
state: present
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
When adding pNICs to the DVS, I would expect the assignment to start with Uplink 1 and count upward, not inverted.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The task runs without error, but the pNIC mapping is inverted (vmnic0 links to Uplink 4, vmnic1 links to Uplink 3, ...).
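For context on the one-line fix in the patch above: the port enumeration does not guarantee any particular order, so the uplink port keys can come back reversed and get consumed in that order, which produces exactly this vmnic0-to-Uplink 4 mapping. Sorting by the port's configured name makes the assignment deterministic. A self-contained sketch of the before/after behavior (SimpleNamespace objects stand in for the pyVmomi port objects):
```python
from types import SimpleNamespace

def port(key, name):
    return SimpleNamespace(key=key, config=SimpleNamespace(name=name))

# Suppose the switch hands the uplink ports back in reverse order:
ports = [port('k4', 'Uplink 4'), port('k3', 'Uplink 3'),
         port('k2', 'Uplink 2'), port('k1', 'Uplink 1')]

print([p.key for p in ports])                                       # ['k4', 'k3', 'k2', 'k1']
print([p.key for p in sorted(ports, key=lambda p: p.config.name)])  # ['k1', 'k2', 'k3', 'k4']
```
Note that this is a lexicographic sort, so 'Uplink 10' would sort before 'Uplink 2'; with the default of four uplinks that corner case never comes up.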
| Files identified in the description:
None
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner ---> | 2022-03-06T05:11:49 |
|
ansible-collections/community.vmware | 1,276 | ansible-collections__community.vmware-1276 | [
"1253"
] | 30de1f007ae3d00c8217129eddea66b5a4a7c004 | diff --git a/plugins/modules/vmware_guest_powerstate.py b/plugins/modules/vmware_guest_powerstate.py
--- a/plugins/modules/vmware_guest_powerstate.py
+++ b/plugins/modules/vmware_guest_powerstate.py
@@ -186,7 +186,7 @@
- name: Automatically answer if a question locked a virtual machine
block:
- name: Power on a virtual machine without the answer param
- vmware_guest_powerstate:
+ community.vmware.vmware_guest_powerstate:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
@@ -196,7 +196,7 @@
state: powered-on
rescue:
- name: Power on a virtual machine with the answer param
- vmware_guest_powerstate:
+ community.vmware.vmware_guest_powerstate:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
| vmware_guest_powerstate - Moved/Copied answers not working with ESXi standalone
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Hi.
It looks like answering the "msg.uuid.altered" question does not work, at least for my standalone ESXi host, version 6.7.0.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest_powerstate
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
vars_files:
- vmhost01_vars.yaml
tasks:
- name: Automatically answer if a question locked a virtual machine
block:
- name: Power on a virtual machine without the answer param
vmware_guest_powerstate:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
validate_certs: false
folder: "/vm"
name: "{{ vm_name }}"
state: powered-on
rescue:
- name: Power on a virtual machine with the answer param
vmware_guest_powerstate:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
validate_certs: false
folder: "/vm"
name: "{{ vm_name }}"
answer:
- question: "msg.uuid.altered"
response: "button.uuid.movedTheVM"
state: powered-on
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
I tried "1", "2", "I moved It", "I Copied It" as responses, but in any case, it waits for the default answer timeout vui.timeoutUS.question of 4 minutes to continue ( https://kb.vmware.com/s/article/2113542 ).
After 4 minutes the VM gets powered on, but if I am logged on to the web client, the dialog stays there waiting for an answer. Clicking Cancel makes the dialog go away and the VM goes into a working state; no more dialogs appear.
The VM log shows that "I copied It" is always the default answer (reply=2), no matter what "response" I declare in the playbook.
```
2022-03-14T13:58:42.287Z| vmx| I125: Msg_Question:
2022-03-14T13:58:42.287Z| vmx| I125: [msg.uuid.altered] This virtual machine might have been moved or copied.
2022-03-14T13:58:42.287Z| vmx| I125+ In order to configure certain management and networking features, VMware ESX needs to know if this virtual machine was moved or copied.
2022-03-14T13:58:42.287Z| vmx| I125+
2022-03-14T13:58:42.287Z| vmx| I125+ If you don't know, answer "I Co_pied It".
2022-03-14T13:58:42.287Z| vmx| I125+
2022-03-14T13:58:42.287Z| vmx| I125: ----------------------------------------
2022-03-14T14:02:42.490Z| vmx| I125: Timing out dialog 18033681
2022-03-14T14:02:42.490Z| vmx| I125: MsgQuestion: msg.uuid.altered reply=2
```
I wonder if I'm doing something wrong.
my ESXI host version information:
Client version: 1.33.7
Client build number: 15803439
ESXi version: 6.7.0
ESXi build number: 17167734
| Files identified in the description:
None
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
@msilveirabr: Greetings! Thanks for taking the time to open this issue. In order for the community to handle your issue effectively, we need a bit more information.
Here are the items we could not find in your description:
- component name
Please set the description of this issue with this template:
https://raw.githubusercontent.com/ansible/ansible/devel/.github/ISSUE_TEMPLATE/bug_report.md
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: issue_missing_data --->
@sky-joker You've implemented this in #821. Do you have an idea why it doesn't work on a standalone ESXi host?
Is it possibly a problem with version 6.7? I think you've tested on vSphere 7 only.
Quick update:
The issue is that I simply copied and pasted the content from https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_guest_powerstate_module.html :
```yaml
- name: Automatically answer if a question locked a virtual machine
block:
- name: Power on a virtual machine without the answer param
vmware_guest_powerstate:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
validate_certs: false
folder: "{{ f1 }}"
name: "{{ vm_name }}"
state: powered-on
rescue:
- name: Power on a virtual machine with the answer param
vmware_guest_powerstate:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
validate_certs: false
folder: "{{ f1 }}"
name: "{{ vm_name }}"
answer:
- question: "msg.uuid.altered"
response: "button.uuid.copiedTheVM"
state: powered-on
```
I blindly used the provided example, but **vmware_guest_powerstate** is different from **community.vmware.vmware_guest_powerstate**.
The example should be:
```yaml
- name: Automatically answer if a question locked a virtual machine
block:
- name: Power on a virtual machine without the answer param
community.vmware.vmware_guest_powerstate:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
validate_certs: false
folder: "{{ f1 }}"
name: "{{ vm_name }}"
state: powered-on
rescue:
- name: Power on a virtual machine with the answer param
community.vmware.vmware_guest_powerstate:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
validate_certs: false
folder: "{{ f1 }}"
name: "{{ vm_name }}"
answer:
- question: "msg.uuid.altered"
response: "button.uuid.copiedTheVM"
state: powered-on
```
It works fine; both copiedTheVM and movedTheVM show up in vmware.log.
Just wondering whether, in the web UI (when logged onto the ESXi host), the dialog should disappear once Ansible answers the question. When I'm logged off from the web UI, the question doesn't show up, as expected.
So, the only issue here is that the documentation example should be updated.
Should I close this issue and open another, or am I supposed to wait for a dev to manage this ticket (update the documentation and close this ticket)?
Now that's a bit weird. `vmware_guest_powerstate` is just an alias to `community.vmware.vmware_guest_powerstate`, so it should be exactly the same code that's executed. I don't understand the difference.
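For reference, the `answer` parameter ultimately rides on the vSphere question/answer API; a hedged pyVmomi sketch of the mechanism (it assumes an already established session where `vm` is a `vim.VirtualMachine`, and the choice key is taken from the example above):
```python
# `vm` is assumed to come from an existing pyVmomi connection.
question = vm.runtime.question        # None when no question is pending
if question is not None:
    for choice in question.choice.choiceInfo:
        # choice.key is e.g. 'button.uuid.movedTheVM' or 'button.uuid.copiedTheVM'
        if choice.key == 'button.uuid.movedTheVM':
            vm.AnswerVM(questionId=question.id, answerChoice=choice.key)
            break
```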
Anyway, since we prefer to use the fully qualified collection name in examples I'll open a PR to fix the documentation. Don't close the issue, it should be closed automatically when the PR is merged. | 2022-03-31T15:45:19 |
|
ansible-collections/community.vmware | 1,280 | ansible-collections__community.vmware-1280 | [
"1238"
] | fd46931c7f38effa7d0a28844a1e94ea395374ff | diff --git a/plugins/modules/vmware_guest_powerstate.py b/plugins/modules/vmware_guest_powerstate.py
--- a/plugins/modules/vmware_guest_powerstate.py
+++ b/plugins/modules/vmware_guest_powerstate.py
@@ -261,6 +261,9 @@ def main():
result = dict(changed=False,)
+ if module.params['folder']:
+ module.params['folder'] = module.params['folder'].rstrip('/')
+
pyv = PyVmomi(module)
# Check if the VM exists before continuing
| diff --git a/tests/integration/targets/vmware_guest_powerstate/tasks/main.yml b/tests/integration/targets/vmware_guest_powerstate/tasks/main.yml
--- a/tests/integration/targets/vmware_guest_powerstate/tasks/main.yml
+++ b/tests/integration/targets/vmware_guest_powerstate/tasks/main.yml
@@ -40,7 +40,7 @@
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- folder: '{{ f0 }}'
+ folder: '{{ f0 }}/' # Test with a trailing / because of issue 1238
state: powered-off
register: poweroff_d1_c1_f0
| community.vmware.vmware_guest_powerstate not finding VM by name
##### SUMMARY
When trying to control the power state of a VM by name, the module is unable to find the VM, despite the fact that the exact same parameters find the VM in other modules (such as vmware_guest_snapshot).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest_powerstate
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.12 (default, Sep 21 2021, 00:10:52) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```
# /root/.ansible/collections/ansible_collections
Collection Version
---------------- -------
community.vmware 2.1.0
[root@jumpserver snaprevert_test]# ansible-galaxy collection list community.vmware
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
[root@jumpserver snaprevert_test]# ansible-config dump --only-changed
[root@jumpserver snaprevert_test]#
```
##### OS / ENVIRONMENT
```
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
```
##### STEPS TO REPRODUCE
Running the playbook below, you'll find that the vmware_guest_snapshot task finds the VM and performs its action, while the vmware_guest_powerstate task fails with "Unable to set power state for non-existing virtual machine" despite all parameters being identical.
```yaml
---
- name: Test of snapshot revert
hosts: localhost
gather_facts: no
vars:
vcenter_hostname: 1.2.3.4
vcenter_username: [email protected]
vcenter_password: FOO
datacenter_name: BAR
tasks:
- name: Revert to initial snapshot
community.vmware.vmware_guest_snapshot:
validate_certs: no
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
folder: "/{{ datacenter_name }}/vm/Jumpserver_VMs/"
name: "jump_7216"
state: revert
snapshot_name: "Initial_Setup"
delegate_to: localhost
- name: Power on machine
community.vmware.vmware_guest_powerstate:
validate_certs: no
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
folder: "/{{ datacenter_name }}/vm/Jumpserver_VMs/"
name: "jump_7216"
state: powered-on
delegate_to: localhost
```
##### EXPECTED RESULTS
I would expect vmware_guest_powerstate to find the VM just like vmware_guest_snapshot does.
##### ACTUAL RESULTS
Task fails with "non-existing virtual machine" error despite VM existing.
<!--- Paste verbatim command output between quotes -->
```
PLAY [Test of snapshot revert] *************************************************************************************************
TASK [Revert to a snapshot] ***********************************************************************************************************************************************************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [Power on machine] ****************************************************************************************************************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable to set power state for non-existing virtual machine : 'jump_7216'"}
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
| @danthem - I tried to power off and power on a VM using vmware_guest_powerstate and I did not notice any issue.
Can you please add some pause/delay after the task "Revert to initial snapshot"?
Hi @tejaswi-bachu,
Thank you for looking into this. The 'Revert to Initial snapshot' task is irrelevant; I face this issue regardless of whether that task is present or not. The problem I'm experiencing is that '_name: "whatever"_' does not seem to be able to match a VM. The reason I included the 'Revert to Initial snapshot' task was to show that the exact same name pattern works in that task but not when using vmware_guest_powerstate.
Here I am running a playbook with only the 'Power on machine' task, with -vvvv flags:
```
[root@jumpserver-old snaprevert_test]# ansible-playbook -vvvv issuetest.yml --vault-password-file=~/.vault_secret
ansible-playbook [core 2.12.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-playbook
python version = 3.8.12 (default, Sep 21 2021, 00:10:52) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading collection community.vmware from /root/.ansible/collections/ansible_collections/community/vmware
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.8/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: issuetest.yml ***************************************************************************************************************************************************************************************************************************************************************************************************************************************************
Positional arguments: issuetest.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
vault_password_files: ('/root/.vault_secret',)
forks: 5
1 plays in issuetest.yml
Trying secret FileVaultSecret(filename='/root/.vault_secret') for vault_id=default
Read vars_file 'vars/secrets.enc'
Trying secret FileVaultSecret(filename='/root/.vault_secret') for vault_id=default
Read vars_file 'vars/secrets.enc'
Trying secret FileVaultSecret(filename='/root/.vault_secret') for vault_id=default
Read vars_file 'vars/secrets.enc'
PLAY [get folder] *********************************************************************************************************************************************************************************************************************************************************************************************************************************************************
Trying secret FileVaultSecret(filename='/root/.vault_secret') for vault_id=default
Read vars_file 'vars/secrets.enc'
META: ran handlers
Trying secret FileVaultSecret(filename='/root/.vault_secret') for vault_id=default
Read vars_file 'vars/secrets.enc'
Trying secret FileVaultSecret(filename='/root/.vault_secret') for vault_id=default
Read vars_file 'vars/secrets.enc'
TASK [Power on machine] ***************************************************************************************************************************************************************************************************************************************************************************************************************************************************
task path: /root/ansible/snaprevert_test/issuetest.yml:10
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1649178101.6537755-351047-37329199885717 `" && echo ansible-tmp-1649178101.6537755-351047-37329199885717="` echo /root/.ansible/tmp/ansible-tmp-1649178101.6537755-351047-37329199885717 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_guest_powerstate.py
<localhost> PUT /root/.ansible/tmp/ansible-local-351042m2rcs4p4/tmpeeldimim TO /root/.ansible/tmp/ansible-tmp-1649178101.6537755-351047-37329199885717/AnsiballZ_vmware_guest_powerstate.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1649178101.6537755-351047-37329199885717/ /root/.ansible/tmp/ansible-tmp-1649178101.6537755-351047-37329199885717/AnsiballZ_vmware_guest_powerstate.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3.8 /root/.ansible/tmp/ansible-tmp-1649178101.6537755-351047-37329199885717/AnsiballZ_vmware_guest_powerstate.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1649178101.6537755-351047-37329199885717/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"answer": null,
"datacenter": "FOO",
"folder": "/BAR/vm/Jumpserver_VMs/",
"force": false,
"hostname": "10.60.34.206",
"moid": null,
"name": "jump_7216",
"name_match": "first",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"proxy_host": null,
"proxy_port": null,
"schedule_task_description": null,
"schedule_task_enabled": true,
"schedule_task_name": null,
"scheduled_at": null,
"state": "powered-on",
"state_change_timeout": 0,
"use_instance_uuid": false,
"username": "[email protected]",
"uuid": null,
"validate_certs": false
}
},
"msg": "Unable to set power state for non-existing virtual machine : 'jump_7216'"
}
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
Note how it can't find my "jump_7216" machine, claiming it does not exist. But I am sure this machine exists in the path specified, and other modules such as "community.vmware.vmware_guest_snapshot" find the machine with the exact same parameters.
As a workaround I've reverted to using UUIDs with "vmware_guest_powerstate"; this works fine, but it would be better for me to use 'name' for matching possibly hundreds of VMs easily. I would expect that if 'name' works for other modules, it should also work for vmware_guest_powerstate.
I'm not 100% sure what's happening here, but I might have an idea. Could you please try `folder: "/{{ datacenter_name }}/vm/Jumpserver_VMs"`, that is without a trailing slash? This would help me a lot.
BTW: If this works, just let us know but **don't close this issue**. I would consider this a work-around, but still a bug. Or at least a feature request that `vmware_guest_powerstate` and `vmware_guest_snapshot` should interpret `folder` the same way.
@tejaswi-bachu FYI: I think the problem is that `vmware_guest_snapshot` removes the trailing slash:
https://github.com/ansible-collections/community.vmware/blob/fd46931c7f38effa7d0a28844a1e94ea395374ff/plugins/modules/vmware_guest_snapshot.py#L437-L440
But `vmware_guest_powerstate` doesn't, as far as I can see. Well, let's wait and see what @danthem says. If I'm right and my suggestion helps, I know what to do.
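A one-line normalization on the powerstate side would make both spellings equivalent; a minimal illustrative sketch, mirroring what `vmware_guest_snapshot` already does:
```python
def normalize_folder(folder):
    """Treat '/BAR/vm/Jumpserver_VMs/' and '/BAR/vm/Jumpserver_VMs' alike."""
    return folder.rstrip('/') if folder else folder

assert normalize_folder('/BAR/vm/Jumpserver_VMs/') == '/BAR/vm/Jumpserver_VMs'
```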
> As a workaround I've reverted to using UUIDs with "vmware_guest_powerstate", this works fine but it would be better for me to use 'name' for matching possibly hundreds of VMs easily.
Personally, I _hate_ using UUIDs. It's not infrastructure as code at all. The idea of infrastructure as code is that you can crash everything, and deploy it again. But the VMs would have a new UUID, so you'll run into problems. Same thing with MOIDs. I think we shouldn't use them, _anywhere_. They are internal IDs and shouldn't be used when doing IAC imho.
Thank you @mariolenz, when removing the trailing slash the machine is found correctly:
```
[root@jumpserver-old snaprevert_test]# ansible-playbook issuetest.yml --vault-password-file=~/.vault_secret
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [get folder] ****************************************************************************************************************
TASK [Power on machine] **********************************************************************************************************
ok: [localhost]
PLAY RECAP ***********************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[root@jumpserver-old snaprevert_test]#
```
So I agree, it does look like it's working in vmware_guest_snapshot due to it removing the trailing slash. Interesting, and an easy fix :-)
Thanks for testing @danthem, this helps a lot! | 2022-04-06T15:09:14 |
ansible-collections/community.vmware | 1,298 | ansible-collections__community.vmware-1298 | [
"203"
] | 40caefd4c93615a527c8332bbe63890ff5a4cf25 | diff --git a/plugins/modules/vmware_vswitch.py b/plugins/modules/vmware_vswitch.py
--- a/plugins/modules/vmware_vswitch.py
+++ b/plugins/modules/vmware_vswitch.py
@@ -61,6 +61,81 @@
- Manage the vSwitch using this ESXi host system.
aliases: [ 'host' ]
type: str
+ security:
+ description:
+ - Network policy specifies layer 2 security settings for a
+ portgroup such as promiscuous mode, where guest adapter listens
+ to all the packets, MAC address changes and forged transmits.
+ - Dict which configures the different security values for portgroup.
+ version_added: '2.4.0'
+ suboptions:
+ promiscuous_mode:
+ type: bool
+ description: Indicates whether promiscuous mode is allowed.
+ forged_transmits:
+ type: bool
+ description: Indicates whether forged transmits are allowed.
+ mac_changes:
+ type: bool
+ description: Indicates whether mac changes are allowed.
+ required: False
+ aliases: [ 'security_policy', 'network_policy' ]
+ type: dict
+ teaming:
+ description:
+ - Dictionary which configures the different teaming values for portgroup.
+ version_added: '2.4.0'
+ suboptions:
+ load_balancing:
+ type: str
+ description:
+ - Network adapter teaming policy.
+ choices: [ loadbalance_ip, loadbalance_srcmac, loadbalance_srcid, failover_explicit, None ]
+ aliases: [ 'load_balance_policy' ]
+ network_failure_detection:
+ type: str
+ description: Network failure detection.
+ choices: [ link_status_only, beacon_probing ]
+ notify_switches:
+ type: bool
+ description: Indicate whether or not to notify the physical switch if a link fails.
+ failback:
+ type: bool
+ description: Indicate whether or not to use a failback when restoring links.
+ active_adapters:
+ type: list
+ description:
+ - List of active adapters used for load balancing.
+ - All vmnics are used as active adapters if C(active_adapters) and C(standby_adapters) are not defined.
+ elements: str
+ standby_adapters:
+ type: list
+ description:
+ - List of standby adapters used for failover.
+ - All vmnics are used as active adapters if C(active_adapters) and C(standby_adapters) are not defined.
+ elements: str
+ required: False
+ aliases: [ 'teaming_policy' ]
+ type: dict
+ traffic_shaping:
+ description:
+ - Dictionary which configures traffic shaping for the switch.
+ version_added: '2.4.0'
+ suboptions:
+ enabled:
+ type: bool
+ description: Status of Traffic Shaping Policy.
+ average_bandwidth:
+ type: int
+ description: Average bandwidth (kbit/s).
+ peak_bandwidth:
+ type: int
+ description: Peak bandwidth (kbit/s).
+ burst_size:
+ type: int
+ description: Burst size (KB).
+ required: False
+ type: dict
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -109,6 +184,53 @@
mtu: 9000
delegate_to: localhost
+- name: Add a VMware vSwitch to a specific host system with Promiscuous Mode Enabled
+ community.vmware.vmware_vswitch:
+ hostname: '{{ esxi_hostname }}'
+ username: '{{ esxi_username }}'
+ password: '{{ esxi_password }}'
+ esxi_hostname: DC0_H0
+ switch_name: vswitch_001
+ nic_name: vmnic0
+ mtu: 9000
+ security:
+ promiscuous_mode: True
+ delegate_to: localhost
+
+- name: Add a VMware vSwitch to a specific host system with active/standby teaming
+ community.vmware.vmware_vswitch:
+ hostname: '{{ esxi_hostname }}'
+ username: '{{ esxi_username }}'
+ password: '{{ esxi_password }}'
+ esxi_hostname: DC0_H0
+ switch_name: vswitch_001
+ nic_name:
+ - vmnic0
+ - vmnic1
+ teaming:
+ active_adapters:
+ - vmnic0
+ standby_adapters:
+ - vmnic1
+ delegate_to: localhost
+
+- name: Add a VMware vSwitch to a specific host system with traffic shaping
+ community.vmware.vmware_vswitch:
+ hostname: '{{ esxi_hostname }}'
+ username: '{{ esxi_username }}'
+ password: '{{ esxi_password }}'
+ esxi_hostname: DC0_H0
+ switch_name: vswitch_001
+ nic_name:
+ - vmnic0
+ - vmnic1
+ traffic_shaping:
+ enabled: True
+ average_bandwidth: 100000
+ peak_bandwidth: 100000
+ burst_size: 102400
+ delegate_to: localhost
+
- name: Delete a VMware vSwitch in a specific host system
community.vmware.vmware_vswitch:
hostname: '{{ esxi_hostname }}'
@@ -157,12 +279,15 @@ def __init__(self, module):
self.module.fail_json(msg="Failed to get details of ESXi server."
" Please specify esxi_hostname.")
+ self.network_mgr = self.host_system.configManager.networkSystem
+ if not self.network_mgr:
+ self.module.fail_json(msg="Failed to find network manager for ESXi system.")
+
if self.params.get('state') == 'present':
# Gather information about all vSwitches and Physical NICs
- network_manager = self.host_system.configManager.networkSystem
- available_pnic = [pnic.device for pnic in network_manager.networkInfo.pnic]
+ available_pnic = [pnic.device for pnic in self.network_mgr.networkInfo.pnic]
self.available_vswitches = dict()
- for available_vswitch in network_manager.networkInfo.vswitch:
+ for available_vswitch in self.network_mgr.networkInfo.vswitch:
used_pnic = []
for pnic in available_vswitch.pnic:
# vSwitch contains all PNICs as string in format of 'key-vim.host.PhysicalNic-vmnic0'
@@ -223,40 +348,62 @@ def state_create_vswitch(self):
vss_spec.mtu = self.mtu
if self.nics:
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=self.nics)
- try:
- network_mgr = self.host_system.configManager.networkSystem
- if network_mgr:
- network_mgr.AddVirtualSwitch(vswitchName=self.switch,
- spec=vss_spec)
- results['changed'] = True
+
+ if self.module.check_mode:
+ results['msg'] = "vSwitch '%s' would be created" % self.switch
+ else:
+ try:
+ self.network_mgr.AddVirtualSwitch(vswitchName=self.switch,
+ spec=vss_spec)
+
+ changed = False
+ spec = self.find_vswitch_by_name(self.host_system, self.switch).spec
+
+ # Check Security Policy
+ if self.update_security_policy(spec, results):
+ changed = True
+
+ # Check Teaming Policy
+ if self.update_teaming_policy(spec, results):
+ changed = True
+
+ # Check Traffic Shaping Policy
+ if self.update_traffic_shaping_policy(spec, results):
+ changed = True
+
+ if changed:
+ self.network_mgr.UpdateVirtualSwitch(vswitchName=self.switch,
+ spec=spec)
+
results['result'] = "vSwitch '%s' is created successfully" % self.switch
- else:
- self.module.fail_json(msg="Failed to find network manager for ESXi system")
- except vim.fault.AlreadyExists as already_exists:
- results['result'] = "vSwitch with name %s already exists: %s" % (self.switch,
- to_native(already_exists.msg))
- except vim.fault.ResourceInUse as resource_used:
- self.module.fail_json(msg="Failed to add vSwitch '%s' as physical network adapter"
- " being bridged is already in use: %s" % (self.switch,
- to_native(resource_used.msg)))
- except vim.fault.HostConfigFault as host_config_fault:
- self.module.fail_json(msg="Failed to add vSwitch '%s' due to host"
- " configuration fault : %s" % (self.switch,
- to_native(host_config_fault.msg)))
- except vmodl.fault.InvalidArgument as invalid_argument:
- self.module.fail_json(msg="Failed to add vSwitch '%s', this can be due to either of following :"
- " 1. vSwitch Name exceeds the maximum allowed length,"
- " 2. Number of ports specified falls out of valid range,"
- " 3. Network policy is invalid,"
- " 4. Beacon configuration is invalid : %s" % (self.switch,
- to_native(invalid_argument.msg)))
- except vmodl.fault.SystemError as system_error:
- self.module.fail_json(msg="Failed to add vSwitch '%s' due to : %s" % (self.switch,
- to_native(system_error.msg)))
- except Exception as generic_exc:
- self.module.fail_json(msg="Failed to add vSwitch '%s' due to"
- " generic exception : %s" % (self.switch,
- to_native(generic_exc)))
+ except vim.fault.AlreadyExists as already_exists:
+ results['result'] = "vSwitch with name %s already exists: %s" % (self.switch,
+ to_native(already_exists.msg))
+ except vim.fault.ResourceInUse as resource_used:
+ self.module.fail_json(msg="Failed to add vSwitch '%s' as physical network adapter"
+ " being bridged is already in use: %s" % (self.switch,
+ to_native(resource_used.msg)))
+ except vim.fault.HostConfigFault as host_config_fault:
+ self.module.fail_json(msg="Failed to add vSwitch '%s' due to host"
+ " configuration fault : %s" % (self.switch,
+ to_native(host_config_fault.msg)))
+ except vmodl.fault.InvalidArgument as invalid_argument:
+ self.module.fail_json(msg="Failed to add vSwitch '%s', this can be due to either of following :"
+ " 1. vSwitch Name exceeds the maximum allowed length,"
+ " 2. Number of ports specified falls out of valid range,"
+ " 3. Network policy is invalid,"
+ " 4. Beacon configuration is invalid : %s" % (self.switch,
+ to_native(invalid_argument.msg)))
+ except vmodl.fault.SystemError as system_error:
+ self.module.fail_json(msg="Failed to add vSwitch '%s' due to : %s" % (self.switch,
+ to_native(system_error.msg)))
+ except Exception as generic_exc:
+ self.module.fail_json(msg="Failed to add vSwitch '%s' due to"
+ " generic exception : %s" % (self.switch,
+ to_native(generic_exc)))
+
+ results['changed'] = True
+
self.module.exit_json(**results)
def state_exit_unchanged(self):
@@ -272,26 +419,30 @@ def state_destroy_vswitch(self):
"""
results = dict(changed=False, result="")
- try:
- self.host_system.configManager.networkSystem.RemoveVirtualSwitch(self.vss.name)
- results['changed'] = True
- results['result'] = "vSwitch '%s' removed successfully." % self.vss.name
- except vim.fault.NotFound as vswitch_not_found:
- results['result'] = "vSwitch '%s' not available. %s" % (self.switch,
- to_native(vswitch_not_found.msg))
- except vim.fault.ResourceInUse as vswitch_in_use:
- self.module.fail_json(msg="Failed to remove vSwitch '%s' as vSwitch"
- " is used by several virtual"
- " network adapters: %s" % (self.switch,
- to_native(vswitch_in_use.msg)))
- except vim.fault.HostConfigFault as host_config_fault:
- self.module.fail_json(msg="Failed to remove vSwitch '%s' due to host"
- " configuration fault : %s" % (self.switch,
- to_native(host_config_fault.msg)))
- except Exception as generic_exc:
- self.module.fail_json(msg="Failed to remove vSwitch '%s' due to generic"
- " exception : %s" % (self.switch,
- to_native(generic_exc)))
+ if self.module.check_mode:
+ results['msg'] = "vSwitch '%s' would be removed" % self.vss.name
+ else:
+ try:
+ self.host_system.configManager.networkSystem.RemoveVirtualSwitch(self.vss.name)
+ results['result'] = "vSwitch '%s' removed successfully." % self.vss.name
+ except vim.fault.NotFound as vswitch_not_found:
+ results['result'] = "vSwitch '%s' not available. %s" % (self.switch,
+ to_native(vswitch_not_found.msg))
+ except vim.fault.ResourceInUse as vswitch_in_use:
+ self.module.fail_json(msg="Failed to remove vSwitch '%s' as vSwitch"
+ " is used by several virtual"
+ " network adapters: %s" % (self.switch,
+ to_native(vswitch_in_use.msg)))
+ except vim.fault.HostConfigFault as host_config_fault:
+ self.module.fail_json(msg="Failed to remove vSwitch '%s' due to host"
+ " configuration fault : %s" % (self.switch,
+ to_native(host_config_fault.msg)))
+ except Exception as generic_exc:
+ self.module.fail_json(msg="Failed to remove vSwitch '%s' due to generic"
+ " exception : %s" % (self.switch,
+ to_native(generic_exc)))
+
+ results['changed'] = True
self.module.exit_json(**results)
@@ -300,78 +451,95 @@ def state_update_vswitch(self):
Update vSwitch
"""
+ changed = False
results = dict(changed=False, result="No change in vSwitch '%s'" % self.switch)
- vswitch_pnic_info = self.available_vswitches[self.switch]
- pnic_add = []
- for desired_pnic in self.nics:
- if desired_pnic not in vswitch_pnic_info['pnic']:
- pnic_add.append(desired_pnic)
- pnic_remove = []
- for configured_pnic in vswitch_pnic_info['pnic']:
- if configured_pnic not in self.nics:
- pnic_remove.append(configured_pnic)
- diff = False
- # Update all nics
- all_nics = vswitch_pnic_info['pnic']
- if pnic_add or pnic_remove:
- diff = True
- if pnic_add:
- all_nics += pnic_add
- if pnic_remove:
- for pnic in pnic_remove:
- all_nics.remove(pnic)
-
- if vswitch_pnic_info['mtu'] != self.mtu or \
- vswitch_pnic_info['num_ports'] != self.number_of_ports:
- diff = True
-
- try:
- if diff:
- vss_spec = vim.host.VirtualSwitch.Specification()
- if all_nics:
- vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=all_nics)
- vss_spec.numPorts = self.number_of_ports
- vss_spec.mtu = self.mtu
-
- network_mgr = self.host_system.configManager.networkSystem
- if network_mgr:
- network_mgr.UpdateVirtualSwitch(vswitchName=self.switch,
- spec=vss_spec)
- results['changed'] = True
+ spec = self.vss.spec
+
+ # Check MTU
+ if self.vss.mtu != self.mtu:
+ spec.mtu = self.mtu
+ changed = True
+
+ # Check Number of Ports
+ if spec.numPorts != self.number_of_ports:
+ spec.numPorts = self.number_of_ports
+ changed = True
+
+ # Check nics
+ nics_current = set(map(lambda n: n.rsplit('-', 1)[1], self.vss.pnic))
+ if nics_current != set(self.nics):
+ if self.nics:
+ spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=self.nics)
+ else:
+ spec.bridge = None
+ changed = True
+
+ # Update teaming if not configured specifically
+ if not self.params['teaming']:
+ nicOrder = spec.policy.nicTeaming.nicOrder
+ # Remove missing nics from policy
+ if nicOrder.activeNic != [i for i in nicOrder.activeNic if i in self.nics]:
+ nicOrder.activeNic = [i for i in nicOrder.activeNic if i in self.nics]
+ if nicOrder.standbyNic != [i for i in nicOrder.standbyNic if i in self.nics]:
+ nicOrder.standbyNic = [i for i in nicOrder.standbyNic if i in self.nics]
+ # Set new nics as active
+ if set(self.nics) - nics_current:
+ nicOrder.activeNic += set(self.nics) - nics_current
+
+ # Check Security Policy
+ if self.update_security_policy(spec, results):
+ changed = True
+
+ # Check Teaming Policy
+ if self.update_teaming_policy(spec, results):
+ changed = True
+
+ # Check Traffic Shaping Policy
+ if self.update_traffic_shaping_policy(spec, results):
+ changed = True
+
+ if changed:
+ if self.module.check_mode:
+ results['msg'] = "vSwitch '%s' would be updated" % self.switch
+ else:
+ try:
+ self.network_mgr.UpdateVirtualSwitch(vswitchName=self.switch,
+ spec=spec)
results['result'] = "vSwitch '%s' is updated successfully" % self.switch
- else:
- self.module.fail_json(msg="Failed to find network manager for ESXi system.")
- except vim.fault.ResourceInUse as resource_used:
- self.module.fail_json(msg="Failed to update vSwitch '%s' as physical network adapter"
- " being bridged is already in use: %s" % (self.switch,
- to_native(resource_used.msg)))
- except vim.fault.NotFound as not_found:
- self.module.fail_json(msg="Failed to update vSwitch with name '%s'"
- " as it does not exists: %s" % (self.switch,
- to_native(not_found.msg)))
-
- except vim.fault.HostConfigFault as host_config_fault:
- self.module.fail_json(msg="Failed to update vSwitch '%s' due to host"
- " configuration fault : %s" % (self.switch,
- to_native(host_config_fault.msg)))
- except vmodl.fault.InvalidArgument as invalid_argument:
- self.module.fail_json(msg="Failed to update vSwitch '%s', this can be due to either of following :"
- " 1. vSwitch Name exceeds the maximum allowed length,"
- " 2. Number of ports specified falls out of valid range,"
- " 3. Network policy is invalid,"
- " 4. Beacon configuration is invalid : %s" % (self.switch,
- to_native(invalid_argument.msg)))
- except vmodl.fault.SystemError as system_error:
- self.module.fail_json(msg="Failed to update vSwitch '%s' due to : %s" % (self.switch,
- to_native(system_error.msg)))
- except vmodl.fault.NotSupported as not_supported:
- self.module.fail_json(msg="Failed to update vSwitch '%s' as network adapter teaming policy"
- " is set but is not supported : %s" % (self.switch,
- to_native(not_supported.msg)))
- except Exception as generic_exc:
- self.module.fail_json(msg="Failed to update vSwitch '%s' due to"
- " generic exception : %s" % (self.switch,
- to_native(generic_exc)))
+ except vim.fault.ResourceInUse as resource_used:
+ self.module.fail_json(msg="Failed to update vSwitch '%s' as physical network adapter"
+ " being bridged is already in use: %s" % (self.switch,
+ to_native(resource_used.msg)))
+ except vim.fault.NotFound as not_found:
+ self.module.fail_json(msg="Failed to update vSwitch with name '%s'"
+ " as it does not exists: %s" % (self.switch,
+ to_native(not_found.msg)))
+
+ except vim.fault.HostConfigFault as host_config_fault:
+ self.module.fail_json(msg="Failed to update vSwitch '%s' due to host"
+ " configuration fault : %s" % (self.switch,
+ to_native(host_config_fault.msg)))
+ except vmodl.fault.InvalidArgument as invalid_argument:
+ self.module.fail_json(msg="Failed to update vSwitch '%s', this can be due to either of following :"
+ " 1. vSwitch Name exceeds the maximum allowed length,"
+ " 2. Number of ports specified falls out of valid range,"
+ " 3. Network policy is invalid,"
+ " 4. Beacon configuration is invalid : %s" % (self.switch,
+ to_native(invalid_argument.msg)))
+ except vmodl.fault.SystemError as system_error:
+ self.module.fail_json(msg="Failed to update vSwitch '%s' due to : %s" % (self.switch,
+ to_native(system_error.msg)))
+ except vmodl.fault.NotSupported as not_supported:
+ self.module.fail_json(msg="Failed to update vSwitch '%s' as network adapter teaming policy"
+ " is set but is not supported : %s" % (self.switch,
+ to_native(not_supported.msg)))
+ except Exception as generic_exc:
+ self.module.fail_json(msg="Failed to update vSwitch '%s' due to"
+ " generic exception : %s" % (self.switch,
+ to_native(generic_exc)))
+
+ results['changed'] = True
+
self.module.exit_json(**results)
def check_vswitch_configuration(self):
@@ -402,6 +570,176 @@ def find_vswitch_by_name(host, vswitch_name):
return vss
return None
+ def update_security_policy(self, spec, results):
+ """
+ Update the security policy according to the parameters
+ Args:
+ spec: The vSwitch spec
+ results: The results dict
+
+ Returns: True if changes have been made, else false
+ """
+ if not self.params['security'] or not spec.policy.security:
+ return False
+
+ security_policy = spec.policy.security
+ changed = False
+ sec_promiscuous_mode = self.params['security'].get('promiscuous_mode')
+ sec_forged_transmits = self.params['security'].get('forged_transmits')
+ sec_mac_changes = self.params['security'].get('mac_changes')
+
+ if sec_promiscuous_mode is not None:
+ results['sec_promiscuous_mode'] = sec_promiscuous_mode
+ if security_policy.allowPromiscuous is not sec_promiscuous_mode:
+ results['sec_promiscuous_mode_previous'] = security_policy.allowPromiscuous
+ security_policy.allowPromiscuous = sec_promiscuous_mode
+ changed = True
+
+ if sec_mac_changes is not None:
+ results['sec_mac_changes'] = sec_mac_changes
+ if security_policy.macChanges is not sec_mac_changes:
+ results['sec_mac_changes_previous'] = security_policy.macChanges
+ security_policy.macChanges = sec_mac_changes
+ changed = True
+
+ if sec_forged_transmits is not None:
+ results['sec_forged_transmits'] = sec_forged_transmits
+ if security_policy.forgedTransmits is not sec_forged_transmits:
+ results['sec_forged_transmits_previous'] = security_policy.forgedTransmits
+ security_policy.forgedTransmits = sec_forged_transmits
+ changed = True
+
+ return changed
+
+ def update_teaming_policy(self, spec, results):
+ """
+ Update the teaming policy according to the parameters
+ Args:
+ spec: The vSwitch spec
+ results: The results dict
+
+ Returns: True if changes have been made, else false
+ """
+ if not self.params['teaming'] or not spec.policy.nicTeaming:
+ return False
+
+ teaming_policy = spec.policy.nicTeaming
+ changed = False
+ teaming_load_balancing = self.params['teaming'].get('load_balancing')
+ teaming_failure_detection = self.params['teaming'].get('network_failure_detection')
+ teaming_notify_switches = self.params['teaming'].get('notify_switches')
+ teaming_failback = self.params['teaming'].get('failback')
+ teaming_failover_order_active = self.params['teaming'].get('active_adapters')
+ teaming_failover_order_standby = self.params['teaming'].get('standby_adapters')
+
+ # Check teaming policy
+ if teaming_load_balancing is not None:
+ results['load_balancing'] = teaming_load_balancing
+ if teaming_policy.policy != teaming_load_balancing:
+ results['load_balancing_previous'] = teaming_policy.policy
+ teaming_policy.policy = teaming_load_balancing
+ changed = True
+
+ # Check teaming notify switches
+ if teaming_notify_switches is not None:
+ results['notify_switches'] = teaming_notify_switches
+ if teaming_policy.notifySwitches is not teaming_notify_switches:
+ results['notify_switches_previous'] = teaming_policy.notifySwitches
+ teaming_policy.notifySwitches = teaming_notify_switches
+ changed = True
+
+ # Check failback
+ if teaming_failback is not None:
+ results['failback'] = teaming_failback
+ if teaming_policy.rollingOrder is not teaming_failback:
+ results['notify_switches_previous'] = teaming_policy.rollingOrder
+ teaming_policy.rollingOrder = teaming_failback
+ changed = True
+
+ # Check teaming failover order
+ if teaming_failover_order_active is not None:
+ results['failover_active'] = teaming_failover_order_active
+ if teaming_policy.nicOrder.activeNic != teaming_failover_order_active:
+ results['failover_active_previous'] = teaming_policy.nicOrder.activeNic
+ teaming_policy.nicOrder.activeNic = teaming_failover_order_active
+ changed = True
+ if teaming_failover_order_standby is not None:
+ results['failover_standby'] = teaming_failover_order_standby
+ if teaming_policy.nicOrder.standbyNic != teaming_failover_order_standby:
+ results['failover_standby_previous'] = teaming_policy.nicOrder.standbyNic
+ teaming_policy.nicOrder.standbyNic = teaming_failover_order_standby
+ changed = True
+
+ # Check teaming failure detection
+ if teaming_failure_detection is not None:
+ results['failure_detection'] = teaming_failure_detection
+ if teaming_failure_detection == "link_status_only":
+ if teaming_policy.failureCriteria.checkBeacon is True:
+ results['failure_detection_previous'] = "beacon_probing"
+ teaming_policy.failureCriteria.checkBeacon = False
+ changed = True
+ elif teaming_failure_detection == "beacon_probing":
+ if teaming_policy.failureCriteria.checkBeacon is False:
+ results['failure_detection_previous'] = "link_status_only"
+ teaming_policy.failureCriteria.checkBeacon = True
+ changed = True
+
+ return changed
+
+ def update_traffic_shaping_policy(self, spec, results):
+ """
+ Update the traffic shaping policy according to the parameters
+ Args:
+ spec: The vSwitch spec
+ results: The results dict
+
+ Returns: True if changes have been made, else false
+ """
+ if not self.params['traffic_shaping'] or not spec.policy.nicTeaming:
+ return False
+
+ ts_policy = spec.policy.shapingPolicy
+ changed = False
+ ts_enabled = self.params['traffic_shaping'].get('enabled')
+
+ # Check if traffic shaping needs to be disabled
+ if not ts_enabled:
+ if ts_policy.enabled:
+ ts_policy.enabled = False
+ changed = True
+ return changed
+
+ for value in ['average_bandwidth', 'peak_bandwidth', 'burst_size']:
+ if not self.params['traffic_shaping'].get(value):
+ self.module.fail_json(msg="traffic_shaping.%s is a required parameter if traffic_shaping is enabled." % value)
+ ts_average_bandwidth = self.params['traffic_shaping'].get('average_bandwidth') * 1000
+ ts_peak_bandwidth = self.params['traffic_shaping'].get('peak_bandwidth') * 1000
+ ts_burst_size = self.params['traffic_shaping'].get('burst_size') * 1024
+
+ if not ts_policy.enabled:
+ ts_policy.enabled = True
+ changed = True
+
+ if ts_policy.averageBandwidth != ts_average_bandwidth:
+ results['traffic_shaping_avg_bandw'] = ts_average_bandwidth
+ results['traffic_shaping_avg_bandw_previous'] = ts_policy.averageBandwidth
+ ts_policy.averageBandwidth = ts_average_bandwidth
+ changed = True
+
+ if ts_policy.peakBandwidth != ts_peak_bandwidth:
+ results['traffic_shaping_peak_bandw'] = ts_peak_bandwidth
+ results['traffic_shaping_peak_bandw_previous'] = ts_policy.peakBandwidth
+ ts_policy.peakBandwidth = ts_peak_bandwidth
+ changed = True
+
+ if ts_policy.burstSize != ts_burst_size:
+ results['traffic_shaping_burst'] = ts_burst_size
+ results['traffic_shaping_burst_previous'] = ts_policy.burstSize
+ ts_policy.burstSize = ts_burst_size
+ changed = True
+
+ return changed
+
def main():
argument_spec = vmware_argument_spec()
@@ -410,12 +748,55 @@ def main():
nics=dict(type='list', aliases=['nic_name'], default=[], elements='str'),
number_of_ports=dict(type='int', default=128),
mtu=dict(type='int', default=1500),
- state=dict(type='str', default='present', choices=['absent', 'present'])),
+ state=dict(type='str', default='present', choices=['absent', 'present']),
esxi_hostname=dict(type='str', aliases=['host']),
- )
+ security=dict(
+ type='dict',
+ options=dict(
+ promiscuous_mode=dict(type='bool'),
+ forged_transmits=dict(type='bool'),
+ mac_changes=dict(type='bool'),
+ ),
+ aliases=['security_policy', 'network_policy']
+ ),
+ teaming=dict(
+ type='dict',
+ options=dict(
+ load_balancing=dict(
+ type='str',
+ choices=[
+ None,
+ 'loadbalance_ip',
+ 'loadbalance_srcmac',
+ 'loadbalance_srcid',
+ 'failover_explicit',
+ ],
+ aliases=['load_balance_policy'],
+ ),
+ network_failure_detection=dict(
+ type='str',
+ choices=['link_status_only', 'beacon_probing']
+ ),
+ notify_switches=dict(type='bool'),
+ failback=dict(type='bool'),
+ active_adapters=dict(type='list', elements='str'),
+ standby_adapters=dict(type='list', elements='str'),
+ ),
+ aliases=['teaming_policy']
+ ),
+ traffic_shaping=dict(
+ type='dict',
+ options=dict(
+ enabled=dict(type='bool'),
+ average_bandwidth=dict(type='int'),
+ peak_bandwidth=dict(type='int'),
+ burst_size=dict(type='int'),
+ ),
+ ),
+ ))
module = AnsibleModule(argument_spec=argument_spec,
- supports_check_mode=False)
+ supports_check_mode=True)
host_virtual_switch = VMwareHostVirtualSwitch(module)
host_virtual_switch.process_state()
| diff --git a/tests/integration/targets/vmware_vswitch/tasks/main.yml b/tests/integration/targets/vmware_vswitch/tasks/main.yml
--- a/tests/integration/targets/vmware_vswitch/tasks/main.yml
+++ b/tests/integration/targets/vmware_vswitch/tasks/main.yml
@@ -109,6 +109,115 @@
- assert:
that:
- add_vswitch_with_host_system is changed
+
+ - name: Add a vSwitch with a network policy
+ vmware_vswitch:
+ hostname: '{{ esxi1 }}'
+ username: '{{ esxi_user }}'
+ password: '{{ esxi_password }}'
+ validate_certs: false
+ switch: vmswitch_0001
+ nics:
+ - vmnic1
+ - vmnic2
+ state: present
+ security:
+ forged_transmits: true
+ mac_changes: true
+ traffic_shaping:
+ enabled: true
+ average_bandwidth: 100000
+ peak_bandwidth: 100000
+ burst_size: 102400
+ teaming:
+ active_adapters: vmnic1
+ standby_adapters: vmnic2
+ register: add_vswitch_netpol_run
+ - debug: var=add_vswitch_netpol_run
+ - assert:
+ that:
+ - add_vswitch_netpol_run.changed == true
+
+ - name: Add a vSwitch with a network policy again (idempotency check)
+ vmware_vswitch:
+ hostname: '{{ esxi1 }}'
+ username: '{{ esxi_user }}'
+ password: '{{ esxi_password }}'
+ validate_certs: false
+ switch: vmswitch_0001
+ nics:
+ - vmnic1
+ - vmnic2
+ state: present
+ security:
+ forged_transmits: true
+ mac_changes: true
+ traffic_shaping:
+ enabled: true
+ average_bandwidth: 100000
+ peak_bandwidth: 100000
+ burst_size: 102400
+ teaming:
+ active_adapters: vmnic1
+ standby_adapters: vmnic2
+ register: add_vswitch_netpol_again_run
+ - assert:
+ that:
+ - add_vswitch_netpol_again_run.changed == false
+
+ - name: Update a vSwitch network policy
+ vmware_vswitch:
+ hostname: '{{ esxi1 }}'
+ username: '{{ esxi_user }}'
+ password: '{{ esxi_password }}'
+ validate_certs: false
+ switch: vmswitch_0001
+ nics:
+ - vmnic1
+ - vmnic2
+ state: present
+ security:
+ forged_transmits: false
+ mac_changes: true
+ traffic_shaping:
+ enabled: false
+ teaming:
+ active_adapters:
+ - vmnic1
+ - vmnic2
+ standby_adapters: []
+ register: update_vswitch_netpol_run
+ - debug: var=update_vswitch_netpol_run
+ - assert:
+ that:
+ - update_vswitch_netpol_run.changed == true
+
+ - name: Update a vSwitch network policy again (idempotency check)
+ vmware_vswitch:
+ hostname: '{{ esxi1 }}'
+ username: '{{ esxi_user }}'
+ password: '{{ esxi_password }}'
+ validate_certs: false
+ switch: vmswitch_0001
+ nics:
+ - vmnic1
+ - vmnic2
+ state: present
+ security:
+ forged_transmits: false
+ mac_changes: true
+ traffic_shaping:
+ enabled: false
+ teaming:
+ active_adapters:
+ - vmnic1
+ - vmnic2
+ standby_adapters: []
+ register: update_vswitch_netpol_again_run
+ - assert:
+ that:
+ - update_vswitch_netpol_again_run.changed == false
+
always:
- include_tasks: teardown.yml
| Teaming and security on vswitch
##### SUMMARY
vmware_vswitch should allow setting of teaming and security parameters
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware/vswitch
##### ADDITIONAL INFORMATION
The vmware_vswitch module allows adding NICs to vSwitches, but currently there is no way to configure how those NICs are set up for teaming or resilience. Similarly, there is no way to set the security functions on the whole switch.
This functionality is already present in the vmware_portgroup module, but it is not always desirable to set it at the portgroup level - configuring teaming of the uplink NICs in particular is of more use at the vSwitch level. It seems silly to use Ansible to configure vSwitches and portgroups on hosts consistently, but still have to go and use other tools to turn settings on or off on those vSwitches.
| !component =vmware/plugins/modules/vmware_vswitch.py
##### ADDITIONAL INFORMATION
This feature will enable a switch-level adjustment that every port group in a vSwitch defaults to, since port groups inherit the vSwitch policies unless overridden.
Tumble down the vSphere Web Services API rabbit hole with me:
- https://code.vmware.com/apis/704/vsphere/vim.host.NetworkSystem.html#addVirtualSwitch
- https://code.vmware.com/apis/704/vsphere/vim.host.NetworkSystem.html#updateVirtualSwitch
- https://code.vmware.com/apis/704/vsphere/vim.host.VirtualSwitch.Specification.html
- https://code.vmware.com/apis/704/vsphere/vim.host.NetworkPolicy.html
- https://code.vmware.com/apis/704/vsphere/vim.host.NetworkPolicy.SecurityPolicy.html
```yaml
- name: Add a VMware vSwitch
vmware_vswitch:
hostname: "{{ esxi_hostname }}"
esxi_hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
switch: "vswitch_name"
nics: "vmnic_name"
mtu: 9000
security:
promiscuous_mode: False
mac_changes: False
forged_transmits: False
traffic_shaping:
enabled: True
average_bandwidth: 100000
peak_bandwidth: 100000
burst_size: 102400
teaming:
load_balancing: failover_explicit
network_failure_detection: link_status_only
notify_switches: true
failback: true
active_adapters:
- vmnic0
standby_adapters:
- vmnic1
delegate_to: localhost
```
Current alternative is to run a shell script after creating the vSwitches... Something like:
```sh
#!/bin/sh
esxcli network vswitch standard policy security set -p true -v vswitch_name;
```
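For orientation, a rough pyVmomi equivalent of that esxcli one-liner, using the `UpdateVirtualSwitch()` API from the docs linked above. This is only a sketch, not the module's actual code: `host_system` is assumed to be an already-authenticated `vim.HostSystem`, and `vswitch_name` is a placeholder.
```python
# Hedged sketch: flip the vSwitch-level "allow promiscuous" flag via pyVmomi.
# Assumes `host_system` is already authenticated; error handling omitted.
network_mgr = host_system.configManager.networkSystem
for vss in network_mgr.networkInfo.vswitch:
    if vss.name == 'vswitch_name':
        spec = vss.spec
        spec.policy.security.allowPromiscuous = True  # esxcli's `-p true`
        network_mgr.UpdateVirtualSwitch(vswitchName=vss.name, spec=spec)
```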
I am actually actively working on this for our own internal usage, so I'm interested in working on this issue. We're most interested in routing policy and security settings.
@wells, have you done anything with this code yet? I see you have a sample playbook above, but wasn't sure if there was any code to go with it yet.
I'm also considering adding check mode, but that itself might add lots of conditionals so we'll see.
Looks like this already got developed by @ckotte back in 2018, but never got merged:
https://github.com/ansible/ansible/pull/47015
Good find!
@ckotte, is it possible to get a new PR opened against ansible-collections/community.vmware for this code?
I invested so much time in improving modules, but several of my PRs were not merged because some people in charge here just don't care. I also created a lot of new modules for which I didn't even open PRs anymore. Anyway. I don't waste my time with this any longer.
@ckotte Would you like to re-open https://github.com/ansible/ansible/pull/47015? Thanks.
needs_info
@Akasurde I'm still waiting for your help with https://github.com/ansible-collections/community.vmware/pull/531. I won't open anything until this is merged..
Is there an ETA for this fix? Currently have a workaround but would like to replace it...
@sjwk This issue is waiting for your response. Please respond or the issue will be closed.
I don't believe anyone is waiting on me..
@sjwk Just for your information, there are several issues open for `vmware_vswitch`. I found three, and I've opened #1035 because I think it might be easier to address them all in one go instead of one by one. | 2022-04-27T07:44:44 |
ansible-collections/community.vmware | 1,305 | ansible-collections__community.vmware-1305 | [
"1297"
] | 4de43a0f5077f3d630b9d6d0c457c2c9e3b9fb53 | diff --git a/plugins/modules/vmware_migrate_vmk.py b/plugins/modules/vmware_migrate_vmk.py
--- a/plugins/modules/vmware_migrate_vmk.py
+++ b/plugins/modules/vmware_migrate_vmk.py
@@ -53,6 +53,12 @@
- Portgroup name to migrate VMK interface to
required: True
type: str
+ migrate_vlan_id:
+ version_added: '2.4.0'
+ description:
+ - VLAN to use for the VMK interface when migrating from VDS to VSS
+ - Will be ignored when migrating from VSS to VDS
+ type: int
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -91,6 +97,7 @@ def __init__(self, module):
self.host_system = None
self.migrate_switch_name = self.module.params['migrate_switch_name']
self.migrate_portgroup_name = self.module.params['migrate_portgroup_name']
+ self.migrate_vlan_id = self.module.params['migrate_vlan_id']
self.device = self.module.params['device']
self.esxi_hostname = self.module.params['esxi_hostname']
self.current_portgroup_name = self.module.params['current_portgroup_name']
@@ -130,7 +137,7 @@ def create_port_group_config_vds_vss(self):
port_group_config.spec = vim.host.PortGroup.Specification()
port_group_config.changeOperation = "add"
port_group_config.spec.name = self.migrate_portgroup_name
- port_group_config.spec.vlanId = 0
+ port_group_config.spec.vlanId = self.migrate_vlan_id if self.migrate_vlan_id is not None else 0
port_group_config.spec.vswitchName = self.migrate_switch_name
port_group_config.spec.policy = vim.host.NetworkPolicy()
return port_group_config
@@ -209,7 +216,8 @@ def main():
current_switch_name=dict(required=True, type='str'),
current_portgroup_name=dict(required=True, type='str'),
migrate_switch_name=dict(required=True, type='str'),
- migrate_portgroup_name=dict(required=True, type='str')))
+ migrate_portgroup_name=dict(required=True, type='str'),
+ migrate_vlan_id=dict(required=False, type='int')))
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)
| VLAN is set to 0 when migrating a VMKernel port to a VSS port group
##### SUMMARY
When using `vmware_migrate_vmk` to migrate a VMKernel port from a DVS to a VSS port group, the resulting port group on the VSS is set to VLAN `0`. There is no attribute to `vmware_migrate_vmk` to set the VLAN.
I think the code where the VLAN is set to `0` is on line 133 of `vmware_migrate_vmk.py`:
https://github.com/ansible-collections/community.vmware/blob/1ee484fe3f116c305d8871f80aaf5a30ecdf0e74/plugins/modules/vmware_migrate_vmk.py#L128-L136
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
community.vmware.vmware_migrate_vmk
##### ANSIBLE VERSION
```paste below
ansible 2.9.27
config file = /home/REDACTED/.ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Mar 25 2022, 11:15:52) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10)]
```
##### COLLECTION VERSION
```paste below
$ grep version ~/.ansible/collections/ansible_collections/community/vmware/MANIFEST.json
"version": "2.2.0",
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/home/REDACTED/.ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/REDACTED/.ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s
CACHE_PLUGIN(/home/REDACTED/.ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/REDACTED/.ansible.cfg) = ~/.ansible/fact-cache
CACHE_PLUGIN_TIMEOUT(/home/REDACTED/.ansible.cfg) = 3600
DEFAULT_LOG_PATH(env: ANSIBLE_LOG_PATH) = /home/REDACTED/ansible.log
GALAXY_SERVER_LIST(/home/REDACTED/.ansible.cfg) = [REDACTED_ansible_galaxy']
INVENTORY_CACHE_ENABLED(/home/REDACTED/.ansible.cfg) = True
INVENTORY_CACHE_PLUGIN(/home/REDACTED/.ansible.cfg) = jsonfile
INVENTORY_CACHE_PLUGIN_CONNECTION(/home/REDACTED/.ansible.cfg) = ~/.ansible/inventory-cache
INVENTORY_CACHE_TIMEOUT(/home/REDACTED/.ansible.cfg) = 3600
```
##### OS / ENVIRONMENT
```
$ cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
```
```
"about_info": {
"about_info": {
"api_type": "VirtualCenter",
"api_version": "6.7.3",
"build": "19299595",
"instance_uuid": "REDACTED",
"license_product_name": "VMware VirtualCenter Server",
"license_product_version": "6.0",
"locale_build": "000",
"locale_version": "INTL",
"os_type": "linux-x64",
"product_full_name": "VMware vCenter Server 6.7.0 build-19299595",
"product_line_id": "vpx",
"product_name": "VMware vCenter Server",
"vendor": "VMware, Inc.",
"version": "6.7.0"
},
```
##### STEPS TO REPRODUCE
Using a (not so) minimal playbook to migrate a _VMKernel_ port from a _DVS_ to a _VSS_, I can demonstrate the _VLAN_ is not set correctly.
```yaml
---
- hosts: all
connection: local
become: false
gather_facts: false
vars:
current_portgroup: vds1-vlan-203
current_switch: vds1
datacenter: ExampleDC
device: vmk1
migrate_portgroup: vswitch0-vlan-203
migrate_switch: vSwitch0
tasks:
- name: get VMKernel interface information
community.vmware.vmware_vmkernel_info:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
esxi_hostname: "{{ esxi_hostname }}"
register: vmkernel_info
delegate_to: localhost
tags: always
- name: display VMkernel interface information
debug:
msg: "{{ vmkernel_info.host_vmk_info[inventory_hostname] | selectattr('device', 'equalto', device) | list }}"
delegate_to: localhost
tags: always
- name: get portgroup information
community.vmware.vmware_dvs_portgroup_info:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter }}"
dvswitch: "{{ current_switch }}"
show_mac_learning: false
show_network_policy: false
show_port_policy: false
show_teaming_policy: false
show_uplinks: false
show_vlan_info: true
register: dvs_portgroup_info
delegate_to: localhost
tags: always
- name: display portgroup information
debug:
msg: "{{ dvs_portgroup_info.dvs_portgroup_info[current_switch] | selectattr('portgroup_name', 'equalto', current_portgroup) | map(attribute='vlan_info') | list }}"
delegate_to: localhost
tags: always
- name: migrate VMKernel interface
community.vmware.vmware_migrate_vmk:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
esxi_hostname: "{{ esxi_hostname }}"
current_portgroup_name: "{{ current_portgroup }}"
current_switch_name: "{{ current_switch }}"
device: "{{ device }}"
migrate_portgroup_name: "{{ migrate_portgroup }}"
migrate_switch_name: "{{ migrate_switch }}"
delegate_to: localhost
tags:
- never
- migrate
- name: get VMKernel interface information
community.vmware.vmware_vmkernel_info:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
esxi_hostname: "{{ esxi_hostname }}"
register: vmkernel_info
delegate_to: localhost
tags: always
- name: display VMkernel interface information
debug:
msg: "{{ vmkernel_info.host_vmk_info[inventory_hostname] | selectattr('device', 'equalto', device) | list }}"
delegate_to: localhost
tags: always
- name: get portgroup information
community.vmware.vmware_portgroup_info:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
esxi_hostname: "{{ esxi_hostname }}"
register: portgroup_info
delegate_to: localhost
tags: always
- name: display VMkernel interface information
debug:
msg: "{{ portgroup_info.hosts_portgroup_info[inventory_hostname] | selectattr('portgroup', 'equalto', migrate_portgroup) | list }}"
delegate_to: localhost
tags: always
```
##### EXPECTED RESULTS
The VLAN should be preserved from the old port group to the new port group.
##### ACTUAL RESULTS
The VLAN is set to `0` on the target port group.
```paste below
$ ansible-playbook -i inventory migrate_vmk_to_vss.yml --ask-vault-pass -l esx.example.com -t migrate
Vault password:
PLAY [all] *********************************************************************************************************************************************************************************************************
TASK [get VMKernel interface information] **************************************************************************************************************************************************************************
ok: [esx.example.com -> localhost]
TASK [display VMkernel interface information] **********************************************************************************************************************************************************************
ok: [esx.example.com -> localhost] => {
"msg": [
{
"device": "vmk1",
"dhcp": false,
"enable_ft": false,
"enable_management": false,
"enable_vmotion": true,
"enable_vsan": false,
"ipv4_address": "10.xxx.xxx.50",
"ipv4_subnet_mask": "255.255.252.0",
"key": "key-vim.host.VirtualNic-vmk1",
"mac": "00:50:56:xx:xx:xx",
"mtu": 1500,
"portgroup": "",
"stack": "defaultTcpipStack"
}
]
}
TASK [get portgroup information] ***********************************************************************************************************************************************************************************
ok: [esx.example.com -> localhost]
TASK [display portgroup information] *******************************************************************************************************************************************************************************
ok: [esx.example.com -> localhost] => {
"msg": [
{
"pvlan": false,
"trunk": false,
"vlan_id": "203"
}
]
}
TASK [migrate VMKernel interface] **********************************************************************************************************************************************************************************
changed: [esx.example.com -> localhost]
TASK [get VMKernel interface information] **************************************************************************************************************************************************************************
ok: [esx.example.com -> localhost]
TASK [display VMkernel interface information] **********************************************************************************************************************************************************************
ok: [esx.example.com -> localhost] => {
"msg": [
{
"device": "vmk1",
"dhcp": false,
"enable_ft": false,
"enable_management": false,
"enable_vmotion": true,
"enable_vsan": false,
"ipv4_address": "10.xx.xx.50",
"ipv4_subnet_mask": "255.255.252.0",
"key": "key-vim.host.VirtualNic-vmk1",
"mac": "00:50:56:xx:xx:xx",
"mtu": 1500,
"portgroup": "vswitch0-vlan-203",
"stack": "defaultTcpipStack"
}
]
}
TASK [get portgroup information] ***********************************************************************************************************************************************************************************
ok: [esx.example.com -> localhost]
TASK [display VMkernel interface information] **********************************************************************************************************************************************************************
ok: [esx.example.com -> localhost] => {
"msg": [
{
"portgroup": "vswitch0-vlan-203",
"vlan_id": 0,
"vswitch": "vSwitch0"
}
]
}
PLAY RECAP *********************************************************************************************************************************************************************************************************
esx.example.com : ok=9 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
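The patch at the top of this row addresses exactly this by adding an optional `migrate_vlan_id` parameter (ignored when migrating from VSS to VDS). A minimal sketch of the migrate task once that option is available, reusing the placeholder variables from the playbook above:
```yaml
- name: migrate VMKernel interface, preserving the VLAN on the VSS port group
  community.vmware.vmware_migrate_vmk:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    esxi_hostname: "{{ esxi_hostname }}"
    current_portgroup_name: "{{ current_portgroup }}"
    current_switch_name: "{{ current_switch }}"
    device: "{{ device }}"
    migrate_portgroup_name: "{{ migrate_portgroup }}"
    migrate_switch_name: "{{ migrate_switch }}"
    migrate_vlan_id: 203  # new option from the patch; 203 matches the source port group here
  delegate_to: localhost
```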
| > When using `vmware_migrate_vmk` to migrate a VMKernel port from a DVS to a VSS port group, the resulting port group on the VSS is set to VLAN `0`. There is no attribute to `vmware_migrate_vmk` to set the VLAN.
Yes, I think this was overlooked simply because migrating from VSS to DVS is the more common use case.
> ##### EXPECTED RESULTS
>
> The VLAN should be preserved from the old port group to the new port group.
Yes... and no. The correct VLAN on the destination VSS might (unusual, but possible) be different from the one on the DVS. And, of course, the uplinks of the VSS might be native ports, that is, there are no VLANs tagged at all. So I think we need both: a way to keep the VLAN from the source portgroup and a way to set it explicitly.
What do you think?
Several years ago, I wrote a PowerCLI script to migrate a VMKernel Port from a DVS to a VSS. I modeled the script after code from a blog post by [William Lam](https://williamlam.com/2013/11/automate-reverse-migrating-from-vsphere.html). If I recall correctly, on a VSS, a port group used by a VMKernel port is not the same as a virtual machine port group (vmnic port group versus VM port group), so when adding or migrating a VMKernel port to a VSS, a new port group is always created. When creating the vmnic port group, the default VLAN is 0. The VLAN is not configurable when connecting a VMKernel port to a DVS port group because an existing virtual machine port group is required, which already has the VLAN specified.
If the `vmware.vmware_migrate_vmk` module is able to also determine the VLAN from the source DVS port group, then I would consider it a nice _bonus_ to the module functionality, but I don't think it's required based on my past experience with the equivalent PowerCLI commandlets. | 2022-05-03T10:03:26 |
|
ansible-collections/community.vmware | 1,310 | ansible-collections__community.vmware-1310 | [
"1008"
] | 6b2ef74f396212a64f123bf0582d2df5acd19ab8 | diff --git a/plugins/inventory/vmware_vm_inventory.py b/plugins/inventory/vmware_vm_inventory.py
--- a/plugins/inventory/vmware_vm_inventory.py
+++ b/plugins/inventory/vmware_vm_inventory.py
@@ -340,6 +340,26 @@
- 'guest.ipAddress'
- 'guest.guestFamily'
- 'guest.ipStack'
+
+# Select a specific IP address for use by ansible when multiple NICs are present on the VM
+ plugin: community.vmware.vmware_vm_inventory
+ strict: False
+ hostname: 10.65.223.31
+ username: [email protected]
+ password: Esxi@123$%
+ validate_certs: False
+ compose:
+ # Set the IP address used by ansible to one that starts by 10.42. or 10.43.
+ ansible_host: >-
+ guest.net
+ | selectattr('ipAddress')
+ | map(attribute='ipAddress')
+ | flatten
+ | select('match', '^10.42.*|^10.43.*')
+ | list
+ | first
+ properties:
+ - guest.net
'''
from ansible.errors import AnsibleError, AnsibleParserError
| VMware Inventory Plugin does not select correct IP address on hosts with multiple NICs
##### SUMMARY
My goal is to have the VMware inventory plugin set "ansible_host" as a specific IP address that matches a RegEx expression. Our servers have multiple NICs on different subnets and only one of those subnets has connectivity with our Ansible Tower cluster. The default network interfaces do not connect with Tower, hence the need for this custom Jinja expression in the plugin. I have tried dozens of different expressions and none of them work, either in Tower or with CLI Ansible.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
VMware inventory plugin for Ansible
##### ANSIBLE VERSION
```
Ansible Tower:
Ansible 2.9.24 Ansible Tower 3.7.4
Ansible CLI:
ansible [core 2.11.2]
config file = /export/home/<redacted>/vmware/ansible.cfg
configured module search path = ['/export/home/<redacted>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /export/home/<redacted>/vmware/.venv/lib64/python3.6/site-packages/ansible
ansible collection location = /export/home/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /export/home/<redacted>/vmware/.venv/bin/ansible
python version = 3.6.8 (default, Mar 18 2021, 08:58:41) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.1
libyaml = True
```
##### COLLECTION VERSION
```
Bundled with Ansible Tower 3.7.4.
On CLI Ansible:
Collection Version
----------------------------- -------
community.vmware 1.10.0
```
##### CONFIGURATION
```
INVENTORY_ENABLED(/export/home/<redacted>/vmware/ansible.cfg) = ['vmware_vm_inventory']
```
##### OS / ENVIRONMENT
RHEL 8 control nodes. Targeted nodes are a blend of many different OSes in many different networks.
##### STEPS TO REPRODUCE
First I tried what was described here: https://access.redhat.com/solutions/3701361
Then I started testing lots of other expressions, such as:
```
---
plugin: vmware_vm_inventory
strict: False
hostname: <redacted>
username: <redacted>
password: <redacted>
validate_certs: False
with_tags: True
compose:
ansible_host: "{{ guest.net[0]|select('ipAddress', 'contains', '10.177.') }}"
max_object_level: 5
filters:
- runtime.powerState == 'poweredOn'
- config.name is match('<redacted>')
properties:
- 'config.name'
- 'config.guestId'
- 'guest.ipAddress'
- 'summary.runtime.powerState'
- 'guest.net'
- 'config.uuid'
```
```
# Other ansible_host expressions tried include but are not limited to:
ansible_host: "{{ (guest.net|select_chain_match('ipAddress', '^10.177.*|^10.183.*'))[0] }}"
ansible_host: "{{ guest.net[0]|selectattr('ipAddress', 'contains', '10.177.') }}"
ansible_host: "{{ (guest.net|selectattr('ipAddress', 'contains', '10.177.')|first) }}"
ansible_host |
"{% for item in guest.net %}
{% if '10.177.' in item.ipAddress[0] or if '10.183.' in item.ipAddress[0] %}
{{ item.ipAddress[0] }}
{% endif %}
{% endfor %}"
# The following expression does return an IP address but it's not always the correct one.
ansible_host: "{{ guest.net[2].ipAddress[0] }}"
```
##### EXPECTED RESULTS
"ansible_host" should be set to an IP address starting with either 10.177 or 10.183.
##### ACTUAL RESULTS
None of the dozens of expressions I've tried have returned an IP address value for "ansible_host" that matches what I need.
This one:
```
ansible_host: "{{ guest.net[2].ipAddress[0] }}"
```
does return an IP address, but it doesn't select the one I need from the list of IP addresses. Most other expressions return nothing for "ansible_host", indicating invalid Jinja or a bug.
| Hi @cscal!
Would this work for you?
```
- hosts: localhost
vars:
guest:
net:
- ipAddress: ['192.168.1.2']
- ipAddress: ['10.177.45.3']
tasks:
- set_fact:
my_ansible_host: "{{ guest.net | selectattr('ipAddress')|map(attribute='ipAddress')|flatten|select('match', '^10.177.*|^10.183.*')|list|first }}"
- debug: var=my_ansible_host
```
Yes, this worked, thank you! I feel like this must be a somewhat common use case in large datacenters, and should be added in the documentation somewhere.
@cscal would you like to open a PR to improve the documentation of the inventory plugin?
@cscal how did you set compose?
Got it working by using @goneri's suggestion; here is the part of my inventory configuration file that made it work:
```
plugin: community.vmware.vmware_vm_inventory
compose:
ansible_host: >-
guest.net
| selectattr('ipAddress')
| map(attribute='ipAddress')
| flatten
| select('match', '^10.177.*|^10.183.*')
| list
| first
properties:
- guest.net
```
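For anyone puzzling over the filter chain, a rough walk-through on the sample data from goneri's playbook above (comments only, values illustrative):
```yaml
# given a VM with:
#   guest.net:
#     - ipAddress: ['192.168.1.2']
#     - ipAddress: ['10.177.45.3']
#
# selectattr('ipAddress')                  -> keep NICs that report any IP
# map(attribute='ipAddress')               -> [['192.168.1.2'], ['10.177.45.3']]
# flatten                                  -> ['192.168.1.2', '10.177.45.3']
# select('match', '^10.177.*|^10.183.*')   -> ['10.177.45.3']
# first                                    -> '10.177.45.3'
```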
@matletix Would you be interested in updating documentation? Let us know.
@Akasurde ok I'll submit a PR
> @Akasurde ok I'll submit a PR
Thanks a lot. Let me know if you need any help. | 2022-05-06T08:18:37 |
|
ansible-collections/community.vmware | 1,335 | ansible-collections__community.vmware-1335 | [
"1270"
] | 5c0cb194968cabc8dd3a05984d88be96a9dc1990 | diff --git a/plugins/modules/vmware_cfg_backup.py b/plugins/modules/vmware_cfg_backup.py
--- a/plugins/modules/vmware_cfg_backup.py
+++ b/plugins/modules/vmware_cfg_backup.py
@@ -172,6 +172,8 @@ def reset_configuration(self):
def save_configuration(self):
url = self.host.configManager.firmwareSystem.BackupFirmwareConfiguration()
url = url.replace('*', self.host.name)
+ if self.module.params["port"] == 443:
+ url = url.replace("http:", "https:")
if os.path.isdir(self.dest):
filename = url.rsplit('/', 1)[1]
self.dest = os.path.join(self.dest, filename)
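A minimal standalone sketch of what this patched code path does; `self.host` and `self.module` are the module's vim.HostSystem and AnsibleModule objects, as in the surrounding code:
```python
# BackupFirmwareConfiguration() hands back a download URL with '*' as a
# placeholder for the host name, e.g. "http://*/...".
url = self.host.configManager.firmwareSystem.BackupFirmwareConfiguration()
url = url.replace('*', self.host.name)
# The fix: when the module talks to port 443, download the bundle over HTTPS
# instead of the plain-HTTP URL the API returns (which needs port 80 open).
if self.module.params["port"] == 443:
    url = url.replace("http:", "https:")
```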
| community.vmware.vmware_cfg_backup: Failed to write backup file
##### SUMMARY
When I run the ansible playbook, it returns a "Failed to write backup file" error.
```
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ESXI backup test] ******************************************************************************************************************************************************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Failed to write backup file. Ensure that the dest path exists and is writable. Details : <urlopen error [Errno 111] Connection refused>"}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`community.vmware.vmware_cfg_backup`
##### ANSIBLE VERSION
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/sergen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0]
```
##### COLLECTION VERSION
```paste below
ansible-galaxy collection list community.vmware
usage: ansible-galaxy collection [-h] COLLECTION_ACTION ...
ansible-galaxy collection: error: argument COLLECTION_ACTION: invalid choice: 'list' (choose from 'init', 'build', 'publish', 'install')
```
The list command is not working, but I installed the collection with this command:
`ansible-galaxy collection install community.vmware`
##### CONFIGURATION
```paste below
```
no output
##### OS / ENVIRONMENT
Ubuntu 20.04
VMware ESXI 7.0.3
## YAML FILE
Both playbooks below return the same result.
```
- hosts: localhost
vars:
esxi_hostname: "192.168.88.154"
esxi_username: "root"
esxi_password: "password"
tasks:
- name: ESXI backup test
local_action:
module: vmware_cfg_backup
hostname: '{{esxi_hostname}}'
username: '{{esxi_username}}'
password: '{{esxi_password}}'
state: saved
dest: /tmp/
validate_certs: no
```
```
cat vmware.yaml
- hosts: localhost
vars:
esxi_hostname: "192.168.88.154"
esxi_username: "root"
esxi_password: "password"
tasks:
- name: Save
community.vmware.vmware_cfg_backup:
hostname: '{{ esxi_hostname }}'
username: '{{ esxi_username }}'
password: '{{ esxi_password }}'
state: saved
dest: /tmp/
validate_certs: no
delegate_to: localhost
```
##### ACTUAL RESULTS
```
ansible-playbook vmware.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [Save] ******************************************************************************************************************************************************************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Failed to write backup file. Ensure that the dest path exists and is writable. Details : <urlopen error [Errno 111] Connection refused>"}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
| !component
@sergenaras, did you manage to solve this? I am facing the same issue.
@nb25186 no, I couldn't solve it. I think this community is inactive or something like that
I've tested this:
> ```
> - hosts: localhost
> tasks:
> - name: Save
> community.vmware.vmware_cfg_backup:
> hostname: '192.168.88.154'
> username: 'root'
> password: 'password'
> state: saved
> dest: /tmp/
> validate_certs: no
> delegate_to: localhost
> ```
with
- ansible 2.9.27
- ansible 5.6.0 (ansible-core 2.12.4 + community.vmware 1.18.0)
- ansible 6.0.0a1 (ansible-core 2.13.0b0 + community.vmware 2.2.0)
but couldn't reproduce your issue. I thought you might have lockdown enabled, but in this case the error message is different:
`fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "User root does not have required permission to log on to vCenter or ESXi API at 192.168.88.154:443 : Permission to perform this operation was denied."}`
Although your error message says something about being unable to write backup file, this
`Details : <urlopen error [Errno 111] Connection refused>`
looks like a network connectivity problem. What happens if you run `curl https://192.168.88.154` on your ansible machine?
```
curl http://192.168.88.154
<HTML><BODY><H1>301 Moved Permanently</H1></BODY></HTML>
```
```
curl https://192.168.88.154
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```
Is it about SSL? But I set _validate_certs: no_ in Ansible.
Too bad, I was hoping that it was a simple connectivity issue. I'm afraid I'm running out of ideas, especially since I can't reproduce the problem.
@Akasurde @sky-joker Do you have any ideas?
> Is it about SSL? But I set _validate_certs: no_ in Ansible.
I doubt it. When I remove `validate_certs: no` I get an explicit error message about certificate verification:
```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable to connect to vCenter or ESXi API at 192.168.88.154 on TCP/443: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)"}
```
As a work-around, couldn't you get the configuration through vCenter?
```
- hosts: localhost
gather_facts: false
tasks:
- name: Save
vmware_cfg_backup:
hostname: "vCenter"
username: "vCenter_User"
password: "vCenter_PW"
esxi_hostname: "ESXi"
state: saved
dest: /tmp/
validate_certs: no
delegate_to: localhost
```
I'm doing all this on VMware Workstation and unfortunately I don't have the resources to install vCenter. That's why I couldn't test with vCenter.
Like Serge, I am also working with a standalone ESXi, no vCenter available here. curl from the Ansible machine to the ESXi host over http/https (the latter with -k to ignore cert verification) ran well.
I ran a playbook like Mario's and still get the same issue (I added a netstat at the end of the ansible command)
[screenshot: ansible-playbook run with netstat output, https://user-images.githubusercontent.com/92930247/164017710-3b68370f-a756-490e-98b6-9a831b78adeb.png]
Same problem here.
I checked file permissions and firewall rules (ports 22, 443), so it doesn't seem to be an issue with that. However, the error message doesn't really make clear whether it is a disk-write problem or a connection problem. So I checked the code, stack trace and network again, this time with ports 22, 80 and 443.
### Stacktrace
```bash
The full traceback is:
File "/tmp/ansible_vmware_cfg_backup_payload_mjf8j77e/ansible_vmware_cfg_backup_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_cfg_backup.py", line 183, in save_configuration
File "/tmp/ansible_vmware_cfg_backup_payload_mjf8j77e/ansible_vmware_cfg_backup_payload.zip/ansible/module_utils/urls.py", line 1393, in open_url
return Request().open(method, url, data=data, headers=headers, use_proxy=use_proxy,
File "/tmp/ansible_vmware_cfg_backup_payload_mjf8j77e/ansible_vmware_cfg_backup_payload.zip/ansible/module_utils/urls.py", line 1304, in open
return urllib_request.urlopen(request, None, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/lib/python3.8/urllib/request.py", line 542, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1383, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/lib/python3.8/urllib/request.py", line 1357, in do_open
raise URLError(err)
fatal: [esx_XY -> localhost]: FAILED! => changed=false
invocation:
module_args:
dest: /tmp/LOCATION_XY
esxi_hostname: null
hostname: esx_XY
password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
port: 443
proxy_host: null
proxy_port: null
src: null
state: saved
username: nobody_XY
validate_certs: false
msg: 'Failed to write backup file. Ensure that the dest path exists and is writable. Details : <urlopen error timed out>'
```
### VMWare Collection Version
```bash
Collection Version
---------------- -------
community.vmware 1.12.0
```
I know it is a dead-old version, but there seem to have been no updates to the vmware_cfg_backup.py module since then.
### NMAP Ansible Controller
```bash
Host is up (0.00069s latency).
PORT STATE SERVICE
22/tcp open ssh
80/tcp filtered http
443/tcp open https
```
Have a look at the **filtered** port 80!
### Finding/Cross-Check
Interesting fact: the playbook works when executed locally.
### NMAP local machine
My nmap says this time
```bash
Host is up (0.00069s latency).
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
443/tcp open https
```
Have a look at the **open** port 80!
So...
### Solution
I found out that the "preflight" or HEAD request (or another call made by the module) probably needs HTTP with port 80 open, and that from our Ansible controller we did not have port 80 open, while locally I did have access to port 80.
So the solution was to **_open port 80_**, and vmware_cfg_backup.py works as expected.
### Security
However, from a security perspective, is it really necessary to open port 80? Shouldn't this be declared a bug and fixed in the module itself, so that it doesn't use port 80 and can work with port 443 only?! I think it should. | 2022-05-31T13:59:02 |
|
ansible-collections/community.vmware | 1,362 | ansible-collections__community.vmware-1362 | [
"1290"
] | c673d7517eb3a49bbb83e29661ab65fc84cc61c1 | diff --git a/plugins/modules/vmware_content_deploy_ovf_template.py b/plugins/modules/vmware_content_deploy_ovf_template.py
--- a/plugins/modules/vmware_content_deploy_ovf_template.py
+++ b/plugins/modules/vmware_content_deploy_ovf_template.py
@@ -260,7 +260,7 @@ def deploy_vm_from_ovf_template(self):
self._resourcepool_id = cluster_obj.resource_pool
# Find the resourcepool by the given resourcepool name
- if self.resourcepool and self.cluster and self.host:
+ if self.resourcepool:
self._resourcepool_id = self.get_resource_pool_by_name(self.datacenter, self.resourcepool, self.cluster, self.host)
if not self._resourcepool_id:
self._fail(msg="Failed to find the resource_pool %s" % self.resourcepool)
diff --git a/plugins/modules/vmware_content_deploy_template.py b/plugins/modules/vmware_content_deploy_template.py
--- a/plugins/modules/vmware_content_deploy_template.py
+++ b/plugins/modules/vmware_content_deploy_template.py
@@ -279,7 +279,7 @@ def deploy_vm_from_template(self, power_on=False):
self._resourcepool_id = cluster_obj.resource_pool
# Find the resourcepool by the given resourcepool name
- if self.resourcepool and self.cluster and self.host:
+ if self.resourcepool:
self._resourcepool_id = self.get_resource_pool_by_name(self.datacenter, self.resourcepool, self.cluster, self.host)
if not self._resourcepool_id:
self._fail(msg="Failed to find the resource_pool %s" % self.resourcepool)
| vmware_content_deploy_ovf_template does not respet resource_pool parameter
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When creating a VM from an OVF template in a Content Library, no matter what resource_pool you try to configure ( either it does exist or not ) the VM get created in the top level resource pool ( if such thing exist ).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_content_deploy_ovf_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible --version
ansible 2.9.20
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pigi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib64/python3.9/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.10 (main, Jan 16 2022, 02:41:55) [GCC 11.2.0]
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Have a vmware environment with some resource pools.
Create a Content Library and upload an ovf
Try to create a VM from this ovf in a specific resource pool
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: CreateBootStrap
user: root
hosts: localhost
#become: true
gather_facts: false
vars:
# Define your username and password here that you want to create on target hosts.
vcenter_hostname: vcsa67
vcenter_username: pigi
vcenter_password: Passw0rd
OCP_template: rhcos-vmware.x86_64
OCP_Library: Lib_CL_RHCOS
Cluster: "Compute LAB Cluster"
DataStore: Compute_03-Sistemi
folderVM: "Pigi_folder_vm"
DataCenter: "LAB Datacenter"
NameVM: "Test_for_OCP"
ResPool: "OCP_TST"
- name: create machine from template.
community.vmware.vmware_content_deploy_ovf_template:
cluster: '{{ Cluster }}'
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
template: '{{ OCP_template }}'
content_library: '{{ OCP_Library }}'
datastore: '{{ DataStore }}'
folder: '{{ folderVM }}'
datacenter: '{{ DataCenter }}'
name: '{{ NameVM }}'
resource_pool: '{{ ResPool }}'
storage_provisioning: thin
validate_certs: no
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expepct that the VM would be created in the OCP_TST resource pool
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The VM get created in the correct cluster but outside the resource pool
<!--- Paste verbatim command output between quotes -->
```paste below
```
| This bug is still available in the latest version of vmware ansible collections.
Its taking the `resource_pool` value but there is no code to handle it.
Issue seems to be in https://github.com/ansible-collections/community.vmware/blob/5326613d98fe55cd2298ea7a361b2d5778f0a2f9/plugins/modules/vmware_content_deploy_template.py#L283 line. Its trying to get the resource pool details using `get_resource_pool_by_name` function. But that function itself is not available. Hence no activity is happening at that lebel.
- My installed version of collection:
```
$ ansible-galaxy collection list community.vmware
# /usr/local/lib/python3.8/dist-packages/ansible_collections
Collection Version
---------------- -------
community.vmware 1.18.0
```
Can we get this fixed ?
@Udayendu This issue is about the `vmware_content_deploy_ovf_template` module. But you're linking to the `vmware_content_deploy_template`:
> Issue seems to be in
>
> https://github.com/ansible-collections/community.vmware/blob/5326613d98fe55cd2298ea7a361b2d5778f0a2f9/plugins/modules/vmware_content_deploy_template.py#L283
Do you run into this issue with `vmware_content_deploy_ovf_template`, with `vmware_content_deploy_template` or with both?
> Its trying to get the resource pool details using `get_resource_pool_by_name` function. But that function itself is not available. Hence no activity is happening at that lebel.
I think you're wrong there. You see, if this method wouldn't be available I think the module would crash. And, anyway, the method _is_ defined in `VmwareRestClient` from where `VmwareContentDeployTemplate` inherits it:
https://github.com/ansible-collections/community.vmware/blob/1f295a8dbf7745e051c1e7f2fccae79829a6c6cb/plugins/module_utils/vmware_rest_client.py#L370-L375
@Pigi-102 It looks like the module searches for the resource pool only if both `cluster` and `host` are defined:
https://github.com/ansible-collections/community.vmware/blob/1f295a8dbf7745e051c1e7f2fccae79829a6c6cb/plugins/modules/vmware_content_deploy_ovf_template.py#L262-L264
Could you please try to deploy to a specific host in the cluster? I'd like to see what happens in this case. It would help me to fix this.
@n3pjk Could you have a look at this? You've been the last one to work on this module, so I thought you might be interested.
Sure thing! I don't use resource pools since they're evil, but I'll have a look.
Sent from my Verizon, Samsung Galaxy smartphone
Get Outlook for Android<https://aka.ms/AAb9ysg>
________________________________
From: Mario Lenz ***@***.***>
Sent: Saturday, June 11, 2022 11:30:22 AM
To: ansible-collections/community.vmware ***@***.***>
Cc: Paul Knight ***@***.***>; Mention ***@***.***>
Subject: Re: [ansible-collections/community.vmware] vmware_content_deploy_ovf_template does not respet resource_pool parameter (Issue #1290)
@n3pjk<https://github.com/n3pjk> Could you have a look at this? You've been the last one to work on this module, so I thought you might be interested.
—
Reply to this email directly, view it on GitHub<https://github.com/ansible-collections/community.vmware/issues/1290#issuecomment-1152951458>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AA2LV6YAXZMZ5HAOO6EB5V3VOSWI5ANCNFSM5TQOIZMQ>.
You are receiving this because you were mentioned.Message ID: ***@***.***>
> @Pigi-102 It looks like the module searches for the resource pool only if both `cluster` and `host` are defined:
>
> https://github.com/ansible-collections/community.vmware/blob/1f295a8dbf7745e051c1e7f2fccae79829a6c6cb/plugins/modules/vmware_content_deploy_ovf_template.py#L262-L264
>
> Could you please try to deploy to a specific host in the cluster? I'd like to see what happens in this case. It would help me to fix this.
Hello,
unfortunally at the moment I don't have access to an infrastructure to execute the test.
In case I will gain a new access I will try it.
Sorry.
@mariolenz,
Issue is with both the modules. As this bug was already opened, so I though I should include it here.
But if you want I can open another bug for `vmware_content_deploy_template` and map it here as patch should go to both the tools separately.
> Issue is with both the modules. As this bug was already opened, so I though I should include it here. But if you want I can open another bug for `vmware_content_deploy_template` and map it here as patch should go to both the tools separately.
@Udayendu I don't think that's necessary. Let's track both issues here. It's possibly the same basic problem because `vmware_content_deploy_template` also searches for the resource pool only if both `cluster` and `host` are defined
https://github.com/ansible-collections/community.vmware/blob/1f295a8dbf7745e051c1e7f2fccae79829a6c6cb/plugins/modules/vmware_content_deploy_template.py#L281-L283
Could you please specify both `host` and `cluster` in addition to `resource_pool`? If this works, I think we've already found the problem.
> > Issue is with both the modules. As this bug was already opened, so I though I should include it here. But if you want I can open another bug for `vmware_content_deploy_template` and map it here as patch should go to both the tools separately.
>
> @Udayendu I don't think that's necessary. Let's track both issues here. It's possibly the same basic problem because `vmware_content_deploy_template` also searches for the resource pool only if both `cluster` and `host` are defined
>
> https://github.com/ansible-collections/community.vmware/blob/1f295a8dbf7745e051c1e7f2fccae79829a6c6cb/plugins/modules/vmware_content_deploy_template.py#L281-L283
>
> Could you please specify both `host` and `cluster` in addition to `resource_pool`? If this works, I think we've already found the problem.
So far I was just using the cluster & resource_pool. Never tried to use the host as cluster supposed to do that selection. But I will try and update you by Monday.
@mariolenz
I have completed the testing and here is my observation:
- Its able to deploy the vm under resource pool if cluster, resource pool and esxi hosts are selected at a time.
- ESXi server should be 6.7 and higher. Initial I tried on couple of ESXi 6.5 hosts and it failed with a compatibility error message.
I see the problem. In both modules, cluster and host are in the if clause. I think removing them from the clause should correct the problem. I will submit a PR later today.
Sent from my Verizon, Samsung Galaxy smartphone
Get Outlook for Android<https://aka.ms/AAb9ysg>
________________________________
From: Udayendu Kar ***@***.***>
Sent: Monday, June 20, 2022 1:57:29 AM
To: ansible-collections/community.vmware ***@***.***>
Cc: Paul Knight ***@***.***>; Mention ***@***.***>
Subject: Re: [ansible-collections/community.vmware] vmware_content_deploy_ovf_template does not respet resource_pool parameter (Issue #1290)
@mariolenz<https://github.com/mariolenz>
I have completed the testing and here is my observation:
* Its able to deploy the vm under resource pool if cluster, resource pool and esxi hosts are selected at a time.
* ESXi server should be 6.7 and higher. Initial I tried on couple of ESXi 6.5 hosts and it failed with a compatibility error message.
—
Reply to this email directly, view it on GitHub<https://github.com/ansible-collections/community.vmware/issues/1290#issuecomment-1160005247>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AA2LV67FVMCNNNBQ5OE3YT3VQAB4TANCNFSM5TQOIZMQ>.
You are receiving this because you were mentioned.Message ID: ***@***.***>
| 2022-06-22T12:54:36 |
|
ansible-collections/community.vmware | 1,402 | ansible-collections__community.vmware-1402 | [
"1401"
] | b1abe47f7f76142132749f29b021d2020e259846 | diff --git a/plugins/modules/vmware_guest.py b/plugins/modules/vmware_guest.py
--- a/plugins/modules/vmware_guest.py
+++ b/plugins/modules/vmware_guest.py
@@ -3373,13 +3373,13 @@ def main():
# Check requirements for virtualization based security
if pyv.params['hardware']['virt_based_security']:
if not pyv.params['hardware']['nested_virt']:
- pyv.module.fail("Virtualization based security requires nested virtualization. Please enable nested_virt.")
+ pyv.module.fail_json(msg="Virtualization based security requires nested virtualization. Please enable nested_virt.")
if not pyv.params['hardware']['secure_boot']:
- pyv.module.fail("Virtualization based security requires (U)EFI secure boot. Please enable secure_boot.")
+ pyv.module.fail_json(msg="Virtualization based security requires (U)EFI secure boot. Please enable secure_boot.")
if not pyv.params['hardware']['iommu']:
- pyv.module.fail("Virtualization based security requires I/O MMU. Please enable iommu.")
+ pyv.module.fail_json(msg="Virtualization based security requires I/O MMU. Please enable iommu.")
# Check if the VM exists before continuing
vm = pyv.get_vm()
| vmware_guest: 'AnsibleModule' object has no attribute 'fail'
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Task failed when try to enable VBS on Windows VM
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible [core 2.13.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.12 (default, Dec 12 2021, 11:39:22) [GCC 7.3.0]
jinja version = 3.0.3
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Windows 10
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: test vbs config
vmware_guest:
hostname: "{{ vsphere_host_name }}"
username: "{{ vsphere_host_user }}"
password: "{{ vsphere_host_user_password }}"
validate_certs: "{{ validate_certs | default(false) }}"
datacenter: "{{ vsphere_host_datacenter }}"
folder: "{{ vm_folder }}"
name: "{{ vm_name }}"
hardware:
virt_based_security: "{{ win_enable_vbs }}"
register: vm_config_vbs_result
- debug: var=vm_config_vbs_result
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Windows VM VBS enabled
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Task failed
<!--- Paste verbatim command output between quotes -->
```paste below
The full traceback is:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1658208036.0432546-5446-152058052578346/AnsiballZ_vmware_guest.py", line 107, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1658208036.0432546-5446-152058052578346/AnsiballZ_vmware_guest.py", line 99, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1658208036.0432546-5446-152058052578346/AnsiballZ_vmware_guest.py", line 47, in invoke_module
runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_guest', init_globals=dict(_module_fqn='ansible_collections.community.vmware.plugins.modules.vmware_guest', _modlib_path=modlib_path),
File "/usr/local/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/local/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/local/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_vmware_guest_payload_ik7gb7i0/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 3463, in <module>
File "/tmp/ansible_vmware_guest_payload_ik7gb7i0/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 3376, in main
AttributeError: 'AnsibleModule' object has no attribute 'fail'
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1658208036.0432546-5446-152058052578346/AnsiballZ_vmware_guest.py\", line 107, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1658208036.0432546-5446-152058052578346/AnsiballZ_vmware_guest.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1658208036.0432546-5446-152058052578346/AnsiballZ_vmware_guest.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_guest', init_globals=dict(_module_fqn='ansible_collections.community.vmware.plugins.modules.vmware_guest', _modlib_path=modlib_path),\n File \"/usr/local/lib/python3.8/runpy.py\", line 207, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/local/lib/python3.8/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/local/lib/python3.8/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_vmware_guest_payload_ik7gb7i0/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py\", line 3463, in <module>\n File \"/tmp/ansible_vmware_guest_payload_ik7gb7i0/ansible_vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py\", line 3376, in main\nAttributeError: 'AnsibleModule' object has no attribute 'fail'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
| 2022-07-19T07:40:25 |
||
ansible-collections/community.vmware | 1,408 | ansible-collections__community.vmware-1408 | [
"1407"
] | 19b8ffed6d6220d866dd4f87c3180d3a5fdf873a | diff --git a/plugins/modules/vmware_vm_info.py b/plugins/modules/vmware_vm_info.py
--- a/plugins/modules/vmware_vm_info.py
+++ b/plugins/modules/vmware_vm_info.py
@@ -351,8 +351,9 @@ def get_virtual_machines(self):
datacenter = get_parent_datacenter(vm)
datastore_url = list()
datastore_attributes = ('name', 'url')
- if vm.config.datastoreUrl:
- for entry in vm.config.datastoreUrl:
+ vm_datastore_urls = _get_vm_prop(vm, ('config', 'datastoreUrl'))
+ if vm_datastore_urls:
+ for entry in vm_datastore_urls:
datastore_url.append({key: getattr(entry, key) for key in dir(entry) if key in datastore_attributes})
virtual_machine = {
"guest_name": summary.config.name,
| vmware_vm_info module throws AttributeError: 'NoneType' object has no attribute 'datastoreUrl'
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Module `vmware_vm_info` chokes on objects without datastoreUrl attribute. Most likely this is not an issue under normal circumstances, and the playbook worked for me in the past. It took some time to figure out our vCenter had a couple of inaccessible VMs, which obviously don't have a lot of attributes.
I barely know anything about Python but was able to fix the issue based on examples of similar past issues.
```
diff -u vmware_vm_info.py.bak vmware_vm_info.py
--- vmware_vm_info.py.bak 2022-07-21 09:09:41.482721250 -0500
+++ vmware_vm_info.py 2022-07-21 10:12:08.529573282 -0500
@@ -350,7 +350,8 @@
datacenter = get_parent_datacenter(vm)
datastore_url = list()
datastore_attributes = ('name', 'url')
- if vm.config.datastoreUrl:
+ has_datastoreurl = _get_vm_prop(vm, ('config', 'datastoreUrl'))
+ if has_datastoreurl:
for entry in vm.config.datastoreUrl:
datastore_url.append({key: getattr(entry, key) for key in dir(entry) if key in datastore_attributes})
virtual_machine = {
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_vm_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible --version
ansible [core 2.12.4]
config file = /etc/ansible/ansible.cfg
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Sep 13 2021, 08:18:39) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.1.1
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
Collection Version
---------------- -------
community.vmware 2.7.0
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get all VMs
community.vmware.vmware_vm_info:
hostname: "{{ vcc_hostname }}"
username: "{{ vcc_user }}"
password: "{{ vcc_pass }}"
validate_certs: false
vm_type: vm
register: all_vms
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Get a list of all VMs.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
MODULE_STDERR:
Traceback (most recent call last):
File "/home/user/.ansible/tmp/ansible-tmp-1658346852.8637314-2616787-129994423621433/AnsiballZ_vmware_vm_info.py", line 107, in <module>
_ansiballz_main()
File "/home/user/.ansible/tmp/ansible-tmp-1658346852.8637314-2616787-129994423621433/AnsiballZ_vmware_vm_info.py", line 99, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/user/.ansible/tmp/ansible-tmp-1658346852.8637314-2616787-129994423621433/AnsiballZ_vmware_vm_info.py", line 47, in invoke_module
runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_vm_info', init_globals=dict(_module_fqn='ansible_collections.community.vmware.plugins.modules.vmware_vm_info', _modlib_path=modlib_path),
File "/usr/lib64/python3.9/runpy.py", line 210, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_community.vmware.vmware_vm_info_payload_dk_rqyae/ansible_community.vmware.vmware_vm_info_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vm_info.py", line 409, in <module>
File "/tmp/ansible_community.vmware.vmware_vm_info_payload_dk_rqyae/ansible_community.vmware.vmware_vm_info_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vm_info.py", line 403, in main
File "/tmp/ansible_community.vmware.vmware_vm_info_payload_dk_rqyae/ansible_community.vmware.vmware_vm_info_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vm_info.py", line 353, in get_virtual_machines
AttributeError: 'NoneType' object has no attribute 'datastoreUrl'
```
| Files identified in the description:
None
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner ---> | 2022-07-21T16:43:06 |
|
ansible-collections/community.vmware | 1,436 | ansible-collections__community.vmware-1436 | [
"1431"
] | 65d18a66dc7ae3a707ff41080bda85347327c50c | diff --git a/plugins/modules/vmware_vswitch.py b/plugins/modules/vmware_vswitch.py
--- a/plugins/modules/vmware_vswitch.py
+++ b/plugins/modules/vmware_vswitch.py
@@ -648,9 +648,10 @@ def update_teaming_policy(self, spec, results):
# Check failback
if teaming_failback is not None:
results['failback'] = teaming_failback
- if teaming_policy.rollingOrder is not teaming_failback:
- results['notify_switches_previous'] = teaming_policy.rollingOrder
- teaming_policy.rollingOrder = teaming_failback
+ current_failback = not teaming_policy.rollingOrder
+ if current_failback != teaming_failback:
+ results['failback_previous'] = current_failback
+ teaming_policy.rollingOrder = not teaming_failback
changed = True
# Check teaming failover order
| vmware_vswitch sets teaming failback opposite of what's documented
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using `vmware_vswitch` module to set a vSwitch teaming failback mode, the actual operation is opposite of what's [documented](https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_vswitch_module.html#parameters).
`failback: no` actually sets vSwitch Failback=yes and vice versa.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
community.vmware.vmware_vswitch
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible [core 2.13.2]
config file = /home/thomasa/esxi-build/ansible.cfg
configured module search path = ['/home/thomasa/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/thomasa/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/thomasa/.ansible/collections:/usr/share/ansible/collections:/home/thomasa/.local/lib/python3.8/site-packages/ansible_collections
executable location = /home/thomasa/.local/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
Collection Version
---------------- -------
community.vmware 2.7.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
COLLECTIONS_PATHS(/home/thomasa/esxi-build/ansible.cfg) = ['/home/thomasa/.ansible/collections', '/usr/share/ansible/collections', '/home/thomasa/.lo>
DEFAULT_HOST_LIST(/home/thomasa/esxi-build/ansible.cfg) = ['/home/thomasa/esxi-build/inventory.yml']
DEFAULT_PRIVATE_KEY_FILE(/home/thomasa/esxi-build/ansible.cfg) = /home/thomasa/.ssh/id_ecdsa_ansible
DEFAULT_VAULT_PASSWORD_FILE(/home/thomasa/esxi-build/ansible.cfg) = /home/thomasa/.vault_passwd
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
ESXi 7.0U3f (build 20036589)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
**Step 1 - Query current vSwitch failback setting**
```
PS C:\Users\thomasa> Get-VirtualSwitch -name vSwitch0 | Get-NicTeamingPolicy | Select-Object *
VirtualSwitchId : key-vim.host.VirtualSwitch-vSwitch0
VirtualSwitch : vSwitch0
BeaconInterval : 1
LoadBalancingPolicy : LoadBalanceSrcId
NetworkFailoverDetectionPolicy : LinkStatus
NotifySwitches : True
FailbackEnabled : True
ActiveNic : {vmnic0, vmnic1}
StandbyNic :
UnusedNic :
CheckBeacon : False
VmHostId : HostSystem-ha-host
ExtensionData : VMware.Vim.HostNicTeamingPolicy
Uid : /[email protected]:443/VMHost=HostSystem-ha-host/VirtualSwitch=key-vim.host
.VirtualSwitch-vSwitch0/NicTeamingVirtualSwitchPolicy=/
```
**Step 2- Run playbook to change vSwitch failback setting**
Sample play
```
- name: Change vSwitch0 settings
community.vmware.vmware_vswitch:
esxi_hostname: "{{ inventory_hostname }}"
hostname: "{{ mgmt_ip }}"
username: "{{ ansible_user }}"
password: "{{ password }}"
validate_certs: false
switch: 'vSwitch0'
nics: ['vmnic0','vmnic1']
state: present
teaming:
failback: yes # works reverse of how its documented
delegate_to: localhost
```
Output
```
changed: [esxi-1 -> localhost] => {
"changed": true,
"failback": true,
"invocation": {
"module_args": {
"esxi_hostname": "esxi-1",
"hostname": "192.168.192.11",
"mtu": 1500,
"nics": [
"vmnic0",
"vmnic1"
],
"number_of_ports": 128,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"proxy_host": null,
"proxy_port": null,
"security": null,
"state": "present",
"switch": "vSwitch0",
"teaming": {
"active_adapters": null,
"failback": true,
"load_balancing": null,
"network_failure_detection": null,
"notify_switches": null,
"standby_adapters": null
},
"traffic_shaping": null,
"username": "root",
"validate_certs": false
}
},
"notify_switches_previous": false,
"result": "vSwitch 'vSwitch0' is updated successfully"
}
```
**Step 3 - Query vSwitch failback setting after running playbook**
```
PS C:\Users\thomasa> Get-VirtualSwitch -name vSwitch0 | Get-NicTeamingPolicy | Select-Object *
VirtualSwitchId : key-vim.host.VirtualSwitch-vSwitch0
VirtualSwitch : vSwitch0
BeaconInterval : 1
LoadBalancingPolicy : LoadBalanceSrcId
NetworkFailoverDetectionPolicy : LinkStatus
NotifySwitches : True
FailbackEnabled : False
ActiveNic : {vmnic0, vmnic1}
StandbyNic :
UnusedNic :
CheckBeacon : False
VmHostId : HostSystem-ha-host
ExtensionData : VMware.Vim.HostNicTeamingPolicy
Uid : /[email protected]:443/VMHost=HostSystem-ha-host/VirtualSwitch=key-vim.host
.VirtualSwitch-vSwitch0/NicTeamingVirtualSwitchPolicy=/
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
`failback: yes` setting should result in `FailbackEnabled: True` setting for vSwitch
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Opposite if configured, as documented above.
`failback: yes` setting will result in `FailbackEnabled: False` setting for vSwitch
`failback: no` setting will result in `FailbackEnabled: True` setting for vSwitch
| Files identified in the description:
None
If these files are inaccurate, please update the `component name` section of the description or use the `!component` bot command.
[click here for bot help](https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md)
<!--- boilerplate: components_banner --->
Same issue after upgrading to `community.vmware 2.8.0`
The module sets `rollingOrder` to `failback`:
https://github.com/ansible-collections/community.vmware/blob/65d18a66dc7ae3a707ff41080bda85347327c50c/plugins/modules/vmware_vswitch.py#L648-L654
However, according to the [documentation](https://vdc-download.vmware.com/vmwb-repository/dcr-public/bf660c0a-f060-46e8-a94d-4b5e6ffc77ad/208bc706-e281-49b6-a0ce-b402ec19ef82/SDK/vsphere-ws/docs/ReferenceGuide/vim.host.NetworkPolicy.NicTeamingPolicy.html) rollingOrder works directly opposed to how people understand failback:
> For example, assume the explicit link order is (vmnic9, vmnic0), therefore vmnic9 goes down, vmnic0 comes up. However, when vmnic9 comes backup, if rollingOrder is set to be true, vmnic0 continues to be used, otherwise, vmnic9 is restored as specified in the explicitly order.
So you get a _failback_ behavior if you set `rollingOrder` to `false`. But this should be easily fixed. | 2022-08-26T14:10:40 |
|
ansible-collections/community.vmware | 1,437 | ansible-collections__community.vmware-1437 | [
"1430"
] | 65d18a66dc7ae3a707ff41080bda85347327c50c | diff --git a/plugins/modules/vmware_content_library_info.py b/plugins/modules/vmware_content_library_info.py
--- a/plugins/modules/vmware_content_library_info.py
+++ b/plugins/modules/vmware_content_library_info.py
@@ -93,25 +93,46 @@ def __init__(self, module):
"""Constructor."""
super(VmwareContentLibInfo, self).__init__(module)
self.content_service = self.api_client
+ self.local_content_libraries = self.content_service.content.LocalLibrary.list()
+ if self.local_content_libraries is None:
+ self.local_content_libraries = []
+
+ self.subscribed_content_libraries = self.content_service.content.SubscribedLibrary.list()
+ if self.subscribed_content_libraries is None:
+ self.subscribed_content_libraries = []
+
self.library_info = []
def get_all_content_libs(self):
"""Method to retrieve List of content libraries."""
- self.module.exit_json(changed=False, content_libs=self.content_service.content.LocalLibrary.list())
+ content_libraries = self.local_content_libraries + self.subscribed_content_libraries
+
+ self.module.exit_json(changed=False, content_libs=content_libraries)
def get_content_lib_details(self, library_id):
"""Method to retrieve Details of contentlib with library_id"""
- try:
- lib_details = self.content_service.content.LocalLibrary.get(library_id)
- except Exception as e:
- self.module.fail_json(exists=False, msg="%s" % self.get_error_message(e))
- lib_publish_info = dict(
- persist_json_enabled=lib_details.publish_info.persist_json_enabled,
- authentication_method=lib_details.publish_info.authentication_method,
- publish_url=lib_details.publish_info.publish_url,
- published=lib_details.publish_info.published,
- user_name=lib_details.publish_info.user_name
- )
+ lib_publish_info = None
+
+ if library_id in self.local_content_libraries:
+ try:
+ lib_details = self.content_service.content.LocalLibrary.get(library_id)
+ lib_publish_info = dict(
+ persist_json_enabled=lib_details.publish_info.persist_json_enabled,
+ authentication_method=lib_details.publish_info.authentication_method,
+ publish_url=lib_details.publish_info.publish_url,
+ published=lib_details.publish_info.published,
+ user_name=lib_details.publish_info.user_name
+ )
+ except Exception as e:
+ self.module.fail_json(exists=False, msg="%s" % self.get_error_message(e))
+ elif library_id in self.subscribed_content_libraries:
+ try:
+ lib_details = self.content_service.content.SubscribedLibrary.get(library_id)
+ except Exception as e:
+ self.module.fail_json(exists=False, msg="%s" % self.get_error_message(e))
+ else:
+ self.module.fail_json(exists=False, msg="Library %s not found." % library_id)
+
self.library_info.append(
dict(
library_name=lib_details.name,
| vmware_content_library_info: Only lists Content Libraries with the type of "Local", does not include "Subscribed" type
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using the vmware_content_library_info task type to query our Content Libraries, only the Libraries with the type of "Local" are reported back to the ansible task. We used shared or "Subscribed" Content Libraries in our environment, to share a consistent Library of VM Templates between all of our vCenters.
How can we get this functionality added?
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_content_library_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.9
config file = /home/<redacted>/.ansible.cfg
configured module search path = ['/home/<redacted>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<redacted>/.local/lib/python3.8/site-packages/ansible
executable location = /home/<redacted>/.local/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
# /usr/local/lib/python3.8/dist-packages/ansible_collections
Collection Version
---------------- -------
community.vmware 1.10.0
# /home/<redacted>/.local/lib/python3.8/site-packages/ansible_collections
Collection Version
---------------- -------
community.vmware 1.10.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/<redacted>/.ansible.cfg) = ['/home/<redacted>/inventory']
DEFAULT_LOG_PATH(/home/<redacted>/.ansible.cfg) = /home/<redacted>/.ansible/logs/log.txt
DEFAULT_TIMEOUT(/home/<redacted>/.ansible.cfg) = 120
DEFAULT_VAULT_PASSWORD_FILE(/home/<redacted>/.ansible.cfg) = /home/<redacted>/playbooks/secret.yaml
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
vCenter Version - 7.0.3 build 20150588
Client - vSphere Client version 7.0.3.00700
Hosts - VMware ESXi, 7.0.3, 20036589
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Collect list of Content Libraries from specified vCenter"
community.vmware.vmware_content_library_info:
hostname: "{{ hostname }}"
username: "{{ username }}"
password: "{{ password }}"
validate_certs: no
register: libraries
- name: "Display list of found Content Libraries"
debug:
var: libraries
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```yaml
TASK [Display list of found Content Libraries] ****************************************************************************************************************************
ok: [localhost] => {
"libraries": {
"changed": false,
"content_libs": [
"6b5e0c60-3173-4a75-8101-33335f3bb7dd",
"7bd40369-84d6-4fd5-9cf9-7c33377f3931"
],
"failed": false
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```yaml
TASK [Display list of found Content Libraries] ****************************************************************************************************************************
ok: [localhost] => {
"libraries": {
"changed": false,
"content_libs": [
"6b5e0c60-3173-4a75-8101-33335f3bb7dd"
],
"failed": false
}
}
```
| 1.10.0 is quite old...
However, it looks like `vmware_content_library_info` still doesn't know about subscribed content libraries in the latest release. I'll try to work on this, but you probably would have to update to (at least) `community.vmware` 2.9.0.
_edit:_ I think we could use the [Content Subscribed Library APIs](https://developer.vmware.com/apis/vsphere-automation/latest/content/content/subscribed_library/) to achieve this. | 2022-08-29T10:17:31 |
|
ansible-collections/community.vmware | 1,441 | ansible-collections__community.vmware-1441 | [
"1440"
] | 1fa7f9bf6dd38da33f4901caed4c7f57a137f7dc | diff --git a/plugins/modules/vmware_cfg_backup.py b/plugins/modules/vmware_cfg_backup.py
--- a/plugins/modules/vmware_cfg_backup.py
+++ b/plugins/modules/vmware_cfg_backup.py
@@ -135,6 +135,8 @@ def load_configuration(self):
url = self.host.configManager.firmwareSystem.QueryFirmwareConfigUploadURL()
url = url.replace('*', self.cfg_hurl)
+ if self.module.params["port"] == 443:
+ url = url.replace("http:", "https:")
# find manually the url if there is a redirect because urllib2 -per RFC- doesn't do automatic redirects for PUT requests
try:
open_url(url=url, method='HEAD', validate_certs=self.validate_certs)
| community.vmware.vmware_cfg_backup: urlopen error timed out on config restore
##### SUMMARY
When attempting a restore to a host the module fails with the below URL timed out failure, due to port 80 being blocked.
Similar issue to #1270 which was for backup, and a similar fix works
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`vmware_cfg_backup`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible [core 2.11.12]
config file = None
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ansible/venvs/vmware-runner/lib64/python3.6/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /home/ansible/venvs/vmware-runner/bin/ansible
python version = 3.6.8 (default, Oct 13 2020, 16:18:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 3.0.3
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
# /home/ansible/.ansible/collections/ansible_collections
Collection Version
---------------- -------
community.vmware 2.7.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
Centos 7 (with venv for ansible)
Vmware ESXi 7.0.3
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
This needs port 80 blocked to the host to show the failure.
I identified the issue via packet capture, and then found the related issue and fix.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
vars:
esxi_hostname: "192.168.88.154"
esxi_username: "root"
esxi_password: "password"
tasks:
- name: ESXI restore test
community.vmware.vmware_cfg_backup:
hostname: '{{esxi_hostname}}'
username: '{{esxi_username}}'
password: '{{esxi_password}}'
validate_certs: no
state: loaded
src: ./configbackup.tgz
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Module should restore the backup file to the host.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [localhost] *********************************************************************************************************************************************
TASK [ESXI restore test] *************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/tmp/ansible_community.vmware.vmware_cfg_backup_payload_hgs7dk1q/ansible_community.vmware.vmware_cfg_backup_payload.zip/ansible/module_utils/urls.py:176: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.\nTraceback (most recent call last):\n File \"/usr/lib64/python3.6/urllib/request.py\", line 1349, in do_open\n encode_chunked=req.has_header('Transfer-encoding'))\n File \"/usr/lib64/python3.6/http/client.py\", line 1254, in request\n self._send_request(method, url, body, headers, encode_chunked)\n File \"/usr/lib64/python3.6/http/client.py\", line 1300, in _send_request\n self.endheaders(body, encode_chunked=encode_chunked)\n File \"/usr/lib64/python3.6/http/client.py\", line 1249, in endheaders\n self._send_output(message_body, encode_chunked=encode_chunked)\n File \"/usr/lib64/python3.6/http/client.py\", line 1036, in _send_output\n self.send(msg)\n File \"/usr/lib64/python3.6/http/client.py\", line 974, in send\n self.connect()\n File \"/usr/lib64/python3.6/http/client.py\", line 946, in connect\n (self.host,self.port), self.timeout, self.source_address)\n File \"/usr/lib64/python3.6/socket.py\", line 724, in create_connection\n raise err\n File \"/usr/lib64/python3.6/socket.py\", line 713, in create_connection\n sock.connect(sa)\nsocket.timeout: timed out\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/home/ansible/.ansible/tmp/ansible-tmp-1661998153.8253503-4467-134295241311897/AnsiballZ_vmware_cfg_backup.py\", line 100, in <module>\n _ansiballz_main()\n File \"/home/ansible/.ansible/tmp/ansible-tmp-1661998153.8253503-4467-134295241311897/AnsiballZ_vmware_cfg_backup.py\", line 92, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/ansible/.ansible/tmp/ansible-tmp-1661998153.8253503-4467-134295241311897/AnsiballZ_vmware_cfg_backup.py\", line 41, in invoke_module\n run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_community.vmware.vmware_cfg_backup_payload_hgs7dk1q/ansible_community.vmware.vmware_cfg_backup_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_cfg_backup.py\", line 231, in <module>\n File \"/tmp/ansible_community.vmware.vmware_cfg_backup_payload_hgs7dk1q/ansible_community.vmware.vmware_cfg_backup_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_cfg_backup.py\", line 227, in main\n File \"/tmp/ansible_community.vmware.vmware_cfg_backup_payload_hgs7dk1q/ansible_community.vmware.vmware_cfg_backup_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_cfg_backup.py\", line 130, in process_state\n File \"/tmp/ansible_community.vmware.vmware_cfg_backup_payload_hgs7dk1q/ansible_community.vmware.vmware_cfg_backup_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_cfg_backup.py\", line 142, in load_configuration\n File 
\"/tmp/ansible_community.vmware.vmware_cfg_backup_payload_hgs7dk1q/ansible_community.vmware.vmware_cfg_backup_payload.zip/ansible/module_utils/urls.py\", line 1541, in open_url\n File \"/tmp/ansible_community.vmware.vmware_cfg_backup_payload_hgs7dk1q/ansible_community.vmware.vmware_cfg_backup_payload.zip/ansible/module_utils/urls.py\", line 1446, in open\n File \"/usr/lib64/python3.6/urllib/request.py\", line 223, in urlopen\n return opener.open(url, data, timeout)\n File \"/usr/lib64/python3.6/urllib/request.py\", line 526, in open\n response = self._open(req, data)\n File \"/usr/lib64/python3.6/urllib/request.py\", line 544, in _open\n '_open', req)\n File \"/usr/lib64/python3.6/urllib/request.py\", line 504, in _call_chain\n result = func(*args)\n File \"/usr/lib64/python3.6/urllib/request.py\", line 1377, in http_open\n return self.do_open(http.client.HTTPConnection, req)\n File \"/usr/lib64/python3.6/urllib/request.py\", line 1351, in do_open\n raise URLError(err)\nurllib.error.URLError: <urlopen error timed out>\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP ***************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
| 2022-09-01T02:22:13 |
||
ansible-collections/community.vmware | 1,456 | ansible-collections__community.vmware-1456 | [
"1196"
] | a74b7a4a8e569c8b0c9171a072366e2c42b9149a | diff --git a/plugins/modules/vsphere_copy.py b/plugins/modules/vsphere_copy.py
--- a/plugins/modules/vsphere_copy.py
+++ b/plugins/modules/vsphere_copy.py
@@ -18,10 +18,6 @@
author:
- Dag Wieers (@dagwieers)
options:
- hostname:
- aliases: ['host']
- username:
- aliases: ['login']
src:
description:
- The file to push to vCenter.
@@ -125,8 +121,6 @@ def vmware_path(datastore, datacenter, path):
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(dict(
- hostname=dict(required=False, aliases=['host']),
- username=dict(required=False, aliases=['login']),
src=dict(required=True, aliases=['name']),
datacenter=dict(required=False),
datastore=dict(required=True),
@@ -140,11 +134,6 @@ def main():
supports_check_mode=False,
)
- if module.params.get('host'):
- module.deprecate("The 'host' option is being replaced by 'hostname'", version='3.0.0', collection_name='community.vmware')
- if module.params.get('login'):
- module.deprecate("The 'login' option is being replaced by 'username'", version='3.0.0', collection_name='community.vmware')
-
hostname = module.params['hostname']
username = module.params['username']
password = module.params.get('password')
| vsphere_copy: Remove deprecated parameters
##### SUMMARY
The parameters `host` and `login` are deprecated and should be removed in version 3.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vsphere_copy
##### ADDITIONAL INFORMATION
#1194
https://github.com/ansible-collections/community.vmware/blob/67b9506a306da2caec9a2eda60003fd54a9df71e/plugins/modules/vsphere_copy.py#L143-L146
| 2022-09-08T15:14:49 |
||
ansible-collections/community.vmware | 1,457 | ansible-collections__community.vmware-1457 | [
"1268"
] | 4d1c5e0bde356601b9bd7c918019077d3bb61cfd | diff --git a/plugins/modules/vmware_vm_config_option.py b/plugins/modules/vmware_vm_config_option.py
--- a/plugins/modules/vmware_vm_config_option.py
+++ b/plugins/modules/vmware_vm_config_option.py
@@ -187,20 +187,20 @@ def get_config_option_recommended(self, guest_os_desc, hwv_version=''):
if dev_type == guest_os_desc[0].recommendedCdromController:
default_cdrom_ctl = name
guest_os_option_dict = {
- 'Hardware version': hwv_version,
- 'Guest ID': guest_os_desc[0].id,
- 'Guest fullname': guest_os_desc[0].fullName,
- 'Default CPU cores per socket': guest_os_desc[0].numRecommendedCoresPerSocket,
- 'Default CPU socket': guest_os_desc[0].numRecommendedPhysicalSockets,
- 'Default memory in MB': guest_os_desc[0].recommendedMemMB,
- 'Default firmware': guest_os_desc[0].recommendedFirmware,
- 'Default secure boot': guest_os_desc[0].defaultSecureBoot,
- 'Support secure boot': guest_os_desc[0].supportsSecureBoot,
- 'Default disk controller': default_disk_ctl,
- 'Default disk size in MB': guest_os_desc[0].recommendedDiskSizeMB,
- 'Default network adapter': default_ethernet,
- 'Default CDROM controller': default_cdrom_ctl,
- 'Default USB controller': default_usb_ctl,
+ 'hardware_version': hwv_version,
+ 'guest_id': guest_os_desc[0].id,
+ 'guest_fullname': guest_os_desc[0].fullName,
+ 'rec_cpu_cores_per_socket': guest_os_desc[0].numRecommendedCoresPerSocket,
+ 'rec_cpu_socket': guest_os_desc[0].numRecommendedPhysicalSockets,
+ 'rec_memory_mb': guest_os_desc[0].recommendedMemMB,
+ 'rec_firmware': guest_os_desc[0].recommendedFirmware,
+ 'default_secure_boot': guest_os_desc[0].defaultSecureBoot,
+ 'support_secure_boot': guest_os_desc[0].supportsSecureBoot,
+ 'default_disk_controller': default_disk_ctl,
+ 'rec_disk_mb': guest_os_desc[0].recommendedDiskSizeMB,
+ 'default_ethernet': default_ethernet,
+ 'default_cdrom_controller': default_cdrom_ctl,
+ 'default_usb_controller': default_usb_ctl,
'support_tpm_20': guest_os_desc[0].supportsTPM20,
'support_persistent_memory': guest_os_desc[0].persistentMemorySupported,
'rec_persistent_memory': guest_os_desc[0].recommendedPersistentMemoryMB,
@@ -262,8 +262,8 @@ def get_config_option_for_guest(self):
# Get supported hardware versions list
support_create_list, default_config = self.get_hardware_versions(env_browser=env_browser)
if self.params.get('get_hardware_versions'):
- results.update({'Supported hardware versions': support_create_list,
- 'Default hardware version': default_config})
+ results.update({'supported_hardware_versions': support_create_list,
+ 'default_hardware_version': default_config})
if self.params.get('get_guest_os_ids') or self.params.get('get_config_options'):
# Get supported guest ID list
@@ -285,7 +285,7 @@ def get_config_option_for_guest(self):
guest_os_options = vm_config_option_guest.guestOSDescriptor
guest_os_option_dict = self.get_config_option_recommended(guest_os_desc=guest_os_options,
hwv_version=vm_config_option_guest.version)
- results.update({'Recommended config options': guest_os_option_dict})
+ results.update({'recommended_config_options': guest_os_option_dict})
self.module.exit_json(changed=False, failed=False, instance=results)
@@ -309,9 +309,6 @@ def main():
['cluster_name', 'esxi_hostname'],
]
)
- module.deprecate(msg="Dict item names in 'instance' result will be changed from strings joined with spaces to"
- " strings joined with underlines, e.g., 'Guest fullname' will be changed to 'guest_fullname'.",
- version='3.0.0', collection_name="community.vmware")
vm_config_option_guest = VmConfigOption(module)
vm_config_option_guest.get_config_option_for_guest()
| diff --git a/tests/integration/targets/vmware_vm_config_option/tasks/main.yml b/tests/integration/targets/vmware_vm_config_option/tasks/main.yml
--- a/tests/integration/targets/vmware_vm_config_option/tasks/main.yml
+++ b/tests/integration/targets/vmware_vm_config_option/tasks/main.yml
@@ -73,7 +73,7 @@
datacenter: "{{ dc1 }}"
esxi_hostname: "{{ esxi_hosts[0] }}"
get_guest_os_ids: true
- hardware_version: "{{ config_option_keys_1['instance']['Supported hardware versions'][0] }}"
+ hardware_version: "{{ config_option_keys_1['instance']['supported_hardware_versions'][0] }}"
register: guest_ids_2
ignore_errors: true
@@ -89,7 +89,7 @@
- not guest_ids_2.failed
when:
- "'instance' in config_option_keys_1"
- - "'Supported hardware versions' in config_option_keys_1['instance']"
+ - "'supported_hardware_versions' in config_option_keys_1['instance']"
# Ignore errors due to there is known issue on vSphere 7.0.x
# https://github.com/vmware/pyvmomi/issues/915
@@ -138,7 +138,7 @@
password: "{{ esxi_password }}"
esxi_hostname: "{{ esxi_hosts[0] }}"
get_guest_os_ids: true
- hardware_version: "{{ config_option_keys_3['instance']['Supported hardware versions'][2] }}"
+ hardware_version: "{{ config_option_keys_3['instance']['supported_hardware_versions'][2] }}"
register: guest_ids_3
ignore_errors: true
| vmware_vm_config_option: Change item names in result
##### SUMMARY
Change item names in result from strings joined with spaces to strings joined with underlines, e.g. `Guest fullname` will be changed to `guest_fullname`.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_vm_config_option
##### ADDITIONAL INFORMATION
See #1202
| 2022-09-09T12:32:40 |
|
ansible-collections/community.vmware | 1,459 | ansible-collections__community.vmware-1459 | [
"1195"
] | 52b308dd2a0e5c1ad2bc4c95e4e3bd750db7645f | diff --git a/plugins/modules/vmware_guest_network.py b/plugins/modules/vmware_guest_network.py
--- a/plugins/modules/vmware_guest_network.py
+++ b/plugins/modules/vmware_guest_network.py
@@ -19,7 +19,7 @@
author:
- Diane Wang (@Tomorrow9) <[email protected]>
notes:
- - For backwards compatibility network_data is returned when using the gather_network_info and networks parameters
+ - For backwards compatibility network_data is returned when using the gather_network_info parameter
options:
name:
description:
@@ -150,90 +150,6 @@
description:
- Return information about current guest network adapters.
type: bool
- networks:
- type: list
- elements: dict
- description:
- - This method will be deprecated, use loops in your playbook for multiple interfaces instead.
- - A list of network adapters.
- - C(mac) or C(label) or C(device_type) is required to reconfigure or remove an existing network adapter.
- - 'If there are multiple network adapters with the same C(device_type), you should set C(label) or C(mac) to match
- one of them, or will apply changes on all network adapters with the C(device_type) specified.'
- - 'C(mac), C(label), C(device_type) is the order of precedence from greatest to least if all set.'
- suboptions:
- mac:
- type: str
- description:
- - MAC address of the existing network adapter to be reconfigured or removed.
- label:
- type: str
- description:
- - Label of the existing network adapter to be reconfigured or removed, e.g., "Network adapter 1".
- device_type:
- type: str
- description:
- - 'Valid virtual network device types are C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov).'
- - Used to add new network adapter, reconfigure or remove the existing network adapter with this type.
- - If C(mac) and C(label) not specified or not find network adapter by C(mac) or C(label) will use this parameter.
- name:
- type: str
- description:
- - Name of the portgroup or distributed virtual portgroup for this interface.
- - When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.
- vlan:
- type: int
- description:
- - VLAN number for this interface.
- dvswitch_name:
- type: str
- description:
- - Name of the distributed vSwitch.
- - This value is required if multiple distributed portgroups exists with the same name.
- state:
- type: str
- description:
- - State of the network adapter.
- - If set to C(present), then will do reconfiguration for the specified network adapter.
- - If set to C(new), then will add the specified network adapter.
- - If set to C(absent), then will remove this network adapter.
- manual_mac:
- type: str
- description:
- - Manual specified MAC address of the network adapter when creating, or reconfiguring.
- - If not specified when creating new network adapter, mac address will be generated automatically.
- - When reconfigure MAC address, VM should be in powered off state.
- - There are restrictions on the MAC addresses you can set. Consult the documentation of your vSphere version as to allowed MAC addresses.
- connected:
- type: bool
- description:
- - Indicates that virtual network adapter connects to the associated virtual machine.
- start_connected:
- type: bool
- description:
- - Indicates that virtual network adapter starts with associated virtual machine powers on.
- directpath_io:
- type: bool
- description:
- - If set, Universal Pass-Through (UPT or DirectPath I/O) will be enabled on the network adapter.
- - UPT is only compatible for Vmxnet3 adapter.
- physical_function_backing:
- version_added: '2.3.0'
- type: str
- description:
- - If set, specifies the PCI ID of the physical function to use as backing for a SR-IOV network adapter.
- - This option is only compatible for SR-IOV network adapters.
- virtual_function_backing:
- version_added: '2.3.0'
- type: str
- description:
- - If set, specifies the PCI ID of the physical function to use as backing for a SR-IOV network adapter.
- - This option is only compatible for SR-IOV network adapters.
- allow_guest_os_mtu_change:
- version_added: '2.3.0'
- type: bool
- description:
- - Allows the guest OS to change the MTU on a SR-IOV network adapter.
- - This option is only compatible for SR-IOV network adapters.
extends_documentation_fragment:
- community.vmware.vmware.documentation
'''
@@ -323,7 +239,7 @@
]
network_data:
description: For backwards compatibility, metadata about the virtual machine network adapters
- returned: when using gather_network_info or networks parameters
+ returned: when using gather_network_info parameter
type: dict
sample:
"network_data": {
@@ -385,11 +301,10 @@ def __init__(self, module):
sriov=vim.vm.device.VirtualSriovEthernetCard,
)
- def _get_network_object(self, vm_obj, network_params=None):
+ def _get_network_object(self, vm_obj):
'''
return network object matching given parameters
:param vm_obj: vm object
- :param network_params: dict containing parameters from deprecated networks list method
:return: network object
:rtype: object
'''
@@ -399,14 +314,9 @@ def _get_network_object(self, vm_obj, network_params=None):
compute_resource = self._get_compute_resource_by_name()
pg_lookup = {}
- if network_params:
- vlan_id = network_params['vlan_id']
- network_name = network_params['network_name']
- switch_name = network_params['switch']
- else:
- vlan_id = self.params['vlan_id']
- network_name = self.params['network_name']
- switch_name = self.params['switch']
+ vlan_id = self.params['vlan_id']
+ network_name = self.params['network_name']
+ switch_name = self.params['switch']
for pg in vm_obj.runtime.host.config.network.portgroup:
pg_lookup[pg.spec.name] = {'switch': pg.spec.vswitchName, 'vlan_id': pg.spec.vlanId}
@@ -544,7 +454,7 @@ def _get_compute_resource_by_name(self, recurse=True):
return None
def _new_nic_spec(self, vm_obj, nic_obj=None, network_params=None):
- network = self._get_network_object(vm_obj, network_params)
+ network = self._get_network_object(vm_obj)
if network_params:
connected = network_params['connected']
@@ -713,69 +623,21 @@ def _get_nic_info(self):
rv['network_info'] = nic_info
return rv
- def _deprectated_list_config(self):
- '''
- this only exists to handle the old way of configuring interfaces, which
- should be deprectated in favour of using loops in the playbook instead of
- feeding lists directly into the module.
- '''
- diff = {'before': {}, 'after': {}}
- changed = False
- for i in self.params['networks']:
- network_params = {}
- network_params['mac_address'] = i.get('mac') or i.get('manual_mac')
- network_params['network_name'] = i.get('name')
- network_params['vlan_id'] = i.get('vlan')
- network_params['switch'] = i.get('dvswitch_name')
- network_params['guest_control'] = i.get('allow_guest_control', self.params['guest_control'])
- network_params['physical_function_backing'] = i.get('physical_function_backing')
- network_params['virtual_function_backing'] = i.get('virtual_function_backing')
- network_params['allow_guest_os_mtu_change'] = i.get('allow_guest_os_mtu_change')
-
- for k in ['connected', 'device_type', 'directpath_io', 'force', 'label', 'start_connected', 'state', 'wake_onlan']:
- network_params[k] = i.get(k, self.params[k])
-
- if network_params['state'] in ['new', 'present']:
- n_diff, n_changed, network_info = self._nic_present(network_params)
- diff['before'].update(n_diff['before'])
- diff['after'] = n_diff['after']
- if n_changed:
- changed = True
-
- if network_params['state'] == 'absent':
- n_diff, n_changed, network_info = self._nic_absent(network_params)
- diff['before'].update(n_diff['before'])
- diff['after'] = n_diff['after']
- if n_changed:
- changed = True
-
- return diff, changed, network_info
-
- def _nic_present(self, network_params=None):
+ def _nic_present(self):
changed = False
diff = {'before': {}, 'after': {}}
- # backwards compatibility, clean up when params['networks']
- # has been removed
- if network_params:
- force = network_params['force']
- label = network_params['label']
- mac_address = network_params['mac_address']
- network_name = network_params['network_name']
- switch = network_params['switch']
- vlan_id = network_params['vlan_id']
- else:
- force = self.params['force']
- label = self.params['label']
- mac_address = self.params['mac_address']
- network_name = self.params['network_name']
- switch = self.params['switch']
- vlan_id = self.params['vlan_id']
+ force = self.params['force']
+ label = self.params['label']
+ mac_address = self.params['mac_address']
+ network_name = self.params['network_name']
+ switch = self.params['switch']
+ vlan_id = self.params['vlan_id']
vm_obj = self.get_vm()
if not vm_obj:
self.module.fail_json(msg='could not find vm: {0}'.format(self.params['name']))
- network_obj = self._get_network_object(vm_obj, network_params)
+ network_obj = self._get_network_object(vm_obj)
nic_info, nic_obj_lst = self._get_nics_from_vm(vm_obj)
label_lst = [d.get('label') for d in nic_info]
mac_addr_lst = [d.get('mac_address') for d in nic_info]
@@ -805,7 +667,7 @@ def _nic_present(self, network_params=None):
if (mac_address and mac_address in mac_addr_lst) or (label and label in label_lst):
for nic_obj in nic_obj_lst:
if (mac_address and nic_obj.macAddress == mac_address) or (label and label == nic_obj.deviceInfo.label):
- device_spec = self._new_nic_spec(vm_obj, nic_obj, network_params)
+ device_spec = self._new_nic_spec(vm_obj, nic_obj)
# fabricate diff for check_mode
if self.module.check_mode:
@@ -826,7 +688,7 @@ def _nic_present(self, network_params=None):
diff['after'].update({nic_mac: copy.deepcopy(nic)})
if (not mac_address or mac_address not in mac_addr_lst) and (not label or label not in label_lst):
- device_spec = self._new_nic_spec(vm_obj, None, network_params)
+ device_spec = self._new_nic_spec(vm_obj, None)
device_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
if self.module.check_mode:
# fabricate diff/returns for checkmode
@@ -902,7 +764,6 @@ def main():
allow_guest_os_mtu_change=dict(type='bool', default=True),
force=dict(type='bool', default=False),
gather_network_info=dict(type='bool', default=False, aliases=['gather_network_facts']),
- networks=dict(type='list', default=[], elements='dict'),
guest_control=dict(type='bool', default=True),
state=dict(type='str', default='present', choices=['absent', 'present'])
)
@@ -931,23 +792,6 @@ def main():
module.exit_json(network_info=nics.get('network_info'), network_data=network_data, changed=False)
- if module.params['networks']:
- network_data = {}
- module.deprecate(
- msg='The old way of configuring interfaces by supplying an arbitrary list will be removed, loops should be used to handle multiple interfaces',
- version='3.0.0',
- collection_name='community.vmware'
- )
- diff, changed, network_info = pyv._deprectated_list_config()
- nd = copy.deepcopy(network_info)
- nics_sorted = sorted(nd, key=lambda k: k['unit_number'])
- for n, i in enumerate(nics_sorted):
- key_name = '{0}'.format(n)
- network_data[key_name] = i
- network_data[key_name].update({'mac_addr': i['mac_address'], 'name': i['network_name']})
-
- module.exit_json(changed=changed, network_info=network_info, network_data=network_data, diff=diff)
-
if module.params['state'] == 'present':
diff, changed, network_info = pyv._nic_present()
| diff --git a/tests/integration/targets/vmware_guest_network/tasks/main.yml b/tests/integration/targets/vmware_guest_network/tasks/main.yml
--- a/tests/integration/targets/vmware_guest_network/tasks/main.yml
+++ b/tests/integration/targets/vmware_guest_network/tasks/main.yml
@@ -11,7 +11,7 @@
setup_dvswitch: true
- name: Create VMs
- vmware_guest:
+ community.vmware.vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
@@ -36,7 +36,7 @@
networks:
- name: VM Network
-- vmware_guest_tools_wait:
+- community.vmware.vmware_guest_tools_wait:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
@@ -45,7 +45,7 @@
name: test_vm1
- name: gather network adapters' facts of the virtual machine
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -56,48 +56,50 @@
- debug: var=netadapter_info
-- name: get number of existing netowrk adapters
+- name: get number of existing network adapters
set_fact:
netadapter_num: "{{ netadapter_info.network_data | length }}"
- name: add new network adapters to virtual machine
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: "VM Network"
- state: new
- device_type: e1000e
- manual_mac: "aa:50:56:58:59:60"
- connected: true
- - name: "VM Network"
- state: new
- connected: true
- device_type: vmxnet3
- manual_mac: "aa:50:56:58:59:61"
+ network_name: "{{ item.network_name }}"
+ device_type: "{{ item.device_type }}"
+ mac_address: "{{ item.mac_address }}"
+ connected: "{{ item.connected }}"
+ state: present
+ loop:
+ - network_name: "VM Network"
+ device_type: e1000e
+ mac_address: "aa:50:56:58:59:60"
+ connected: true
+ - network_name: "VM Network"
+ device_type: vmxnet3
+ mac_address: "aa:50:56:58:59:61"
+ connected: true
register: add_netadapter
- debug: var=add_netadapter
-- name: assert the new netowrk adapters were added to VM
+- name: assert the new network adapters were added to VM
assert:
that:
- add_netadapter is changed
- - "{{ add_netadapter.network_data | length | int }} == {{ netadapter_num | int + 2 }}"
+ - "{{ add_netadapter.results[1].network_info | length | int }} == {{ netadapter_num | int + 2 }}"
- name: delete one specified network adapter
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - state: absent
- mac: "aa:50:56:58:59:60"
+ mac_address: "aa:50:56:58:59:60"
+ state: absent
register: del_netadapter
- debug: var=del_netadapter
@@ -106,10 +108,10 @@
assert:
that:
- del_netadapter is changed
- - "{{ del_netadapter.network_data | length | int }} == {{ netadapter_num | int + 1 }}"
+ - "{{ del_netadapter.network_info | length | int }} == {{ netadapter_num | int + 1 }}"
- name: get instance uuid of virtual machines
- vmware_guest_info:
+ community.vmware.vmware_guest_info:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -121,19 +123,17 @@
- set_fact: vm1_instance_uuid="{{ guest_info['instance']['instance_uuid'] }}"
- name: add new network adapters to virtual machine with instance uuid
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
uuid: '{{ vm1_instance_uuid }}'
use_instance_uuid: true
- networks:
- - name: "VM Network"
- state: new
- connected: true
- device_type: e1000e
- manual_mac: "bb:50:56:58:59:60"
+ network_name: "VM Network"
+ device_type: e1000e
+ mac_address: "bb:50:56:58:59:60"
+ connected: true
register: add_netadapter_instanceuuid
- debug: var=add_netadapter_instanceuuid
@@ -142,87 +142,82 @@
assert:
that:
- add_netadapter_instanceuuid is changed
- - "{{ add_netadapter_instanceuuid.network_data | length | int }} == {{ netadapter_num | int + 2 }}"
+ - "{{ add_netadapter_instanceuuid.network_info | length | int }} == {{ netadapter_num | int + 2 }}"
- name: delete one specified network adapter
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - state: absent
- mac: "bb:50:56:58:59:60"
+ mac_address: "bb:50:56:58:59:60"
+ state: absent
register: del_netadapter
- name: assert the network adapter was removed
assert:
that:
- del_netadapter is changed
- - "{{ del_netadapter.network_data | length | int }} == {{ netadapter_num | int + 1 }}"
+ - "{{ del_netadapter.network_info | length | int }} == {{ netadapter_num | int + 1 }}"
-- name: delete again one specified network adapter
- vmware_guest_network:
+- name: delete again one specified network adapter (idempotency)
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - state: absent
- mac: "bb:50:56:58:59:60"
+ mac_address: "bb:50:56:58:59:60"
+ state: absent
register: del_again_netadapter
- debug: var=del_again_netadapter
-- name: assert the network adapter was removed
+- name: assert no change (idempotency)
assert:
that:
- not (del_again_netadapter is changed)
- - "{{ del_again_netadapter.network_data | length | int }} == {{ netadapter_num | int + 1 }}"
+ - "{{ del_again_netadapter.network_info | length | int }} == {{ netadapter_num | int + 1 }}"
- name: disable DirectPath I/O on a Vmxnet3 adapter
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "test_vm1"
- networks:
- - state: present
- mac: "aa:50:56:58:59:61"
- directpath_io: false
+ state: present
+ mac_address: "aa:50:56:58:59:61"
+ directpath_io: false
register: disable_directpath_io
- debug: var=disable_directpath_io
- name: enable DirectPath I/O on a Vmxnet3 adapter
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "test_vm1"
- networks:
- - state: present
- mac: "aa:50:56:58:59:61"
- directpath_io: true
+ state: present
+ mac_address: "aa:50:56:58:59:61"
+ directpath_io: true
register: enable_directpath_io
- debug: var=enable_directpath_io
- name: disconnect one specified network adapter
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - state: present
- mac: "aa:50:56:58:59:61"
- connected: false
+ state: present
+ mac_address: "aa:50:56:58:59:61"
+ connected: false
register: disc_netadapter
- debug: var=disc_netadapter
@@ -231,19 +226,18 @@
assert:
that:
- disc_netadapter is changed
- - "{{ disc_netadapter.network_data[netadapter_num]['connected'] }} == false"
+ - "{{ disc_netadapter.network_info[netadapter_num | int]['connected'] }} == false"
- name: Check if network does not exists
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: non-existing-nw
- manual_mac: "aa:50:56:11:22:33"
- state: new
+ network_name: non-existing-nw
+ mac_address: "aa:50:56:11:22:33"
+ state: present
register: no_nw_details
ignore_errors: true
@@ -256,18 +250,17 @@
- no_nw_details.failed
- name: Change portgroup to dvPortgroup
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: "{{ dvpg1 }}"
- label: "Network adapter 1"
- connected: false
- start_connected: true
- state: present
+ network_name: "{{ dvpg1 }}"
+ label: "Network adapter 1"
+ connected: false
+ start_connected: true
+ state: present
register: change_netaddr_dvp
- debug: var=change_netaddr_dvp
@@ -278,18 +271,17 @@
- change_netaddr_dvp.changed is sameas true
- name: Change portgroup to dvPortgroup
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: "{{ dvpg1 }}"
- label: "Network adapter 1"
- connected: false
- start_connected: true
- state: present
+ network_name: "{{ dvpg1 }}"
+ label: "Network adapter 1"
+ connected: false
+ start_connected: true
+ state: present
register: change_netaddr_dvp
- debug: var=change_netaddr_dvp
@@ -300,18 +292,17 @@
- change_netaddr_dvp.changed is sameas false
- name: Change dvPortgroup to PortGroup
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: "VM Network"
- label: "Network adapter 1"
- connected: false
- start_connected: true
- state: present
+ network_name: "VM Network"
+ label: "Network adapter 1"
+ connected: false
+ start_connected: true
+ state: present
register: change_netaddr_pg
- debug: var=change_netaddr_pg
@@ -320,21 +311,20 @@
assert:
that:
- change_netaddr_pg.changed is sameas true
- - change_netaddr_pg.network_data['0'].name == "VM Network"
+ - change_netaddr_pg.network_info[0].network_name == "VM Network"
- name: Change dvPortgroup to PortGroup
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: "VM Network"
- label: "Network adapter 1"
- connected: false
- start_connected: true
- state: present
+ network_name: "VM Network"
+ label: "Network adapter 1"
+ connected: false
+ start_connected: true
+ state: present
register: change_netaddr_pg
- debug: var=change_netaddr_pg
@@ -343,22 +333,21 @@
assert:
that:
- change_netaddr_pg.changed is sameas false
- - change_netaddr_pg.network_data['0'].name == "VM Network"
+ - change_netaddr_pg.network_info[0].network_name == "VM Network"
# https://github.com/ansible/ansible/issues/65968
- name: Create a network with dvPortgroup
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: "{{ dvpg1 }}"
- label: "Network adapter 2"
- connected: true
- start_connected: true
- state: new
+ network_name: "{{ dvpg1 }}"
+ label: "Network adapter 2"
+ connected: true
+ start_connected: true
+ state: present
register: create_netaddr_pg
- debug: var=create_netaddr_pg
@@ -369,7 +358,7 @@
- create_netaddr_pg.changed is sameas true
- name: gather network adapters' facts of the virtual machine
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -384,7 +373,7 @@
- nic_info.network_info is defined
- name: Remove all network interfaces with loop
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -395,7 +384,7 @@
loop: "{{ nic_info.network_info }}"
- name: gather network adapters' facts of the virtual machine
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -410,7 +399,7 @@
- "{{ nic_info2.network_info | length | int }} == 0"
- name: add new adapter(s)
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
@@ -429,7 +418,7 @@
- name: "Change a dvpg with in same DVS(integration test for 204)"
block:
- name: "Prepare the integration test for 204"
- vmware_dvs_portgroup:
+ community.vmware.vmware_dvs_portgroup:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
@@ -448,16 +437,15 @@
- prepare_integration_test_204_result.changed is sameas true
- name: "Change a port group to a dvport group"
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: "{{ dvpg1 }}"
- label: Network adapter 1
- state: present
+ network_name: "{{ dvpg1 }}"
+ label: "Network adapter 1"
+ state: present
register: change_port_group_result
- assert:
@@ -465,16 +453,15 @@
- change_port_group_result.changed is sameas true
- name: "Change a dvport group with in same DVS"
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: 204dvpg
- label: Network adapter 1
- state: present
+ network_name: 204dvpg
+ label: Network adapter 1
+ state: present
register: change_dvport_group_result
- assert:
@@ -482,16 +469,15 @@
- change_dvport_group_result.changed is sameas true
- name: "Revert a dvport group to port group"
- vmware_guest_network:
+ community.vmware.vmware_guest_network:
validate_certs: false
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
- networks:
- - name: VM Network
- label: Network adapter 1
- state: present
+ network_name: VM Network
+ label: "Network adapter 1"
+ state: present
register: revert_dvport_group_result
- assert:
@@ -499,7 +485,7 @@
- revert_dvport_group_result.changed is sameas true
- name: "Delete a dvport group for 204 integration test"
- vmware_dvs_portgroup:
+ community.vmware.vmware_dvs_portgroup:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
| vmware_guest_network: Remove deprecated parameter networks
##### SUMMARY
The `networks` parameter is deprecated and should be removed.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_guest_network
##### ADDITIONAL INFORMATION
https://github.com/ansible-collections/community.vmware/blob/67b9506a306da2caec9a2eda60003fd54a9df71e/plugins/modules/vmware_guest_network.py#L865-L871
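As a migration sketch (adapted from the integration test changes in this PR; the VM name and credential variables are placeholders), one adapter per loop iteration replaces the old `networks` list:

```yaml
- name: Add network adapters with a loop instead of the 'networks' list
  community.vmware.vmware_guest_network:
    validate_certs: false
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    name: test_vm1
    network_name: "{{ item.network_name }}"
    device_type: "{{ item.device_type }}"
    mac_address: "{{ item.mac_address }}"
    state: present
  loop:
    - network_name: "VM Network"
      device_type: e1000e
      mac_address: "aa:50:56:58:59:60"
    - network_name: "VM Network"
      device_type: vmxnet3
      mac_address: "aa:50:56:58:59:61"
```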
| 2022-09-12T17:48:25 |
|
ansible-collections/community.vmware | 1,461 | ansible-collections__community.vmware-1461 | [
"1257"
] | 75df7120482669d2c8eba3ab608afb668d8d1d4b | diff --git a/plugins/modules/vmware_guest_boot_manager.py b/plugins/modules/vmware_guest_boot_manager.py
--- a/plugins/modules/vmware_guest_boot_manager.py
+++ b/plugins/modules/vmware_guest_boot_manager.py
@@ -83,7 +83,6 @@
description:
- Choose if EFI secure boot should be enabled. EFI secure boot can only be enabled with boot_firmware = efi
type: 'bool'
- default: False
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -292,19 +291,16 @@ def ensure(self):
change_needed = True
boot_firmware_required = True
- if self.vm.config.bootOptions.efiSecureBootEnabled != self.params.get('secure_boot_enabled'):
+ if self.params.get('secure_boot_enabled') is not None:
if self.params.get('secure_boot_enabled') and self.params.get('boot_firmware') == "bios":
- self.module.fail_json(msg="EFI secure boot cannot be enabled when boot_firmware = bios, but both are specified")
-
- # If the user is not specifying boot_firmware, make sure they aren't trying to enable it on a
- # system with boot_firmware already set to 'bios'
- if self.params.get('secure_boot_enabled') and \
- self.params.get('boot_firmware') is None and \
- self.vm.config.firmware == 'bios':
- self.module.fail_json(msg="EFI secure boot cannot be enabled when boot_firmware = bios. VM's boot_firmware currently set to bios")
-
- kwargs.update({'efiSecureBootEnabled': self.params.get('secure_boot_enabled')})
- change_needed = True
+ self.module.fail_json(msg="Secure boot cannot be enabled when boot_firmware = bios")
+ elif self.params.get('secure_boot_enabled') and \
+ self.params.get('boot_firmware') != 'efi' and \
+ self.vm.config.firmware == 'bios':
+ self.module.fail_json(msg="Secure boot cannot be enabled since the VM's boot firmware is currently set to bios")
+ elif self.vm.config.bootOptions.efiSecureBootEnabled != self.params.get('secure_boot_enabled'):
+ kwargs.update({'efiSecureBootEnabled': self.params.get('secure_boot_enabled')})
+ change_needed = True
changed = False
results = dict(
@@ -381,7 +377,6 @@ def main():
),
secure_boot_enabled=dict(
type='bool',
- default=False,
),
boot_firmware=dict(
type='str',
| diff --git a/tests/integration/targets/vmware_guest_boot_manager/tasks/main.yml b/tests/integration/targets/vmware_guest_boot_manager/tasks/main.yml
--- a/tests/integration/targets/vmware_guest_boot_manager/tasks/main.yml
+++ b/tests/integration/targets/vmware_guest_boot_manager/tasks/main.yml
@@ -9,19 +9,68 @@
setup_datastore: true
setup_virtualmachines: true
-- name: Enter BIOS setup
+- name: Enable Secure Boot
community.vmware.vmware_guest_boot_manager:
validate_certs: false
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
name: "{{ virtual_machines[0].name }}"
- enter_bios_setup: true
- register: enter_bios_setup
+ boot_firmware: efi
+ secure_boot_enabled: true
+ register: enable_secure_boot
-- ansible.builtin.debug: var=enter_bios_setup
+- ansible.builtin.debug: var=enable_secure_boot
-- name: assert that configuration is changed
+- name: Get VM boot info 1
+ community.vmware.vmware_guest_boot_info:
+ validate_certs: false
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ name: "{{ virtual_machines[0].name }}"
+ register: boot_info1
+
+- ansible.builtin.debug: var=boot_info1
+
+- name: assert that Secure Boot is enabled
assert:
that:
- - enter_bios_setup.changed
+ - enable_secure_boot.changed
+ - boot_info1.vm_boot_info.current_secure_boot_enabled is true
+
+- name: Issue https://github.com/ansible-collections/community.vmware/issues/1257
+ block:
+ - name: Enter BIOS setup
+ community.vmware.vmware_guest_boot_manager:
+ validate_certs: false
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ name: "{{ virtual_machines[0].name }}"
+ enter_bios_setup: true
+ register: enter_bios_setup
+
+ - ansible.builtin.debug: var=enter_bios_setup
+
+ - name: Get VM boot info 2
+ community.vmware.vmware_guest_boot_info:
+ validate_certs: false
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ name: "{{ virtual_machines[0].name }}"
+ register: boot_info2
+
+ - ansible.builtin.debug: var=boot_info2
+
+ - name: assert that configuration is changed
+ assert:
+ that:
+ - enter_bios_setup.changed
+ - boot_info2.vm_boot_info.current_enter_bios_setup is true
+
+ - name: assert that Secure Boot is still enabled
+ assert:
+ that:
+ - boot_info2.vm_boot_info.current_secure_boot_enabled is true
| vmware_guest_boot_manager changed default secure_boot_enabled value
##### SUMMARY
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_guest_boot_manager
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
"ansible [core 2.12.1]",
" config file = /root/ansible-vsphere-gos-validation/ansible.cfg",
" configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']",
" ansible python module location = /usr/lib/python3.9/site-packages/ansible",
" ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections",
" executable location = /usr/bin/ansible",
" python version = 3.9.1 (default, Aug 19 2021, 02:58:42) [GCC 10.2.0]",
" jinja version = 3.0.1",
" libyaml = True"
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
"Collection Version",
"----------------- -------",
"ansible.netcommon 2.5.0 ",
"ansible.posix 1.3.0 ",
"ansible.utils 2.4.3 ",
"ansible.windows 1.9.0 ",
"community.crypto 2.1.0 ",
"community.general 4.3.0 ",
"community.vmware 2.1.0 ",
"community.windows 1.9.0 "
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
"CALLBACKS_ENABLED(/home/worker/workspace/Ansible_Regression_Windows_11_64/ansible-vsphere-gos-validation/ansible.cfg) = ['timer']",
"DEFAULT_CALLBACK_PLUGIN_PATH(/home/worker/workspace/Ansible_Regression_Windows_11_64/ansible-vsphere-gos-validation/ansible.cfg) = ['/home/worker/workspace/Ansible_Regression_Windows_11_64/ansible-vsphere-gos-validation/plugin']",
"DISPLAY_SKIPPED_HOSTS(/home/worker/workspace/Ansible_Regression_Windows_11_64/ansible-vsphere-gos-validation/ansible.cfg) = False",
"RETRY_FILES_ENABLED(/home/worker/workspace/Ansible_Regression_Windows_11_64/ansible-vsphere-gos-validation/ansible.cfg) = False"
```
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
```yaml
- hosts: localhost
vars:
vsphere_host_name:
vsphere_host_user:
vsphere_host_user_password:
vm_name:
tasks:
- name: Get VM boot facts before boot option changing
vmware_guest_boot_info:
validate_certs: False
hostname: "{{ vsphere_host_name }}"
username: "{{ vsphere_host_user }}"
password: "{{ vsphere_host_user_password }}"
name: "{{ vm_name }}"
register: boot_facts_before_update
- name: Display the VM boot options
debug: var=boot_facts_before_update
- name: Set VM boot options
vmware_guest_boot_manager:
validate_certs: False
hostname: "{{ vsphere_host_name }}"
username: "{{ vsphere_host_user }}"
password: "{{ vsphere_host_user_password }}"
name: "{{ vm_name }}"
enter_bios_setup: True
register: set_boot_opts
- name: Display set boot options result
debug: var=set_boot_opts
- name: Get VM boot facts after boot option changing
vmware_guest_boot_info:
validate_certs: False
hostname: "{{ vsphere_host_name }}"
username: "{{ vsphere_host_user }}"
password: "{{ vsphere_host_user_password }}"
name: "{{ vm_name }}"
register: boot_facts_after_update
- name: Display the VM boot options
debug: var=boot_facts_after_update
```
output:
```
PLAY [localhost] *******************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************
ok: [localhost]
TASK [Get VM boot facts before boot option changing] *******************************************************************
ok: [localhost]
TASK [Display the VM boot options] *************************************************************************************
ok: [localhost] => {
"boot_facts_before_update": {
"changed": false,
"failed": false,
"vm_boot_info": {
"current_boot_delay": 0,
"current_boot_firmware": "efi",
"current_boot_order": [],
"current_boot_retry_delay": 10000,
"current_boot_retry_enabled": false,
"current_enter_bios_setup": false,
"current_secure_boot_enabled": true
}
}
}
TASK [Set VM boot options] *********************************************************************************************
changed: [localhost]
TASK [Display set boot options result] *********************************************************************************
ok: [localhost] => {
"set_boot_opts": {
"changed": true,
"failed": false,
"vm_boot_status": {
"current_boot_delay": 0,
"current_boot_firmware": "efi",
"current_boot_order": [],
"current_boot_retry_delay": 0,
"current_boot_retry_enabled": true,
"current_enter_bios_setup": true,
"current_secure_boot_enabled": false,
"previous_boot_delay": 0,
"previous_boot_firmware": "efi",
"previous_boot_order": [],
"previous_boot_retry_delay": 10000,
"previous_boot_retry_enabled": false,
"previous_enter_bios_setup": false,
"previous_secure_boot_enabled": true
}
}
}
TASK [Get VM boot facts after boot option changing] ********************************************************************
ok: [localhost]
TASK [Display the VM boot options] *************************************************************************************
ok: [localhost] => {
"boot_facts_after_update": {
"changed": false,
"failed": false,
"vm_boot_info": {
"current_boot_delay": 0,
"current_boot_firmware": "efi",
"current_boot_order": [],
"current_boot_retry_delay": 0,
"current_boot_retry_enabled": true,
"current_enter_bios_setup": true,
"current_secure_boot_enabled": false
}
}
}
```
##### EXPECTED RESULTS
After changing the boot option enter_bios_setup to true, current_secure_boot_enabled should still be true.
##### ACTUAL RESULTS
After changing the boot option enter_bios_setup to true, current_secure_boot_enabled became false.
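Until the default is changed, a minimal workaround sketch, assuming the VM already boots with EFI firmware: restate secure_boot_enabled explicitly so the implicit default of False is not applied alongside the other change:

```yaml
- name: Enter BIOS setup without losing Secure Boot
  vmware_guest_boot_manager:
    validate_certs: False
    hostname: "{{ vsphere_host_name }}"
    username: "{{ vsphere_host_user }}"
    password: "{{ vsphere_host_user_password }}"
    name: "{{ vm_name }}"
    boot_firmware: efi
    secure_boot_enabled: True   # restated explicitly; otherwise the default of False disables it
    enter_bios_setup: True
```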
|
I don't think this is a bug. The default for `secure_boot_enabled` is `False`. If you don't set this to `True` the module works as intended (and documented) and disables secure boot.
So I suggest you either close this issue or change it to a feature request to _not_ have a default for `secure_boot_enabled`.
I think there might be good reasons not to have a default for this. But I'm afraid changing this might break existing playbooks, so we should wait until version 3 of the collection for this.
What do you say, could you live with `False` as the default for `secure_boot_enabled` or should we change this in version 3?
Hi @mariolenz, thanks for your response.
For a VM with secure boot enabled before any boot options are changed, as in this case:
```
"current_secure_boot_enabled": true
```
I think it is better to leave it as it is when vmware_guest_boot_manager has no secure_boot_enabled set; changing other boot options shouldn't change current_secure_boot_enabled either.
> I think it is better to leave it as it is when vmware_guest_boot_manager has no secure_boot_enabled set; changing other boot options shouldn't change current_secure_boot_enabled either.
I agree. But as I've said, I don't think we'll change this within 2.x.
I've added your issue to the [3.0.0 milestone](https://github.com/ansible-collections/community.vmware/milestone/14) so we don't forget.
Thank you, @mariolenz. | 2022-09-15T13:58:46 |
ansible-collections/community.vmware | 1,463 | ansible-collections__community.vmware-1463 | [
"1197"
] | ce764aaa1317960deeca086517e5f9d7e62f9cdd | diff --git a/plugins/modules/vmware_host_firewall_manager.py b/plugins/modules/vmware_host_firewall_manager.py
--- a/plugins/modules/vmware_host_firewall_manager.py
+++ b/plugins/modules/vmware_host_firewall_manager.py
@@ -41,6 +41,39 @@
default: []
type: list
elements: dict
+ suboptions:
+ name:
+ description:
+ - Rule set name.
+ type: str
+ required: true
+ enabled:
+ description:
+ - Whether the rule set is enabled or not.
+ type: bool
+ required: true
+ allowed_hosts:
+ description:
+ - Define the allowed hosts for this rule set.
+ type: dict
+ suboptions:
+ all_ip:
+ description:
+ - Whether all hosts should be allowed or not.
+ type: bool
+ required: true
+ ip_address:
+ description:
+ - List of allowed IP addresses.
+ type: list
+ elements: str
+ default: []
+ ip_network:
+ description:
+ - List of allowed IP networks.
+ type: list
+ elements: str
+ default: []
extends_documentation_fragment:
- community.vmware.vmware.documentation
@@ -221,35 +254,27 @@ def check_params(self):
for rule_option in self.rule_options:
rule_name = rule_option.get('name')
- if rule_name is None:
- self.module.fail_json(msg="Please specify rule.name for rule set"
- " as it is required parameter.")
hosts_with_rule_name = [h for h, r in rules_by_host.items() if rule_name in r]
hosts_without_rule_name = set([i.name for i in self.hosts]) - set(hosts_with_rule_name)
if hosts_without_rule_name:
self.module.fail_json(msg="rule named '%s' wasn't found on hosts: %s" % (
rule_name, hosts_without_rule_name))
- if 'enabled' not in rule_option:
- self.module.fail_json(msg="Please specify rules.enabled for rule set"
- " %s as it is required parameter." % rule_name)
-
- allowed_hosts = rule_option.get('allowed_hosts', {})
- ip_addresses = allowed_hosts.get('ip_address', [])
- ip_networks = allowed_hosts.get('ip_network', [])
- for ip_address in ip_addresses:
- try:
- is_ipaddress(ip_address)
- except ValueError:
- self.module.fail_json(msg="The provided IP address %s is not a valid IP"
- " for the rule %s" % (ip_address, rule_name))
-
- for ip_network in ip_networks:
- try:
- is_ipaddress(ip_network)
- except ValueError:
- self.module.fail_json(msg="The provided IP network %s is not a valid network"
- " for the rule %s" % (ip_network, rule_name))
+ allowed_hosts = rule_option.get('allowed_hosts')
+ if allowed_hosts is not None:
+ for ip_address in allowed_hosts.get('ip_address'):
+ try:
+ is_ipaddress(ip_address)
+ except ValueError:
+ self.module.fail_json(msg="The provided IP address %s is not a valid IP"
+ " for the rule %s" % (ip_address, rule_name))
+
+ for ip_network in allowed_hosts.get('ip_network'):
+ try:
+ is_ipaddress(ip_network)
+ except ValueError:
+ self.module.fail_json(msg="The provided IP network %s is not a valid network"
+ " for the rule %s" % (ip_network, rule_name))
def ensure(self):
"""
@@ -297,10 +322,10 @@ def ensure(self):
rule_allowed_ips = set(permitted_networking['allowed_hosts']['ip_address'])
rule_allowed_networks = set(permitted_networking['allowed_hosts']['ip_network'])
- allowed_hosts = rule_option.get('allowed_hosts', {})
- playbook_allows_all = allowed_hosts.get('all_ip', False)
- playbook_allowed_ips = set(allowed_hosts.get('ip_address', []))
- playbook_allowed_networks = set(allowed_hosts.get('ip_network', []))
+ allowed_hosts = rule_option.get('allowed_hosts')
+ playbook_allows_all = False if allowed_hosts is None else allowed_hosts.get('all_ip')
+ playbook_allowed_ips = set([]) if allowed_hosts is None else set(allowed_hosts.get('ip_address'))
+ playbook_allowed_networks = set([]) if allowed_hosts is None else set(allowed_hosts.get('ip_network'))
# compare what is configured on the firewall rule with what the playbook provides
allowed_all_ips_different = bool(rule_allows_all != playbook_allows_all)
@@ -371,7 +396,24 @@ def main():
argument_spec.update(
cluster_name=dict(type='str', required=False),
esxi_hostname=dict(type='str', required=False),
- rules=dict(type='list', default=list(), required=False, elements='dict'),
+ rules=dict(
+ type='list',
+ default=list(),
+ required=False,
+ elements='dict',
+ options=dict(
+ name=dict(type='str', required=True),
+ enabled=dict(type='bool', required=True),
+ allowed_hosts=dict(
+ type='dict',
+ options=dict(
+ all_ip=dict(type='bool', required=True),
+ ip_address=dict(type='list', elements='str', default=list()),
+ ip_network=dict(type='list', elements='str', default=list()),
+ ),
+ ),
+ ),
+ ),
)
module = AnsibleModule(
@@ -382,29 +424,6 @@ def main():
supports_check_mode=True
)
- for rule_option in module.params.get("rules", []):
- if 'allowed_hosts' in rule_option:
- if isinstance(rule_option['allowed_hosts'], list):
- if len(rule_option['allowed_hosts']) == 1:
- allowed_hosts = rule_option['allowed_hosts'][0]
- rule_option['allowed_hosts'] = allowed_hosts
- module.deprecate(
- msg='allowed_hosts should be a dict, not a list',
- version='3.0.0',
- collection_name='community.vmware'
- )
- if not rule_option.get("enabled"):
- continue
- try:
- isinstance(rule_option["allowed_hosts"]["all_ip"], bool)
- except (KeyError, IndexError):
- module.deprecate(
- msg=('Please adjust your playbook to ensure the `allowed_hosts` '
- 'entries come with an `all_ip` key (boolean).'),
- version='3.0.0',
- collection_name='community.vmware'
- )
-
vmware_firewall_manager = VmwareFirewallManager(module)
vmware_firewall_manager.check_params()
vmware_firewall_manager.ensure()
| diff --git a/tests/integration/targets/vmware_host_firewall_manager/tasks/main.yml b/tests/integration/targets/vmware_host_firewall_manager/tasks/main.yml
--- a/tests/integration/targets/vmware_host_firewall_manager/tasks/main.yml
+++ b/tests/integration/targets/vmware_host_firewall_manager/tasks/main.yml
@@ -6,234 +6,255 @@
vars:
setup_attach_host: true
-- name: Enable vvold rule set on all hosts of {{ ccr1 }}
- vmware_host_firewall_manager:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- validate_certs: false
- cluster_name: "{{ ccr1 }}"
- rules:
- - name: vvold
- enabled: true
- allowed_hosts:
- all_ip: true
- register: all_hosts_result
-- debug: msg="{{ all_hosts_result }}"
-- name: ensure everything is changed for all hosts of {{ ccr1 }}
- assert:
- that:
- - all_hosts_result.changed
- - all_hosts_result.rule_set_state is defined
-
-- name: ensure info are gathered for all hosts of {{ ccr1 }}
- assert:
- that:
- - all_hosts_result.rule_set_state[item]['vvold']['current_state'] == true
- - all_hosts_result.rule_set_state[item]['vvold']['desired_state'] == true
- - all_hosts_result.rule_set_state[item]['vvold']['previous_state'] == False
- with_items:
- - '{{ esxi1 }}'
- - '{{ esxi2 }}'
-
-- name: Disable vvold for {{ host1 }}
- vmware_host_firewall_manager:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- validate_certs: false
- esxi_hostname: '{{ esxi1 }}'
- rules:
- - name: vvold
- enabled: false
- register: host_result
-- debug: msg="{{ host_result }}"
-- name: ensure vvold is disabled for {{ host1 }}
- assert:
- that:
- - host_result.changed
- - host_result.rule_set_state is defined
-
-- name: ensure info are gathered for {{ host1 }}
- assert:
- that:
- - host_result.rule_set_state[item]['vvold']['current_state'] == False
- - host_result.rule_set_state[item]['vvold']['desired_state'] == False
- - host_result.rule_set_state[item]['vvold']['previous_state'] == true
- with_items:
- - '{{ esxi1 }}'
-
-- name: Enable vvold rule set on all hosts of {{ ccr1 }} in check mode
- vmware_host_firewall_manager:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- validate_certs: false
- cluster_name: "{{ ccr1 }}"
- rules:
- - name: vvold
- enabled: true
- allowed_hosts:
- all_ip: true
- register: all_hosts_result_check_mode
- check_mode: true
-- debug: var=all_hosts_result_check_mode
-- name: ensure everything is changed for all hosts of {{ ccr1 }}
- assert:
- that:
- - all_hosts_result_check_mode.changed
- - all_hosts_result_check_mode.rule_set_state is defined
-
-- name: ensure info are gathered for all hosts of {{ ccr1 }}
- assert:
- that:
- - all_hosts_result_check_mode.rule_set_state[esxi1]['vvold']['current_state'] == true
- - all_hosts_result_check_mode.rule_set_state[esxi2]['vvold']['current_state'] == true
- - all_hosts_result_check_mode.rule_set_state[esxi2]['vvold']['desired_state'] == true
-
-- name: Disable vvold for {{ host1 }} in check mode
- vmware_host_firewall_manager:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- validate_certs: false
- esxi_hostname: '{{ esxi1 }}'
- rules:
- - name: vvold
- enabled: false
- register: host_result_check_mode
- check_mode: true
-- debug: msg="{{ host_result_check_mode }}"
-- name: ensure vvold is disabled for {{ host1 }}
- assert:
+- name: Run tests and clean up
+ block:
+ - name: Enable vvold rule set on all hosts of {{ ccr1 }}
+ community.vmware.vmware_host_firewall_manager:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ cluster_name: "{{ ccr1 }}"
+ rules:
+ - name: vvold
+ enabled: true
+ allowed_hosts:
+ all_ip: true
+ register: all_hosts_result
+
+ - debug: msg="{{ all_hosts_result }}"
+
+ - name: ensure everything is changed for all hosts of {{ ccr1 }}
+ assert:
+ that:
+ - all_hosts_result.changed
+ - all_hosts_result.rule_set_state is defined
+
+ - name: ensure info are gathered for all hosts of {{ ccr1 }}
+ assert:
+ that:
+ - all_hosts_result.rule_set_state[item]['vvold']['current_state'] == true
+ - all_hosts_result.rule_set_state[item]['vvold']['desired_state'] == true
+ - all_hosts_result.rule_set_state[item]['vvold']['previous_state'] == False
+ with_items:
+ - '{{ esxi1 }}'
+ - '{{ esxi2 }}'
+
+ - name: Disable vvold for {{ host1 }}
+ community.vmware.vmware_host_firewall_manager:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ esxi_hostname: '{{ esxi1 }}'
+ rules:
+ - name: vvold
+ enabled: false
+ register: host_result
+
+ - debug: msg="{{ host_result }}"
+
+ - name: ensure vvold is disabled for {{ host1 }}
+ assert:
+ that:
+ - host_result.changed
+ - host_result.rule_set_state is defined
+
+ - name: ensure info are gathered for {{ host1 }}
+ assert:
+ that:
+ - host_result.rule_set_state[item]['vvold']['current_state'] == False
+ - host_result.rule_set_state[item]['vvold']['desired_state'] == False
+ - host_result.rule_set_state[item]['vvold']['previous_state'] == true
+ with_items:
+ - '{{ esxi1 }}'
+
+ - name: Enable vvold rule set on all hosts of {{ ccr1 }} in check mode
+ community.vmware.vmware_host_firewall_manager:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ cluster_name: "{{ ccr1 }}"
+ rules:
+ - name: vvold
+ enabled: true
+ allowed_hosts:
+ all_ip: true
+ register: all_hosts_result_check_mode
+ check_mode: true
+
+ - debug: var=all_hosts_result_check_mode
+
+ - name: ensure everything is changed for all hosts of {{ ccr1 }}
+ assert:
+ that:
+ - all_hosts_result_check_mode.changed
+ - all_hosts_result_check_mode.rule_set_state is defined
+
+ - name: ensure info are gathered for all hosts of {{ ccr1 }}
+ assert:
+ that:
+ - all_hosts_result_check_mode.rule_set_state[esxi1]['vvold']['current_state'] == true
+ - all_hosts_result_check_mode.rule_set_state[esxi2]['vvold']['current_state'] == true
+ - all_hosts_result_check_mode.rule_set_state[esxi2]['vvold']['desired_state'] == true
+
+ - name: Disable vvold for {{ host1 }} in check mode
+ community.vmware.vmware_host_firewall_manager:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ esxi_hostname: '{{ esxi1 }}'
+ rules:
+ - name: vvold
+ enabled: false
+ register: host_result_check_mode
+ check_mode: true
+
+ - debug: msg="{{ host_result_check_mode }}"
+
+ - name: ensure vvold is disabled for {{ host1 }}
+ assert:
that:
- - host_result_check_mode.changed == False
- - host_result_check_mode.rule_set_state is defined
-
-- name: ensure info are gathered for {{ host1 }}
- assert:
- that:
- - host_result_check_mode.rule_set_state[item]['vvold']['current_state'] == False
- - host_result_check_mode.rule_set_state[item]['vvold']['desired_state'] == False
- - host_result_check_mode.rule_set_state[item]['vvold']['previous_state'] == False
- with_items:
- - '{{ esxi1 }}'
-
-- name: Configure CIMHttpServer rule set on all hosts of {{ ccr1 }}
- vmware_host_firewall_manager:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- validate_certs: false
- cluster_name: "{{ ccr1 }}"
- rules:
- - name: CIMHttpServer
- enabled: true
- allowed_hosts:
- all_ip: false
- ip_address:
- - "192.168.100.11"
- - "192.168.100.12"
- ip_network:
- - "192.168.200.0/24"
- register: all_hosts_ip_specific
-- debug: var=all_hosts_ip_specific
-- name: ensure everything is changed for all hosts of {{ ccr1 }}
- assert:
- that:
- - all_hosts_ip_specific.changed
- - all_hosts_ip_specific.rule_set_state is defined
-
-- name: ensure CIMHttpServer is configured for all hosts in {{ ccr1 }}
- assert:
- that:
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['current_state'] == true
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['desired_state'] == true
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['previous_state'] == true
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['current_allowed_all'] == False
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['previous_allowed_all'] == true
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['desired_allowed_all'] == False
- - "'192.168.100.11' in all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['current_allowed_ip']"
- - "'192.168.100.12' in all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['current_allowed_ip']"
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['previous_allowed_ip'] == []
- - "'192.168.100.11' in all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['desired_allowed_ip']"
- - "'192.168.100.12' in all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['desired_allowed_ip']"
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['current_allowed_networks'] == ["192.168.200.0/24"]
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['previous_allowed_networks'] == []
- - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['desired_allowed_networks'] == ["192.168.200.0/24"]
- with_items:
- - '{{ esxi1 }}'
- - '{{ esxi2 }}'
-
-- name: Configure the NFC firewall rule to only allow traffic from one IP on one ESXi host
- vmware_host_firewall_manager:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- validate_certs: false
- esxi_hostname: "{{ esxi1 }}"
- rules:
- - name: NFC
- enabled: true
- allowed_hosts:
- all_ip: false
- ip_address:
- - "192.168.100.11"
- register: single_host_ip_specific
-- set_fact:
- nfc_state: "{{ single_host_ip_specific.rule_set_state[esxi1]['NFC'] }}"
-- debug: var=single_host_ip_specific
-- debug: var=nfc_state
-- name: ensure NFC is configured on that host
- assert:
- that:
- - nfc_state.current_state == true
- - nfc_state.desired_state == true
- - nfc_state.previous_state == true
- - nfc_state.allowed_hosts.current_allowed_all == False
- - nfc_state.allowed_hosts.previous_allowed_all == true
- - nfc_state.allowed_hosts.desired_allowed_all == False
- - nfc_state.allowed_hosts.current_allowed_ip == ["192.168.100.11"]
- - nfc_state.allowed_hosts.previous_allowed_all == true
- - nfc_state.allowed_hosts.desired_allowed_ip == ["192.168.100.11"]
- - nfc_state.allowed_hosts.current_allowed_networks == []
- - nfc_state.allowed_hosts.previous_allowed_networks == []
- - nfc_state.allowed_hosts.desired_allowed_networks == []
-
-- name: Ensure we can still pass the allowed_hosts configuration through a list for compat
- vmware_host_firewall_manager:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- validate_certs: false
- esxi_hostname: "{{ esxi1 }}"
- rules:
- - name: NFC
- enabled: true
- allowed_hosts:
- - all_ip: false
- ip_address:
- - "1.2.3.4"
- register: using_list
-- debug: var=using_list
-- set_fact:
- nfc_state: "{{ using_list.rule_set_state[esxi1]['NFC'] }}"
-- name: ensure the correct host is set
- assert:
- that:
- - nfc_state.allowed_hosts.current_allowed_ip == ["1.2.3.4"]
-- name: Clean up the firewall rules
- vmware_host_firewall_manager:
- cluster_name: '{{ ccr1 }}'
- rules:
- - name: vvold
- enabled: false
- - name: CIMHttpServer
- enabled: true
- allowed_hosts:
- all_ip: true
- - name: NFC
- enabled: true
- allowed_hosts:
- all_ip: true
- ignore_errors: true
+ - host_result_check_mode.changed == False
+ - host_result_check_mode.rule_set_state is defined
+
+ - name: ensure info are gathered for {{ host1 }}
+ assert:
+ that:
+ - host_result_check_mode.rule_set_state[item]['vvold']['current_state'] == False
+ - host_result_check_mode.rule_set_state[item]['vvold']['desired_state'] == False
+ - host_result_check_mode.rule_set_state[item]['vvold']['previous_state'] == False
+ with_items:
+ - '{{ esxi1 }}'
+
+ - name: Configure CIMHttpServer rule set on all hosts of {{ ccr1 }}
+ community.vmware.vmware_host_firewall_manager:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ cluster_name: "{{ ccr1 }}"
+ rules:
+ - name: CIMHttpServer
+ enabled: true
+ allowed_hosts:
+ all_ip: false
+ ip_address:
+ - "192.168.100.11"
+ - "192.168.100.12"
+ ip_network:
+ - "192.168.200.0/24"
+ register: all_hosts_ip_specific
+
+ - debug: var=all_hosts_ip_specific
+
+ - name: ensure everything is changed for all hosts of {{ ccr1 }}
+ assert:
+ that:
+ - all_hosts_ip_specific.changed
+ - all_hosts_ip_specific.rule_set_state is defined
+
+ - name: ensure CIMHttpServer is configured for all hosts in {{ ccr1 }}
+ assert:
+ that:
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['current_state'] == true
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['desired_state'] == true
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['previous_state'] == true
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['current_allowed_all'] == False
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['previous_allowed_all'] == true
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['desired_allowed_all'] == False
+ - "'192.168.100.11' in all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['current_allowed_ip']"
+ - "'192.168.100.12' in all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['current_allowed_ip']"
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['previous_allowed_ip'] == []
+ - "'192.168.100.11' in all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['desired_allowed_ip']"
+ - "'192.168.100.12' in all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['desired_allowed_ip']"
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['current_allowed_networks'] == ["192.168.200.0/24"]
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['previous_allowed_networks'] == []
+ - all_hosts_ip_specific.rule_set_state[item]['CIMHttpServer']['allowed_hosts']['desired_allowed_networks'] == ["192.168.200.0/24"]
+ with_items:
+ - '{{ esxi1 }}'
+ - '{{ esxi2 }}'
+
+ - name: Configure the NFC firewall rule to only allow traffic from one IP on one ESXi host
+ community.vmware.vmware_host_firewall_manager:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ esxi_hostname: "{{ esxi1 }}"
+ rules:
+ - name: NFC
+ enabled: true
+ allowed_hosts:
+ all_ip: false
+ ip_address:
+ - "192.168.100.11"
+ register: single_host_ip_specific
+
+ - set_fact:
+ nfc_state: "{{ single_host_ip_specific.rule_set_state[esxi1]['NFC'] }}"
+
+ - debug: var=single_host_ip_specific
+
+ - debug: var=nfc_state
+
+ - name: ensure NFC is configured on that host
+ assert:
+ that:
+ - nfc_state.current_state == true
+ - nfc_state.desired_state == true
+ - nfc_state.previous_state == true
+ - nfc_state.allowed_hosts.current_allowed_all == False
+ - nfc_state.allowed_hosts.previous_allowed_all == true
+ - nfc_state.allowed_hosts.desired_allowed_all == False
+ - nfc_state.allowed_hosts.current_allowed_ip == ["192.168.100.11"]
+ - nfc_state.allowed_hosts.previous_allowed_all == true
+ - nfc_state.allowed_hosts.desired_allowed_ip == ["192.168.100.11"]
+ - nfc_state.allowed_hosts.current_allowed_networks == []
+ - nfc_state.allowed_hosts.previous_allowed_networks == []
+ - nfc_state.allowed_hosts.desired_allowed_networks == []
+
+ - name: Ensure we can still pass the allowed_hosts configuration through a list for compat
+ community.vmware.vmware_host_firewall_manager:
+ hostname: "{{ vcenter_hostname }}"
+ username: "{{ vcenter_username }}"
+ password: "{{ vcenter_password }}"
+ validate_certs: false
+ esxi_hostname: "{{ esxi1 }}"
+ rules:
+ - name: NFC
+ enabled: true
+ allowed_hosts:
+ all_ip: false
+ ip_address:
+ - "1.2.3.4"
+ register: using_list
+
+ - debug: var=using_list
+
+ - set_fact:
+ nfc_state: "{{ using_list.rule_set_state[esxi1]['NFC'] }}"
+
+ - name: ensure the correct host is set
+ assert:
+ that:
+ - nfc_state.allowed_hosts.current_allowed_ip == ["1.2.3.4"]
+
+ always:
+ - name: Clean up the firewall rules
+ community.vmware.vmware_host_firewall_manager:
+ cluster_name: '{{ ccr1 }}'
+ rules:
+ - name: vvold
+ enabled: false
+ - name: CIMHttpServer
+ enabled: true
+ allowed_hosts:
+ all_ip: true
+ - name: NFC
+ enabled: true
+ allowed_hosts:
+ all_ip: true
+ ignore_errors: true
| vmware_host_firewall_manager: Remove deprecated functionality
##### SUMMARY
There's a deprecation in `vmware_host_firewall_manager` that we should address in version 3.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_host_firewall_manager
##### ADDITIONAL INFORMATION
https://github.com/ansible-collections/community.vmware/blob/67b9506a306da2caec9a2eda60003fd54a9df71e/plugins/modules/vmware_host_firewall_manager.py#L402-L410
| 2022-09-18T12:57:18 |
|
ansible-collections/community.vmware | 1,465 | ansible-collections__community.vmware-1465 | [
"1265"
] | b9284f777758b79d2e6ff8138b09ce185b9f8370 | diff --git a/plugins/modules/vmware_tag_manager.py b/plugins/modules/vmware_tag_manager.py
--- a/plugins/modules/vmware_tag_manager.py
+++ b/plugins/modules/vmware_tag_manager.py
@@ -283,6 +283,7 @@ def ensure_state(self):
changed=False,
tag_status=dict(),
)
+ tag_objs = []
changed = False
action = self.params.get('state')
try:
@@ -297,7 +298,6 @@ def ensure_state(self):
results['tag_status']['desired_tags'] = self.tag_names
# Check if category and tag combination exists as per user request
- removed_tags_for_set = False
for tag in self.tag_names:
category_obj, category_name, tag_name = None, None, None
if isinstance(tag, dict):
@@ -327,7 +327,20 @@ def ensure_state(self):
if not tag_obj:
self.module.fail_json(msg="Unable to find the tag %s" % tag_name)
- if action in ('add', 'present'):
+ tag_objs.append(tag_obj)
+
+ if action in ('add', 'present'):
+ for tag_obj in tag_objs:
+ if tag_obj not in available_tag_obj:
+ # Tag is not already applied
+ try:
+ self.tag_association_svc.attach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
+ changed = True
+ except Error as error:
+ self.module.fail_json(msg="%s" % self.get_error_message(error))
+
+ elif action == 'set':
+ for tag_obj in tag_objs:
if tag_obj not in available_tag_obj:
# Tag is not already applied
try:
@@ -336,19 +349,17 @@ def ensure_state(self):
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
- elif action == 'set':
- # Remove all tags first
- try:
- if not removed_tags_for_set:
- for av_tag in available_tag_obj:
- self.tag_association_svc.detach(tag_id=av_tag.id, object_id=self.dynamic_managed_object)
- removed_tags_for_set = True
- self.tag_association_svc.attach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
- changed = True
- except Error as error:
- self.module.fail_json(msg="%s" % self.get_error_message(error))
-
- elif action in ('remove', 'absent'):
+ for av_tag in available_tag_obj:
+ if av_tag not in tag_objs:
+ # Tag not in the defined list
+ try:
+ self.tag_association_svc.detach(tag_id=av_tag.id, object_id=self.dynamic_managed_object)
+ changed = True
+ except Error as error:
+ self.module.fail_json(msg="%s" % self.get_error_message(error))
+
+ elif action in ('remove', 'absent'):
+ for tag_obj in tag_objs:
if tag_obj in available_tag_obj:
try:
self.tag_association_svc.detach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
| diff --git a/tests/integration/targets/vmware_tag_manager/tasks/cleanup.yml b/tests/integration/targets/vmware_tag_manager/tasks/cleanup.yml
--- a/tests/integration/targets/vmware_tag_manager/tasks/cleanup.yml
+++ b/tests/integration/targets/vmware_tag_manager/tasks/cleanup.yml
@@ -1,5 +1,5 @@
- name: Delete Tags
- vmware_tag:
+ community.vmware.vmware_tag:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -10,7 +10,7 @@
register: delete_tag
- name: Delete Categories
- vmware_category:
+ community.vmware.vmware_category:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
diff --git a/tests/integration/targets/vmware_tag_manager/tasks/main.yml b/tests/integration/targets/vmware_tag_manager/tasks/main.yml
--- a/tests/integration/targets/vmware_tag_manager/tasks/main.yml
+++ b/tests/integration/targets/vmware_tag_manager/tasks/main.yml
@@ -15,7 +15,7 @@
- block:
- name: Create first category
- vmware_category:
+ community.vmware.vmware_category:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
diff --git a/tests/integration/targets/vmware_tag_manager/tasks/tag_manager_dict.yml b/tests/integration/targets/vmware_tag_manager/tasks/tag_manager_dict.yml
--- a/tests/integration/targets/vmware_tag_manager/tasks/tag_manager_dict.yml
+++ b/tests/integration/targets/vmware_tag_manager/tasks/tag_manager_dict.yml
@@ -5,7 +5,7 @@
# Testcase for https://github.com/ansible/ansible/issues/65765
- name: Create tag with colon in name
- vmware_tag:
+ community.vmware.vmware_tag:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -21,7 +21,7 @@
- tag_one_create.changed
- name: Get VM Facts
- vmware_vm_info:
+ community.vmware.vmware_vm_info:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -33,7 +33,7 @@
vm_moid: "{{ vm_info['virtual_machines'][0]['moid'] }}"
- name: Assign tag to given virtual machine
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -47,7 +47,7 @@
register: vm_tag_info
- name: Assign tag to rw_datastore
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -67,7 +67,7 @@
- datastore_tag_info.changed
- name: Remove tag to given virtual machine
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -80,8 +80,61 @@
state: remove
register: vm_tag_info
+- name: Test idempotency for state set
+ block:
+ - name: Set the tags on a VM to a given list
+ community.vmware.vmware_tag_manager:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: false
+ tag_names:
+ - category: "{{ cat_one }}"
+ tag: "{{ tag_one }}"
+ object_name: "{{ vm_name }}"
+ object_type: VirtualMachine
+ state: set
+ register: vm_tag_info
+
+ - name: Check the module assigned the tags
+ assert:
+ that:
+ - vm_tag_info.changed
+
+ - name: Set the tags on a VM to a given list again
+ community.vmware.vmware_tag_manager:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: false
+ tag_names:
+ - category: "{{ cat_one }}"
+ tag: "{{ tag_one }}"
+ object_name: "{{ vm_name }}"
+ object_type: VirtualMachine
+ state: set
+ register: vm_tag_info
+
+ - name: Check idempotency
+ assert:
+ that:
+ - not vm_tag_info.changed
+
+ - name: Remove tag from given virtual machine
+ community.vmware.vmware_tag_manager:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: false
+ tag_names:
+ - category: "{{ cat_one }}"
+ tag: "{{ tag_one }}"
+ object_name: "{{ vm_name }}"
+ object_type: VirtualMachine
+ state: remove
+
- name: Assign tag to given virtual machine using moid
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -100,7 +153,7 @@
- vm_tag_info.changed
- name: Remove tag to datastore
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -114,7 +167,7 @@
register: datastore_tag_info
- name: Remove tag to given virtual machine
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
| vmware_tag_manager always returns `changed: True` when in `set` mode.
##### SUMMARY
vmware_tag_manager always returns `changed: True` when in `set` mode.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`plugins/modules/vmware_tag_manager.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible [core 2.12.2]
config file = /mnt/d/Workspace/infrastructure/configuration/ansible.cfg
configured module search path = ['/mnt/d/Workspace/infrastructure/configuration/library']
ansible python module location = /mnt/d/Workspace/infrastructure/configuration/.env/lib/python3.9/site-packages/ansible
ansible collection location = /usr/share/ansible/collections
executable location = /mnt/d/Workspace/infrastructure/configuration/.env/bin/ansible
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.0.3
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
# /mnt/d/Workspace/infrastructure/configuration/.env/lib/python3.9/site-packages/ansible_collections
Collection Version
---------------- -------
community.vmware 1.17.1
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = True
DEFAULT_FORCE_HANDLERS(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = True
DEFAULT_GATHERING(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = smart
DEFAULT_PRIVATE_ROLE_VARS(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Targeting vSphere 6.5.0.38000
##### STEPS TO REPRODUCE
```yaml
- name: Set VM tags
delegate_to: localhost
community.vmware.vmware_tag_manager:
hostname: "{{ vsphere_server }}"
username: "{{ vsphere_user }}"
password: "{{ vsphere_password }}"
object_type: VirtualMachine
moid: "{{ vm_info.instance.moid }}"
tag_names: "{{ autovm_tags }}"
state: set
```
##### EXPECTED RESULTS
The task should be `changed` only if the set of tags was modified by the plugin.
##### ACTUAL RESULTS
The task is always `changed` if it contains at least one tag.
Additional note: on this line, `changed` is set to True unconditionally when in `set` mode.
https://github.com/ansible-collections/community.vmware/blob/main/plugins/modules/vmware_tag_manager.py#L349
| Yes, it looks like the module always removes all tags and then adds them again one by one when specifying `set`:
https://github.com/ansible-collections/community.vmware/blob/fac9919fc3af36996173413c0c25bb816459f12a/plugins/modules/vmware_tag_manager.py#L339-L349 | 2022-09-21T11:49:26 |
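Editor's note: the patch above replaces the detach-everything-then-reattach sequence with a diff of desired against currently applied tags, so `changed` is only reported when an attach or detach actually happens. Below is a minimal standalone sketch of that reconciliation logic; `attach`/`detach` are stand-ins for the vSphere tag-association service calls and the `urn:tag-*` IDs are made-up placeholders, not values taken from the patch.

```python
# Minimal sketch of the idempotent 'set' logic introduced by the patch above.
# 'attach' and 'detach' stand in for the vSphere tag-association calls;
# the tag IDs are hypothetical placeholders.

def apply_tag_set(desired, current, attach, detach):
    """Reconcile 'current' tags toward 'desired'; return True only if changed."""
    changed = False
    for tag in desired:
        if tag not in current:       # attach only tags that are missing
            attach(tag)
            changed = True
    for tag in current:
        if tag not in desired:       # detach only tags outside the desired set
            detach(tag)
            changed = True
    return changed

attached, detached = [], []
changed = apply_tag_set(
    desired={"urn:tag-1"},
    current={"urn:tag-1", "urn:tag-2"},
    attach=attached.append,
    detach=detached.append,
)
print(changed, attached, detached)   # -> True [] ['urn:tag-2']

# Re-running against an already-correct tag set makes no calls at all:
print(apply_tag_set({"urn:tag-1"}, {"urn:tag-1"},
                    attached.append, detached.append))  # -> False
```

Running the reconciliation against an already-correct tag set returns `False`, which is what restores idempotency for `state: set`.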
ansible-collections/community.vmware | 1,469 | ansible-collections__community.vmware-1469 | [
"1265"
] | f086c17bd348f2f0291646ff731da4e4aa989b06 | diff --git a/plugins/modules/vmware_tag_manager.py b/plugins/modules/vmware_tag_manager.py
--- a/plugins/modules/vmware_tag_manager.py
+++ b/plugins/modules/vmware_tag_manager.py
@@ -283,6 +283,7 @@ def ensure_state(self):
changed=False,
tag_status=dict(),
)
+ tag_objs = []
changed = False
action = self.params.get('state')
try:
@@ -297,7 +298,6 @@ def ensure_state(self):
results['tag_status']['desired_tags'] = self.tag_names
# Check if category and tag combination exists as per user request
- removed_tags_for_set = False
for tag in self.tag_names:
category_obj, category_name, tag_name = None, None, None
if isinstance(tag, dict):
@@ -327,7 +327,20 @@ def ensure_state(self):
if not tag_obj:
self.module.fail_json(msg="Unable to find the tag %s" % tag_name)
- if action in ('add', 'present'):
+ tag_objs.append(tag_obj)
+
+ if action in ('add', 'present'):
+ for tag_obj in tag_objs:
+ if tag_obj not in available_tag_obj:
+ # Tag is not already applied
+ try:
+ self.tag_association_svc.attach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
+ changed = True
+ except Error as error:
+ self.module.fail_json(msg="%s" % self.get_error_message(error))
+
+ elif action == 'set':
+ for tag_obj in tag_objs:
if tag_obj not in available_tag_obj:
# Tag is not already applied
try:
@@ -336,19 +349,17 @@ def ensure_state(self):
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
- elif action == 'set':
- # Remove all tags first
- try:
- if not removed_tags_for_set:
- for av_tag in available_tag_obj:
- self.tag_association_svc.detach(tag_id=av_tag.id, object_id=self.dynamic_managed_object)
- removed_tags_for_set = True
- self.tag_association_svc.attach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
- changed = True
- except Error as error:
- self.module.fail_json(msg="%s" % self.get_error_message(error))
-
- elif action in ('remove', 'absent'):
+ for av_tag in available_tag_obj:
+ if av_tag not in tag_objs:
+ # Tag not in the defined list
+ try:
+ self.tag_association_svc.detach(tag_id=av_tag.id, object_id=self.dynamic_managed_object)
+ changed = True
+ except Error as error:
+ self.module.fail_json(msg="%s" % self.get_error_message(error))
+
+ elif action in ('remove', 'absent'):
+ for tag_obj in tag_objs:
if tag_obj in available_tag_obj:
try:
self.tag_association_svc.detach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
| diff --git a/tests/integration/targets/vmware_tag_manager/tasks/cleanup.yml b/tests/integration/targets/vmware_tag_manager/tasks/cleanup.yml
--- a/tests/integration/targets/vmware_tag_manager/tasks/cleanup.yml
+++ b/tests/integration/targets/vmware_tag_manager/tasks/cleanup.yml
@@ -1,5 +1,5 @@
- name: Delete Tags
- vmware_tag:
+ community.vmware.vmware_tag:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -10,7 +10,7 @@
register: delete_tag
- name: Delete Categories
- vmware_category:
+ community.vmware.vmware_category:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
diff --git a/tests/integration/targets/vmware_tag_manager/tasks/main.yml b/tests/integration/targets/vmware_tag_manager/tasks/main.yml
--- a/tests/integration/targets/vmware_tag_manager/tasks/main.yml
+++ b/tests/integration/targets/vmware_tag_manager/tasks/main.yml
@@ -15,7 +15,7 @@
- block:
- name: Create first category
- vmware_category:
+ community.vmware.vmware_category:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
diff --git a/tests/integration/targets/vmware_tag_manager/tasks/tag_manager_dict.yml b/tests/integration/targets/vmware_tag_manager/tasks/tag_manager_dict.yml
--- a/tests/integration/targets/vmware_tag_manager/tasks/tag_manager_dict.yml
+++ b/tests/integration/targets/vmware_tag_manager/tasks/tag_manager_dict.yml
@@ -5,7 +5,7 @@
# Testcase for https://github.com/ansible/ansible/issues/65765
- name: Create tag with colon in name
- vmware_tag:
+ community.vmware.vmware_tag:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -21,7 +21,7 @@
- tag_one_create.changed
- name: Get VM Facts
- vmware_vm_info:
+ community.vmware.vmware_vm_info:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -33,7 +33,7 @@
vm_moid: "{{ vm_info['virtual_machines'][0]['moid'] }}"
- name: Assign tag to given virtual machine
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -47,7 +47,7 @@
register: vm_tag_info
- name: Assign tag to rw_datastore
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -67,7 +67,7 @@
- datastore_tag_info.changed
- name: Remove tag to given virtual machine
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -80,8 +80,61 @@
state: remove
register: vm_tag_info
+- name: Test idempotency for state set
+ block:
+ - name: Set the tags on a VM to a given list
+ community.vmware.vmware_tag_manager:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: false
+ tag_names:
+ - category: "{{ cat_one }}"
+ tag: "{{ tag_one }}"
+ object_name: "{{ vm_name }}"
+ object_type: VirtualMachine
+ state: set
+ register: vm_tag_info
+
+ - name: Check the module assigned the tags
+ assert:
+ that:
+ - vm_tag_info.changed
+
+ - name: Set the tags on a VM to a given list again
+ community.vmware.vmware_tag_manager:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: false
+ tag_names:
+ - category: "{{ cat_one }}"
+ tag: "{{ tag_one }}"
+ object_name: "{{ vm_name }}"
+ object_type: VirtualMachine
+ state: set
+ register: vm_tag_info
+
+ - name: Check idempotency
+ assert:
+ that:
+ - not vm_tag_info.changed
+
+ - name: Remove tag from given virtual machine
+ community.vmware.vmware_tag_manager:
+ hostname: '{{ vcenter_hostname }}'
+ username: '{{ vcenter_username }}'
+ password: '{{ vcenter_password }}'
+ validate_certs: false
+ tag_names:
+ - category: "{{ cat_one }}"
+ tag: "{{ tag_one }}"
+ object_name: "{{ vm_name }}"
+ object_type: VirtualMachine
+ state: remove
+
- name: Assign tag to given virtual machine using moid
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -100,7 +153,7 @@
- vm_tag_info.changed
- name: Remove tag to datastore
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
@@ -114,7 +167,7 @@
register: datastore_tag_info
- name: Remove tag to given virtual machine
- vmware_tag_manager:
+ community.vmware.vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
| vmware_tag_manager always returns `changed: True` when in `set` mode.
##### SUMMARY
vmware_tag_manager always returns `changed: True` when in `set` mode.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`plugins/modules/vmware_tag_manager.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible [core 2.12.2]
config file = /mnt/d/Workspace/infrastructure/configuration/ansible.cfg
configured module search path = ['/mnt/d/Workspace/infrastructure/configuration/library']
ansible python module location = /mnt/d/Workspace/infrastructure/configuration/.env/lib/python3.9/site-packages/ansible
ansible collection location = /usr/share/ansible/collections
executable location = /mnt/d/Workspace/infrastructure/configuration/.env/bin/ansible
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.0.3
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```paste below
# /mnt/d/Workspace/infrastructure/configuration/.env/lib/python3.9/site-packages/ansible_collections
Collection Version
---------------- -------
community.vmware 1.17.1
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = True
DEFAULT_FORCE_HANDLERS(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = True
DEFAULT_GATHERING(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = smart
DEFAULT_PRIVATE_ROLE_VARS(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(/mnt/d/Workspace/infrastructure/configuration/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Targeting vSphere 6.5.0.38000
##### STEPS TO REPRODUCE
```yaml
- name: Set VM tags
delegate_to: localhost
community.vmware.vmware_tag_manager:
hostname: "{{ vsphere_server }}"
username: "{{ vsphere_user }}"
password: "{{ vsphere_password }}"
object_type: VirtualMachine
moid: "{{ vm_info.instance.moid }}"
tag_names: "{{ autovm_tags }}"
state: set
```
##### EXPECTED RESULTS
The task should be `changed` only if the set of tags was modified by the plugin.
##### ACTUAL RESULTS
The task is always `changed` if it contains at least one tag.
Additional note: on this line, `changed` is set to True unconditionally when in `set` mode.
https://github.com/ansible-collections/community.vmware/blob/main/plugins/modules/vmware_tag_manager.py#L349
| Yes, it looks like the module always removes all tags and then adds them again one by one when specifying `set`:
https://github.com/ansible-collections/community.vmware/blob/fac9919fc3af36996173413c0c25bb816459f12a/plugins/modules/vmware_tag_manager.py#L339-L349 | 2022-09-22T06:41:47 |
ansible-collections/community.vmware | 1,502 | ansible-collections__community.vmware-1502 | [
"1501",
"1501"
] | 39c310198487cfa467d93901d2d3d00f4e6bb977 | diff --git a/plugins/modules/vmware_tag_manager.py b/plugins/modules/vmware_tag_manager.py
--- a/plugins/modules/vmware_tag_manager.py
+++ b/plugins/modules/vmware_tag_manager.py
@@ -339,20 +339,20 @@ def ensure_state(self):
self.module.fail_json(msg="%s" % self.get_error_message(error))
elif action == 'set':
- for tag_obj in tag_objs:
- if tag_obj not in available_tag_obj:
- # Tag is not already applied
+ for av_tag in available_tag_obj:
+ if av_tag not in tag_objs:
+ # Tag not in the defined list
try:
- self.tag_association_svc.attach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
+ self.tag_association_svc.detach(tag_id=av_tag.id, object_id=self.dynamic_managed_object)
changed = True
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
- for av_tag in available_tag_obj:
- if av_tag not in tag_objs:
- # Tag not in the defined list
+ for tag_obj in tag_objs:
+ if tag_obj not in available_tag_obj:
+ # Tag is not already applied
try:
- self.tag_association_svc.detach(tag_id=av_tag.id, object_id=self.dynamic_managed_object)
+ self.tag_association_svc.attach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
changed = True
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
| vmware_tag_manager: Changing the value of a single cardinal tag category fails with 'Tagging cardinality violation'
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Changing the value of a single cardinal tag category fails with `Tagging cardinality violation`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_tag_manager
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible [core 2.13.4]
config file = None
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ansible/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /home/ansible/.local/bin/ansible
python version = 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```
download_url: https://galaxy.ansible.com/download/community-vmware-3.0.0.tar.gz
format_version: 1.0.0
name: vmware
namespace: community
server: https://galaxy.ansible.com/api/
signatures: []
version: 3.0.0
version_url: https://galaxy.ansible.com/api/v2/collections/community/vmware/versions/3.0.0/
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
INVENTORY_ENABLED(/home/ansible/automation/ansible.cfg) = ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
TRANSFORM_INVALID_GROUP_CHARS(/home/ansible/automation/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
vSphere 7.0.3
The issue is not dependent on the underlying OS
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Set and modify a single cardinal tag category
community.vmware.vmware_tag_manager:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: false
state: set
object_name: "{{ vm_name }}"
object_type: VirtualMachine
tag_names:
- category: Environment
tag: "{{ env }}"
loop:
- Prod
- Lab
loop_control:
loop_var: env
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The tag category `Environment` has the value `Lab`.
##### ACTUAL RESULTS
The task fails:
```
TASK [vsphere : Set and modify a single cardinal tag category] **********************
fatal: [molecule-ubuntu-jammy.local -> localhost]: FAILED! => {"changed": false, "msg": "Tagging cardinality violation"}
```
| 2022-10-13T19:19:07 |
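Editor's note: the ordering change in the patch above matters for tag categories with single cardinality. Attaching the replacement tag while the old one is still assigned trips vCenter's cardinality check, so stale tags must be detached first. The sketch below enforces a hypothetical one-tag-per-category rule locally to show why detach-before-attach succeeds where the old order fails; nothing here talks to vSphere.

```python
# Sketch of why detach-before-attach avoids the cardinality violation.
# The category/tag names are hypothetical; vCenter enforces the real check.

class CardinalityViolation(Exception):
    pass

def attach(applied, category, tag):
    # Mimic a SINGLE-cardinality category: at most one tag per category.
    if category in applied:
        raise CardinalityViolation(
            "%s already carries %s" % (category, applied[category]))
    applied[category] = tag

def set_tag(applied, category, tag):
    if applied.get(category) not in (None, tag):
        del applied[category]           # detach the stale tag first
    if applied.get(category) != tag:
        attach(applied, category, tag)  # now the attach cannot collide

applied = {"Environment": "Prod"}
set_tag(applied, "Environment", "Lab")  # detach Prod, then attach Lab
print(applied)                          # {'Environment': 'Lab'}

try:
    attach(applied, "Environment", "Prod")  # the old attach-first order
except CardinalityViolation as exc:
    print("blocked:", exc)              # fails exactly like the reported bug
```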
||
ansible-collections/community.vmware | 1,504 | ansible-collections__community.vmware-1504 | [
"1503"
] | 39c310198487cfa467d93901d2d3d00f4e6bb977 | diff --git a/plugins/module_utils/vmware_rest_client.py b/plugins/module_utils/vmware_rest_client.py
--- a/plugins/module_utils/vmware_rest_client.py
+++ b/plugins/module_utils/vmware_rest_client.py
@@ -486,29 +486,32 @@ def get_category_by_name(self, category_name=None):
return self.search_svc_object_by_name(service=self.api_client.tagging.Category, svc_obj_name=category_name)
- def get_tag_by_category(self, tag_name=None, category_name=None):
+ def get_tag_by_category(self, tag_name=None, category_name=None, category_id=None):
"""
Return tag object by name and category name specified
Args:
tag_name: Name of tag
- category_name: Name of category
-
+ category_name: Name of category (mutually exclusive with 'category_id')
+ category_id: Id of category, if known in advance (mutually exclusive with 'category_name')
Returns: Tag object if found else None
"""
if not tag_name:
return None
- if category_name:
- category_obj = self.get_category_by_name(category_name=category_name)
+ if category_id or category_name:
+ if not category_id:
+ category_obj = self.get_category_by_name(category_name=category_name)
+
+ if not category_obj:
+ return None
- if not category_obj:
- return None
+ category_id = category_obj.id
- for tag_object in self.api_client.tagging.Tag.list():
+ for tag_object in self.api_client.tagging.Tag.list_tags_for_category(category_id):
tag_obj = self.api_client.tagging.Tag.get(tag_object)
- if tag_obj.name == tag_name and tag_obj.category_id == category_obj.id:
+ if tag_obj.name == tag_name:
return tag_obj
else:
return self.search_svc_object_by_name(service=self.api_client.tagging.Tag, svc_obj_name=tag_name)
diff --git a/plugins/modules/vmware_tag_manager.py b/plugins/modules/vmware_tag_manager.py
--- a/plugins/modules/vmware_tag_manager.py
+++ b/plugins/modules/vmware_tag_manager.py
@@ -318,7 +318,9 @@ def ensure_state(self):
# User specified only tag
tag_name = tag
- if category_name:
+ if category_obj:
+ tag_obj = self.get_tag_by_category(tag_name=tag_name, category_id=category_obj.id)
+ elif category_name:
tag_obj = self.get_tag_by_category(tag_name=tag_name, category_name=category_name)
else:
tag_obj = self.get_tag_by_name(tag_name=tag_name)
| vmware_tag_manager: Tag processing is very slow
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Tag processing of a managed object takes a long time.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_tag_manager
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible [core 2.13.4]
config file = None
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ansible/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /home/ansible/.local/bin/ansible
python version = 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```
download_url: https://galaxy.ansible.com/download/community-vmware-3.0.0.tar.gz
format_version: 1.0.0
name: vmware
namespace: community
server: https://galaxy.ansible.com/api/
signatures: []
version: 3.0.0
version_url: https://galaxy.ansible.com/api/v2/collections/community/vmware/versions/3.0.0/
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
INVENTORY_ENABLED(/home/ansible/automation/ansible.cfg) = ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
TRANSFORM_INVALID_GROUP_CHARS(/home/ansible/automation/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
vSphere 7.0.3
The issue is not dependent on the underlying OS
##### STEPS TO REPRODUCE
Set a tag on a managed object, for example on a virtual machine:
```
- name: Set tag on virtual machine
community.vmware.vmware_tag_manager:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: false
state: set
moid: "{{ vm.moid }}"
object_type: VirtualMachine
tag_names:
- category: Environment
tag: Prod
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Tag processing is fast.
##### ACTUAL RESULTS
In my case, it took more than `32s` until the tag was added to the virtual machine. Subsequent plays are not faster, even when nothing has to be changed:
<!--- Paste verbatim command output between quotes -->
```
TASK [vsphere : Set tag on virtual machine] ************************************
Friday 14 October 2022 13:12:11 +0200 (0:00:00.059) 0:00:06.178 ********
changed: [molecule-debian-bullseye.local -> localhost]
TASK [vsphere : Include tasks from 'guest_boot_options.yml'] *******************
Friday 14 October 2022 13:12:43 +0200 (0:00:32.651) 0:00:38.830 ********
[...]
Friday 14 October 2022 13:12:53 +0200 (0:00:00.052) 0:00:48.454 ********
===============================================================================
vsphere : Set tag on virtual machine ----------------------------------- 32.65s
[...]
```
| 2022-10-14T11:58:36 |
||
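Editor's note: the speedup in the patch above comes from scoping the lookup. The old code walked `Tag.list()` (every tag in the inventory) and issued one `Tag.get()` round-trip per ID; the patched `get_tag_by_category()` asks only for the tags of the already-resolved category via `Tag.list_tags_for_category()`. The sketch below models the call-volume difference with an invented in-memory inventory, where each dict lookup stands in for one REST round-trip.

```python
# Rough sketch of the call-volume difference behind the speedup.
# TAGS is a made-up in-memory inventory; each lookup below stands in for
# one round-trip to the vSphere tagging endpoint.

TAGS = {f"urn:tag-{i}": {"name": f"tag-{i}", "category_id": f"cat-{i % 50}"}
        for i in range(5000)}

def find_tag_old(name, category_id):
    calls = 0
    for tag_id in TAGS:                   # Tag.list(): every tag in inventory
        calls += 1                        # one Tag.get() per tag ID
        tag = TAGS[tag_id]
        if tag["name"] == name and tag["category_id"] == category_id:
            return tag, calls
    return None, calls

def find_tag_new(name, category_id):
    calls = 0
    scoped = [t for t, v in TAGS.items()  # Tag.list_tags_for_category()
              if v["category_id"] == category_id]
    for tag_id in scoped:
        calls += 1                        # Tag.get() only on the scoped IDs
        if TAGS[tag_id]["name"] == name:
            return TAGS[tag_id], calls
    return None, calls

print(find_tag_old("tag-4999", "cat-49")[1])  # ~5000 lookups (worst case)
print(find_tag_new("tag-4999", "cat-49")[1])  # ~100 lookups
```

With 5000 tags spread over 50 categories, the worst-case lookup drops from ~5000 per-tag round-trips to ~100, which is consistent with the 32s-to-4s improvement reported in the follow-up PR below.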
ansible-collections/community.vmware | 1,507 | ansible-collections__community.vmware-1507 | [
"1503"
] | fb1f3a6064abac3f5dd2342e7c5163b3946af09d | diff --git a/plugins/module_utils/vmware_rest_client.py b/plugins/module_utils/vmware_rest_client.py
--- a/plugins/module_utils/vmware_rest_client.py
+++ b/plugins/module_utils/vmware_rest_client.py
@@ -486,29 +486,32 @@ def get_category_by_name(self, category_name=None):
return self.search_svc_object_by_name(service=self.api_client.tagging.Category, svc_obj_name=category_name)
- def get_tag_by_category(self, tag_name=None, category_name=None):
+ def get_tag_by_category(self, tag_name=None, category_name=None, category_id=None):
"""
Return tag object by name and category name specified
Args:
tag_name: Name of tag
- category_name: Name of category
-
+ category_name: Name of category (mutually exclusive with 'category_id')
+ category_id: Id of category, if known in advance (mutually exclusive with 'category_name')
Returns: Tag object if found else None
"""
if not tag_name:
return None
- if category_name:
- category_obj = self.get_category_by_name(category_name=category_name)
+ if category_id or category_name:
+ if not category_id:
+ category_obj = self.get_category_by_name(category_name=category_name)
+
+ if not category_obj:
+ return None
- if not category_obj:
- return None
+ category_id = category_obj.id
- for tag_object in self.api_client.tagging.Tag.list():
+ for tag_object in self.api_client.tagging.Tag.list_tags_for_category(category_id):
tag_obj = self.api_client.tagging.Tag.get(tag_object)
- if tag_obj.name == tag_name and tag_obj.category_id == category_obj.id:
+ if tag_obj.name == tag_name:
return tag_obj
else:
return self.search_svc_object_by_name(service=self.api_client.tagging.Tag, svc_obj_name=tag_name)
diff --git a/plugins/modules/vmware_tag_manager.py b/plugins/modules/vmware_tag_manager.py
--- a/plugins/modules/vmware_tag_manager.py
+++ b/plugins/modules/vmware_tag_manager.py
@@ -319,7 +319,9 @@ def ensure_state(self):
# User specified only tag
tag_name = tag
- if category_name:
+ if category_obj:
+ tag_obj = self.get_tag_by_category(tag_name=tag_name, category_id=category_obj.id)
+ elif category_name:
tag_obj = self.get_tag_by_category(tag_name=tag_name, category_name=category_name)
else:
tag_obj = self.get_tag_by_name(tag_name=tag_name)
| vmware_tag_manager: Tag processing is very slow
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Tag processing of a managed object takes a long time.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_tag_manager
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible [core 2.13.4]
config file = None
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ansible/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /home/ansible/.local/bin/ansible
python version = 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```
download_url: https://galaxy.ansible.com/download/community-vmware-3.0.0.tar.gz
format_version: 1.0.0
name: vmware
namespace: community
server: https://galaxy.ansible.com/api/
signatures: []
version: 3.0.0
version_url: https://galaxy.ansible.com/api/v2/collections/community/vmware/versions/3.0.0/
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
INVENTORY_ENABLED(/home/ansible/automation/ansible.cfg) = ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
TRANSFORM_INVALID_GROUP_CHARS(/home/ansible/automation/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
vSphere 7.0.3
The issue is not dependent on the underlying OS
##### STEPS TO REPRODUCE
Set a tag on a managed object, for example on a virtual machine:
```
- name: Set tag on virtual machine
community.vmware.vmware_tag_manager:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: false
state: set
moid: "{{ vm.moid }}"
object_type: VirtualMachine
tag_names:
- category: Environment
tag: Prod
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Tag processing is fast.
##### ACTUAL RESULTS
In my case, it took more than `32s` until the tag was added to the virtual machine. Subsequent plays are not faster, even when nothing has to be changed:
<!--- Paste verbatim command output between quotes -->
```
TASK [vsphere : Set tag on virtual machine] ************************************
Friday 14 October 2022 13:12:11 +0200 (0:00:00.059) 0:00:06.178 ********
changed: [molecule-debian-bullseye.local -> localhost]
TASK [vsphere : Include tasks from 'guest_boot_options.yml'] *******************
Friday 14 October 2022 13:12:43 +0200 (0:00:32.651) 0:00:38.830 ********
[...]
Friday 14 October 2022 13:12:53 +0200 (0:00:00.052) 0:00:48.454 ********
===============================================================================
vsphere : Set tag on virtual machine ----------------------------------- 32.65s
[...]
```
| After the performance fix it takes only:
```
[...]
vsphere : Set tag on virtual machine ------------------------------------ 4.24s
``` | 2022-10-18T14:47:37 |