repo (string) | pull_number (int64) | instance_id (string) | issue_numbers (sequence) | base_commit (string) | patch (string) | test_patch (string) | problem_statement (string) | hints_text (string) | created_at (timestamp[s])
---|---|---|---|---|---|---|---|---|---|
EleutherAI/gpt-neox | 72 | EleutherAI__gpt-neox-72 | [
"69"
] | 755181b0416883cb2f68f5321afbc3b61294d33c | diff --git a/train_pipeline.py b/train_pipeline.py
--- a/train_pipeline.py
+++ b/train_pipeline.py
@@ -1,16 +1,21 @@
+import argparse
+import json
import random
+from collections import defaultdict
+import os
import deepspeed
import torch
from torch.utils.data import DataLoader
from tqdm.auto import trange
-import torch.distributed as distributed
-from gpt_neox import (GPTNeoX_Pipe, AutoregressiveWrapper, GPT2Dataset, extract_tarfile,
- prepare_optimizer_parameters, get_tokenizer, is_main, prepare_data)
+from gpt_neox import (GPTNeoX, AutoregressiveWrapper, TextSamplerDataset,
+ cycle, prepare_optimizer_parameters, decode_tokens, prepare_data,
+ GPTNeoX_Pipe)
+from gpt_neox.datasets import GPT2Dataset
+from gpt_neox.utils import is_main
+import gpt_neox
-from gpt_neox.utils import get_args, get_params
-
-import GPUtil
+WORLD_SIZE = os.getenv('WORLD_SIZE')
# arguments
train_args = get_args()
| Implement 1-Bit Adam
Integrate 1-bit Adam into our model. The DeepSpeed tutorial can be found [here](https://www.deepspeed.ai/tutorials/onebit-adam/)
| 2021-01-18T17:17:16 |
||
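For the 1-bit Adam integration requested in the row above, here is a minimal sketch of what the DeepSpeed optimizer section could look like, written as a Python dict purely for illustration. The parameter names (`freeze_step`, `cuda_aware`) follow the linked DeepSpeed tutorial and the values are placeholders; both should be checked against the DeepSpeed version actually pinned by gpt-neox.

```python
# Hedged sketch: a DeepSpeed config fragment enabling 1-bit Adam, expressed as a
# Python dict for illustration only. Values below are placeholders, not tuned settings.
one_bit_adam_config = {
    "optimizer": {
        "type": "OneBitAdam",
        "params": {
            "lr": 0.0006,           # same learning-rate knobs as plain Adam
            "betas": [0.9, 0.95],
            "eps": 1.0e-8,
            "freeze_step": 23000,   # warmup steps before gradient compression starts
            "cuda_aware": False,    # True only with a CUDA-aware MPI build
        },
    },
}

if __name__ == "__main__":
    import json
    print(json.dumps(one_bit_adam_config, indent=2))  # paste into the training config
```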
EleutherAI/gpt-neox | 637 | EleutherAI__gpt-neox-637 | [
"588"
] | 991cdeed67d0b370b3d9e936b089e4298c3f7258 | diff --git a/megatron/neox_arguments/neox_args.py b/megatron/neox_arguments/neox_args.py
--- a/megatron/neox_arguments/neox_args.py
+++ b/megatron/neox_arguments/neox_args.py
@@ -939,6 +939,11 @@ class NeoXArgsTextgen(NeoXArgsTemplate):
integer between 0 and the models vocab size. Filters out any logits with a probability less than that of the top_kth token.
"""
+ return_logits: bool = False
+ """
+ Boolean for whether to return the logits for generated tokens
+ """
+
maximum_tokens: int = 64
"""
maximum number of tokens to be generated
diff --git a/megatron/text_generation_utils.py b/megatron/text_generation_utils.py
--- a/megatron/text_generation_utils.py
+++ b/megatron/text_generation_utils.py
@@ -278,6 +278,7 @@ def stream_tokens(
# initialize generation variables
state_is_done = torch.zeros([batch_size]).byte().cuda()
token_generation_end_index = torch.ones([batch_size]).long().cuda() * (-1)
+ generation_logits = torch.empty(maximum_tokens, neox_args.padded_vocab_size).float().cuda()
while token_index_to_generate <= last_token_index_to_generate:
if recompute: # recompute all tokens
@@ -333,6 +334,9 @@ def stream_tokens(
next_token_log_probs, num_samples=1
).view(-1)
+ if neox_args.return_logits:
+ generation_logits[token_index_to_generate - 1] = generated_token_logits[0]
+
if neox_args.is_pipe_parallel:
# broadcast generated tokens to pipe parallel group
src_rank = model.grid.stage_to_global(model.num_stages - 1)
@@ -378,7 +382,7 @@ def stream_tokens(
token_index_to_generate += 1
- yield context_tokens, token_generation_start_index, token_generation_end_index, state_is_done.bool()
+ yield context_tokens, token_generation_start_index, token_generation_end_index, generation_logits, state_is_done.bool()
if torch.all(state_is_done):
break
@@ -473,6 +477,7 @@ def generate_samples_from_prompt(
batch_context_tokens,
batch_token_generation_start_index,
batch_token_generation_end_index,
+ batch_generated_token_logits,
is_done,
) in stream_tokens(
neox_args=neox_args,
@@ -526,6 +531,10 @@ def generate_samples_from_prompt(
"message": message,
"duration_seconds": float(time.time() - start_time),
}
+
+ if neox_args.return_logits:
+ data["logits"] = batch_generated_token_logits.cpu().numpy().tolist()
+
generated_texts.append(data)
return generated_texts
| Restore ability to get logits from generation?
It's often useful when generating samples to be able to get the logits / probabilities of the generated tokens (e.g. for ranking suggestions). It looks like this used to be available but was removed by 7aed13305c48b0048ec8b9efdba01e40fc69f123. It would be great if this functionality could be ported to the most recent version of the code.
I attempted to hack it back in, but must have messed something up because it made evaluation ~3.5x slower; I assume I accidentally introduced some GPU-CPU copies but unfortunately I don't really understand the Megatron/Deepspeed code well enough to see how to fix it. That branch is here: https://github.com/moyix/gpt-neox/tree/return_logits
Edit: fixed the link to my attempt at implementation, not sure how that happened
| Okay, I think I managed to get it working (at least for my use case). I think the problem previously was that I was trying to save all 52K logits for every token. This version just saves the logit for the token that was actually chosen (actually it saves the normalized log-probability of the token, which is what I really wanted — but it's easy enough to undo that if raw logits are desired):
https://github.com/EleutherAI/gpt-neox/compare/main...moyix:logit_try2
Happy to make this into a PR if desired!
@moyix Have you looked at [this code](https://github.com/EleutherAI/gpt-neox/blob/8229d921d329266323706c01dd6778fa71649ac7/eval_tasks/eval_adapter.py#L133)? It’s not currently exposed by `generate.py` but looks like it has the desired functionality. Maybe we can clean up the `generate.py` interface and introduce this as an option?
I hadn't seen that! I will try to take a look and see if I can expose it in `generate.py`.
@moyix Any updates on this?
I can take this on if no one else is working on this.
@Kyle1668 please do :)
Great! Feel free to assign me the ticket. | 2022-07-01T15:48:59 |
|
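To make the performance note in the hints above concrete, here is a rough standalone sketch (none of this is the project's code; the random tensor stands in for a model forward pass) of the pattern the patch uses: accumulate per-token logits in a preallocated on-device tensor and copy to the host once after generation, rather than once per step.

```python
import torch

def generate_with_logits(max_new_tokens=8, vocab_size=50304, device="cpu"):
    # preallocate once: one row per generated token, kept on the generation device
    generation_logits = torch.empty(max_new_tokens, vocab_size, device=device)
    tokens = []
    for step in range(max_new_tokens):
        logits = torch.randn(vocab_size, device=device)  # stand-in for the model forward
        generation_logits[step] = logits                 # no device->host copy inside the loop
        tokens.append(int(torch.argmax(logits)))
    # a single transfer after the loop avoids repeated device->host syncs during generation
    return tokens, generation_logits.cpu().numpy().tolist()

if __name__ == "__main__":
    toks, logits = generate_with_logits()
    print(len(toks), len(logits), len(logits[0]))
```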
EleutherAI/gpt-neox | 667 | EleutherAI__gpt-neox-667 | [
"666"
] | e197cbd324bd4a26f88e6c1b7faf88d569e9c2e7 | diff --git a/megatron/neox_arguments/arguments.py b/megatron/neox_arguments/arguments.py
--- a/megatron/neox_arguments/arguments.py
+++ b/megatron/neox_arguments/arguments.py
@@ -288,7 +288,6 @@ def consume_deepy_args(cls):
"-H",
"--hostfile",
type=str,
- default=DLTS_HOSTFILE,
help="Hostfile path (in MPI style) that defines the "
"resource pool available to the job (e.g., "
"worker-0 slots=4)"
| diff --git a/tests/neox_args/test_neoxargs_commandline.py b/tests/neox_args/test_neoxargs_commandline.py
--- a/tests/neox_args/test_neoxargs_commandline.py
+++ b/tests/neox_args/test_neoxargs_commandline.py
@@ -63,6 +63,34 @@ def test_neoxargs_consume_deepy_args_without_yml_suffix():
assert args_loaded_yamls == args_loaded_consume
+@pytest.mark.cpu
+def test_neoxargs_consume_deepy_args_with_hostfile_param():
+ """
+ Verify consume_deepy_args processes command line arguments without yaml suffix.
+ Also test the hostfile CLI arg
+ """
+
+ from megatron.neox_arguments import NeoXArgs
+
+ # load neox args with command line
+ with patch(
+ "sys.argv",
+ [str(get_root_directory() / "deepy.py"), "train.py"]
+ + get_configs_with_path(["small", "local_setup"])
+ + ["--hostfile=/mock_path"]
+ ):
+ args_loaded_consume = NeoXArgs.consume_deepy_args()
+
+ # load neox args directly from yaml files
+ args_loaded_yamls = NeoXArgs.from_ymls(
+ get_configs_with_path(["small.yml", "local_setup.yml"])
+ )
+
+ # update values from yaml files that cannot otherwise be matched
+ args_loaded_yamls.update_value("user_script", "train.py")
+ args_loaded_yamls.wandb_group = args_loaded_consume.wandb_group
+
+ assert args_loaded_yamls == args_loaded_consume
@pytest.mark.cpu
def test_neoxargs_consume_deepy_args_with_config_dir():
| The `hostfile` NeoX Argument Is No Longer Working
**Describe the bug**
The `hostfile` NeoX configuration argument is no longer taking effect. Regardless of the value set, the hostfile path is always `/job/hostfile`.
[screenshot]
The hostfile path does update if you use the `--hostfile` CLI parameter.
[screenshot]
**To Reproduce**
Steps to reproduce the behavior:
1. On the latest main build, run `pytest tests -m cpu` and observe that the `test_neoxargs_consume_deepy_args_without_yml_suffix` unit test is failing.
2. (Alt) Try to set a custom value for the `hostfile` config and observe that the program nonetheless uses the `/job/hostfile` path.
**Expected behavior**
When the `hostfile` argument is set, the value used in the program (even if it is a dummy path) matches the config value.
[screenshot]
The `test_neoxargs_consume_deepy_args_without_yml_suffix` unit test passes.
[screenshot]
**Proposed solution**
I suspect that this bug was introduced in 5e0d614744983cc87cb33eb127c60f13562399bc . When I comment out the CLI parameter, the test passes.
[screenshot]
The default value for this parameter is likely overwriting any values read from the config.
**Environment (please complete the following information):**
- GPUs: 8x NVIDIA A40
- Configs: Default
- Environment: Ubuntu in EleutherAI Cluster
| Removing the default value from the CLI argument parsing logic seems to fix the overwriting issue.
[screenshot]
That default value logic also seems redundant. The default value for the `hostfile` argument is already covered in the `deepspeed` package.
[screenshots]
| 2022-09-05T20:22:54 |
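To make the root cause above concrete, here is a small self-contained argparse sketch (no gpt-neox code; `DLTS_HOSTFILE`, `config`, and `merge` are stand-ins): once the CLI flag has no `default=`, an unset `--hostfile` arrives as `None` and can be distinguished from an explicit value, so the config-file setting survives.

```python
import argparse

DLTS_HOSTFILE = "/job/hostfile"                   # stand-in for the removed default
config = {"hostfile": "/mnt/cluster/hostfile"}    # value read from a YAML config file

def merge(cli_value, config_value):
    # None means "not passed on the command line", so prefer the config file
    return cli_value if cli_value is not None else config_value

parser = argparse.ArgumentParser()
parser.add_argument("-H", "--hostfile", type=str)  # note: no default=DLTS_HOSTFILE

print(merge(parser.parse_args([]).hostfile, config["hostfile"]))                         # /mnt/cluster/hostfile
print(merge(parser.parse_args(["--hostfile=/mock_path"]).hostfile, config["hostfile"]))  # /mock_path
```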
EleutherAI/gpt-neox | 670 | EleutherAI__gpt-neox-670 | [
"669"
] | 87d01ad5d57b6143c01492b567ab42dc9178ad98 | diff --git a/megatron/training.py b/megatron/training.py
--- a/megatron/training.py
+++ b/megatron/training.py
@@ -144,9 +144,10 @@ def pretrain(neox_args):
forward_step_func=forward_step,
data_iterator=test_data_iterator,
model=model,
- iteration=0, # iteration 0 in order to always use full test data
+ iteration=iteration,
verbose=True,
timers=timers,
+ chart_name="test"
)
@@ -736,6 +737,7 @@ def evaluate_and_print_results(
iteration,
verbose=False,
timers=None,
+ chart_name="validation"
):
"""Helper function to evaluate and dump results on screen."""
total_loss_dict = evaluate(
@@ -746,14 +748,14 @@ def evaluate_and_print_results(
verbose=verbose,
timers=timers,
)
- string = f" validation results at {prefix} | "
+ string = f" {chart_name} results at {prefix} | "
for k, v in total_loss_dict.items():
if isinstance(v, dict):
for k2, v2 in v.items():
k3 = "_".join([k, k2])
string += f"{k3} value: {v2:.6E} | "
tb_wandb_log(
- f"validation/{k3}",
+ f"{chart_name}/{k3}",
v2,
iteration,
use_wandb=neox_args.use_wandb,
@@ -762,7 +764,7 @@ def evaluate_and_print_results(
else:
string += f"{k} value: {v:.6E} | "
tb_wandb_log(
- f"validation/{k}",
+ f"{chart_name}/{k}",
v,
iteration,
use_wandb=neox_args.use_wandb,
| Test set metrics overwrite validation set metrics in TensorBoard and are rejected for logging by Weights and Biases (W&B)
**Describe the bug**
When model training completes, the test set loss is written as iteration 0 to the TensorBoard / W&B chart `validation/lm_loss`, and the test set perplexity is written as iteration 0 to the chart `validation/lm_loss_ppl`. Because the validation loss and perplexity have already been written to these charts, TensorBoard deletes all the validation metrics, overwriting them with the test loss and perplexity values. W&B refuses to add the test metrics to the charts at all, throwing a warning that looks like `wandb: WARNING Step must only increase in log calls. Step 0 < 32000; dropping {'validation/lm_loss': 1.715476632118225}.`
**To Reproduce**
Steps to reproduce the behavior:
1. Pip install and setup TensorBoard and W&B
2. Begin training a model with a train, validation, and test set
3. Observe in both TensorBoard and W&B that validation metrics are being logged
4. Allow the model to train to completion
5. Observe that the TensorBoard validation metrics are now gone, overwritten by the test set metrics
6. Observe the W&B error in the text logs / program output
**Expected behavior**
Test metrics should be written to their own charts.
**Proposed solution**
Test loss and perplexity should be written to their own charts `test/lm_loss` and `test/lm_loss_ppl` respectively.
**Screenshots**
[screenshot]
**Environment (please complete the following information):**
- GPUs: 4x A100 80 GB
- Configs: (configs that I used to reproduce the bug and test bug fixes are included below)
```
# GPT-2 pretraining setup
{
# parallelism settings ( you will want to change these based on your cluster setup, ideally scheduling pipeline stages
# across the node boundaries )
"pipe-parallel-size": 1,
"model-parallel-size": 1,
# model settings
"num-layers": 24,
"hidden-size": 1024,
"num-attention-heads": 16,
"seq-length": 4096,
"max-position-embeddings": 4096,
"norm": "layernorm",
"pos-emb": "rotary",
"no-weight-tying": true,
# these should provide some speedup but takes a while to build, set to true if desired
"scaled-upper-triang-masked-softmax-fusion": false,
"bias-gelu-fusion": false,
# optimizer settings
"optimizer": {
"type": "Adam",
"params": {
"lr": 0.00003,
"betas": [0.9, 0.999],
"eps": 1.0e-8,
}
},
"zero_optimization": {
"stage": 1,
"allgather_partitions": True,
"allgather_bucket_size": 500000000,
"overlap_comm": True,
"reduce_scatter": True,
"reduce_bucket_size": 500000000,
"contiguous_gradients": True,
"cpu_offload": False
},
# batch / data settings
"train_micro_batch_size_per_gpu": 16,
"data-impl": "mmap",
"split": "949,50,1",
# activation checkpointing
"checkpoint-activations": true,
"checkpoint-num-layers": 1,
"partition-activations": true,
"synchronize-each-layer": true,
# regularization
"gradient_clipping": 1.0,
"weight-decay": 0.01,
"hidden-dropout": 0,
"attention-dropout": 0,
# precision settings
"fp16": {
"fp16": true,
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
# misc. training settings
"train-iters": 100,
"lr-decay-iters": 100,
"distributed-backend": "nccl",
"lr-decay-style": "constant",
"warmup": 0.1,
"save-interval": 25,
"eval-interval": 25,
"eval-iters": 10,
# Checkpoint
"finetune": true,
# logging
"log-interval": 10,
"steps_per_print": 10,
"keep-last-n-checkpoints": 4,
"wall_clock_breakdown": true,
}
```
```
# Suggested data paths when using GPT-NeoX locally
{
"train-data-paths": ["/mnt/4TBNVME/gpt-neox/data/preprocessed/train_text_document"],
"test-data-paths": ["/mnt/4TBNVME/gpt-neox/data/preprocessed/test_text_document"],
"valid-data-paths": ["/mnt/4TBNVME/gpt-neox/data/preprocessed/val_text_document"],
"vocab-file": "/mnt/4TBNVME/gpt-neox/data/gpt2-vocab.json",
"merge-file": "/mnt/4TBNVME/gpt-neox/data/gpt2-merges.txt",
"save": "/mnt/4TBNVME/checkpoints_test",
"load": "/mnt/4TBNVME/checkpoints_test",
"checkpoint_validation_with_forward_pass": False,
"tensorboard-dir": "/mnt/4TBNVME/logs/tensorboard/bug_fix_test",
"log-dir": "/mnt/4TBNVME/logs/gptneox/bug_fix_test",
"use_wandb": True,
"wandb_host": "https://api.wandb.ai",
"wandb_project": "neox_test"
}
```
```
# Add this to your config for sparse attention every other layer
{
"attention_config": [[["local", "global"], "all"]],
# sparsity config:
# (these are the defaults for local sliding window sparsity, training will work without this here, but it's left in for
# illustrative purposes)
# see https://www.deepspeed.ai/tutorials/sparse-attention/#how-to-config-sparsity-structures for
# more detailed config instructions and available parameters
"sparsity_config": {
"block": 16, # block size
"num_local_blocks": 32,
}
}
```
**Additional context**
I have a bug fix ready, will follow up with it.
| 2022-09-12T21:07:19 |
||
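A tiny dependency-free sketch of the fix pattern above (the `log_metric` helper and in-memory `history` are invented for illustration; the real code routes through `tb_wandb_log`): giving each split its own chart prefix keeps the one-off test evaluation from colliding with the validation series, which W&B would otherwise drop because steps must only increase.

```python
def log_metric(history, name, value, step):
    # loggers such as W&B require monotonically increasing steps per chart, so
    # re-logging "validation/..." at step 0 after step 32000 gets rejected
    history.setdefault(name, []).append((step, value))

history = {}
log_metric(history, "validation/lm_loss", 1.72, step=32000)
# the final test-set evaluation gets its own prefix instead of reusing "validation/"
log_metric(history, "test/lm_loss", 1.71, step=32000)
print(sorted(history))  # ['test/lm_loss', 'validation/lm_loss']
```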
EleutherAI/gpt-neox | 733 | EleutherAI__gpt-neox-733 | [
"731"
] | 4fde59ef53d4347d7b5ce1f36b99b80594c3aaa2 | diff --git a/megatron/checkpointing.py b/megatron/checkpointing.py
--- a/megatron/checkpointing.py
+++ b/megatron/checkpointing.py
@@ -17,6 +17,7 @@
"""Input/output checkpointing."""
+import json
import os
import re
import shutil
@@ -198,7 +199,10 @@ def save_ds_checkpoint(iteration, model, neox_args):
os.makedirs(configs_directory, exist_ok=True)
for config_filename, config_data in neox_args.config_files.items():
with open(os.path.join(configs_directory, config_filename), "w") as f:
- f.write(config_data)
+ if isinstance(config_data, str):
+ f.write(config_data)
+ else:
+ json.dump(config_data, f)
def save_checkpoint(neox_args, iteration, model, optimizer, lr_scheduler):
| Checkpointing fails to save config data because it is a `dict` not a `str`
**Describe the bug**
Saving model checkpoints fails with stack trace:
```
Traceback (most recent call last):^M
File "/mnt/nvme/home/dashiell/gpt-neox/train.py", line 27, in <module>^M
pretrain(neox_args=neox_args)^M
File "/mnt/nvme/home/dashiell/gpt-neox/megatron/training.py", line 106, in pretrain^M
iteration = train(^M
File "/mnt/nvme/home/dashiell/gpt-neox/megatron/training.py", line 613, in train^M
save_checkpoint(^M
File "/mnt/nvme/home/dashiell/gpt-neox/megatron/checkpointing.py", line 208, in save_checkpoint^M
save_ds_checkpoint(iteration, model, neox_args)^M
File "/mnt/nvme/home/dashiell/gpt-neox/megatron/checkpointing.py", line 201, in save_ds_checkpoint^M
f.write(config_data)^M
TypeError: write() argument must be str, not dict^M
```
**To Reproduce**
`python3 deepy.py --conf_dir configs 1-3B.yml local_setup.yml`
**Expected behavior**
The checkpoint should save without failing.
| Time for everyone’s favorite game: running a binary search on the commit history until we find out what went wrong 🙄
@dashstander Have you ruled out this being a weirdness of the new Stability cluster by running it on another GPU source?
@haileyschoelkopf what is the most recent commit that you know you’ve successfully saved a model with?
I've *loaded* a model with the most recent commit on `main` (ebd47f6e9b72cbdba51f5393958d0d3bd95c6e64), would need to dig for most recent commit. (probably have saved a model as of https://github.com/EleutherAI/gpt-neox/commit/46b7d82c34bac79dba9beb02149393a6d59efe2c)
maybe relevant (though I'll make a separate issue for it):
Checkpoints saved using DeeperSpeed can't be loaded using the `deepspeed_main` branch with new upstream Deepspeed. Not sure if this is solely due to the issue Igor mentioned about not saving `iteration` in Adam optimizer state, or if it's some other format change over Deepspeed versions.
traceback for that:
```
Traceback (most recent call last):
File "/fsx/hailey/deepspeed-main-neox/gpt-neox/evaluate.py", line 76, in <module>
main()
File "/fsx/hailey/deepspeed-main-neox/gpt-neox/evaluate.py", line 35, in main
model, neox_args = setup_for_inference_or_eval(use_cache=False)
File "/fsx/hailey/deepspeed-main-neox/gpt-neox/megatron/utils.py", line 440, in setup_for_inference_or_eval
model, _, _ = setup_model_and_optimizer(
File "/fsx/hailey/deepspeed-main-neox/gpt-neox/megatron/training.py", line 437, in setup_model_and_optimizer
neox_args.iteration = load_checkpoint(
File "/fsx/hailey/deepspeed-main-neox/gpt-neox/megatron/checkpointing.py", line 235, in load_checkpoint
checkpoint_name, state_dict = model.load_checkpoint(
File "/fsx/shiv/torchtest/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 2647, in load_checkpoint
load_path, client_states = self._load_checkpoint(load_dir,
File "/fsx/shiv/torchtest/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 2713, in _load_checkpoint
self.load_module_state_dict(state_dict=checkpoint['module'],
File "/fsx/shiv/torchtest/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 2507, in load_module_state_dict
self.module.load_state_dict(state_dict, # TODO
File "/fsx/shiv/torchtest/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1620, in load_state_dict
raise TypeError("Expected state_dict to be dict-like, got {}.".format(type(state_dict)))
```
I should prob make an issue on Deepspeed about whether there's a way to update checkpoints to newer DeepSpeed versions. This just came up, and also isn't too big a deal bc I can still run on old Deepspeed for models trained pre-upstream deepspeed main branch switch, just inconvenient
Actually yeah, I suspect this is bc of a mismatch between Deepspeed version and which NeoX branch you're on. What's the deepspeed version and what branch is this (main?)
I'm on DeeperSpeed and GPT-NeoX main. I actually think this is an(other) `SlurmRunner` issue. It's failing specifically with the version of the config files that are saved as strings in the arguments. Those specifically were giving me trouble when implementing the SlurmRunner and there's Deep(er)Speed logic to basically clean them up. They get written back to strings to get passed in to `srun` as a command line arg, but it's too much of a coincidence for me not to suspect it.
Maybe make another issue about loading the checkpoints? Or comment on #673 ? That seems like something we should all be aware of
Ok, yeah, this is a `SlurmRunner` thing. To make NeoX and the DeepSpeed SlurmRunner compatible I needed to go in and clean up those configs before passing them to `srun`. But now the when the argument dict is parsed [here](https://github.com/EleutherAI/gpt-neox/blob/ebd47f6e9b72cbdba51f5393958d0d3bd95c6e64/megatron/neox_arguments/arguments.py#L381) all of the configs are so clean they're getting parsed into `dict`s instead of being left as strings.
My proposed solution is to check for this when serializing the configs and either use `json.dumps` or `f.write` as needed. I'm about to get kicked out of the cafe I'm in, but I'll make a PR to that effect tonight.
(Made an issue at #732 for the above error I described)! | 2022-12-08T03:41:00 |
|
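A minimal standalone sketch of the isinstance-based fix in the patch above (`write_config` is a made-up helper, not the repo's API): after being round-tripped through a launcher, config values may arrive either as raw config-file strings or as already-parsed dicts, so serialize them accordingly.

```python
import json

def write_config(path, config_data):
    with open(path, "w") as f:
        if isinstance(config_data, str):
            f.write(config_data)       # already a serialized config file
        else:
            json.dump(config_data, f)  # parsed dict: re-serialize as JSON

if __name__ == "__main__":
    write_config("/tmp/as_string.yml", '{"train-iters": 100}')  # old string path
    write_config("/tmp/as_dict.yml", {"train-iters": 100})      # launcher-style dict
```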
EleutherAI/gpt-neox | 752 | EleutherAI__gpt-neox-752 | [
"750"
] | ee9a14365f3e178a1ae2b9950605cbe4cff1b7dc | diff --git a/tools/convert_sequential_to_hf.py b/tools/convert_sequential_to_hf.py
new file mode 100644
--- /dev/null
+++ b/tools/convert_sequential_to_hf.py
@@ -0,0 +1,353 @@
+# Copyright (c) 2023, EleutherAI
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import sys
+
+import yaml
+import argparse
+from tqdm import tqdm
+
+import torch
+from transformers import GPTNeoXConfig, GPTNeoXForCausalLM
+
+from typing import List
+
+sys.path.append(
+ os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir))
+)
+from megatron.tokenizer import build_tokenizer
+
+
+"""
+A script for converting saved NeoX Checkpoints to Huggingface (HF) compatible GPT-NeoX type models.
+
+Note that this script does not support all NeoX features.
+Please investigate carefully whether your model is compatible with all architectures supported by the GPTNeoXForCausalLM class in HF.
+
+(e.g. position embeddings such as AliBi may not be supported by Huggingface's GPT-NeoX architecture.
+"""
+
+
+def load_partitions(input_checkpoint_path, mp_partitions) -> List[torch.Tensor]:
+ """Returns a list containing all states from a model (across MP partitions)"""
+
+ loaded_tp_ranks = [
+ torch.load(
+ os.path.join(
+ input_checkpoint_path,
+ f"mp_rank_{i:02}_model_states.pt",
+ ),
+ map_location=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
+ )
+ for i in range(mp_partitions)
+ ]
+
+ return loaded_tp_ranks
+
+
+def get_state(
+ state_dicts: list[torch.Tensor],
+ key: str,
+ layer_idx: int,
+) -> torch.Tensor:
+ """Accesses all MP partitions of a given layer/weight's state."""
+ # main DeepSpeed saves each MP partition
+ key = f"sequential.{layer_idx}.{key}"
+
+ return [state_dict["module"][key] for state_dict in state_dicts]
+
+
+def get_key(loaded_config, key, default=None):
+ """
+ Search for a given key in a NeoX yaml. normalizes underscores -> hyphens
+ """
+ key = key.replace("_", "-")
+ try:
+ return loaded_config[key]
+ except KeyError:
+ key = key.replace("-", "_")
+ try:
+ return loaded_config[key]
+ except KeyError:
+ return default
+
+
+def create_config(neox_config):
+ """take in a loaded yaml from NeoX and assign relevant values to HF config.
+ Returns: GPTNeoXConfig() object
+ """
+
+ class TokenizerArgs:
+ # kinda hacky.
+ # this is to get something with the same interface as is used in build_tokenizer()
+ # without diving into loading a neox_args object or using argparse etc.
+ def __init__(self, neox_config):
+ self.make_vocab_size_divisible_by = get_key(
+ neox_config, "make-vocab-size-divisible-by", default=128
+ )
+ self.model_parallel_size = get_key(neox_config, "model-parallel-size")
+ self.vocab_file = get_key(neox_config, "vocab-file")
+ self.merge_file = get_key(neox_config, "merge-file")
+ self.tokenizer_type = get_key(neox_config, "tokenizer-type")
+
+ self.rank = 0
+
+ args = TokenizerArgs(neox_config)
+ tokenizer = build_tokenizer(args)
+ try: # GPT2TokenizerFast raises NotImplementedError
+ pad_token = tokenizer.pad
+ except:
+ pad_token = (
+ 1 # pad defaulting to 1. follows convention from GPT-NeoX-20b tokenizer
+ )
+
+ # TODO: change the default value here based on discussion regarding `gpt_j_tied` config parameter's default
+ use_tied_lns = get_key(neox_config, "gpt-j-tied", False)
+
+ if use_tied_lns:
+ raise NotImplementedError(
+ """ERROR: Huggingface Transformers does not yet support a single shared layernorm
+ per transformer block for GPT-NeoX models trained w/ GPT-J parallel residuals.
+ See https://github.com/EleutherAI/gpt-neox/pull/481 for further details."""
+ )
+
+ # set all config values.
+ hf_config = GPTNeoXConfig(
+ vocab_size=args.padded_vocab_size,
+ hidden_size=get_key(neox_config, "hidden-size"),
+ num_hidden_layers=get_key(neox_config, "num-layers"),
+ num_attention_heads=get_key(neox_config, "num-attention-heads"),
+ intermediate_size=(get_key(neox_config, "hidden-size") * 4),
+ hidden_act=get_key(neox_config, "activation", default="gelu"),
+ rotary_pct=get_key(neox_config, "rotary-pct", default=1.0),
+ rotary_emb_base=get_key(neox_config, "rotary-emb-base", default=10000),
+ max_position_embeddings=get_key(neox_config, "max-position-embeddings"),
+ initializer_range=get_key(neox_config, "init-method-std", 0.02),
+ layer_norm_eps=get_key(neox_config, "layernorm-epsilon", 1e-5),
+ use_cache=True,
+ bos_token_id=tokenizer.eod,
+ eos_token_id=tokenizer.eod,
+ tie_word_embeddings=(not get_key(neox_config, "no-weight-tying", False)),
+ use_parallel_residual=get_key(neox_config, "gpt-j-residual", False),
+ )
+ return hf_config
+
+
+def convert(input_checkpoint_path, loaded_config, output_checkpoint_path):
+ """convert a NeoX checkpoint to a HF model format.
+ should perform model-parallel merging correctly
+ but only supports features allowed by HF GPT-NeoX implementation (e.g. rotary embeddings)
+ """
+
+ hf_config = GPTNeoXConfig()
+
+ hf_config = create_config(loaded_config)
+
+ hf_model = GPTNeoXForCausalLM(
+ hf_config
+ ).half() # nice-to-have: lazy init weights somehow?
+
+ mp_partitions = get_key(loaded_config, "model-parallel-size")
+
+ # DeepSpeed main saves all model states from an MP rank in one file. load the MP ranks only once and index into them with get_state()
+ loaded_tp_ranks = load_partitions(input_checkpoint_path, mp_partitions)
+
+ ### Embedding layer ###
+ # Embedding is layer idx 0
+ hf_model.gpt_neox.embed_in.load_state_dict(
+ {
+ "weight": torch.cat(
+ get_state(loaded_tp_ranks, "word_embeddings.weight", 0), dim=0
+ )
+ }
+ )
+ assert (
+ hf_config.vocab_size == hf_model.gpt_neox.embed_in.weight.shape[0]
+ ), f"ERROR: calculated vocab size {hf_config.vocab_size} != embed param size {hf_model.gpt_neox.embed_in.shape[0]}"
+ ### End Embedding Layer ###
+
+ for layer_i in tqdm(range(get_key(loaded_config, "num-layers"))):
+
+ # get layer from hf model
+ hf_layer = hf_model.gpt_neox.layers[layer_i]
+
+ # + 2 bc of embed layer and a dummy _pre_transformer_block
+ state_dict = {}
+ for key in [
+ "attention.dense.weight",
+ "mlp.dense_4h_to_h.weight",
+ ]:
+ state_dict[key] = torch.cat(
+ get_state(loaded_tp_ranks, key, layer_i + 2), dim=1
+ )
+
+ # average layernorm stats over mp ranks
+ for key in [
+ "input_layernorm.weight",
+ "input_layernorm.bias",
+ "post_attention_layernorm.weight",
+ "post_attention_layernorm.bias",
+ ]:
+ state_dict[key] = sum(get_state(loaded_tp_ranks, key, layer_i + 2)) / len(
+ loaded_tp_ranks
+ )
+
+ # LinearWithTPMerge
+ for key in [
+ "mlp.dense_h_to_4h.weight",
+ "mlp.dense_h_to_4h.bias",
+ "attention.query_key_value.weight",
+ "attention.query_key_value.bias",
+ ]:
+ state_dict[key] = torch.cat(
+ get_state(loaded_tp_ranks, key, layer_i + 2), dim=0
+ )
+
+ # LinearWithTPSplitBias
+ for key in [
+ "mlp.dense_4h_to_h.bias",
+ "attention.dense.bias",
+ ]:
+ state_dict[key] = sum(get_state(loaded_tp_ranks, key, layer_i + 2))
+
+ # Just take one
+ state_dict["attention.rotary_emb.inv_freq"] = get_state(
+ loaded_tp_ranks, "attention.rotary_emb.inv_freq", layer_i + 2
+ )[0]
+
+ state_dict["attention.bias"] = hf_layer.state_dict()["attention.bias"]
+ state_dict["attention.masked_bias"] = hf_layer.state_dict()[
+ "attention.masked_bias"
+ ]
+
+ # load state_dict into layer
+ hf_layer.load_state_dict(state_dict)
+
+ # Load final layer norm
+ hf_model.gpt_neox.final_layer_norm.load_state_dict(
+ {
+ "weight": (
+ sum(
+ get_state(
+ loaded_tp_ranks,
+ "norm.weight",
+ get_key(loaded_config, "num-layers") + 3,
+ )
+ )
+ )
+ / len(loaded_tp_ranks),
+ "bias": (
+ sum(
+ get_state(
+ loaded_tp_ranks,
+ "norm.bias",
+ get_key(loaded_config, "num-layers") + 3,
+ )
+ )
+ )
+ / len(loaded_tp_ranks),
+ }
+ )
+ # output embedding / LM head
+ hf_model.embed_out.load_state_dict(
+ {
+ "weight": torch.cat(
+ get_state(
+ loaded_tp_ranks,
+ "final_linear.weight",
+ get_key(loaded_config, "num-layers") + 4,
+ ),
+ dim=0,
+ ),
+ }
+ )
+
+ del loaded_tp_ranks
+
+ return hf_model
+
+
+if __name__ == "__main__":
+
+ # before running script:
+ # `pip install --upgrade transformers`
+ # `huggingface-cli login`
+ #
+ from huggingface_hub import create_repo, HfApi
+
+ parser = argparse.ArgumentParser(
+ description="Merge MP partitions and convert to HF Model."
+ )
+ parser.add_argument(
+ "--input_dir",
+ type=str,
+ help="Path to NeoX checkpoint, e.g. /path/to/model/global_step143000",
+ )
+ parser.add_argument(
+ "--config_file",
+ type=str,
+ help="Path to config file for the input NeoX checkpoint.",
+ )
+ parser.add_argument(
+ "--output_dir",
+ type=str,
+ help="Output dir, where to save the HF Model, tokenizer, and configs",
+ )
+ parser.add_argument(
+ "--upload",
+ action="store_true",
+ help="Set to true in order to upload to the HF Hub directly.",
+ )
+ args = parser.parse_args()
+
+ with open(args.config_file) as f:
+ loaded_config = yaml.full_load(f)
+
+ hf_model = convert(args.input_dir, loaded_config, args.output_dir)
+
+ hf_model.save_pretrained(args.output_dir)
+
+ # save tokenizer to directory as well, for easy loading of model as a HF model
+ tokenizer_type = get_key(loaded_config, "tokenizer-type")
+
+ if tokenizer_type == "HFTokenizer":
+ print(f"saving tokenizer from file {get_key(loaded_config, 'vocab-file')}")
+ from transformers import PreTrainedTokenizerFast
+
+ tokenizer = PreTrainedTokenizerFast(
+ tokenizer_file=get_key(loaded_config, "vocab-file")
+ )
+ print("loaded tokenizer: ", tokenizer)
+ tokenizer.save_pretrained(args.output_dir)
+ print("tokenizer saved!")
+
+ print(
+ tokenizer.decode(
+ hf_model.generate(
+ tokenizer.encode("Hello, I am testing ", return_tensors="pt")
+ )[0]
+ )
+ )
+
+ if args.upload:
+ repo_name = input("Provide a repository name for the HF Hub: ")
+ create_repo(repo_name, repo_type="model", private=False, use_auth_token=True)
+
+ api = HfApi()
+ api.upload_folder(
+ folder_path=args.output_dir,
+ repo_id=repo_name,
+ repo_type="model",
+ )
diff --git a/tools/convert_to_hf.py b/tools/convert_v1.0_to_hf.py
similarity index 97%
rename from tools/convert_to_hf.py
rename to tools/convert_v1.0_to_hf.py
--- a/tools/convert_to_hf.py
+++ b/tools/convert_v1.0_to_hf.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2021, EleutherAI
+# Copyright (c) 2023, EleutherAI
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -50,7 +50,7 @@ def load_partitions(
input_checkpoint_path,
f"layer_{layer_idx:02}-model_{i:02}-model_states.pt",
),
- map_location=torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+ map_location=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
for i in range(mp_partitions)
]
@@ -144,12 +144,10 @@ def convert(input_checkpoint_path, loaded_config, output_checkpoint_path):
hf_config = create_config(loaded_config)
- hf_model = GPTNeoXForCausalLM(
- hf_config
- )
-
+ hf_model = GPTNeoXForCausalLM(hf_config)
+
# save model in FP16 if Deepspeed fp16 was used in config, else 32 bit
- fp16 = get_key(loaded_config, "fp16")
+ fp16 = get_key(loaded_config, "fp16")
if fp16:
if fp16["fp16"]:
hf_model.half()
| Upstream DeepSpeed breaks HF conversion script
The tools/convert_to_hf.py script will need to be updated / a different version may need to be created for checkpoints saved with upstream DeepSpeed. Checkpoints are no longer saved layer-by-layer, it seems, and now all weights are in several `mp_rank_{MP_RANK}_model_states.pt` files for each Model Parallel partition.
Upstream DeepSpeed checkpoint:
```
drwxr-xr-x 2 hailey eleuther 33280 Dec 18 14:29 configs
-rw-r--r-- 1 hailey eleuther 810771646 Dec 18 14:29 mp_rank_00_model_states.pt
-rw-r--r-- 1 hailey eleuther 608006863 Dec 18 14:29 zero_pp_rank_0_mp_rank_00_optim_states.pt
-rw-r--r-- 1 hailey eleuther 608008079 Dec 18 14:29 zero_pp_rank_1_mp_rank_00_optim_states.pt
-rw-r--r-- 1 hailey eleuther 608008079 Dec 18 14:29 zero_pp_rank_2_mp_rank_00_optim_states.pt
-rw-r--r-- 1 hailey eleuther 608008143 Dec 18 14:29 zero_pp_rank_3_mp_rank_00_optim_states.pt
-rw-r--r-- 1 hailey eleuther 608008143 Dec 18 14:29 zero_pp_rank_4_mp_rank_00_optim_states.pt
-rw-r--r-- 1 hailey eleuther 608008143 Dec 18 14:29 zero_pp_rank_5_mp_rank_00_optim_states.pt
-rw-r--r-- 1 hailey eleuther 608008079 Dec 18 14:29 zero_pp_rank_6_mp_rank_00_optim_states.pt
-rw-r--r-- 1 hailey eleuther 608006863 Dec 18 14:29 zero_pp_rank_7_mp_rank_00_optim_states.pt
```
DeeperSpeed checkpoint:
```
drwxrwxrwx 2 hailey eleuther 33280 Nov 18 04:55 configs
-rwxrwxrwx 1 hailey eleuther 206045931 Nov 18 04:55 layer_00-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_02-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_03-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_04-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_05-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_06-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_07-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_08-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_09-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_10-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_11-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_12-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_13-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_14-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_15-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_16-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_17-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_18-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_19-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_20-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_21-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_22-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_23-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_24-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 100720126 Nov 18 04:55 layer_25-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 9127 Nov 18 04:55 layer_27-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 206045931 Nov 18 04:55 layer_28-model_00-model_states.pt
-rwxrwxrwx 1 hailey eleuther 16291 Nov 18 04:55 mp_rank_00_model_states.pt
-rwxrwxrwx 1 hailey eleuther 287605953 Nov 18 04:55 zero_pp_rank_0_mp_rank_00_optim_states.pt
-rwxrwxrwx 1 hailey eleuther 287605953 Nov 18 04:55 zero_pp_rank_10_mp_rank_00_optim_states.pt
-rwxrwxrwx 1 hailey eleuther 287605953 Nov 18 04:55 zero_pp_rank_11_mp_rank_00_optim_states.pt
-rwxrwxrwx 1 hailey eleuther 287605953 Nov 18 04:55 zero_pp_rank_12_mp_rank_00_optim_states.pt
-rwxrwxrwx 1 hailey eleuther 287605953 Nov 18 04:55 zero_pp_rank_13_mp_rank_00_optim_states.pt
-rwxrwxrwx 1 hailey eleuther 287605953 Nov 18 04:55 zero_pp_rank_14_mp_rank_00_optim_states.pt
-rwxrwxrwx 1 hailey eleuther 287605953 Nov 18 04:55 zero_pp_rank_15_mp_rank_00_optim_states.pt
...
```
Updating the script shouldn't be too hard at all though.
| 2022-12-21T03:52:47 |
||
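As a sketch of how a conversion tool could tell the two checkpoint layouts above apart (this helper is hypothetical, not part of the repo), note that the per-layer pattern has to be checked first, since the pipeline-style checkpoint also contains a small `mp_rank_00_model_states.pt`.

```python
import glob
import os
import sys

def detect_checkpoint_layout(ckpt_dir):
    # DeeperSpeed / pipeline layout: one file per transformer layer per MP rank
    if glob.glob(os.path.join(ckpt_dir, "layer_*-model_*-model_states.pt")):
        return "pipeline-layout (layer_XX-model_YY files)"
    # upstream DeepSpeed layout: all module weights in one file per MP rank
    if glob.glob(os.path.join(ckpt_dir, "mp_rank_*_model_states.pt")):
        return "sequential-layout (mp_rank_XX files)"
    raise ValueError(f"no recognizable model state files in {ckpt_dir}")

if __name__ == "__main__":
    print(detect_checkpoint_layout(sys.argv[1] if len(sys.argv) > 1 else "."))
```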
EleutherAI/gpt-neox | 853 | EleutherAI__gpt-neox-853 | [
"798"
] | 5c06ec1b9885f74d810a6d33e6015abb55f16ff2 | diff --git a/tools/convert_to_hf.py b/tools/convert_to_hf.py
--- a/tools/convert_to_hf.py
+++ b/tools/convert_to_hf.py
@@ -49,7 +49,8 @@ def load_partitions(
os.path.join(
input_checkpoint_path,
f"layer_{layer_idx:02}-model_{i:02}-model_states.pt",
- )
+ ),
+ map_location=torch.device('cuda' if torch.cuda.is_available() else 'cpu')
)
for i in range(mp_partitions)
]
@@ -145,7 +146,13 @@ def convert(input_checkpoint_path, loaded_config, output_checkpoint_path):
hf_model = GPTNeoXForCausalLM(
hf_config
- ).half() # nice-to-have: lazy init weights somehow?
+ )
+
+ # save model in FP16 if Deepspeed fp16 was used in config, else 32 bit
+ fp16 = get_key(loaded_config, "fp16")
+ if fp16:
+ if fp16["fp16"]:
+ hf_model.half()
mp_partitions = get_key(loaded_config, "model-parallel-size")
diff --git a/tools/upload.py b/tools/upload.py
--- a/tools/upload.py
+++ b/tools/upload.py
@@ -13,26 +13,34 @@
# limitations under the License.
import os
+import sys
from huggingface_hub import HfApi, create_repo
-converted_ckpt = input("Where is the checkpoint folder (HF format) you want to use? ")
-repo_name = input("Provide a repository name for the HF Hub: ")
-branch_name = input("Provide a repo branch for the HF Hub (choose main as default): ")
-create_repo(repo_name, repo_type="model", private=False)
+converted_ckpt = sys.argv[1]
+repo_name = sys.argv[2]
+branch_name = sys.argv[3]
+try:
+ create_repo(repo_name, repo_type="model", private=False)
+except:
+ print("repo {repo_name} already exists!")
+ pass
files = os.listdir(converted_ckpt)
api = HfApi()
if branch_name != "main":
- api.create_branch(
- repo_id=repo_name,
- repo_type="model",
- branch=branch_name,
- )
+ try:
+ api.create_branch(
+ repo_id=repo_name,
+ repo_type="model",
+ branch=branch_name,
+ )
+ except:
+ print(f"branch {branch_name} already exists, try again...")
print(f"to upload: {files}")
for file in files:
- print(f"Uploading {file}...")
+ print(f"Uploading {file} to branch {branch_name}...")
api.upload_file(
path_or_fileobj=os.path.join(converted_ckpt, file),
path_in_repo=file,
| Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
**Describe the bug**
A clear and concise description of what the bug is.
>**1-3B model trained with 8 gpus and 1 node, when i try to convert it to hf model:**
Traceback (most recent call last):
File "./tools/convert_to_hf.py", line 292, in <module>
hf_model = convert(args.input_dir, loaded_config, args.output_dir)
File "./tools/convert_to_hf.py", line 156, in convert
"weight": torch.cat(
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_cat)
**To Reproduce**
Steps to reproduce the behavior:
>python ./tools/convert_to_hf.py --input_dir checkpoints/global_step30000 --config_file configs/1-3B.yml --output_dir ../huggingface
**Expected behavior**
A clear and concise description of what you expected to happen.
> The expected result is I could convert ckpt to huggingface model properly
**Proposed solution**
If you have an idea for how we can fix this problem, describe it here.
> I am not sure if the model is incorrectly configed, or I need to modify the convert code.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
>- CUDA 11.3
>- torch 1.13.0.dev20220727+cu113
>- python 3.8.16
**Additional context**
Add any other context about the problem here.
>In training stage, the`model-parallel-size=8` in yml conf:
"pipe-parallel-size": 1,
**"model-parallel-size": 8**
| Your issue has boilerplate that hasn’t been filled out. Please complete the template.
@haileyschoelkopf it seems like there’s an issue with your HF conversation code?
Hi @zscwind ! I haven't encountered the exact error you're reporting, but adding
`map_location=torch.device('cuda:0')` to line 53 as a keyword arg to torch.load should fix this and load all your model weights to device 0. Alternatively, `map_location=torch.device('cpu')` should also have the same effect.
I'll push this fix later today :)
> Hi @zscwind ! I haven't encountered the exact error you're reporting, but adding
>
> `map_location=torch.device('cuda:0')` to line 53 as a keyword arg to torch.load should fix this and load all your model weights to device 0. Alternatively, `map_location=torch.device('cpu')` should also have the same effect.
>
> I'll push this fix later today :)
Yeah, I have tried your method. It works, thank you!
| 2023-03-23T13:52:50 |
|
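A short sketch of the `map_location` idea from the hints above, using throwaway `/tmp` files as stand-ins for the real `mp_rank_*` shards: forcing every shard onto one device before `torch.cat` avoids the cuda:0 / cuda:1 mismatch.

```python
import torch

def load_shards(paths):
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    # every shard lands on the same device, so torch.cat no longer mixes devices
    return torch.cat([torch.load(p, map_location=device) for p in paths], dim=0)

if __name__ == "__main__":
    for i in range(2):  # toy shards standing in for mp_rank_0{i}_model_states.pt
        torch.save(torch.randn(4, 8), f"/tmp/shard_{i}.pt")
    print(load_shards([f"/tmp/shard_{i}.pt" for i in range(2)]).shape)  # torch.Size([8, 8])
```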
EleutherAI/gpt-neox | 877 | EleutherAI__gpt-neox-877 | [
"875"
] | 43cc879d36c7faf44793cfa0c7d4d599e4f57a55 | diff --git a/tools/convert_sequential_to_hf.py b/tools/convert_sequential_to_hf.py
--- a/tools/convert_sequential_to_hf.py
+++ b/tools/convert_sequential_to_hf.py
@@ -58,7 +58,7 @@ def load_partitions(input_checkpoint_path, mp_partitions) -> List[torch.Tensor]:
def get_state(
- state_dicts: list[torch.Tensor],
+ state_dicts: List[torch.Tensor],
key: str,
layer_idx: int,
) -> torch.Tensor:
@@ -157,7 +157,27 @@ def convert(input_checkpoint_path, loaded_config, output_checkpoint_path):
hf_model = GPTNeoXForCausalLM(
hf_config
- ).half() # nice-to-have: lazy init weights somehow?
+ )
+
+ # save model in FP16 if Deepspeed fp16 was used in config, else 32 bit
+ fp16 = get_key(loaded_config, "fp16")
+ # save model in fp16/bf16 if Deepspeed fp16 or bf16 mixed precision was used in config, else 32 bit weights
+ fp16 = get_key(loaded_config, "fp16")
+ if fp16:
+ try:
+ # current behavior is to pass "fp16": {"enabled": true}, when using upstream Deepspeed
+ if fp16["enabled"]:
+ hf_model.half()
+ print("Saving weights in fp16 precision...")
+ except:
+ try:
+ # attempt to access bf16 dict in yaml file, if fp16 not enabled
+ bf16 = get_key(loaded_config, "bf16")
+ if bf16:
+ hf_model.to(dtype=torch.bfloat16)
+ print("Saving weights in bf16 precision...")
+ except:
+ print("Model not trained in fp16 / bf16 mixed precision, saving weights in fp32...")
mp_partitions = get_key(loaded_config, "model-parallel-size")
diff --git a/tools/convert_v1.0_to_hf.py b/tools/convert_v1.0_to_hf.py
--- a/tools/convert_v1.0_to_hf.py
+++ b/tools/convert_v1.0_to_hf.py
@@ -18,6 +18,7 @@
import yaml
import argparse
from tqdm import tqdm
+from typing import List
import torch
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM
@@ -41,7 +42,7 @@
def load_partitions(
input_checkpoint_path, mp_partitions, layer_idx
-) -> list[torch.Tensor]:
+) -> List[torch.Tensor]:
"""Returns a list containing all weights in a given layer from a model (across MP partitions)"""
loaded_tp_ranks = [
@@ -146,12 +147,21 @@ def convert(input_checkpoint_path, loaded_config, output_checkpoint_path):
hf_model = GPTNeoXForCausalLM(hf_config)
- # save model in FP16 if Deepspeed fp16 was used in config, else 32 bit
+ # save model in fp16/bf16 if Deepspeed fp16 or bf16 mixed precision was used in config, else 32 bit weights
fp16 = get_key(loaded_config, "fp16")
if fp16:
- if fp16["fp16"]:
- hf_model.half()
-
+ try:
+ # this conditional is quite messy because there were a number of ways to specify bf16 or fp16 training
+ # in DeeperSpeed v1.0 .
+ if (fp16.get("fp16", None) or fp16["enabled"]) and not (fp16.get("type", None) == "bfloat16"):
+ hf_model.half()
+ print("Saving weights in fp16 precision...")
+ elif fp16.get("type", None) == "bfloat16":
+ hf_model.to(dtype=torch.bfloat16)
+ print("Saving weights in bf16 precision...")
+ except:
+ print("Model not trained in fp16 / bf16 mixed precision, saving weights in fp32...")
+
mp_partitions = get_key(loaded_config, "model-parallel-size")
### Embedding layer ###
| 1.0 HF conversion script fails on Python 3.8
**Describe the bug**
While the official recommended Python version for NeoX is 3.8, **convert_v1.0_to_hf.py** only works with Python 3.9 or later due to minor incompatibility (tested on Python 3.8.10).
>TypeError: 'type' object is not subscriptable
**Suggested change**
Modify `list[torch.Tensor]` at the line 42 to `List[torch.Tensor]` (capitalized L)
Add import `from typing import List`
| 2023-04-09T17:27:11 |
||
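A minimal illustration of the Python 3.8 incompatibility described above; the function body is a dummy, only the annotation matters.

```python
from typing import List

def load_partitions_stub(n: int) -> List[int]:
    # On Python 3.8, annotating this as `-> list[int]` instead raises
    # "TypeError: 'type' object is not subscriptable" at import time,
    # because builtin generics (PEP 585) only arrived in Python 3.9.
    return list(range(n))

print(load_partitions_stub(3))  # [0, 1, 2]
```

Adding `from __future__ import annotations` would also sidestep the error by deferring annotation evaluation, but swapping in `typing.List` as suggested keeps the change minimal.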
EleutherAI/gpt-neox | 1,024 | EleutherAI__gpt-neox-1024 | [
"1013"
] | 43ea51c2f3aeef2fc642ba401ce08844eb5a0240 | diff --git a/tools/corpora.py b/tools/corpora.py
--- a/tools/corpora.py
+++ b/tools/corpora.py
@@ -290,7 +290,7 @@ class C4OpenWebText(DataDownloader):
class Enwik8(DataDownloader):
name = "enwik8"
- urls = ["https://data.deepai.org/enwik8.zip"]
+ urls = ["http://mattmahoney.net/dc/enwik8.zip"]
def maybe_download_gpt2_tokenizer_data(tokenizer_type, data_dir):
| 'attention.bias' and 'attention.masked_bias' not in `hf_layer.state_dict()` when converting gpt-neox model to huggingface
**Describe the bug**
A clear and concise description of what the bug is.
I encounter the following error when I am converting GPTNeoX models to Huggingface using the `tools/convert_module_to_hf.py` script.
```
(gpt-neox) johnny@ink-lucy:~/gpt-neox$ bash haveibeentrainedon/wikitext/pilot/convert_to_hf.sh
[2023-08-18 23:37:21,695] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
> building GPT2BPETokenizer tokenizer ...
> padded vocab (size: 50257) with 47 dummy tokens (new size: 50304)
Saving weights in fp16 precision...
0%| | 0/24 [00:00<?, ?it/s]
Traceback (most recent call last):
File "./tools/convert_module_to_hf.py", line 307, in <module>
hf_model = convert(args.input_dir, loaded_config, args.output_dir)
File "./tools/convert_module_to_hf.py", line 230, in convert
state_dict["attention.bias"] = hf_layer.state_dict()["attention.bias"]
KeyError: 'attention.bias'
```
**Expected behavior**
Successful conversion.
**Proposed solution**
If you comment out lines 230 and 231, the script will run through. From an eyeballing of the results, it doesn't seem like language modelling performance seriously degraded. Could this be some code that was supposed to be taken out?
**Additional context**
This is for a model trained with the config `configs/pythia/410m.yml`
| Use `pip install transformers==4.30.2`
`transformers>=4.31.0`:
ipdb> hf_layer.state_dict().keys()
odict_keys(['input_layernorm.weight', 'input_layernorm.bias', 'post_attention_layernorm.weight', 'post_attention_layernorm.bias', 'attention.rotary_emb.inv_freq', 'attention.query_key_value.weight', 'attention.query_key_value.bias', 'attention.dense.weight', 'attention.dense.bias', 'mlp.dense_h_to_4h.weight', 'mlp.dense_h_to_4h.bias', 'mlp.dense_4h_to_h.weight', 'mlp.dense_4h_to_h.bias'])
`transformers<=4.30.2` :
ipdb> hf_layer.state_dict().keys()
odict_keys(['input_layernorm.weight', 'input_layernorm.bias', 'post_attention_layernorm.weight', 'post_attention_layernorm.bias', 'attention.bias', 'attention.masked_bias', 'attention.rotary_emb.inv_freq', 'attention.query_key_value.weight', 'attention.query_key_value.bias', 'attention.dense.weight', 'attention.dense.bias', 'mlp.dense_h_to_4h.weight', 'mlp.dense_h_to_4h.bias', 'mlp.dense_4h_to_h.weight', 'mlp.dense_4h_to_h.bias'])
The 'attention.bias' and 'attention.masked_bias' keys are still present up through `transformers==4.30.2`; they were removed in `transformers>=4.31.0`. | 2023-09-13T20:30:34 |
|
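A hedged sketch of a version-tolerant variant of the conversion step that raised the KeyError above (`fill_optional_buffers` is an invented helper): copy the bias buffers only when the installed transformers version actually registers them, rather than pinning to `transformers==4.30.2`.

```python
def fill_optional_buffers(state_dict, hf_layer_state,
                          keys=("attention.bias", "attention.masked_bias")):
    for key in keys:
        # present in transformers<=4.30.2 layer state_dicts, absent in >=4.31.0
        if key in hf_layer_state:
            state_dict[key] = hf_layer_state[key]
    return state_dict

newer = {"attention.dense.weight": 1.0}                       # 4.31-style keys
older = {"attention.dense.weight": 1.0, "attention.bias": 0}  # 4.30-style keys
print(fill_optional_buffers({}, newer))  # {}  -> nothing copied, no KeyError
print(fill_optional_buffers({}, older))  # {'attention.bias': 0}
```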
EleutherAI/gpt-neox | 1,026 | EleutherAI__gpt-neox-1026 | [
"854"
] | 2922bef79c43cf6b9511886b07e8716a1adc190a | diff --git a/megatron/model/utils.py b/megatron/model/utils.py
--- a/megatron/model/utils.py
+++ b/megatron/model/utils.py
@@ -190,6 +190,12 @@ def exec_func(*inputs):
x = exec_range_func(start_idx, end_idx)(*x)
return x
+ def clear_cache(self):
+ """
+ Recursively clears the kv cache on all layers
+ """
+ recursive_setattr(self.sequential, "layer_past", None)
+
def recursive_setattr(m, attr, value, assert_type=None, type_filter=None):
"""
| [Broken] Generation with Sequential Model
Take up a config and set `"pipe-parallel-size": 1`
Run `python deepy.py generate.py configs/70M-deduped.yml -i input_prompt.txt -o prompt_out.txt`
This will bring sequential model in action.
Error: `'SequentialWrapper' object has no attribute 'clear_cache'` in [line](https://github.com/EleutherAI/gpt-neox/blob/5c06ec1b9885f74d810a6d33e6015abb55f16ff2/megatron/text_generation_utils.py#L448)
| I don't have the bandwidth to handle this for now. Would appreciate if someone can take a look.
Will take a look!
is this error solved now? I encountered the same problem when I train and inference a 125M GPT2 model according to the guidelines.
> is this error solved now? I encountered the same problem when I train and inference a 125M GPT2 model according to the guidelines.
If you encountered the same problem, it’s safe to say that it’s not solved.
The same problem here. Could you please help to check it?
Same here. Would the temporary comment the line out affect anything?
Sorry all. This happened due to a line that snuck into the neox 2.0 release. To clarify for all, there are three pipeline parallelism cases in gpt-neox:
**(pipe_parallel_size == 0):** In this case, the model is wrapped in a standard nn.Sequential module
https://github.com/EleutherAI/gpt-neox/blob/c64bacccf3a68a27e1f79fa176db4ecd2206a5e9/megatron/training.py#L416
This is done to reduce memory overhead and latency. This case is rarely used, and is where the issue you're seeing above lies.
**(pipe_parallel_size == 1):** This is the most common case. The model is wrapped in a single GPT2ModelPipe module, which makes it easier for both DeepSpeed and us to handle. This should be the default case, which we just resolved in https://github.com/EleutherAI/gpt-neox/pull/866.
**(pipe_parallel_size > 1):** This is for large models that require multiple pipeline module stages to distribute the model. This case remains unchanged.
**What should you do:** If you trained a model and need those weights to stay in a sequential module, we're working on a fix and will share it shortly. If you don't need those weights and instead simply need to run inference on a public model, apply the patch in https://github.com/EleutherAI/gpt-neox/pull/866 and run again.
@satpalsr @FourWinds021 @DaoD @yizhilll have your issues been resolved by the recent patches?
yes | 2023-09-14T12:08:28 |
|
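To illustrate the `clear_cache` patch above, here is a simplified standalone version of the `recursive_setattr` idea (the repo's helper also takes type-filter arguments): walk every submodule and reset the cached `layer_past` wherever it exists.

```python
import torch.nn as nn

def recursive_setattr(module, attr, value):
    for child in module.children():
        recursive_setattr(child, attr, value)
    if hasattr(module, attr):
        setattr(module, attr, value)

class ToyLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer_past = "stale kv-cache"  # placeholder for cached key/values

model = nn.Sequential(ToyLayer(), ToyLayer())
recursive_setattr(model, "layer_past", None)   # what SequentialWrapper.clear_cache() does
print([layer.layer_past for layer in model])   # [None, None]
```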
EleutherAI/gpt-neox | 1,138 | EleutherAI__gpt-neox-1138 | [
"1110"
] | 3d8fec028acf3c2d69b8e23254e1a4a525f73c90 | diff --git a/megatron/neox_arguments/arguments.py b/megatron/neox_arguments/arguments.py
--- a/megatron/neox_arguments/arguments.py
+++ b/megatron/neox_arguments/arguments.py
@@ -510,7 +510,7 @@ def get_deepspeed_main_args(self):
args_list.extend(
self.convert_key_value_to_command_line_arg("account", account)
)
-
+
# master_address = os.environ['SLURM_JOB_NODELIST'].split('\n')[0]
# args_list.extend(
# self.convert_key_value_to_command_line_arg('master_addr', master_address)
| Add a Contributor Guide
We should document some of our contributor guidelines (pre-commit, tests, etc) now that we have them.
A good one to model after: https://github.com/microsoft/DeepSpeed/blob/master/CONTRIBUTING.md
| 2024-01-29T00:14:44 |
||
EleutherAI/gpt-neox | 1,180 | EleutherAI__gpt-neox-1180 | [
"1174"
] | c1fa9949d27f83e5de3f7dc96ff2cd7a552d9b81 | diff --git a/megatron/model/transformer.py b/megatron/model/transformer.py
--- a/megatron/model/transformer.py
+++ b/megatron/model/transformer.py
@@ -1046,6 +1046,7 @@ def _get_bias_dropout(self):
def forward(self, x, attention_mask, layer_past=None):
layer_past = layer_past if layer_past is not None else self.layer_past
bias_dropout_fn = self._get_bias_dropout()
+ moe_loss = torch.tensor(0.0, device=x.device, dtype=x.dtype)
# x: [b, s, h]
if self.gpt_j_residual:
# pseudocode:
@@ -1127,9 +1128,6 @@ def forward(self, x, attention_mask, layer_past=None):
# output = x + mlp(ln2(x))
layernorm_output = self.post_attention_layernorm(attention_output)
- moe_loss = torch.tensor(
- 0.0, device=layernorm_output.device, dtype=layernorm_output.dtype
- )
mlp_bias = torch.tensor(
0.0, device=layernorm_output.device, dtype=layernorm_output.dtype
)
| MoE loss variable not defined in gpt j residual code path
Running pythia 14M on master:
```
File "/gpt-neox/train.py", line 34, in <module>
main()
File "/gpt-neox/train.py", line 30, in main
pretrain(neox_args=neox_args)
File "/gpt-neox/megatron/training.py", line 228, in pretrain
iteration = train(
File "/gpt-neox/megatron/training.py", line 913, in train
loss_dict, skipped_iter = train_step(
File "/gpt-neox/megatron/training.py", line 793, in train_step
loss = forward_step(
File "/gpt-neox/megatron/training.py", line 391, in forward_step
maybe_tuple = model((tokens, position_ids, attention_mask), neox_args=neox_args)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/deepspeed/runtime/engine.py", line 1822, in forward
loss = self.module(*inputs, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/gpt-neox/megatron/model/utils.py", line 190, in forward
x = func(forward_input)
File "/gpt-neox/megatron/model/utils.py", line 181, in exec_func
inputs = layer(inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/gpt-neox/megatron/model/transformer.py", line 1167, in forward
output, moe_loss = super().forward(hidden_states, attention_mask)
File "/gpt-neox/megatron/model/transformer.py", line 1155, in forward
return output, moe_loss
```
The `moe_loss` variable is not defined inside the `forward()` for `ParallelTransformerLayer`, [in the gpt-j-residual==true branch](https://github.com/EleutherAI/gpt-neox/blob/86758c350bff4a06beefe859b4a27a5bc930facf/megatron/model/transformer.py#L1049)
The variable was introduced when #1129 was merged. I am not too familiar with MoE, maybe @yang can comment on this?
| 2024-03-08T23:11:36 |
||
EleutherAI/gpt-neox | 1,190 | EleutherAI__gpt-neox-1190 | [
"1189"
] | 277141ebc37f59dac7ff9efa758cd425eed7c101 | diff --git a/megatron/training.py b/megatron/training.py
--- a/megatron/training.py
+++ b/megatron/training.py
@@ -24,6 +24,7 @@
import math
import sys
+from contextlib import nullcontext
import torch
import deepspeed
@@ -426,13 +427,15 @@ def get_model(neox_args, use_cache=False):
# If mup isn't being used anyways, this has no effect.
old_use_mup = neox_args.use_mup
neox_args.use_mup = False
- model = GPT2ModelPipe(
- neox_args=neox_args,
- num_tokentypes=0,
- parallel_output=True,
- topology=mpu.get_topology(),
- use_cache=use_cache,
- )
+
+ with deepspeed.zero.Init() if neox_args.zero_stage == 3 else nullcontext() as gs:
+ model = GPT2ModelPipe(
+ neox_args=neox_args,
+ num_tokentypes=0,
+ parallel_output=True,
+ topology=mpu.get_topology(),
+ use_cache=use_cache,
+ )
### soft prompt tuning stuff ###
if neox_args.soft_prompt_tuning is not None and neox_args.soft_prompt_tuning.get(
| Large model instantiation using `DeepSpeed.zero.Init` under ZeRO-3
**Is your feature request related to a problem? Please describe.**
Currently GPT-NeoX doesn't support partitioned model initialization when using ZeRO-3, which will cause an OOM error in most cases.
**Describe the solution you'd like**
A simple fix like this will do the trick inside `get_model`
```
if neox_args.zero_stage == 3:
with deepspeed.zero.Init():
model = GPT2ModelPipe(
neox_args=neox_args,
num_tokentypes=0,
parallel_output=True,
topology=mpu.get_topology(),
use_cache=use_cache,
)
```
**Describe alternatives you've considered**
Another thing I have in mind is to figure out a way to properly test this. I have tested it on a 175B model and it works; please let me know if there's any other testing needed.
**Additional context**
Related issue: https://github.com/huggingface/accelerate/issues/922
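As a generic illustration of applying such an init context conditionally without duplicating the model construction call, a sketch using `contextlib.nullcontext` (the `build_model` callable and `zero_stage` argument are placeholders, not GPT-NeoX names):
```
from contextlib import nullcontext

import deepspeed

def build_model_with_optional_zero_init(build_model, zero_stage):
    # Partition parameters at construction time only under ZeRO stage 3;
    # otherwise use a no-op context so the construction call is not duplicated.
    ctx = deepspeed.zero.Init() if zero_stage == 3 else nullcontext()
    with ctx:
        return build_model()
```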
| I am working on a branch addressing this issue | 2024-03-18T09:37:51 |
|
pre-commit/pre-commit | 33 | pre-commit__pre-commit-33 | [
"32"
] | 601e643960014a7f5d1470d5e1dce6da57cad34a | diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -3,6 +3,7 @@
import os.path
import pkg_resources
import re
+import stat
from plumbum import local
from pre_commit.util import memoize_by_cwd
@@ -32,6 +33,8 @@ def create_pre_commit():
path = get_pre_commit_path()
pre_commit_file = pkg_resources.resource_filename('pre_commit', 'resources/pre-commit.sh')
local.path(path).write(local.path(pre_commit_file).read())
+ original_mode = os.stat(path).st_mode
+ os.chmod(path, original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
def remove_pre_commit():
| diff --git a/tests/git_test.py b/tests/git_test.py
--- a/tests/git_test.py
+++ b/tests/git_test.py
@@ -1,6 +1,7 @@
import os
import pytest
+import stat
from plumbum import local
from pre_commit import git
@@ -25,6 +26,10 @@ def test_get_pre_commit_path(empty_git_dir):
def test_create_pre_commit(empty_git_dir):
git.create_pre_commit()
assert len(open(git.get_pre_commit_path(), 'r').read()) > 0
+ stat_result = os.stat(git.get_pre_commit_path())
+ assert stat_result.st_mode & stat.S_IXUSR
+ assert stat_result.st_mode & stat.S_IXGRP
+ assert stat_result.st_mode & stat.S_IXOTH
def test_remove_pre_commit(empty_git_dir):
| pre-commit -i does not install the file with +x
No executable = no run :'(
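For reference, a minimal sketch of marking the installed hook executable while preserving its existing mode bits (essentially what the patch above does):
```
import os
import stat

def make_executable(path):
    # Add user/group/other execute bits on top of the file's current mode.
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```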
| 2014-03-19T04:42:08 |
|
pre-commit/pre-commit | 37 | pre-commit__pre-commit-37 | [
"34"
] | cd0714d0593df1fb3f993ecc17a31d9d4fdbc97d | diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py
--- a/pre_commit/languages/helpers.py
+++ b/pre_commit/languages/helpers.py
@@ -1,6 +1,37 @@
+import subprocess
+
+
def run_hook(env, hook, file_args):
return env.run(
- ' '.join([hook['entry']] + hook.get('args', []) + list(file_args)),
- retcode=None,
- )
\ No newline at end of file
+ ' '.join(['xargs', hook['entry']] + hook.get('args', [])),
+ stdin='\n'.join(list(file_args) + ['']),
+ )
+
+
+class Environment(object):
+ @property
+ def env_prefix(self):
+ """env_prefix is a value that is prefixed to the command that is run.
+
+ Usually this is to source a virtualenv, etc.
+
+ Commands basically end up looking like:
+
+ bash -c '{env_prefix} {cmd}'
+
+ so you'll often want to end your prefix with &&
+ """
+ raise NotImplementedError
+
+ def run(self, cmd, stdin=None, **kwargs):
+ """Returns (returncode, stdout, stderr)."""
+ proc = subprocess.Popen(
+ ['bash', '-c', ' '.join([self.env_prefix, cmd])],
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ )
+ stdout, stderr = proc.communicate(stdin)
+
+ return proc.returncode, stdout, stderr
diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py
--- a/pre_commit/languages/node.py
+++ b/pre_commit/languages/node.py
@@ -8,37 +8,42 @@
NODE_ENV = 'node_env'
-class NodeEnv(object):
- def __init__(self, py_env):
- self.py_env = py_env
- self.env_prefix = '. {0}/bin/activate &&'.format(NODE_ENV)
-
- def run(self, cmd, **kwargs):
- return self.py_env.run(' '.join([self.env_prefix, cmd]), **kwargs)
+class NodeEnv(python.PythonEnv):
+ @property
+ def env_prefix(self):
+ base = super(NodeEnv, self).env_prefix
+ return ' '.join([base, '. {0}/bin/activate &&'.format(NODE_ENV)])
@contextlib.contextmanager
-def in_env(py_env):
- yield NodeEnv(py_env)
+def in_env():
+ yield NodeEnv()
def install_environment():
assert local.path('package.json').exists()
- if local.path('node_env').exists():
+ if local.path(NODE_ENV).exists():
return
local['virtualenv'][python.PY_ENV]()
with python.in_env() as python_env:
python_env.run('pip install nodeenv')
- python_env.run('nodeenv --jobs 4 {0}'.format(NODE_ENV))
- with in_env(python_env) as node_env:
+ try:
+ # Try and use the system level node executable first
+ python_env.run('nodeenv -n system {0}'.format(NODE_ENV))
+ except Exception:
+ # TODO: log exception here
+ # cleanup
+ local.path(NODE_ENV).remove()
+ python_env.run('nodeenv --jobs 4 {0}'.format(NODE_ENV))
+
+ with in_env() as node_env:
node_env.run('npm install -g')
def run_hook(hook, file_args):
- with python.in_env() as py_env:
- with in_env(py_env) as node_env:
- return helpers.run_hook(node_env, hook, file_args)
\ No newline at end of file
+ with in_env() as node_env:
+ return helpers.run_hook(node_env, hook, file_args)
diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -6,12 +6,10 @@
PY_ENV = 'py_env'
-class PythonEnv(object):
- def __init__(self):
- self.env_prefix = '. {0}/bin/activate &&'.format(PY_ENV)
-
- def run(self, cmd, **kwargs):
- return local['bash']['-c', ' '.join([self.env_prefix, cmd])].run(**kwargs)
+class PythonEnv(helpers.Environment):
+ @property
+ def env_prefix(self):
+ return '. {0}/bin/activate &&'.format(PY_ENV)
@contextlib.contextmanager
| diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -67,6 +67,8 @@ def python_pre_commit_git_repo(dummy_git_repo):
local.path('__init__.py').write('')
local.path('main.py').write("""
def func():
+ import sys
+ print repr(sys.argv[1:])
print 'Hello World'
return 0
""")
@@ -142,4 +144,4 @@ def config_for_python_pre_commit_git_repo(python_pre_commit_git_repo):
jsonschema.validate([config], CONFIG_JSON_SCHEMA)
- return config
\ No newline at end of file
+ return config
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -46,13 +46,25 @@ def test_install_python_repo_in_env(python_pre_commit_git_repo, config_for_pytho
def test_run_a_python_hook(config_for_python_pre_commit_git_repo):
repo = Repository(config_for_python_pre_commit_git_repo)
repo.install()
- ret = repo.run_hook('foo', [])
+ ret = repo.run_hook('foo', ['/dev/null'])
+
+ assert ret[0] == 0
+ assert ret[1] == "['/dev/null']\nHello World\n"
+
+
[email protected]
+def test_run_a_hook_lots_of_files(config_for_python_pre_commit_git_repo):
+ repo = Repository(config_for_python_pre_commit_git_repo)
+ repo.install()
+ ret = repo.run_hook('foo', ['/dev/null'] * 15000)
assert ret[0] == 0
- assert ret[1] == 'Hello World\n'
[email protected](True, reason="TODO: make this test not super slow")
[email protected](
+ os.environ.get('slowtests', None) == 'false',
+ reason="TODO: make this test not super slow",
+)
def test_run_a_node_hook(config_for_node_pre_commit_git_repo):
repo = Repository(config_for_node_pre_commit_git_repo)
repo.install()
| Fix lots of files problem
https://github.com/pre-commit/pre-commit/blob/master/pre_commit/languages/helpers.py#L4
The fix is to prefix the command with `xargs` and push the filenames in via stdin.
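A minimal sketch of that idea: instead of putting every filename on the command line (which can blow past the OS argv length limit), feed the filenames to `xargs` on stdin (the `entry` value and helper name are just placeholders):
```
import subprocess

def run_hook(entry, filenames):
    # xargs reads the filenames from stdin and appends them to the command,
    # splitting into several invocations if the argument list would get too
    # long (use `xargs -0` with NUL-separated input if names may contain
    # whitespace).
    result = subprocess.run(
        ['xargs'] + entry.split(),
        input='\n'.join(filenames) + '\n',
        capture_output=True,
        text=True,
    )
    return result.returncode, result.stdout, result.stderr
```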
| 2014-03-22T23:57:04 |
|
pre-commit/pre-commit | 38 | pre-commit__pre-commit-38 | [
"28"
] | 64745fb0b43a6b8d3ea1f8a67ff63edd237b5715 | diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -56,7 +56,7 @@ def create(self):
# Project already exists, no reason to re-create it
return
- local['git']['clone', self.repo_url, self.sha]()
+ local['git']['clone', '--no-checkout', self.repo_url, self.sha]()
with self.in_checkout():
local['git']['checkout', self.sha]()
@@ -69,4 +69,4 @@ def install(self):
def run_hook(self, hook_id, file_args):
with self.in_checkout():
hook = self.hooks[hook_id]
- return languages[hook['language']].run_hook(hook, file_args)
\ No newline at end of file
+ return languages[hook['language']].run_hook(hook, file_args)
| Consider using --no-checkout for cloning
I'd assume it is faster...
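For reference, a sketch of what the clone would look like: skip the working-tree checkout during the clone, then check out only the pinned sha (the directory layout here is just an assumption):
```
import subprocess

def clone_at_sha(repo_url, sha):
    # --no-checkout skips writing a working tree during the clone, so only
    # the single checkout of the pinned sha touches the filesystem.
    subprocess.check_call(['git', 'clone', '--no-checkout', repo_url, sha])
    subprocess.check_call(['git', 'checkout', sha], cwd=sha)
```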
| Seems to work (tests pass)
| 2014-03-23T00:04:22 |
|
pre-commit/pre-commit | 43 | pre-commit__pre-commit-43 | [
"42"
] | bee6b0fb27c07b9c80fb38134a28a658cba60502 | diff --git a/pre_commit/run.py b/pre_commit/run.py
--- a/pre_commit/run.py
+++ b/pre_commit/run.py
@@ -17,10 +17,10 @@
PASS_FAIL_LENGTH = 6
-def _run_single_hook(repository, hook_id, run_all_the_things=False):
+def _run_single_hook(repository, hook_id, all_files=False):
repository.install()
- if run_all_the_things:
+ if all_files:
get_filenames = git.get_all_files_matching
else:
get_filenames = git.get_staged_files_matching
@@ -61,30 +61,28 @@ def _run_single_hook(repository, hook_id, run_all_the_things=False):
return retcode
-def run_hooks(run_all_the_things=False):
+def run_hooks(runner, all_files=False):
"""Actually run the hooks."""
retval = 0
- runner = Runner.create()
for repo in runner.repositories:
for hook_id in repo.hooks:
retval |= _run_single_hook(
repo,
hook_id,
- run_all_the_things=run_all_the_things,
+ all_files=all_files,
)
return retval
-def run_single_hook(hook_id, run_all_the_things=False):
- runner = Runner.create()
+def run_single_hook(runner, hook_id, all_files=False):
for repo in runner.repositories:
if hook_id in repo.hooks:
return _run_single_hook(
repo,
hook_id,
- run_all_the_things=run_all_the_things,
+ all_files=all_files,
)
else:
print 'No hook with id {0}'.format(hook_id)
@@ -95,40 +93,60 @@ def run_single_hook(hook_id, run_all_the_things=False):
def run(argv):
parser = argparse.ArgumentParser()
- group = parser.add_mutually_exclusive_group(required=False)
- group.add_argument(
- '-i', '--install',
- action='store_true',
- help='Install the pre-commit script.',
- )
- group.add_argument(
- '-u', '--uninstall',
- action='store_true',
- help='Uninstall the pre-commit script.',
+ subparsers = parser.add_subparsers(dest='command')
+
+ subparsers.add_parser('install', help='Intall the pre-commit script.')
+
+ subparsers.add_parser('uninstall', help='Uninstall the pre-commit script.')
+
+ execute_hook = subparsers.add_parser(
+ 'execute-hook', help='Run a single hook.'
)
- group.add_argument(
- '-r', '--run', metavar='HOOK', help='Run a single hook.',
+ execute_hook.add_argument('hook', help='The hook-id to run.')
+ execute_hook.add_argument(
+ '--all-files', '-a', action='store_true', default=False,
+ help='Run on all the files in the repo.',
)
- parser.add_argument(
- '--run-fucking-everything', action='store_true', default=False,
- help='Run on all the files in the repo',
+ run = subparsers.add_parser('run', help='Run hooks.')
+ run.add_argument('hook', nargs='?', help='A single hook-id to run'),
+ run.add_argument(
+ '--all-files', '-a', action='store_true', default=False,
+ help='Run on all the files in the repo.',
)
+ help = subparsers.add_parser('help', help='Show help for a specific command.')
+ help.add_argument('help_cmd', nargs='?', help='Command to show help for.')
+
+ # Argparse doesn't really provide a way to use a `default` subparser
+ if len(argv) == 0:
+ argv = ['run']
args = parser.parse_args(argv)
- if args.install:
+ runner = Runner.create()
+
+ if args.command == 'install':
git.create_pre_commit()
print 'pre-commit installed at {0}'.format(git.get_pre_commit_path())
return 0
- elif args.uninstall:
+ elif args.command == 'uninstall':
git.remove_pre_commit()
print 'pre-commit uninstalled'
return 0
- elif args.run:
- return run_single_hook(args.run, run_all_the_things=args.run_fucking_everything)
+ elif args.command == 'run':
+ if args.hook:
+ return run_single_hook(runner, args.hook, all_files=args.all_files)
+ else:
+ return run_hooks(runner, all_files=args.all_files)
+ elif args.command == 'help':
+ if args.help_cmd:
+ parser.parse_args([args.help_cmd, '--help'])
+ else:
+ parser.parse_args(['--help'])
else:
- return run_hooks(run_all_the_things=args.run_fucking_everything)
+ raise NotImplementedError(
+ 'Command {0} not implemented.'.format(args.command)
+ )
if __name__ == '__main__':
| Make pre-commit use `commands` style of arguments
See: http://stackoverflow.com/questions/9729919/gem-git-style-command-line-arguments-in-python
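For reference, the standard-library way to get git-style subcommands is `argparse` sub-parsers; a minimal sketch (only a couple of commands are shown, with names matching those added in the patch):
```
import argparse

def main(argv=None):
    parser = argparse.ArgumentParser(prog='pre-commit')
    subparsers = parser.add_subparsers(dest='command')

    subparsers.add_parser('install', help='Install the pre-commit script.')
    run = subparsers.add_parser('run', help='Run hooks.')
    run.add_argument('hook', nargs='?', help='A single hook-id to run.')

    args = parser.parse_args(argv)
    print(args.command, getattr(args, 'hook', None))

if __name__ == '__main__':
    main()
```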
| @struys: You'll like this one
| 2014-03-24T00:21:33 |
|
pre-commit/pre-commit | 64 | pre-commit__pre-commit-64 | [
"63"
] | e98d2e1e7907dddba370680e71a0a1a021d1075d | diff --git a/pre_commit/color.py b/pre_commit/color.py
new file mode 100644
--- /dev/null
+++ b/pre_commit/color.py
@@ -0,0 +1,38 @@
+
+import sys
+
+RED = '\033[41m'
+GREEN = '\033[42m'
+NORMAL = '\033[0m'
+
+
+class InvalidColorSetting(ValueError): pass
+
+
+def format_color(text, color, use_color):
+ """Format text with color.
+
+ Args:
+ text - Text to be formatted with color if `use_color`
+ color - The color start string
+ use_color - Whether or not to color
+ """
+ if not use_color:
+ return text
+ else:
+ return u'{0}{1}{2}'.format(color, text, NORMAL)
+
+
+def use_color(setting):
+ """Choose whether to use color based on the command argument.
+
+ Args:
+ setting - Either `auto`, `always`, or `never`
+ """
+ if setting not in ('auto', 'always', 'never'):
+ raise InvalidColorSetting(setting)
+
+ return (
+ setting == 'always' or
+ (setting == 'auto' and sys.stdout.isatty())
+ )
diff --git a/pre_commit/run.py b/pre_commit/run.py
--- a/pre_commit/run.py
+++ b/pre_commit/run.py
@@ -5,22 +5,20 @@
import subprocess
import sys
+from pre_commit import color
from pre_commit import commands
from pre_commit import git
from pre_commit.runner import Runner
from pre_commit.util import entry
-RED = '\033[41m'
-GREEN = '\033[42m'
-NORMAL = '\033[0m'
COLS = int(subprocess.Popen(['tput', 'cols'], stdout=subprocess.PIPE).communicate()[0])
PASS_FAIL_LENGTH = 6
-def _run_single_hook(runner, repository, hook_id, all_files=False, verbose=False):
- if all_files:
+def _run_single_hook(runner, repository, hook_id, args):
+ if args.all_files:
get_filenames = git.get_all_files_matching
else:
get_filenames = git.get_staged_files_matching
@@ -46,54 +44,49 @@ def _run_single_hook(runner, repository, hook_id, all_files=False, verbose=False
output = '\n'.join([stdout, stderr]).strip()
if retcode != repository.hooks[hook_id]['expected_return_value']:
retcode = 1
- color = RED
+ print_color = color.RED
pass_fail = 'Failed'
else:
retcode = 0
- color = GREEN
+ print_color = color.GREEN
pass_fail = 'Passed'
- print('{0}{1}{2}'.format(color, pass_fail, NORMAL))
+ print(color.format_color(pass_fail, print_color, args.color))
- if output and (retcode or verbose):
+ if output and (retcode or args.verbose):
print('\n' + output)
return retcode
-def run_hooks(runner, all_files=False, verbose=False):
+def run_hooks(runner, args):
"""Actually run the hooks."""
retval = 0
for repo in runner.repositories:
for hook_id in repo.hooks:
- retval |= _run_single_hook(
- runner,
- repo,
- hook_id,
- all_files=all_files,
- verbose=verbose,
- )
+ retval |= _run_single_hook(runner, repo, hook_id, args)
return retval
-def run_single_hook(runner, hook_id, all_files=False, verbose=False):
+def run_single_hook(runner, hook_id, args):
for repo in runner.repositories:
if hook_id in repo.hooks:
- return _run_single_hook(
- runner,
- repo,
- hook_id,
- all_files=all_files,
- verbose=verbose,
- )
+ return _run_single_hook(runner, repo, hook_id, args)
else:
print('No hook with id `{0}`'.format(hook_id))
return 1
+def _run(runner, args):
+ if args.hook:
+ return run_single_hook(runner, args.hook, args)
+ else:
+ return run_hooks(runner, args)
+
+
@entry
def run(argv):
parser = argparse.ArgumentParser()
@@ -115,6 +108,10 @@ def run(argv):
help='Run on all the files in the repo.',
)
run.add_argument('--verbose', '-v', action='store_true', default=False)
+ run.add_argument(
+ '--color', default='auto', type=color.use_color,
+ help='Whether to use color in output. Defaults to `auto`',
+ )
help = subparsers.add_parser('help', help='Show help for a specific command.')
help.add_argument('help_cmd', nargs='?', help='Command to show help for.')
@@ -135,17 +132,7 @@ def run(argv):
elif args.command == 'autoupdate':
return commands.autoupdate(runner)
elif args.command == 'run':
- if args.hook:
- return run_single_hook(
- runner,
- args.hook,
- all_files=args.all_files,
- verbose=args.verbose,
- )
- else:
- return run_hooks(
- runner, all_files=args.all_files, verbose=args.verbose,
- )
+ return _run(runner, args)
elif args.command == 'help':
if args.help_cmd:
parser.parse_args([args.help_cmd, '--help'])
| diff --git a/tests/color_test.py b/tests/color_test.py
new file mode 100644
--- /dev/null
+++ b/tests/color_test.py
@@ -0,0 +1,41 @@
+
+import mock
+import pytest
+import sys
+
+from pre_commit.color import format_color
+from pre_commit.color import GREEN
+from pre_commit.color import InvalidColorSetting
+from pre_commit.color import use_color
+
+
[email protected](('in_text', 'in_color', 'in_use_color', 'expected'), (
+ ('foo', GREEN, True, '{0}foo\033[0m'.format(GREEN)),
+ ('foo', GREEN, False, 'foo'),
+))
+def test_format_color(in_text, in_color, in_use_color, expected):
+ ret = format_color(in_text, in_color, in_use_color)
+ assert ret == expected
+
+
+def test_use_color_never():
+ assert use_color('never') is False
+
+
+def test_use_color_always():
+ assert use_color('always') is True
+
+
+def test_use_color_no_tty():
+ with mock.patch.object(sys.stdout, 'isatty', return_value=False):
+ assert use_color('auto') is False
+
+
+def test_use_color_tty():
+ with mock.patch.object(sys.stdout, 'isatty', return_value=True):
+ assert use_color('auto') is True
+
+
+def test_use_color_raises_if_given_shenanigans():
+ with pytest.raises(InvalidColorSetting):
+ use_color('herpaderp')
| Implement color correctly
The command line needs a `--color` option with `'always'`, `'never'`, and `'auto'` as choices.
`auto` - Use `sys.stdout.isatty()`
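A minimal sketch of that decision (essentially what the `use_color` helper added in the patch above implements):
```
import sys

def should_use_color(setting):
    if setting not in ('auto', 'always', 'never'):
        raise ValueError('invalid --color value: {0}'.format(setting))
    # 'always' forces color, 'never' disables it, and 'auto' colors only
    # when stdout is attached to a terminal.
    return setting == 'always' or (setting == 'auto' and sys.stdout.isatty())
```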
| 2014-04-06T02:32:14 |
|
pre-commit/pre-commit | 67 | pre-commit__pre-commit-67 | [
"66"
] | 460582dacd0e030965af4beb5e89a1591a9ae25f | diff --git a/pre_commit/logging_handler.py b/pre_commit/logging_handler.py
--- a/pre_commit/logging_handler.py
+++ b/pre_commit/logging_handler.py
@@ -16,7 +16,7 @@
class LoggingHandler(logging.Handler):
def __init__(self, use_color):
- super(LoggingHandler, self).__init__()
+ logging.Handler.__init__(self)
self.use_color = use_color
def emit(self, record):
| diff --git a/tests/logging_handler_test.py b/tests/logging_handler_test.py
new file mode 100644
--- /dev/null
+++ b/tests/logging_handler_test.py
@@ -0,0 +1,38 @@
+import __builtin__
+import mock
+import pytest
+
+from pre_commit import color
+from pre_commit.logging_handler import LoggingHandler
+
+
[email protected]_fixture
+def print_mock():
+ with mock.patch.object(__builtin__, 'print', autospec=True) as print_mock:
+ yield print_mock
+
+
+class FakeLogRecord(object):
+ def __init__(self, message, levelname, levelno):
+ self.message = message
+ self.levelname = levelname
+ self.levelno = levelno
+
+ def getMessage(self):
+ return self.message
+
+
+def test_logging_handler_color(print_mock):
+ handler = LoggingHandler(True)
+ handler.emit(FakeLogRecord('hi', 'WARNING', 30))
+ print_mock.assert_called_once_with(
+ color.YELLOW + '[WARNING]' + color.NORMAL + ' hi',
+ )
+
+
+def test_logging_handler_no_color(print_mock):
+ handler = LoggingHandler(False)
+ handler.emit(FakeLogRecord('hi', 'WARNING', 30))
+ print_mock.assert_called_once_with(
+ '[WARNING] hi',
+ )
| TypeError while instantiating LoggingHandler (2.6)
I assume this is new-style vs old-style classes being grumpy?
```
>>> from pre_commit.logging_handler import LoggingHandler
>>> LoggingHandler(True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../py_env/lib/python2.6/site-packages/pre_commit/logging_handler.py", line 19, in __init__
super(LoggingHandler, self).__init__()
TypeError: super() argument 1 must be type, not classobj
```
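That is exactly it: the traceback shows `super()` rejecting an old-style class, so on Python 2.6 the workaround is to call the base class directly. A sketch:
```
import logging

class MyHandler(logging.Handler):
    def __init__(self):
        # On Python 2.6, logging.Handler is an old-style class, so
        # super(MyHandler, self).__init__() raises the TypeError above;
        # calling the base class explicitly works for both class styles.
        logging.Handler.__init__(self)

    def emit(self, record):
        print(record.getMessage())
```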
| 2014-04-08T00:08:50 |
|
pre-commit/pre-commit | 79 | pre-commit__pre-commit-79 | [
"69"
] | e297b210a576ae6a784440efb8c63cb54d653027 | diff --git a/pre_commit/commands.py b/pre_commit/commands.py
--- a/pre_commit/commands.py
+++ b/pre_commit/commands.py
@@ -151,6 +151,26 @@ def _run_single_hook(runner, repository, hook_id, args, write):
hook = repository.hooks[hook_id]
+ filenames = get_filenames(hook['files'], hook['exclude'])
+ if not filenames:
+ no_files_msg = '(no files to check) '
+ skipped_msg = 'Skipped'
+ write(
+ '{0}{1}{2}{3}\n'.format(
+ hook['name'],
+ '.' * (
+ COLS -
+ len(hook['name']) -
+ len(no_files_msg) -
+ len(skipped_msg) -
+ 6
+ ),
+ no_files_msg,
+ color.format_color(skipped_msg, color.TURQUOISE, args.color),
+ )
+ )
+ return 0
+
# Print the hook and the dots first in case the hook takes hella long to
# run.
write(
@@ -164,7 +184,7 @@ def _run_single_hook(runner, repository, hook_id, args, write):
retcode, stdout, stderr = repository.run_hook(
runner.cmd_runner,
hook_id,
- get_filenames(hook['files'], hook['exclude']),
+ filenames,
)
if retcode != repository.hooks[hook_id]['expected_return_value']:
| diff --git a/tests/commands_test.py b/tests/commands_test.py
--- a/tests/commands_test.py
+++ b/tests/commands_test.py
@@ -192,86 +192,51 @@ def get_write_mock_output(write_mock):
return ''.join(call[0][0] for call in write_mock.call_args_list)
-def test_run_all_hooks_passing(repo_with_passing_hook):
- stage_a_file()
- runner = Runner(repo_with_passing_hook)
+def _test_run(repo, options, expected_outputs, expected_ret, stage):
+ if stage:
+ stage_a_file()
+ runner = Runner(repo)
args = auto_namedtuple(
- all_files=False, color=False, verbose=False, hook=None,
- )
- write_mock = mock.Mock()
- ret = commands.run(runner, args, write=write_mock)
- assert ret == 0
- printed = get_write_mock_output(write_mock)
- assert 'Bash hook' in printed
- assert 'Passed' in printed
-
-
-def test_verbose_prints_output(repo_with_passing_hook):
- stage_a_file()
- runner = Runner(repo_with_passing_hook)
- args = auto_namedtuple(
- all_files=False, color=False, verbose=True, hook=None,
+ **dict(
+ dict(all_files=False, color=False, verbose=False, hook=None),
+ **options
+ )
)
write_mock = mock.Mock()
ret = commands.run(runner, args, write=write_mock)
- assert ret == 0
+ assert ret == expected_ret
printed = get_write_mock_output(write_mock)
- assert 'foo.py\nHello World\n' in printed
+ for expected_output_part in expected_outputs:
+ assert expected_output_part in printed
def test_run_all_hooks_failing(repo_with_failing_hook):
- stage_a_file()
- runner = Runner(repo_with_failing_hook)
- args = auto_namedtuple(
- all_files=False, color=False, verbose=False, hook=None,
- )
- write_mock = mock.Mock()
- ret = commands.run(runner, args, write=write_mock)
- assert ret == 1
- printed = get_write_mock_output(write_mock)
- assert 'Failing hook' in printed
- assert 'Failed' in printed
- assert 'Fail\nfoo.py\n' in printed
-
-
-def test_run_a_specific_hook(repo_with_passing_hook):
- stage_a_file()
- runner = Runner(repo_with_passing_hook)
- args = auto_namedtuple(
- all_files=False, color=False, verbose=False, hook='bash_hook',
- )
- write_mock = mock.Mock()
- ret = commands.run(runner, args, write=write_mock)
- assert ret == 0
- printed = get_write_mock_output(write_mock)
- assert 'Bash hook' in printed
- assert 'Passed' in printed
-
-
-def test_run_a_non_existing_hook(repo_with_passing_hook):
- stage_a_file()
- runner = Runner(repo_with_passing_hook)
- args = auto_namedtuple(
- all_files=False, color=False, verbose=False, hook='nope',
+ _test_run(
+ repo_with_failing_hook,
+ {},
+ ('Failing hook', 'Failed', 'Fail\nfoo.py\n'),
+ 1,
+ True,
)
- write_mock = mock.Mock()
- ret = commands.run(runner, args, write=write_mock)
- assert ret == 1
- printed = get_write_mock_output(write_mock)
- assert 'No hook with id `nope`' in printed
-def test_run_all_files(repo_with_passing_hook):
- stage_a_file()
- runner = Runner(repo_with_passing_hook)
- args = auto_namedtuple(
- all_files=True, color=False, verbose=True, hook=None,
[email protected](
+ ('options', 'outputs', 'expected_ret', 'stage'),
+ (
+ ({}, ('Bash hook', 'Passed'), 0, True),
+ ({'verbose': True}, ('foo.py\nHello World',), 0, True),
+ ({'hook': 'bash_hook'}, ('Bash hook', 'Passed'), 0, True),
+ ({'hook': 'nope'}, ('No hook with id `nope`',), 1, True),
+ # All the files in the repo.
+ # This seems kind of weird but it is beacuse py.test reuses fixtures
+ (
+ {'all_files': True, 'verbose': True},
+ ('hooks.yaml', 'bin/hook.sh', 'foo.py', 'dummy'),
+ 0,
+ True,
+ ),
+ ({}, ('Bash hook', '(no files to check)', 'Skipped'), 0, False),
)
- write_mock = mock.Mock()
- ret = commands.run(runner, args, write=write_mock)
- assert ret == 0
- printed = get_write_mock_output(write_mock)
- # These are all the files checked into the repo.
- # This seems kind of weird but it is because py.test reuses fixtures
- for filename in 'hooks.yaml bin/hook.sh foo.py dummy'.split():
- assert filename in printed
+)
+def test_run(repo_with_passing_hook, options, outputs, expected_ret, stage):
+ _test_run(repo_with_passing_hook, options, outputs, expected_ret, stage)
| Skip hook if there are no files to run for it.
This blocks adding `flake8` as a hook, since it explodes when there are no files.
This will also be a bit of a performance hack.
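The shape of the change, as a sketch (the `run` callable stands in for actually invoking the hook):
```
def run_or_skip(hook, filenames, run):
    # If the hook matched no files, report it as skipped instead of invoking
    # tools (like flake8) that blow up on an empty file list.
    if not filenames:
        print('{0} (no files to check) Skipped'.format(hook['name']))
        return 0
    return run(hook, filenames)
```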
| 2014-04-14T00:39:00 |
|
pre-commit/pre-commit | 81 | pre-commit__pre-commit-81 | [
"76"
] | 6d8621c09ce910eea9c898b9f1bf8703e00b8fc6 | diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -17,11 +17,11 @@ def staged_files_only(cmd_runner):
cmd_runner - PrefixedCommandRunner
"""
# Determine if there are unstaged files
- retcode, _, _ = cmd_runner.run(
- ['git', 'diff-files', '--quiet'],
+ retcode, diff_stdout, _ = cmd_runner.run(
+ ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],
retcode=None,
)
- if retcode:
+ if retcode and diff_stdout.strip():
patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))
logger.warning('Unstaged files detected.')
logger.info(
@@ -29,7 +29,7 @@ def staged_files_only(cmd_runner):
)
# Save the current unstaged changes as a patch
with open(patch_filename, 'w') as patch_file:
- cmd_runner.run(['git', 'diff', '--binary'], stdout=patch_file)
+ patch_file.write(diff_stdout)
# Clear the working directory of unstaged changes
cmd_runner.run(['git', 'checkout', '--', '.'])
| diff --git a/tests/staged_files_only_test.py b/tests/staged_files_only_test.py
--- a/tests/staged_files_only_test.py
+++ b/tests/staged_files_only_test.py
@@ -1,3 +1,5 @@
+import logging
+import mock
import os.path
import pytest
import shutil
@@ -215,3 +217,32 @@ def test_sub_something_unstaged(sub_staged, cmd_runner):
_test_sub_state(sub_staged, 'sha2', 'AM')
_test_sub_state(sub_staged, 'sha2', 'AM')
+
+
[email protected]_fixture
+def fake_logging_handler():
+ class FakeHandler(logging.Handler):
+ def __init__(self):
+ logging.Handler.__init__(self)
+ self.logs = []
+
+ def emit(self, record):
+ self.logs.append(record)
+
+ pre_commit_logger = logging.getLogger('pre_commit')
+ original_level = pre_commit_logger.getEffectiveLevel()
+ handler = FakeHandler()
+ pre_commit_logger.addHandler(handler)
+ pre_commit_logger.setLevel(logging.WARNING)
+ yield handler
+ pre_commit_logger.setLevel(original_level)
+ pre_commit_logger.removeHandler(handler)
+
+
+def test_diff_returns_1_no_diff_though(fake_logging_handler, foo_staged):
+ cmd_runner = mock.Mock()
+ cmd_runner.run.return_value = (1, '', '')
+ cmd_runner.path.return_value = '.pre-commit-files_patch'
+ with staged_files_only(cmd_runner):
+ pass
+ assert not fake_logging_handler.logs
| Occasional flakiness of staged file stasher
It appears `git diff-files` is returning a non-zero status incorrectly in some case that I haven't been able to pinpoint.
It results in something like this (note, however, that all of the files are staged):
```
$ pre-commit
[WARNING] Unstaged files detected.
Stashing unstaged files to /home/anthony/workspace/pre-commit/.pre-commit-files/patch1397370090.
Trim Trailing Whitespace............................................Passed
Fix End of Files....................................................Passed
Check Yaml..........................................................Passed
Debug Statements (Python)...........................................Passed
Tests should end in _test.py........................................Passed
Pyflakes............................................................Passed
Validate Pre-Commit Config..........................................Passed
Validate Pre-Commit Manifest........................................Passed
[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...
Traceback (most recent call last):
File "/home/anthony/workspace/pre-commit/venv-pre_commit/bin/pre-commit", line 9, in <module>
load_entry_point('pre-commit==0.0.0', 'console_scripts', 'pre-commit')()
File "/home/anthony/workspace/pre-commit/pre_commit/util.py", line 52, in wrapper
return func(argv)
File "/home/anthony/workspace/pre-commit/pre_commit/run.py", line 143, in run
return _run(runner, args)
File "/home/anthony/workspace/pre-commit/pre_commit/run.py", line 95, in _run
return run_hooks(runner, args)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/home/anthony/workspace/pre-commit/pre_commit/staged_files_only.py", line 51, in staged_files_only
cmd_runner.run(['git', 'apply', patch_filename])
File "/home/anthony/workspace/pre-commit/pre_commit/prefixed_command_runner.py", line 67, in run
returncode, replaced_cmd, retcode, output=(stdout, stderr),
pre_commit.prefixed_command_runner.CalledProcessError: Command: ['git', 'apply', '/home/anthony/workspace/pre-commit/.pre-commit-files/patch1397370090']
Return code: 128
Expected return code: 0
Output: ('', 'fatal: unrecognized input\n')
$ git status
# On branch rebuild_venv
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: .gitignore
# modified: Makefile
#
```
The "stashed diff" is an empty file. I think the "fix" is to check whether the diff actually contains anything before printing the warning message and entering the stashing (non-noop) branch of the context manager.
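A sketch of that check, trusting the diff output rather than the exit code alone (this mirrors the approach taken in the patch above):
```
import subprocess

def get_unstaged_diff():
    # --exit-code makes git return non-zero when there are changes, but the
    # decisive signal is whether any diff text was actually produced.
    proc = subprocess.Popen(
        ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],
        stdout=subprocess.PIPE,
    )
    diff_stdout, _ = proc.communicate()
    if proc.returncode and diff_stdout.strip():
        return diff_stdout  # real unstaged changes worth stashing
    return None
```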
| 2014-04-14T04:24:09 |
|
pre-commit/pre-commit | 83 | pre-commit__pre-commit-83 | [
"82"
] | e9bc7acd7b72b9017bb4f50cff80a24e332e4bb8 | diff --git a/pre_commit/commands.py b/pre_commit/commands.py
--- a/pre_commit/commands.py
+++ b/pre_commit/commands.py
@@ -229,11 +229,21 @@ def _run_hook(runner, hook_id, args, write):
return 1
+def _has_unmerged_paths(runner):
+ _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])
+ return bool(stdout.strip())
+
+
def run(runner, args, write=sys.stdout.write):
# Set up our logging handler
logger.addHandler(LoggingHandler(args.color, write=write))
logger.setLevel(logging.INFO)
+ # Check if we have unresolved merge conflict files and fail fast.
+ if _has_unmerged_paths(runner):
+ logger.error('Unmerged files. Resolve before committing.')
+ return 1
+
if args.no_stash or args.all_files:
ctx = noop_context()
else:
| diff --git a/tests/commands_test.py b/tests/commands_test.py
--- a/tests/commands_test.py
+++ b/tests/commands_test.py
@@ -275,3 +275,47 @@ def test_no_stash(repo_with_passing_hook, no_stash, all_files, expect_stash):
assert warning_msg in printed
else:
assert warning_msg not in printed
+
+
[email protected](('output', 'expected'), (('some', True), ('', False)))
+def test_has_unmerged_paths(output, expected):
+ mock_runner = mock.Mock()
+ mock_runner.cmd_runner.run.return_value = (1, output, '')
+ assert commands._has_unmerged_paths(mock_runner) is expected
+
+
[email protected]_fixture
+def in_merge_conflict(repo_with_passing_hook):
+ local['git']['add', C.CONFIG_FILE]()
+ local['git']['commit', '-m' 'add hooks file']()
+ local['git']['clone', '.', 'foo']()
+ with local.cwd('foo'):
+ local['git']['checkout', 'origin/master', '-b', 'foo']()
+ with open('conflict_file', 'w') as conflict_file:
+ conflict_file.write('herp\nderp\n')
+ local['git']['add', 'conflict_file']()
+ local['git']['commit', '-m', 'conflict_file']()
+ local['git']['checkout', 'origin/master', '-b', 'bar']()
+ with open('conflict_file', 'w') as conflict_file:
+ conflict_file.write('harp\nddrp\n')
+ local['git']['add', 'conflict_file']()
+ local['git']['commit', '-m', 'conflict_file']()
+ local['git']['merge', 'foo'](retcode=None)
+ yield os.path.join(repo_with_passing_hook, 'foo')
+
+
+def test_merge_conflict(in_merge_conflict):
+ ret, printed = _do_run(in_merge_conflict, _get_opts())
+ assert ret == 1
+ assert 'Unmerged files. Resolve before committing.' in printed
+
+
+def test_merge_conflict_modified(in_merge_conflict):
+ # Touch another file so we have unstaged non-conflicting things
+ assert os.path.exists('dummy')
+ with open('dummy', 'w') as dummy_file:
+ dummy_file.write('bar\nbaz\n')
+
+ ret, printed = _do_run(in_merge_conflict, _get_opts())
+ assert ret == 1
+ assert 'Unmerged files. Resolve before committing.' in printed
diff --git a/tests/staged_files_only_test.py b/tests/staged_files_only_test.py
--- a/tests/staged_files_only_test.py
+++ b/tests/staged_files_only_test.py
@@ -227,7 +227,7 @@ def __init__(self):
self.logs = []
def emit(self, record):
- self.logs.append(record)
+ self.logs.append(record) # pragma: no cover (only hit in failure)
pre_commit_logger = logging.getLogger('pre_commit')
original_level = pre_commit_logger.getEffectiveLevel()
| pre-commit crashes when running during unresolved merge conflict
I intentionally forced the following by making two branches conflict and then editing a file on that branch. `pre-commit` should fail-fast in a merge conflict situation.
```
$ git diff --exit-code
diff --cc foo.txt
index 8ff26e7,c148433..0000000
--- a/foo.txt
+++ b/foo.txt
@@@ -1,4 -1,5 +1,11 @@@
asdf
++<<<<<<< HEAD
+fdsa
+yeah
+yeah
++=======
+ asdf
+ asdf
+ asdf
+
++>>>>>>> derp
diff --git a/git_code_debt/generate.py b/git_code_debt/generate.py
index 12ceec6..967506e 100644
--- a/git_code_debt/generate.py
+++ b/git_code_debt/generate.py
@@ -12,6 +12,7 @@ from git_code_debt.logic import get_previous_sha
from git_code_debt.logic import insert_metric_values
from git_code_debt.repo_parser import RepoParser
+
def get_metrics(diff, metric_parsers):
def get_all_metrics(file_diff_stats):
for metric_parser_cls in metric_parsers:
(py_env)[anthony@anthony-VirtualBox git-code-debt (herp|MERGING)]$ echo $?
1
(py_env)[anthony@anthony-VirtualBox git-code-debt (herp|MERGING)]$ pre-commit
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /tmp/git-code-debt/.pre-commit-files/patch1397455577.
Traceback (most recent call last):
File "/tmp/git-code-debt/py_env/bin/pre-commit", line 9, in <module>
load_entry_point('pre-commit==0.0.0', 'console_scripts', 'pre-commit')()
File "/tmp/git-code-debt/py_env/local/lib/python2.7/site-packages/pre_commit/util.py", line 52, in wrapper
return func(argv)
File "/tmp/git-code-debt/py_env/local/lib/python2.7/site-packages/pre_commit/run.py", line 59, in run
return commands.run(runner, args)
File "/tmp/git-code-debt/py_env/local/lib/python2.7/site-packages/pre_commit/commands.py", line 242, in run
with ctx:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/tmp/git-code-debt/py_env/local/lib/python2.7/site-packages/pre_commit/staged_files_only.py", line 35, in staged_files_only
cmd_runner.run(['git', 'checkout', '--', '.'])
File "/tmp/git-code-debt/py_env/local/lib/python2.7/site-packages/pre_commit/prefixed_command_runner.py", line 77, in run
returncode, replaced_cmd, retcode, output=(stdout, stderr),
pre_commit.prefixed_command_runner.CalledProcessError: Command: ['git', 'checkout', '--', '.']
Return code: 1
Expected return code: 0
Output: (u'', u"error: path 'foo.txt' is unmerged\n")
(py_env)[anthony@anthony-VirtualBox git-code-debt (herp|MERGING)]$
```
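One way to fail fast is to check the index for unmerged entries before doing anything else (this is the `git ls-files --unmerged` check the patch above adds); a sketch:
```
import subprocess

def has_unmerged_paths():
    # `git ls-files --unmerged` prints one line per conflicted index entry,
    # so any output at all means the merge has not been resolved yet.
    out = subprocess.check_output(['git', 'ls-files', '--unmerged'])
    return bool(out.strip())
```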
| 2014-04-14T06:54:35 |
|
pre-commit/pre-commit | 86 | pre-commit__pre-commit-86 | [
"85"
] | 9c35a113eefc63e939f5a229526fca264b061094 | diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -1,4 +1,5 @@
import contextlib
+import io
import logging
import time
@@ -28,7 +29,7 @@ def staged_files_only(cmd_runner):
'Stashing unstaged files to {0}.'.format(patch_filename),
)
# Save the current unstaged changes as a patch
- with open(patch_filename, 'w') as patch_file:
+ with io.open(patch_filename, 'w', encoding='utf-8') as patch_file:
patch_file.write(diff_stdout)
# Clear the working directory of unstaged changes
| diff --git a/tests/staged_files_only_test.py b/tests/staged_files_only_test.py
--- a/tests/staged_files_only_test.py
+++ b/tests/staged_files_only_test.py
@@ -1,3 +1,6 @@
+from __future__ import unicode_literals
+
+import io
import logging
import mock
import os.path
@@ -20,14 +23,18 @@ def get_short_git_status():
return dict(reversed(line.split()) for line in git_status.splitlines())
[email protected]_fixture
-def foo_staged(empty_git_dir):
+def write_gitignore():
with open('.gitignore', 'w') as gitignore_file:
gitignore_file.write(C.HOOKS_WORKSPACE + '\n')
+
+
[email protected]_fixture
+def foo_staged(empty_git_dir):
+ write_gitignore()
local['git']['add', '.']()
local['git']['commit', '-m', 'add gitignore']()
- with open('foo', 'w') as foo_file:
+ with io.open('foo', 'w') as foo_file:
foo_file.write(FOO_CONTENTS)
local['git']['add', 'foo']()
foo_filename = os.path.join(empty_git_dir, 'foo')
@@ -41,7 +48,7 @@ def cmd_runner():
def _test_foo_state(path, foo_contents=FOO_CONTENTS, status='A'):
assert os.path.exists(path.foo_filename)
- assert open(path.foo_filename).read() == foo_contents
+ assert io.open(path.foo_filename, encoding='utf-8').read() == foo_contents
actual_status = get_short_git_status()['foo']
assert status == actual_status
@@ -57,7 +64,7 @@ def test_foo_nothing_unstaged(foo_staged, cmd_runner):
def test_foo_something_unstaged(foo_staged, cmd_runner):
- with open(foo_staged.foo_filename, 'w') as foo_file:
+ with io.open(foo_staged.foo_filename, 'w') as foo_file:
foo_file.write('herp\nderp\n')
_test_foo_state(foo_staged, 'herp\nderp\n', 'AM')
@@ -69,7 +76,7 @@ def test_foo_something_unstaged(foo_staged, cmd_runner):
def test_foo_both_modify_non_conflicting(foo_staged, cmd_runner):
- with open(foo_staged.foo_filename, 'w') as foo_file:
+ with io.open(foo_staged.foo_filename, 'w') as foo_file:
foo_file.write(FOO_CONTENTS + '9\n')
_test_foo_state(foo_staged, FOO_CONTENTS + '9\n', 'AM')
@@ -78,7 +85,7 @@ def test_foo_both_modify_non_conflicting(foo_staged, cmd_runner):
_test_foo_state(foo_staged)
# Modify the file as part of the "pre-commit"
- with open(foo_staged.foo_filename, 'w') as foo_file:
+ with io.open(foo_staged.foo_filename, 'w') as foo_file:
foo_file.write(FOO_CONTENTS.replace('1', 'a'))
_test_foo_state(foo_staged, FOO_CONTENTS.replace('1', 'a'), 'AM')
@@ -87,7 +94,7 @@ def test_foo_both_modify_non_conflicting(foo_staged, cmd_runner):
def test_foo_both_modify_conflicting(foo_staged, cmd_runner):
- with open(foo_staged.foo_filename, 'w') as foo_file:
+ with io.open(foo_staged.foo_filename, 'w') as foo_file:
foo_file.write(FOO_CONTENTS.replace('1', 'a'))
_test_foo_state(foo_staged, FOO_CONTENTS.replace('1', 'a'), 'AM')
@@ -96,7 +103,7 @@ def test_foo_both_modify_conflicting(foo_staged, cmd_runner):
_test_foo_state(foo_staged)
# Modify in the same place as the stashed diff
- with open(foo_staged.foo_filename, 'w') as foo_file:
+ with io.open(foo_staged.foo_filename, 'w') as foo_file:
foo_file.write(FOO_CONTENTS.replace('1', 'b'))
_test_foo_state(foo_staged, FOO_CONTENTS.replace('1', 'b'), 'AM')
@@ -106,8 +113,7 @@ def test_foo_both_modify_conflicting(foo_staged, cmd_runner):
@pytest.yield_fixture
def img_staged(empty_git_dir):
- with open('.gitignore', 'w') as gitignore_file:
- gitignore_file.write(C.HOOKS_WORKSPACE + '\n')
+ write_gitignore()
local['git']['add', '.']()
local['git']['commit', '-m', 'add gitignore']()
@@ -120,8 +126,8 @@ def img_staged(empty_git_dir):
def _test_img_state(path, expected_file='img1.jpg', status='A'):
assert os.path.exists(path.img_filename)
assert (
- open(path.img_filename, 'rb').read() ==
- open(get_resource_path(expected_file), 'rb').read()
+ io.open(path.img_filename, 'rb').read() ==
+ io.open(get_resource_path(expected_file), 'rb').read()
)
actual_status = get_short_git_status()['img.jpg']
assert status == actual_status
@@ -246,3 +252,14 @@ def test_diff_returns_1_no_diff_though(fake_logging_handler, foo_staged):
with staged_files_only(cmd_runner):
pass
assert not fake_logging_handler.logs
+
+
+def test_stage_utf8_changes(foo_staged, cmd_runner):
+ contents = '\u2603'
+ with io.open('foo', 'w', encoding='utf-8') as foo_file:
+ foo_file.write(contents)
+
+ _test_foo_state(foo_staged, contents, 'AM')
+ with staged_files_only(cmd_runner):
+ _test_foo_state(foo_staged)
+ _test_foo_state(foo_staged, contents, 'AM')
| UnicodeDecodeError in staged_files_only
```
$ pre-commit
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to .../.pre-commit-files/patch1397853050.
Traceback (most recent call last):
File ".../bin/pre-commit", line 9, in <module>
load_entry_point('pre-commit==0.0.0', 'console_scripts', 'pre-commit')()
File ".../lib/python2.6/site-packages/pre_commit/util.py", line 52, in wrapper
return func(argv)
File ".../lib/python2.6/site-packages/pre_commit/run.py", line 59, in run
return commands.run(runner, args)
File ".../lib/python2.6/site-packages/pre_commit/commands.py", line 254, in run
with ctx:
File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__
return self.gen.next()
File ".../lib/python2.6/site-packages/pre_commit/staged_files_only.py", line 32, in staged_files_only
patch_file.write(diff_stdout)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 3795: ordinal not in range(128)
```
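The traceback is actually a `UnicodeEncodeError`: on Python 2, writing a unicode diff to a file opened with plain `open()` triggers an implicit ASCII encode. A minimal sketch of the encoding-safe write (via `io.open`, as in the patch above):
```
import io

def write_patch(patch_filename, diff_text):
    # io.open with an explicit encoding writes the unicode diff as UTF-8
    # instead of falling back to the default ASCII codec on Python 2.
    with io.open(patch_filename, 'w', encoding='utf-8') as patch_file:
        patch_file.write(diff_text)
```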
| 2014-04-18T21:07:43 |
|
pre-commit/pre-commit | 87 | pre-commit__pre-commit-87 | [
"40"
] | a60b3a3971aae369a50111eaaaaf201e2822c5d8 | diff --git a/pre_commit/commands.py b/pre_commit/commands.py
--- a/pre_commit/commands.py
+++ b/pre_commit/commands.py
@@ -144,7 +144,42 @@ def clean(runner):
return 0
-def _run_single_hook(runner, repository, hook_id, args, write):
+def _get_skips(environ):
+ skips = environ.get('SKIP', '')
+ return set(skip.strip() for skip in skips.split(',') if skip.strip())
+
+
+def _print_no_files_skipped(hook, write, args):
+ no_files_msg = '(no files to check) '
+ skipped_msg = 'Skipped'
+ write(
+ '{0}{1}{2}{3}\n'.format(
+ hook['name'],
+ '.' * (
+ COLS -
+ len(hook['name']) -
+ len(no_files_msg) -
+ len(skipped_msg) -
+ 6
+ ),
+ no_files_msg,
+ color.format_color(skipped_msg, color.TURQUOISE, args.color),
+ )
+ )
+
+
+def _print_user_skipped(hook, write, args):
+ skipped_msg = 'Skipped'
+ write(
+ '{0}{1}{2}\n'.format(
+ hook['name'],
+ '.' * (COLS - len(hook['name']) - len(skipped_msg) - 6),
+ color.format_color(skipped_msg, color.YELLOW, args.color),
+ ),
+ )
+
+
+def _run_single_hook(runner, repository, hook_id, args, write, skips=set()):
if args.all_files:
get_filenames = git.get_all_files_matching
elif git.is_in_merge_conflict():
@@ -155,23 +190,11 @@ def _run_single_hook(runner, repository, hook_id, args, write):
hook = repository.hooks[hook_id]
filenames = get_filenames(hook['files'], hook['exclude'])
- if not filenames:
- no_files_msg = '(no files to check) '
- skipped_msg = 'Skipped'
- write(
- '{0}{1}{2}{3}\n'.format(
- hook['name'],
- '.' * (
- COLS -
- len(hook['name']) -
- len(no_files_msg) -
- len(skipped_msg) -
- 6
- ),
- no_files_msg,
- color.format_color(skipped_msg, color.TURQUOISE, args.color),
- )
- )
+ if hook_id in skips:
+ _print_user_skipped(hook, write, args)
+ return 0
+ elif not filenames:
+ _print_no_files_skipped(hook, write, args)
return 0
# Print the hook and the dots first in case the hook takes hella long to
@@ -211,18 +234,23 @@ def _run_single_hook(runner, repository, hook_id, args, write):
return retcode
-def _run_hooks(runner, args, write):
+def _run_hooks(runner, args, write, environ):
"""Actually run the hooks."""
retval = 0
+ skips = _get_skips(environ)
+
for repo in runner.repositories:
for hook_id in repo.hooks:
- retval |= _run_single_hook(runner, repo, hook_id, args, write=write)
+ retval |= _run_single_hook(
+ runner, repo, hook_id, args, write, skips=skips,
+ )
return retval
-def _run_hook(runner, hook_id, args, write):
+def _run_hook(runner, args, write):
+ hook_id = args.hook
for repo in runner.repositories:
if hook_id in repo.hooks:
return _run_single_hook(runner, repo, hook_id, args, write=write)
@@ -236,7 +264,7 @@ def _has_unmerged_paths(runner):
return bool(stdout.strip())
-def run(runner, args, write=sys.stdout.write):
+def run(runner, args, write=sys.stdout.write, environ=os.environ):
# Set up our logging handler
logger.addHandler(LoggingHandler(args.color, write=write))
logger.setLevel(logging.INFO)
@@ -253,6 +281,6 @@ def run(runner, args, write=sys.stdout.write):
with ctx:
if args.hook:
- return _run_hook(runner, args.hook, args, write=write)
+ return _run_hook(runner, args, write=write)
else:
- return _run_hooks(runner, args, write=write)
+ return _run_hooks(runner, args, write=write, environ=environ)
| diff --git a/tests/commands_test.py b/tests/commands_test.py
--- a/tests/commands_test.py
+++ b/tests/commands_test.py
@@ -201,10 +201,10 @@ def _get_opts(all_files=False, color=False, verbose=False, hook=None, no_stash=F
)
-def _do_run(repo, args):
+def _do_run(repo, args, environ={}):
runner = Runner(repo)
write_mock = mock.Mock()
- ret = commands.run(runner, args, write=write_mock)
+ ret = commands.run(runner, args, write=write_mock, environ=environ)
printed = get_write_mock_output(write_mock)
return ret, printed
@@ -298,3 +298,35 @@ def test_merge_conflict_modified(in_merge_conflict):
ret, printed = _do_run(in_merge_conflict, _get_opts())
assert ret == 1
assert 'Unmerged files. Resolve before committing.' in printed
+
+
+def test_merge_conflict_resolved(in_merge_conflict):
+ local['git']['add', '.']()
+ ret, printed = _do_run(in_merge_conflict, _get_opts())
+ for msg in ('Checking merge-conflict files only.', 'Bash hook', 'Passed'):
+ assert msg in printed
+
+
[email protected](
+ ('environ', 'expected_output'),
+ (
+ ({}, set([])),
+ ({'SKIP': ''}, set([])),
+ ({'SKIP': ','}, set([])),
+ ({'SKIP': ',foo'}, set(['foo'])),
+ ({'SKIP': 'foo'}, set(['foo'])),
+ ({'SKIP': 'foo,bar'}, set(['foo', 'bar'])),
+ ({'SKIP': ' foo , bar'}, set(['foo', 'bar'])),
+ ),
+)
+def test_get_skips(environ, expected_output):
+ ret = commands._get_skips(environ)
+ assert ret == expected_output
+
+
+def test_skip_hook(repo_with_passing_hook):
+ ret, printed = _do_run(
+ repo_with_passing_hook, _get_opts(), {'SKIP': 'bash_hook'},
+ )
+ for msg in ('Bash hook', 'Skipped'):
+ assert msg in printed
| Add way to temporarily/permanently disable hooks.
[overcommit](https://github.com/causes/overcommit) uses environment variables to do temporary skipping...
For instance:
`SKIP=foo git commit` will skip the `foo` hook
Whereas in the past I've used a more permanent switch: `git config hooks.foo false`.
Considering both approaches, I think overcommit does this quite elegantly while focusing on only _temporarily_ disabling hooks.
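On the hook-runner side, the environment-variable approach boils down to reading a comma-separated `SKIP` list (essentially what the `_get_skips` helper in the patch above does); a sketch:
```
import os

def get_skipped_hook_ids(environ=os.environ):
    # SKIP="foo,bar" git commit  ->  {'foo', 'bar'}
    raw = environ.get('SKIP', '')
    return {part.strip() for part in raw.split(',') if part.strip()}
```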
| 2014-04-19T18:02:47 |
|
pre-commit/pre-commit | 89 | pre-commit__pre-commit-89 | [
"88"
] | edb04422b8f6065fa5af3ea7ba34aa8d426b5558 | diff --git a/pre_commit/commands.py b/pre_commit/commands.py
--- a/pre_commit/commands.py
+++ b/pre_commit/commands.py
@@ -5,7 +5,6 @@
import pkg_resources
import shutil
import stat
-import subprocess
import sys
from asottile.ordereddict import OrderedDict
from asottile.yaml import ordered_dump
@@ -19,6 +18,7 @@
from pre_commit.clientlib.validate_config import load_config
from pre_commit.jsonschema_extensions import remove_defaults
from pre_commit.logging_handler import LoggingHandler
+from pre_commit.output import get_hook_message
from pre_commit.repository import Repository
from pre_commit.staged_files_only import staged_files_only
from pre_commit.util import noop_context
@@ -26,10 +26,6 @@
logger = logging.getLogger('pre_commit')
-COLS = int(subprocess.Popen(['tput', 'cols'], stdout=subprocess.PIPE).communicate()[0])
-
-PASS_FAIL_LENGTH = 6
-
def install(runner):
"""Install the pre-commit hooks."""
@@ -107,7 +103,8 @@ def autoupdate(runner):
)
for repo_config in input_configs:
- print('Updating {0}...'.format(repo_config['repo']), end='')
+ sys.stdout.write('Updating {0}...'.format(repo_config['repo']))
+ sys.stdout.flush()
try:
new_repo_config = _update_repository(repo_config)
except RepositoryCannotBeUpdatedError as error:
@@ -149,34 +146,30 @@ def _get_skips(environ):
return set(skip.strip() for skip in skips.split(',') if skip.strip())
-def _print_no_files_skipped(hook, write, args):
- no_files_msg = '(no files to check) '
- skipped_msg = 'Skipped'
- write(
- '{0}{1}{2}{3}\n'.format(
- hook['name'],
- '.' * (
- COLS -
- len(hook['name']) -
- len(no_files_msg) -
- len(skipped_msg) -
- 1
- ),
- no_files_msg,
- color.format_color(skipped_msg, color.TURQUOISE, args.color),
- )
+def _hook_msg_start(hook, verbose):
+ return '{0}{1}'.format(
+ '[{0}] '.format(hook['id']) if verbose else '',
+ hook['name'],
)
+def _print_no_files_skipped(hook, write, args):
+ write(get_hook_message(
+ _hook_msg_start(hook, args.verbose),
+ postfix='(no files to check) ',
+ end_msg='Skipped',
+ end_color=color.TURQUOISE,
+ use_color=args.color,
+ ))
+
+
def _print_user_skipped(hook, write, args):
- skipped_msg = 'Skipped'
- write(
- '{0}{1}{2}\n'.format(
- hook['name'],
- '.' * (COLS - len(hook['name']) - len(skipped_msg) - 1),
- color.format_color(skipped_msg, color.YELLOW, args.color),
- ),
- )
+ write(get_hook_message(
+ _hook_msg_start(hook, args.verbose),
+ end_msg='Skipped',
+ end_color=color.YELLOW,
+ use_color=args.color,
+ ))
def _run_single_hook(runner, repository, hook_id, args, write, skips=set()):
@@ -199,12 +192,7 @@ def _run_single_hook(runner, repository, hook_id, args, write, skips=set()):
# Print the hook and the dots first in case the hook takes hella long to
# run.
- write(
- '{0}{1}'.format(
- hook['name'],
- '.' * (COLS - len(hook['name']) - PASS_FAIL_LENGTH - 1),
- ),
- )
+ write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))
sys.stdout.flush()
retcode, stdout, stderr = repository.run_hook(
diff --git a/pre_commit/output.py b/pre_commit/output.py
new file mode 100644
--- /dev/null
+++ b/pre_commit/output.py
@@ -0,0 +1,66 @@
+import subprocess
+
+from pre_commit import color
+
+
+# TODO: smell: import side-effects
+COLS = int(
+ subprocess.Popen(['tput', 'cols'], stdout=subprocess.PIPE).communicate()[0]
+)
+
+
+def get_hook_message(
+ start,
+ postfix='',
+ end_msg=None,
+ end_len=0,
+ end_color=None,
+ use_color=None,
+ cols=COLS,
+):
+ """Prints a message for running a hook.
+
+ This currently supports three approaches:
+
+ # Print `start` followed by dots, leaving 6 characters at the end
+ >>> print_hook_message('start', end_len=6)
+ start...............................................................
+
+ # Print `start` followed by dots with the end message colored if coloring is
+ # specified and a newline afterwards
+ >>> print_hook_message(
+ 'start',
+ end_msg='end',
+ end_color=color.RED,
+ use_color=True,
+ )
+ start...................................................................end
+
+ # Print `start` followed by dots, followed by the `postfix` message
+ # uncolored, followed by the `end_msg` colored if specified and a newline
+ # afterwards
+ >>> print_hook_message(
+ 'start',
+ postfix='postfix ',
+ end_msg='end',
+ end_color=color.RED,
+ use_color=True,
+ )
+ start...........................................................postfix end
+ """
+ if bool(end_msg) == bool(end_len):
+ raise ValueError('Expected one of (`end_msg`, `end_len`)')
+ if end_msg is not None and (end_color is None or use_color is None):
+ raise ValueError(
+ '`end_color` and `use_color` are required with `end_msg`'
+ )
+
+ if end_len:
+ return start + '.' * (cols - len(start) - end_len - 1)
+ else:
+ return '{0}{1}{2}{3}\n'.format(
+ start,
+ '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),
+ postfix,
+ color.format_color(end_msg, end_color, use_color),
+ )
| diff --git a/tests/commands_test.py b/tests/commands_test.py
--- a/tests/commands_test.py
+++ b/tests/commands_test.py
@@ -330,3 +330,13 @@ def test_skip_hook(repo_with_passing_hook):
)
for msg in ('Bash hook', 'Skipped'):
assert msg in printed
+
+
+def test_hook_id_not_in_non_verbose_output(repo_with_passing_hook):
+ ret, printed = _do_run(repo_with_passing_hook, _get_opts(verbose=False))
+ assert '[bash_hook]' not in printed
+
+
+def test_hook_id_in_verbose_output(repo_with_passing_hook):
+ ret, printed = _do_run(repo_with_passing_hook, _get_opts(verbose=True))
+ assert '[bash_hook] Bash hook' in printed
diff --git a/tests/output_test.py b/tests/output_test.py
new file mode 100644
--- /dev/null
+++ b/tests/output_test.py
@@ -0,0 +1,77 @@
+import pytest
+
+from pre_commit import color
+from pre_commit.output import get_hook_message
+
+
[email protected](
+ 'kwargs',
+ (
+ # both end_msg and end_len
+ {'end_msg': 'end', 'end_len': 1, 'end_color': '', 'use_color': True},
+ # Neither end_msg nor end_len
+ {},
+ # Neither color option for end_msg
+ {'end_msg': 'end'},
+ # No use_color for end_msg
+ {'end_msg': 'end', 'end_color': ''},
+ # No end_color for end_msg
+ {'end_msg': 'end', 'use_color': ''},
+ ),
+)
+def test_get_hook_message_raises(kwargs):
+ with pytest.raises(ValueError):
+ get_hook_message('start', **kwargs)
+
+
+def test_case_with_end_len():
+ ret = get_hook_message('start', end_len=5, cols=15)
+ assert ret == 'start' + '.' * 4
+
+
+def test_case_with_end_msg():
+ ret = get_hook_message(
+ 'start',
+ end_msg='end',
+ end_color='',
+ use_color=False,
+ cols=15,
+ )
+ assert ret == 'start' + '.' * 6 + 'end' + '\n'
+
+
+def test_case_with_end_msg_using_color():
+ ret = get_hook_message(
+ 'start',
+ end_msg='end',
+ end_color=color.RED,
+ use_color=True,
+ cols=15,
+ )
+ assert ret == 'start' + '.' * 6 + color.RED + 'end' + color.NORMAL + '\n'
+
+
+def test_case_with_postfix_message():
+ ret = get_hook_message(
+ 'start',
+ postfix='post ',
+ end_msg='end',
+ end_color='',
+ use_color=False,
+ cols=20,
+ )
+ assert ret == 'start' + '.' * 6 + 'post ' + 'end' + '\n'
+
+
+def test_make_sure_postfix_is_not_colored():
+ ret = get_hook_message(
+ 'start',
+ postfix='post ',
+ end_msg='end',
+ end_color=color.RED,
+ use_color=True,
+ cols=20,
+ )
+ assert ret == (
+ 'start' + '.' * 6 + 'post ' + color.RED + 'end' + color.NORMAL + '\n'
+ )
| Display hook ids when running pre-commit
The ideal output would look something like this:
[flake8] Flake8..........................................Passed
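A sketch of producing such a line: prefix the hook name with its id and pad with dots up to the terminal width (the width and names here are arbitrary):
```
def hook_status_line(hook_id, name, status, cols=72):
    start = '[{0}] {1}'.format(hook_id, name)
    dots = '.' * max(cols - len(start) - len(status), 0)
    return start + dots + status

print(hook_status_line('flake8', 'Flake8', 'Passed'))
```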
| 2014-04-20T03:06:50 |
|
pre-commit/pre-commit | 96 | pre-commit__pre-commit-96 | [
"95"
] | 53e316ade44fd4a97418f393241b8c334bfe0c61 | diff --git a/pre_commit/languages/system.py b/pre_commit/languages/system.py
--- a/pre_commit/languages/system.py
+++ b/pre_commit/languages/system.py
@@ -1,3 +1,6 @@
+import shlex
+
+
ENVIRONMENT_DIR = None
@@ -7,7 +10,7 @@ def install_environment(repo_cmd_runner):
def run_hook(repo_cmd_runner, hook, file_args):
return repo_cmd_runner.run(
- ['xargs', hook['entry']] + hook['args'],
+ ['xargs'] + shlex.split(hook['entry']) + hook['args'],
# TODO: this is duplicated in pre_commit/languages/helpers.py
stdin='\n'.join(list(file_args) + ['']),
retcode=None,
| diff --git a/testing/resources/prints_cwd_repo/setup.py b/testing/resources/prints_cwd_repo/setup.py
deleted file mode 100644
--- a/testing/resources/prints_cwd_repo/setup.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from setuptools import find_packages
-from setuptools import setup
-
-setup(
- name='prints_cwd',
- version='0.0.0',
- packages=find_packages('.'),
- entry_points={
- 'console_scripts': ['prints_cwd = prints_cwd.main:func'],
- },
-)
diff --git a/testing/resources/system_hook_with_spaces_repo/hooks.yaml b/testing/resources/system_hook_with_spaces_repo/hooks.yaml
new file mode 100644
--- /dev/null
+++ b/testing/resources/system_hook_with_spaces_repo/hooks.yaml
@@ -0,0 +1,4 @@
+- id: system-hook-with-spaces
+ name: System hook with spaces
+ entry: /usr/bin/python -c 'import sys; print("Hello World")'
+ language: system
diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -102,6 +102,11 @@ def failing_hook_repo(dummy_git_repo):
yield _make_repo(dummy_git_repo, 'failing_hook_repo')
[email protected]_fixture
+def system_hook_with_spaces_repo(dummy_git_repo):
+ yield _make_repo(dummy_git_repo, 'system_hook_with_spaces_repo')
+
+
def _make_config(path, hook_id, file_regex):
config = {
'repo': path,
@@ -138,6 +143,13 @@ def config_for_script_hooks_repo(script_hooks_repo):
yield _make_config(script_hooks_repo, 'bash_hook', '')
[email protected]_fixture
+def config_for_system_hook_with_spaces(system_hook_with_spaces_repo):
+ yield _make_config(
+ system_hook_with_spaces_repo, 'system-hook-with-spaces', '',
+ )
+
+
def _make_repo_from_configs(*configs):
with open(C.CONFIG_FILE, 'w') as config_file:
yaml.dump(
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -40,6 +40,14 @@ def test_cwd_of_hook(config_for_prints_cwd_repo, store):
assert ret[1] == repo.repo_url + '\n'
[email protected]
+def test_system_hook_with_spaces(config_for_system_hook_with_spaces, store):
+ repo = Repository.create(config_for_system_hook_with_spaces, store)
+ ret = repo.run_hook('system-hook-with-spaces', [])
+ assert ret[0] == 0
+ assert ret[1] == 'Hello World\n'
+
+
@skipif_slowtests_false
@pytest.mark.integration
def test_run_a_node_hook(config_for_node_hooks_repo, store):
| System hooks with spaces in entry are not runnable
It's pretty reasonable to have a system hook that looks like this:
```
- id: foo
name: foo
entry: python -m bar
language: system
```
Currently this fails:
```
$ pre-commit run foo --all-files
foo...................................................Failed
xargs: python -m bar: No such file or directory
```
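The error comes from xargs receiving the whole entry as a single argument and looking for an executable with that literal name; the `shlex.split` change above turns it back into a proper argv. A minimal standard-library sketch of the difference:
```
import shlex

entry = "/usr/bin/python -c 'import sys; print(\"Hello World\")'"

# Before: the entire entry is one argv element, so xargs hunts for a program
# literally named "/usr/bin/python -c '...'" and fails.
broken_cmd = ['xargs', entry]

# After: shlex.split honours shell-style quoting and yields separate arguments.
fixed_cmd = ['xargs'] + shlex.split(entry)
print(fixed_cmd)
# ['xargs', '/usr/bin/python', '-c', 'import sys; print("Hello World")']
```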
| 2014-05-18T21:25:14 |
|
pre-commit/pre-commit | 162 | pre-commit__pre-commit-162 | [
"161"
] | 37d3dc0c82f4cdf643ecad8e2beefff99464f88f | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -9,6 +9,7 @@
from pre_commit import color
from pre_commit.logging_handler import LoggingHandler
from pre_commit.output import get_hook_message
+from pre_commit.output import sys_stdout_write_wrapper
from pre_commit.staged_files_only import staged_files_only
from pre_commit.util import noop_context
@@ -125,7 +126,7 @@ def _has_unmerged_paths(runner):
return bool(stdout.strip())
-def run(runner, args, write=sys.stdout.write, environ=os.environ):
+def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):
# Set up our logging handler
logger.addHandler(LoggingHandler(args.color, write=write))
logger.setLevel(logging.INFO)
diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -1,8 +1,10 @@
from __future__ import unicode_literals
import subprocess
+import sys
from pre_commit import color
+from pre_commit import five
# TODO: smell: import side-effects
@@ -70,3 +72,14 @@ def get_hook_message(
postfix,
color.format_color(end_msg, end_color, use_color),
)
+
+
+def sys_stdout_write_wrapper(s, stream=sys.stdout):
+ """Python 2.6 chokes on unicode being passed to sys.stdout.write.
+
+ This is an adapter because PY2 is ok with bytes and PY3 requires text.
+ """
+ assert type(s) is five.text
+ if five.PY2: # pragma: no cover (PY2)
+ s = s.encode('UTF-8')
+ stream.write(s)
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -1,3 +1,4 @@
+# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
import io
@@ -5,8 +6,10 @@
import os
import os.path
import pytest
+import subprocess
from plumbum import local
+from pre_commit.commands.install_uninstall import install
from pre_commit.commands.run import _get_skips
from pre_commit.commands.run import _has_unmerged_paths
from pre_commit.commands.run import run
@@ -225,3 +228,30 @@ def test_multiple_hooks_same_id(
ret, output = _do_run(repo_with_passing_hook, _get_opts())
assert ret == 0
assert output.count('Bash hook') == 2
+
+
+def test_stdout_write_bug_py26(
+ repo_with_failing_hook, mock_out_store_directory, tmpdir_factory,
+):
+ with local.cwd(repo_with_failing_hook):
+ # Add bash hook on there again
+ with io.open('.pre-commit-config.yaml', 'a+') as config_file:
+ config_file.write(' args: ["☃"]\n')
+ local['git']('add', '.pre-commit-config.yaml')
+ stage_a_file()
+
+ install(Runner(repo_with_failing_hook))
+
+ # Don't want to write to home directory
+ env = dict(os.environ, **{'PRE_COMMIT_HOME': tmpdir_factory.get()})
+ # Have to use subprocess because pytest monkeypatches sys.stdout
+ _, stdout, _ = local['git'].run(
+ ('commit', '-m', 'Commit!'),
+ # git commit puts pre-commit to stderr
+ stderr=subprocess.STDOUT,
+ env=env,
+ retcode=None,
+ )
+ assert 'UnicodeEncodeError' not in stdout
+ # Doesn't actually happen, but a reasonable assertion
+ assert 'UnicodeDecodeError' not in stdout
diff --git a/tests/output_test.py b/tests/output_test.py
--- a/tests/output_test.py
+++ b/tests/output_test.py
@@ -1,9 +1,11 @@
from __future__ import unicode_literals
+import mock
import pytest
from pre_commit import color
from pre_commit.output import get_hook_message
+from pre_commit.output import sys_stdout_write_wrapper
@pytest.mark.parametrize(
@@ -77,3 +79,9 @@ def test_make_sure_postfix_is_not_colored():
assert ret == (
'start' + '.' * 6 + 'post ' + color.RED + 'end' + color.NORMAL + '\n'
)
+
+
+def test_sys_stdout_write_wrapper_writes():
+ fake_stream = mock.Mock()
+ sys_stdout_write_wrapper('hello world', fake_stream)
+ assert fake_stream.write.call_count == 1
| UnicodeEncodeError when writing to stdout in python2.6
```
$ pre-commit run fixmyjs
fixmyjs............................................................................................................................................................................................Failed
hookid: fixmyjs
Traceback (most recent call last):
File "virtualenv_run/bin/pre-commit", line 14, in <module>
sys.exit(main())
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/util.py", line 41, in wrapper
return func(argv)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/main.py", line 99, in main
return run(runner, args)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py", line 144, in run
return _run_hook(runner, args, write=write)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py", line 116, in _run_hook
return _run_single_hook(runner, repo, hook_id, args, write=write)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py", line 91, in _run_single_hook
write(output.strip() + '\n')
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2713' in position 0: ordinal not in range(128)
```
| This SO post echoes this bug:

http://stackoverflow.com/questions/8016236/python-unicode-handling-differences-between-print-and-sys-stdout-write
Sadly:
```
$ python2.6 -c "import sys; sys.stdout.write(u'\u2603\n')"
Traceback (most recent call last):
File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2603' in position 0: ordinal not in range(128)
$ python2.7 -c "import sys; sys.stdout.write(u'\u2603\n')"
☃
$ python3.3 -c "import sys; sys.stdout.write(u'\u2603\n')"
☃
$ python3.4 -c "import sys; sys.stdout.write(u'\u2603\n')"
☃
```
And using bytes:
```
$ python2.6 -c "import sys; sys.stdout.write(u'\u2603\n'.encode('UTF-8'))"
☃
$ python2.7 -c "import sys; sys.stdout.write(u'\u2603\n'.encode('UTF-8'))"
☃
$ python3.3 -c "import sys; sys.stdout.write(u'\u2603\n'.encode('UTF-8'))"
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: must be str, not bytes
$ python3.4 -c "import sys; sys.stdout.write(u'\u2603\n'.encode('UTF-8'))"
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: must be str, not bytes
```
Looks like the fix is (sadly) to convert to bytes only in the 2.6 case
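A standalone sketch of that idea (not the exact helper from the patch above, just the shape: always hand text to Python 3's stdout and UTF-8 bytes to Python 2's):
```
import sys

PY2 = sys.version_info[0] == 2


def write_text(s, stream=None):
    """Write a unicode string to stdout on both Python 2 and Python 3."""
    stream = stream if stream is not None else sys.stdout
    if PY2:  # py2's sys.stdout chokes on non-ascii text when not a tty
        stream.write(s.encode('UTF-8'))
    else:    # py3's sys.stdout wants text, never bytes
        stream.write(s)


write_text(u'\u2603\n')
```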
| 2014-09-02T23:15:56 |
pre-commit/pre-commit | 166 | pre-commit__pre-commit-166 | [
"164"
] | a6112f44f20ff2446ae68eefd46d718bb4281269 | diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -6,6 +6,7 @@
import os
import os.path
import stat
+import sys
from pre_commit.logging_handler import LoggingHandler
from pre_commit.util import resource_filename
@@ -15,12 +16,13 @@
# This is used to identify the hook file we install
-PREVIOUS_IDENTIFYING_HASHES = [
+PREVIOUS_IDENTIFYING_HASHES = (
+ '4d9958c90bc262f47553e2c073f14cfe',
'd8ee923c46731b42cd95cc869add4062',
-]
+)
-IDENTIFYING_HASH = '4d9958c90bc262f47553e2c073f14cfe'
+IDENTIFYING_HASH = '49fd668cb42069aa1b6048464be5d395'
def is_our_pre_commit(filename):
@@ -63,8 +65,11 @@ def install(runner, overwrite=False, hooks=False):
)
)
- with open(runner.pre_commit_path, 'w') as pre_commit_file_obj:
- pre_commit_file_obj.write(open(pre_commit_file).read())
+ with io.open(runner.pre_commit_path, 'w') as pre_commit_file_obj:
+ contents = io.open(pre_commit_file).read().format(
+ sys_executable=sys.executable,
+ )
+ pre_commit_file_obj.write(contents)
make_executable(runner.pre_commit_path)
print('pre-commit installed at {0}'.format(runner.pre_commit_path))
diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -1,5 +1,6 @@
from __future__ import unicode_literals
+import os
import subprocess
import sys
@@ -10,7 +11,7 @@
# TODO: smell: import side-effects
COLS = int(
subprocess.Popen(
- ['tput', 'cols'], stdout=subprocess.PIPE
+ ['tput', 'cols'], stdout=subprocess.PIPE, stderr=open(os.devnull, 'w'),
).communicate()[0] or
# Default in the case of no terminal
80
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -2,11 +2,13 @@
from __future__ import unicode_literals
import io
+import mock
import os
import os.path
import re
import subprocess
import stat
+import sys
from plumbum import local
from pre_commit.commands.install_uninstall import IDENTIFYING_HASH
@@ -53,7 +55,9 @@ def test_install_pre_commit(tmpdir_factory):
assert os.path.exists(runner.pre_commit_path)
pre_commit_contents = io.open(runner.pre_commit_path).read()
pre_commit_script = resource_filename('pre-commit-hook')
- expected_contents = io.open(pre_commit_script).read()
+ expected_contents = io.open(pre_commit_script).read().format(
+ sys_executable=sys.executable,
+ )
assert pre_commit_contents == expected_contents
stat_result = os.stat(runner.pre_commit_path)
assert stat_result.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
@@ -76,12 +80,17 @@ def test_uninstall(tmpdir_factory):
assert not os.path.exists(runner.pre_commit_path)
-def _get_commit_output(tmpdir_factory, touch_file='foo', home=None):
+def _get_commit_output(
+ tmpdir_factory,
+ touch_file='foo',
+ home=None,
+ env_base=os.environ,
+):
local['touch'](touch_file)
local['git']('add', touch_file)
# Don't want to write to home directory
home = home or tmpdir_factory.get()
- env = dict(os.environ, **{'PRE_COMMIT_HOME': home})
+ env = dict(env_base, **{'PRE_COMMIT_HOME': home})
return local['git'].run(
['commit', '-m', 'Commit!', '--allow-empty'],
# git commit puts pre-commit to stderr
@@ -136,11 +145,12 @@ def test_install_idempotent(tmpdir_factory):
def test_environment_not_sourced(tmpdir_factory):
path = make_consuming_repo(tmpdir_factory, 'script_hooks_repo')
with local.cwd(path):
- assert install(Runner(path)) == 0
+ # Patch the executable to simulate rming virtualenv
+ with mock.patch.object(sys, 'executable', '/bin/false'):
+ assert install(Runner(path)) == 0
ret, stdout, stderr = local['git'].run(
['commit', '--allow-empty', '-m', 'foo'],
- # XXX: 'HOME' makes this test pass on OSX
env={'HOME': os.environ['HOME']},
retcode=None,
)
@@ -362,3 +372,17 @@ def test_installs_hooks_with_hooks_True(
assert ret == 0
assert PRE_INSTALLED.match(output)
+
+
+def test_installed_from_venv(tmpdir_factory):
+ path = make_consuming_repo(tmpdir_factory, 'script_hooks_repo')
+ with local.cwd(path):
+ install(Runner(path))
+ # No environment so pre-commit is not on the path when running!
+ # Should still pick up the python from when we installed
+ ret, output = _get_commit_output(
+ tmpdir_factory,
+ env_base={'HOME': os.environ['HOME']},
+ )
+ assert ret == 0
+ assert NORMAL_PRE_COMMIT_RUN.match(output)
| Choose python more intelligently in the file installed to .git/hooks/pre-commit
Since we know which python we're running as when `pre-commit install` is run (`sys.executable`), let's put that into `.git/hooks/pre-commit` and try that python first.
This will involve bumping the magic number in the resource file, and probably updating the logic to check for the same sys.executable existing inside that file.
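A rough sketch of the install-time templating this implies — the `{sys_executable}` placeholder matches what the patch above formats in; the template string here is a stand-in for the real `pre-commit-hook` resource file:
```
import sys

# Stand-in for the shipped hook template; the real resource file is longer.
TEMPLATE = '#!/usr/bin/env bash\nINSTALL_PYTHON={sys_executable}\n'


def render_hook_script(template_text=TEMPLATE):
    # Bake in the interpreter that ran `pre-commit install`, so the committed
    # hook can try that python first even if the shell environment changes.
    return template_text.format(sys_executable=sys.executable)


print(render_hook_script())
```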
| 2014-09-04T15:46:06 |
|
pre-commit/pre-commit | 167 | pre-commit__pre-commit-167 | [
"165"
] | 59e753624e471100f94b3f7503bd01944476a4e3 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,7 +41,7 @@
'aspy.yaml',
'cached-property',
'jsonschema',
- 'nodeenv>=0.9.4',
+ 'nodeenv>=0.11.1',
'ordereddict',
'plumbum',
'pyyaml',
| npmrc causes npm to install to home directory instead of nodeenv
Here is what happened when I tried to get eslint installed:
```
$ pre-commit run --all-files
eslint..............................................................................................................................................................................................................................................................................................................Failed
hookid: eslint
xargs: eslint: No such file or directory
```
Moving .npmrc to nope.npmrc fixed the issue.
| Here's an example npmrc causing this problem:
```
prefix=~/.npm-packages
```
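A quick way to confirm that prefix is what's hijacking the install before blaming the hook repo (plain npm commands; the grep output is what the example `.npmrc` above produces):
```
$ npm config get prefix        # reports ~/.npm-packages (possibly expanded) instead of the nodeenv
$ grep prefix ~/.npmrc
prefix=~/.npm-packages
```
The actual fix that landed is just the `nodeenv>=0.11.1` bump in the diff above.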
| 2014-09-04T15:55:01 |
|
pre-commit/pre-commit | 177 | pre-commit__pre-commit-177 | [
"176"
] | b2cb0f6fe6c0b195ded2cafd2806d9c3650a8379 | diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -21,7 +21,10 @@ def staged_files_only(cmd_runner):
"""
# Determine if there are unstaged files
retcode, diff_stdout_binary, _ = cmd_runner.run(
- ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],
+ [
+ 'git', 'diff', '--ignore-submodules', '--binary', '--exit-code',
+ '--no-color',
+ ],
retcode=None,
encoding=None,
)
| diff --git a/tests/staged_files_only_test.py b/tests/staged_files_only_test.py
--- a/tests/staged_files_only_test.py
+++ b/tests/staged_files_only_test.py
@@ -70,6 +70,11 @@ def test_foo_something_unstaged(foo_staged, cmd_runner):
_test_foo_state(foo_staged, 'herp\nderp\n', 'AM')
+def test_foo_something_unstaged_diff_color_always(foo_staged, cmd_runner):
+ cmd_output('git', 'config', '--local', 'color.diff', 'always')
+ test_foo_something_unstaged(foo_staged, cmd_runner)
+
+
def test_foo_both_modify_non_conflicting(foo_staged, cmd_runner):
with io.open(foo_staged.foo_filename, 'w') as foo_file:
foo_file.write(FOO_CONTENTS + '9\n')
| Stashed changes lost if hook fails
I've run into this particular (in my eyes, critical) bug.
If I want to do a partial commit, e.g. I have 2 files but I only add 1 file to the staging area and the staged file will cause a hook to fail, I lose the changes in the 2nd file because pre-commit fails to re-apply the patch it stashed before running.
Here's my terminal log and the steps to reproduce:
## Version
$ pre-commit -V
pre-commit 0.3.0
## Commands to reproduce
```
$ cat unstaged.py
"""I am unstaged"""
$ echo "'''I am unstaged, but I have changes'''" > unstaged.py
$ echo "x = 'This is the loooooooooooooooooooooooooooooooooooongest liiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiine eveeeeeeeeer'" > foo.py
$ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: unstaged.py
modified: foo.py
no changes added to commit (use "git add" and/or "git commit -a")
$ git add foo.py
$ git commit -m "Adding a long line"
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/k/.pre-commit/patch1412683352.
Flake8...............................................Failed
hookid: flake8
foo.py:1:80: E501 line too long (112 > 79 characters)
[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...
An unexpected error has occurred: CalledProcessError: Command: [u'git', u'apply', u'/home/k/.pre-commit/patch1412683352']
Return code: 128
Expected return code: 0
Output: (u'', u'fatal: unrecognized input\n')
Check the log at ~/.pre-commit/pre-commit.log
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: foo.py
$ echo "x = 'This is a shorter line, its better'" > foo.py
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: foo.py
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: foo.py
$ git add foo.py
$ git commit -m "Fixed the error I got from the flake8 hook"
Flake8...............................................Passed
[master 78568e8] Fixed the error I got from the flake8 hook
1 file changed, 1 insertion(+), 1 deletion(-)
$ git status
On branch master
nothing to commit, working directory clean
$ cat unstaged.py
"""I am unstaged"""
```
## Log
```
$ cat ~/.pre-commit/pre-commit.log
Traceback (most recent call last):
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/error_handler.py", line 34, in error_handler
yield
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/main.py", line 108, in main
return run(runner, args)
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/commands/run.py", line 151, in run
return _run_hooks(runner, args, write=write, environ=environ)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/staged_files_only.py", line 55, in staged_files_only
cmd_runner.run(['git', 'apply', patch_filename])
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/prefixed_command_runner.py", line 82, in run
returncode, replaced_cmd, retcode, output=(stdout, stderr),
CalledProcessError: Command: [u'git', u'apply', u'/home/k/.pre-commit/patch1412683352']
Return code: 128
Expected return code: 0
Output: (u'', u'fatal: unrecognized input\n')
```
## .pre-commit-config.yaml
```
$ cat .pre-commit-config.yaml
- repo: [email protected]:pre-commit/pre-commit-hooks
sha: 6343700aa063fe30acc319d2dc84353a35a3d6d0
hooks:
- id: flake8
args: ['--ignore=E712,F821']
```
| Could you additionally provide the version of `git` you are using? I believe it may be contributing to this problem.
I wonder if `git-apply` works significantly differently in a different version of git. I may need to work around that to fix this bug.
For what it's worth, it seems to work as intended with git 1.7.9.5:
```
$ git status
# On branch master
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: unstaged.py
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# venv/
no changes added to commit (use "git add" and/or "git commit -a")
$ echo "x = 'This is the loooooooooooooooooooooooooooooooooooongest liiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiine eveeeeeeeeer'" > foo.py
$ git status
# On branch master
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: foo.py
# modified: unstaged.py
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# venv/
no changes added to commit (use "git add" and/or "git commit -a")
$ git add foo.py
$ git commit -m "Add a long line"
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/anthony/.pre-commit/patch1412701193.
Flake8...................................................................Failed
hookid: flake8
foo.py:1:80: E501 line too long (112 > 79 characters)
[INFO] Restored changes from /home/anthony/.pre-commit/patch1412701193.
```
Additionally, the contents of `/home/k/.pre-commit/patch1412683352` will be useful to diagnose
I'm using git version 1.9.1
Here are the contents of the patch file
```
$ cat /home/k/.pre-commit/patch1412683352
diff --git a/file.py b/file.py
index f99a4d5..a8c4aad 100644
--- a/file.py
+++ b/file.py
@@ -1 +1 @@
-"""I am unstaged"""
+'''I am unstaged, but I have changes'''
```
This is a set of commands similar to what pre-commit runs. Could you run them and show me what the output is?
`test.sh`
```
#!/usr/bin/env bash
function printcmd() { echo '$$ '"$@"; "$@"; echo "## retcode $?"; }
printcmd git init test
printcmd cd test
printcmd sh -c 'echo '"'"'"""Docstring"""'"'"' > unstaged.py'
printcmd touch foo.py
printcmd git add unstaged.py foo.py
printcmd git commit -m "initial commit"
printcmd sh -c "echo "'"'"'''I am unstaged but I have changes'''"'"'" > unstaged.py"
printcmd sh -c "echo 'x = "'"'"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"'"'"' > foo.py"
printcmd git status
printcmd git add foo.py
# This mimics what pre-commit does in pre_commit/staged_files_only.py
printcmd sh -c "git diff --ignore-submodules --binary --exit-code > patch"
printcmd cat patch
printcmd git checkout -- .
printcmd git status
printcmd git apply patch
```
Output on my machine:
```
$ ./test.sh
$$ git init test
Initialized empty Git repository in /tmp/test/.git/
## retcode 0
$$ cd test
## retcode 0
$$ sh -c echo '"""Docstring"""' > unstaged.py
## retcode 0
$$ touch foo.py
## retcode 0
$$ git add unstaged.py foo.py
## retcode 0
$$ git commit -m initial commit
[master (root-commit) 446eafd] initial commit
1 file changed, 1 insertion(+)
create mode 100644 foo.py
create mode 100644 unstaged.py
## retcode 0
$$ sh -c echo "'''I am unstaged but I have changes'''" > unstaged.py
## retcode 0
$$ sh -c echo 'x = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"' > foo.py
## retcode 0
$$ git status
# On branch master
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: foo.py
# modified: unstaged.py
#
no changes added to commit (use "git add" and/or "git commit -a")
## retcode 0
$$ git add foo.py
## retcode 0
$$ sh -c git diff --ignore-submodules --binary --exit-code > patch
## retcode 1
$$ cat patch
diff --git a/unstaged.py b/unstaged.py
index c83d2e0..cbb1889 100644
--- a/unstaged.py
+++ b/unstaged.py
@@ -1 +1 @@
-"""Docstring"""
+'''I am unstaged but I have changes'''
## retcode 0
$$ git checkout -- .
## retcode 0
$$ git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: foo.py
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# patch
## retcode 0
$$ git apply patch
## retcode 0
```
Actually, I think it might be being affected by `.gitconfig`, could you paste the contents of `~/.gitconfig`?
Here's the output of the command:
```
$ . test.sh
$$ git init test
Initialized empty Git repository in /home/k/testtest/test/.git/
## retcode 0
$$ cd test
## retcode 0
$$ sh -c echo '"""Docstring"""' > unstaged.py
## retcode 0
$$ touch foo.py
## retcode 0
$$ git add unstaged.py foo.py
## retcode 0
$$ git commit -m initial commit
[master (root-commit) aa743a4] initial commit
2 files changed, 1 insertion(+)
create mode 100644 foo.py
create mode 100644 unstaged.py
## retcode 0
$$ sh -c echo "'''I am unstaged but I have changes'''" > unstaged.py
## retcode 0
$$ sh -c echo 'x = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"' > foo.py
## retcode 0
$$ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: foo.py
modified: unstaged.py
no changes added to commit (use "git add" and/or "git commit -a")
## retcode 0
$$ git add foo.py
## retcode 0
$$ sh -c git diff --ignore-submodules --binary --exit-code > patch
## retcode 1
$$ cat patch
diff --git a/unstaged.py b/unstaged.py
index c83d2e0..cbb1889 100644
--- a/unstaged.py
+++ b/unstaged.py
@@ -1 +1 @@
-"""Docstring"""
+'''I am unstaged but I have changes'''
## retcode 0
$$ git checkout -- .
## retcode 0
$$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: foo.py
Untracked files:
(use "git add <file>..." to include in what will be committed)
patch
## retcode 0
$$ git apply patch
fatal: unrecognized input
## retcode 128
```
and here's `~/.gitconfig`
```
[user]
name = Kasper Jacobsen
email = [email protected]
[color]
branch = always
diff = always
grep = always
interactive = always
pager = true
showbranch = always
status = always
ui = always
[apply]
whitespace = fix
[branch]
autosetupmerge = true
#autosetuprebase = always
[merge]
# Include summaries of merged commits in newly created merge commit messages
log = true
tool = sublime
[mergetool "sublime"]
cmd = subl -w $MERGED
trustExitCode = false
# Use `origin` as the default remote on the `master` branch in all cases
[branch "master"]
remote = origin
merge = refs/heads/master
[push]
default = tracking
[fetch]
recurseSubmodules = true
[log]
decorate = short
[pretty]
custom = %C(yellow)%h%Creset -%C(green)%d%Creset %s %Cgreen(%cr) %C(bold blue)%an%Creset
[alias]
dc = diff --cached
df = diff --word-diff
co = checkout
cp = cherry-pick
st = status
br = branch -a
lg = log --graph --pretty=medium --abbrev-commit --date=local
lgb = log --graph --pretty=medium --abbrev-commit --date=local --branches --all
lol = log --graph --oneline
lola = log --graph --oneline --all
l = log --graph --pretty=custom
la = log --graph --pretty=custom --all
# Show files ignored by git:
ign = ls-files -o -i --exclude-standard
# Show verbose output about tags, branches or remotes
tags = tag -l
branches = branch -a
remotes = remote -v
onto = !"git co -b __ONTOTMP; git co $1; git merge __ONTOTMP; git branch -d __ONTOTMP"
today = log --since=midnight --author='Kasper Jacobsen' --oneline
# exclude files locally
exclude = !sh -c 'echo "$1" >> .git/info/exclude' -
# URL shorthands
[url "[email protected]:"]
insteadOf = "gh:"
pushInsteadOf = "github:"
pushInsteadOf = "git://github.com/"
[url "git://github.com/"]
insteadOf = "github:"
[url "[email protected]:"]
insteadOf = "gst:"
pushInsteadOf = "gist:"
pushInsteadOf = "git://gist.github.com/"
[url "git://gist.github.com/"]
insteadOf = "gist:"
[credential]
helper = cache
```
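The `diff = always` / `ui = always` lines in that `[color]` section look like the culprit: with color forced on even for non-terminal output, the redirected diff that pre-commit saves as its stash patch is full of ANSI escape codes, which `git apply` then rejects with exactly the `fatal: unrecognized input` seen above. A minimal reproduction in a throwaway repo (the `--no-color` flag is what the fix adds to the diff command):
```
$ git config color.diff always                # mirrors the [color] section above
$ echo '# tweak' >> tracked_file
$ git diff --binary > with-color.patch        # contains ANSI escape sequences
$ git diff --binary --no-color > clean.patch  # what pre-commit saves after the fix
$ git checkout -- tracked_file
$ git apply with-color.patch
fatal: unrecognized input
$ git apply clean.patch                       # the unstaged change comes back
```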
| 2014-10-08T00:40:30 |
pre-commit/pre-commit | 193 | pre-commit__pre-commit-193 | [
"186"
] | 901c50632f6236a35dbfed5d7e12477db2949f20 | diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -1,5 +1,7 @@
from __future__ import unicode_literals
+import shutil
+
from cached_property import cached_property
from pre_commit.languages.all import languages
@@ -64,11 +66,21 @@ def install(self):
language = languages[language_name]
if (
language.ENVIRONMENT_DIR is None or
- self.cmd_runner.exists(language.ENVIRONMENT_DIR)
+ self.cmd_runner.exists(language.ENVIRONMENT_DIR, '.installed')
):
# The language is already installed
continue
+ # There's potentially incomplete cleanup from previous runs
+ # Clean it up!
+ if self.cmd_runner.exists(language.ENVIRONMENT_DIR):
+ shutil.rmtree(self.cmd_runner.path(language.ENVIRONMENT_DIR))
+
language.install_environment(self.cmd_runner, language_version)
+ # Touch the .installed file (atomic) to indicate we've installed
+ open(
+ self.cmd_runner.path(language.ENVIRONMENT_DIR, '.installed'),
+ 'w',
+ ).close()
def run_hook(self, hook, file_args):
"""Run a hook.
| diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -3,6 +3,7 @@
import io
import os.path
+import shutil
import mock
import pytest
@@ -10,6 +11,7 @@
from pre_commit.clientlib.validate_config import CONFIG_JSON_SCHEMA
from pre_commit.clientlib.validate_config import validate_config_extra
from pre_commit.jsonschema_extensions import apply_defaults
+from pre_commit.languages.python import PythonEnv
from pre_commit.repository import Repository
from pre_commit.util import cmd_output
from pre_commit.util import cwd
@@ -266,6 +268,35 @@ def test_reinstall(tmpdir_factory, store):
repo.require_installed()
+def test_control_c_control_c_on_install(tmpdir_factory, store):
+ """Regression test for #186."""
+ path = make_repo(tmpdir_factory, 'python_hooks_repo')
+ config = make_config_from_repo(path)
+ repo = Repository.create(config, store)
+ hook = repo.hooks[0][1]
+
+ class MyKeyboardInterrupt(KeyboardInterrupt):
+ pass
+
+ # To simulate a killed install, we'll make PythonEnv.run raise ^C
+ # and then to simulate a second ^C during cleanup, we'll make shutil.rmtree
+ # raise as well.
+ with pytest.raises(MyKeyboardInterrupt):
+ with mock.patch.object(
+ PythonEnv, 'run', side_effect=MyKeyboardInterrupt,
+ ):
+ with mock.patch.object(shutil, 'rmtree', MyKeyboardInterrupt):
+ repo.run_hook(hook, [])
+
+ # Should have made an environment, however this environment is broken!
+ assert os.path.exists(repo.cmd_runner.path('py_env'))
+
+ # However, it should be perfectly runnable (reinstall after botched
+ # install)
+ retv, stdout, stderr = repo.run_hook(hook, [])
+ assert retv == 0
+
+
@pytest.mark.integration
def test_really_long_file_paths(tmpdir_factory, store):
base_path = tmpdir_factory.get()
| ^C^C during installation may leave pre-commit in a bad state
There's code which handles the first ^C; however, I think the second one (during execution of the finally block) may not be handled well. I probably need to make the cleanup atomic somehow...
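The patch above settles on a sentinel-file scheme; here's a stripped-down sketch of the idea (names and layout are illustrative, not pre-commit's real paths):
```
import os
import shutil


def ensure_installed(env_dir, do_install):
    """Install into env_dir so a half-finished install is never mistaken for a good one."""
    sentinel = os.path.join(env_dir, '.installed')
    if os.path.exists(sentinel):
        return  # a previous run got all the way through
    if os.path.exists(env_dir):
        # Leftovers from a run that was ^C'd mid-install: throw them away.
        shutil.rmtree(env_dir)
    do_install(env_dir)
    # Touching the sentinel is effectively atomic, so it only ever exists
    # after everything before it has succeeded.
    open(sentinel, 'w').close()
```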
| 2015-02-07T23:44:23 |
|
pre-commit/pre-commit | 204 | pre-commit__pre-commit-204 | [
"203"
] | 7b4470850e8ffa2911419a92d4050b301e3c63f3 | diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -62,7 +62,7 @@ def _write_readme(self):
def _write_sqlite_db(self):
# To avoid a race where someone ^Cs between db creation and execution
# of the CREATE TABLE statement
- fd, tmpfile = tempfile.mkstemp()
+ fd, tmpfile = tempfile.mkstemp(dir=self.directory)
# We'll be managing this file ourselves
os.close(fd)
# sqlite doesn't close its fd with its contextmanager >.<
| Crash when /tmp is on a different device
```
Traceback (most recent call last):
File "/home/cameron/Workspace/hack16-llvm-lang/venv/bin/pre-commit", line 9, in <module>
load_entry_point('pre-commit==0.4.0', 'console_scripts', 'pre-commit')()
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/main.py", line 136, in main
'Command {0} failed to exit with a returncode'.format(args.command)
File "/usr/lib64/python3.4/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/error_handler.py", line 41, in error_handler
traceback.format_exc(),
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/error_handler.py", line 24, in _log_and_exit
store.require_created()
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/store.py", line 97, in require_created
self._create()
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/store.py", line 90, in _create
self._write_sqlite_db()
File "/home/cameron/Workspace/hack16-llvm-lang/venv/lib/python3.4/site-packages/pre_commit/store.py", line 82, in _write_sqlite_db
os.rename(tmpfile, self.db_path)
OSError: [Errno 18] Invalid cross-device link: '/tmp/tmpz1pkyqsm' -> '/home/cameron/.pre-commit/db.db'
```
| Seems your /tmp is on a different device and os.rename derps on that. Thanks python! Should be easy enough to fix :)
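If that's right, the cure is just to create the temp file on the same filesystem as its final destination, so `os.rename` never has to cross devices — which is what the `dir=self.directory` change above does. A small illustration of the pattern:
```
import os
import tempfile


def atomic_write(dest_path, data):
    # mkstemp(dir=...) keeps the temp file on the destination's filesystem,
    # so the final os.rename is an atomic same-device move rather than an
    # EXDEV "Invalid cross-device link" failure like the one in the traceback.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(dest_path) or '.')
    try:
        with os.fdopen(fd, 'wb') as tmp_file:
            tmp_file.write(data)
        os.rename(tmp_path, dest_path)
    except BaseException:
        os.remove(tmp_path)
        raise


atomic_write('db.db', b'example contents')
```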
| 2015-02-27T18:12:38 |
|
pre-commit/pre-commit | 206 | pre-commit__pre-commit-206 | [
"205"
] | 52c2d9c35a9f4d298d879f6608c0bc444b312396 | diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py
--- a/pre_commit/languages/helpers.py
+++ b/pre_commit/languages/helpers.py
@@ -10,7 +10,9 @@ def file_args_to_stdin(file_args):
def run_hook(env, hook, file_args):
quoted_args = [pipes.quote(arg) for arg in hook['args']]
return env.run(
- ' '.join(['xargs', '-0', hook['entry']] + quoted_args),
+ # Use -s 4000 (slightly less than posix mandated minimum)
+ # This is to prevent "xargs: ... Bad file number" on windows
+ ' '.join(['xargs', '-0', '-s4000', hook['entry']] + quoted_args),
stdin=file_args_to_stdin(file_args),
retcode=None,
)
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -305,3 +305,32 @@ def test_get_changed_files():
'3387edbb1288a580b37fe25225aa0b856b18ad1a',
)
assert files == ['CHANGELOG.md', 'setup.py']
+
+
+def test_lots_of_files(mock_out_store_directory, tmpdir_factory):
+ # windows xargs seems to have a bug, here's a regression test for
+ # our workaround
+ git_path = make_consuming_repo(tmpdir_factory, 'python_hooks_repo')
+ with cwd(git_path):
+ # Override files so we run against them
+ with io.open(
+ '.pre-commit-config.yaml', 'a+',
+ ) as config_file:
+ config_file.write(' files: ""\n')
+
+ # Write a crap ton of files
+ for i in range(400):
+ filename = '{0}{1}'.format('a' * 100, i)
+ open(filename, 'w').close()
+
+ cmd_output('bash', '-c', 'git add .')
+ install(Runner(git_path))
+
+ # Don't want to write to home directory
+ env = dict(os.environ, **{'PRE_COMMIT_HOME': tmpdir_factory.get()})
+ cmd_output(
+ 'git', 'commit', '-m', 'Commit!',
+ # git commit puts pre-commit to stderr
+ stderr=subprocess.STDOUT,
+ env=env,
+ )
| Windows: Large number of files causes `xargs: ... Bad file number`
Originally here: https://github.com/pre-commit/pre-commit-hooks/issues/41
| I think I might be able to solve this with xargs -s perhaps
Based on my googling that does sound like a good way to handle this issue. It's actually surprisingly difficult to find much information about this.
The other idea I have is it is hitting the file descriptor limit which seems to be surprisingly low on windows.
Is pre-commit actually opening all of those files at once, though? It doesn't seem like these hooks would actually be hanging on to file descriptors. The list of filenames is just a bunch of strings, after all.
pre-commit shouldn't be grabbing file descriptors. I did see the same problem with just `git add` which makes me think it might be some git internal that's misbehaving. At the moment I'm trying to reproduce this under test (and not just interactively) and failing :/
> On computers running Microsoft Windows XP or later, the maximum length of the string that you can use at the command prompt is 8191 characters. On computers running Microsoft Windows 2000 or Windows NT 4.0, the maximum length of the string that you can use at the command prompt is 2047 characters.
In Linux it's more like a few megabytes worth of text I think.
Linux depends on the implementation. POSIX says 4096 is the lowest minimum so I'm going to choose `-s 4000` to be pretty safe.
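For reference, the shape of the call this turns into (mirroring the patch above: NUL-separated filenames on stdin, per-command size capped just under the POSIX minimum; `getconf ARG_MAX` shows the real limit on a given box):
```
$ getconf ARG_MAX                 # platform-specific; POSIX only guarantees 4096
$ printf '%s\0' file1.py file2.py | xargs -0 -s4000 flake8
```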
| 2015-02-27T23:15:46 |
pre-commit/pre-commit | 215 | pre-commit__pre-commit-215 | [
"214"
] | 9a18f6a38d729d22317517647d1a5d6841253481 | diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -116,7 +116,7 @@ def clone(self, url, sha):
with clean_path_on_failure(dir):
cmd_output('git', 'clone', '--no-checkout', url, dir)
with cwd(dir):
- cmd_output('git', 'checkout', sha)
+ cmd_output('git', 'reset', sha, '--hard')
# Update our db with the created repo
with sqlite3.connect(self.db_path) as db:
| Not stashing changes before installing
Hi,
I'm regularly running into this situation: I have pending changes, I run `git commit -a`, and pre-commit tries to install its hooks:
```
[INFO] Initializing environment for git://github.com/pre-commit/pre-commit-hooks.
An unexpected error has occurred: CalledProcessError: Command: ['git', 'checkout', 'd3db0385825d4c082bc7117c090ac16cb4840f3e']
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: Your local changes to the following files would be overwritten by checkout:
.pre-commit-config.yaml
.travis.yml
CHANGELOG
README.md
hooks.yaml
pre_commit_hooks/autopep8_wrapper.py
pre_commit_hooks/check_json.py
pre_commit_hooks/check_yaml.py
pre_commit_hooks/debug_statement_hook.py
pre_commit_hooks/end_of_file_fixer.py
pre_commit_hooks/tests_should_end_in_test.py
pre_commit_hooks/trailing_whitespace_fixer.py
pre_commit_hooks/util.py
pylintrc
requirements-dev.txt
setup.py
testing/util.py
tests/autopep8_wrapper_test.py
tests/debug_statement_hook_test.py
tests/end_of_file_fixer_test.py
tests/tests_should_end_in_test_test.py
tests/trailing_whitespace_fixer_test.py
tests/util_test.py
tox.ini
Please, commit your changes or stash them before you can switch branches.
Aborting
Check the log at ~/.pre-commit/pre-commit.log
```
The log contents are
```
An unexpected error has occurred: CalledProcessError: Command: ['git', 'checkout', 'd3db0385825d4c082bc7117c090ac16cb4840f3e']
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: Your local changes to the following files would be overwritten by checkout:
.pre-commit-config.yaml
.travis.yml
CHANGELOG
README.md
hooks.yaml
pre_commit_hooks/autopep8_wrapper.py
pre_commit_hooks/check_json.py
pre_commit_hooks/check_yaml.py
pre_commit_hooks/debug_statement_hook.py
pre_commit_hooks/end_of_file_fixer.py
pre_commit_hooks/tests_should_end_in_test.py
pre_commit_hooks/trailing_whitespace_fixer.py
pre_commit_hooks/util.py
pylintrc
requirements-dev.txt
setup.py
testing/util.py
tests/autopep8_wrapper_test.py
tests/debug_statement_hook_test.py
tests/end_of_file_fixer_test.py
tests/tests_should_end_in_test_test.py
tests/trailing_whitespace_fixer_test.py
tests/util_test.py
tox.ini
Please, commit your changes or stash them before you can switch branches.
Aborting
Traceback (most recent call last):
File "/home/qdm/workspace/web/pre-commit/pre_commit/error_handler.py", line 34, in error_handler
yield
File "/home/qdm/workspace/web/pre-commit/pre_commit/main.py", line 129, in main
return run(runner, args)
File "/home/qdm/workspace/web/pre-commit/pre_commit/commands/run.py", line 165, in run
return _run_hooks(runner, args, write=write, environ=environ)
File "/home/qdm/workspace/web/pre-commit/pre_commit/commands/run.py", line 115, in _run_hooks
for repo in runner.repositories:
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/runner.py", line 43, in repositories
repository.require_installed()
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 64, in require_installed
self.install()
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 78, in install
for language_name, _ in self.languages
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 41, in languages
for _, hook in self.hooks
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 49, in hooks
for hook in self.repo_config['hooks']
File "/home/qdm/workspace/web/pre-commit/pre_commit/repository.py", line 49, in <genexpr>
for hook in self.repo_config['hooks']
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/manifest.py", line 24, in hooks
return dict((hook['id'], hook) for hook in self.manifest_contents)
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/manifest.py", line 18, in manifest_contents
self.repo_path_getter.repo_path, C.MANIFEST_FILE,
File "/usr/lib/python3.4/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/qdm/workspace/web/pre-commit/pre_commit/store.py", line 46, in repo_path
return self._store.clone(self._repo, self._sha)
File "/home/qdm/workspace/web/pre-commit/pre_commit/store.py", line 119, in clone
cmd_output('git', 'checkout', sha)
File "/home/qdm/workspace/web/pre-commit/pre_commit/util.py", line 160, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
pre_commit.util.CalledProcessError: Command: ['git', 'checkout', 'd3db0385825d4c082bc7117c090ac16cb4840f3e']
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: Your local changes to the following files would be overwritten by checkout:
.pre-commit-config.yaml
.travis.yml
CHANGELOG
README.md
hooks.yaml
pre_commit_hooks/autopep8_wrapper.py
pre_commit_hooks/check_json.py
pre_commit_hooks/check_yaml.py
pre_commit_hooks/debug_statement_hook.py
pre_commit_hooks/end_of_file_fixer.py
pre_commit_hooks/tests_should_end_in_test.py
pre_commit_hooks/trailing_whitespace_fixer.py
pre_commit_hooks/util.py
pylintrc
requirements-dev.txt
setup.py
testing/util.py
tests/autopep8_wrapper_test.py
tests/debug_statement_hook_test.py
tests/end_of_file_fixer_test.py
tests/tests_should_end_in_test_test.py
tests/trailing_whitespace_fixer_test.py
tests/util_test.py
tox.ini
Please, commit your changes or stash them before you can switch branches.
Aborting
```
I think this is a regression from a previous version; it was more seamless then.
| Hmmm that is odd!
Can you supply the following:
```
$ git --version
$ pre-commit --version
```
And potentially can you cd to that directory and see what `git status` is? (The directory might be hard to find... hmm...)
You can find it by doing the following:
```
$ sqlite3 ~/.pre-commit/db.db 'SELECT path FROM repos WHERE repo = "git url for repo" AND ref = "sha you are using"'
```
Also I'm curious if `pre-commit clean` will fix it (obviously not a solution as I'd like to prevent getting into this situation).
I have some ideas on how to fix it if it's just that git changed how `--no-checkout` + `checkout` works
```
$ git --version
git version 2.3.3
hub version 2.2.0
$ pre-commit --version
pre-commit 0.4.2
```
`pre-commit clean` does not fix it.
My `~/.pre-commit/db.db` file has no repos…
Currently building git from source, I imagine this is a git change :/
Sounds like a good place to look, after a short investigation, my teammates who are on git 2.3 have encountered this issue as well.
Can you cat `/etc/gitconfig` and `~/.gitconfig` ?
`/etc/gitconfig` does not exist.
`~/.gitconfig` contents:
```
[user]
name = Quentin de Metz
email = [email protected]
[credential]
helper = cache --timeout=3600
[push]
default = current
[help]
autocorrect = 1
[branch]
autosetuprebase = never
[color]
branch = auto
diff = auto
status = auto
[color "branch"]
current = yellow reverse
local = yellow
remote = green
[color "diff"]
meta = yellow bold
frag = magenta bold
old = red bold
new = green bold
[color "status"]
added = yellow
changed = green
untracked = cyan
[tag]
sort = version:refname
```
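For completeness, the sequence the repo cache runs after the patch at the top of this entry — the `git checkout <sha>` step is gone entirely, so whatever newer git dislikes about checking out inside a `--no-checkout` clone no longer matters (URL and sha below are the ones from the log above; the directory name is made up):
```
$ git clone --no-checkout git://github.com/pre-commit/pre-commit-hooks repo-cache
$ cd repo-cache
$ git reset d3db0385825d4c082bc7117c090ac16cb4840f3e --hard
```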
| 2015-03-25T17:29:25 |
|
pre-commit/pre-commit | 216 | pre-commit__pre-commit-216 | [
"208"
] | c4ff9d498830cb04ad54dc3b73666b5e7943fa3d | diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -3,6 +3,7 @@
import contextlib
import distutils.spawn
import os
+import sys
import virtualenv
@@ -43,7 +44,10 @@ def install_environment(repo_cmd_runner, version='default'):
# Install a virtualenv
with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):
- venv_cmd = ['virtualenv', '{{prefix}}{0}'.format(ENVIRONMENT_DIR)]
+ venv_cmd = [
+ sys.executable, '-m', 'virtualenv',
+ '{{prefix}}{0}'.format(ENVIRONMENT_DIR)
+ ]
if version != 'default':
venv_cmd.extend(['-p', norm_version(version)])
repo_cmd_runner.run(venv_cmd)
| pre-commit potentially uses the wrong `virtualenv` when building environments
It should use `sys.executable, '-m', 'virtualenv'` instead of `'virtualenv'`
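A tiny sketch of the difference — the second form is what the patch above switches to, pinning the `virtualenv` module to the interpreter pre-commit itself is running under instead of whatever `virtualenv` script happens to be first on `PATH` (assumes virtualenv is installed for that interpreter):
```
import subprocess
import sys

env_dir = 'py_env'  # illustrative target directory

# Before: resolved via PATH, which can belong to a different python entirely.
subprocess.call(['virtualenv', env_dir])

# After: guaranteed to use the same interpreter / site-packages as pre-commit.
subprocess.call([sys.executable, '-m', 'virtualenv', env_dir])
```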
| 2015-03-29T01:05:25 |
||
pre-commit/pre-commit | 226 | pre-commit__pre-commit-226 | [
"219"
] | d97ea30c4bb309a2877fed95323ac8c793c0679f | diff --git a/pre_commit/clientlib/validate_config.py b/pre_commit/clientlib/validate_config.py
--- a/pre_commit/clientlib/validate_config.py
+++ b/pre_commit/clientlib/validate_config.py
@@ -6,6 +6,13 @@
from pre_commit.errors import FatalError
+_LOCAL_HOOKS_MAGIC_REPO_STRING = 'local'
+
+
+def is_local_hooks(repo_entry):
+ return repo_entry['repo'] == _LOCAL_HOOKS_MAGIC_REPO_STRING
+
+
class InvalidConfigError(FatalError):
pass
@@ -53,7 +60,12 @@ def try_regex(repo, hook, value, field_name):
def validate_config_extra(config):
for repo in config:
- if 'sha' not in repo:
+ if is_local_hooks(repo):
+ if 'sha' in repo:
+ raise InvalidConfigError(
+ '"sha" property provided for local hooks'
+ )
+ elif 'sha' not in repo:
raise InvalidConfigError(
'Missing "sha" field for repository {0}'.format(repo['repo'])
)
diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -18,15 +18,6 @@
logger = logging.getLogger('pre_commit')
-class HookExecutor(object):
- def __init__(self, hook, invoker):
- self.hook = hook
- self._invoker = invoker
-
- def invoke(self, filenames):
- return self._invoker(self.hook, filenames)
-
-
def _get_skips(environ):
skips = environ.get('SKIP', '')
return set(skip.strip() for skip in skips.split(',') if skip.strip())
@@ -80,8 +71,7 @@ def get_filenames(args, include_expr, exclude_expr):
return getter(include_expr, exclude_expr)
-def _run_single_hook(hook_executor, args, write, skips=frozenset()):
- hook = hook_executor.hook
+def _run_single_hook(hook, repo, args, write, skips=frozenset()):
filenames = get_filenames(args, hook['files'], hook['exclude'])
if hook['id'] in skips:
_print_user_skipped(hook, write, args)
@@ -95,7 +85,7 @@ def _run_single_hook(hook_executor, args, write, skips=frozenset()):
write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))
sys.stdout.flush()
- retcode, stdout, stderr = hook_executor.invoke(filenames)
+ retcode, stdout, stderr = repo.run_hook(hook, filenames)
if retcode != hook['expected_return_value']:
retcode = 1
@@ -119,19 +109,19 @@ def _run_single_hook(hook_executor, args, write, skips=frozenset()):
return retcode
-def _run_hooks(hook_executors, args, write, environ):
+def _run_hooks(repo_hooks, args, write, environ):
"""Actually run the hooks."""
skips = _get_skips(environ)
retval = 0
- for hook_executor in hook_executors:
- retval |= _run_single_hook(hook_executor, args, write, skips)
+ for repo, hook in repo_hooks:
+ retval |= _run_single_hook(hook, repo, args, write, skips)
return retval
-def get_hook_executors(runner):
+def get_repo_hooks(runner):
for repo in runner.repositories:
- for _, repo_hook in repo.hooks:
- yield HookExecutor(repo_hook, repo.run_hook)
+ for _, hook in repo.hooks:
+ yield (repo, hook)
def _has_unmerged_paths(runner):
@@ -159,13 +149,13 @@ def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):
ctx = staged_files_only(runner.cmd_runner)
with ctx:
- hook_executors = list(get_hook_executors(runner))
+ repo_hooks = list(get_repo_hooks(runner))
if args.hook:
- hook_executors = [
- he for he in hook_executors
- if he.hook['id'] == args.hook
+ repo_hooks = [
+ (repo, hook) for repo, hook in repo_hooks
+ if hook['id'] == args.hook
]
- if not hook_executors:
+ if not repo_hooks:
write('No hook with id `{0}`\n'.format(args.hook))
return 1
- return _run_hooks(hook_executors, args, write, environ)
+ return _run_hooks(repo_hooks, args, write, environ)
diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -5,6 +5,10 @@
from cached_property import cached_property
+from pre_commit import git
+from pre_commit.clientlib.validate_config import is_local_hooks
+from pre_commit.clientlib.validate_manifest import MANIFEST_JSON_SCHEMA
+from pre_commit.jsonschema_extensions import apply_defaults
from pre_commit.languages.all import languages
from pre_commit.manifest import Manifest
from pre_commit.prefixed_command_runner import PrefixedCommandRunner
@@ -21,10 +25,13 @@ def __init__(self, repo_config, repo_path_getter):
@classmethod
def create(cls, config, store):
- repo_path_getter = store.get_repo_path_getter(
- config['repo'], config['sha']
- )
- return cls(config, repo_path_getter)
+ if is_local_hooks(config):
+ return LocalRepository(config)
+ else:
+ repo_path_getter = store.get_repo_path_getter(
+ config['repo'], config['sha']
+ )
+ return cls(config, repo_path_getter)
@cached_property
def repo_url(self):
@@ -111,3 +118,28 @@ def run_hook(self, hook, file_args):
return languages[hook['language']].run_hook(
self.cmd_runner, hook, file_args,
)
+
+
+class LocalRepository(Repository):
+ def __init__(self, repo_config, repo_path_getter=None):
+ repo_path_getter = None
+ super(LocalRepository, self).__init__(repo_config, repo_path_getter)
+
+ @cached_property
+ def hooks(self):
+ return tuple(
+ (hook['id'], apply_defaults(hook, MANIFEST_JSON_SCHEMA['items']))
+ for hook in self.repo_config['hooks']
+ )
+
+ @cached_property
+ def cmd_runner(self):
+ return PrefixedCommandRunner(git.get_root())
+
+ @cached_property
+ def sha(self):
+ raise NotImplementedError
+
+ @cached_property
+ def manifest(self):
+ raise NotImplementedError
| diff --git a/testing/fixtures.py b/testing/fixtures.py
--- a/testing/fixtures.py
+++ b/testing/fixtures.py
@@ -60,12 +60,16 @@ def write_config(directory, config):
config_file.write(ordered_dump([config], **C.YAML_DUMP_KWARGS))
-def make_consuming_repo(tmpdir_factory, repo_source):
- path = make_repo(tmpdir_factory, repo_source)
- config = make_config_from_repo(path)
- git_path = git_dir(tmpdir_factory)
+def add_config_to_repo(git_path, config):
write_config(git_path, config)
with cwd(git_path):
cmd_output('git', 'add', C.CONFIG_FILE)
cmd_output('git', 'commit', '-m', 'Add hooks config')
return git_path
+
+
+def make_consuming_repo(tmpdir_factory, repo_source):
+ path = make_repo(tmpdir_factory, repo_source)
+ config = make_config_from_repo(path)
+ git_path = git_dir(tmpdir_factory)
+ return add_config_to_repo(git_path, config)
diff --git a/tests/clientlib/validate_config_test.py b/tests/clientlib/validate_config_test.py
--- a/tests/clientlib/validate_config_test.py
+++ b/tests/clientlib/validate_config_test.py
@@ -1,5 +1,6 @@
from __future__ import unicode_literals
+import jsonschema
import pytest
from pre_commit.clientlib.validate_config import CONFIG_JSON_SCHEMA
@@ -25,7 +26,7 @@ def test_run(input, expected_output):
assert run(input) == expected_output
[email protected](('manifest_obj', 'expected'), (
[email protected](('config_obj', 'expected'), (
([], False),
(
[{
@@ -66,8 +67,8 @@ def test_run(input, expected_output):
False,
),
))
-def test_is_valid_according_to_schema(manifest_obj, expected):
- ret = is_valid_according_to_schema(manifest_obj, CONFIG_JSON_SCHEMA)
+def test_is_valid_according_to_schema(config_obj, expected):
+ ret = is_valid_according_to_schema(config_obj, CONFIG_JSON_SCHEMA)
assert ret is expected
@@ -121,3 +122,55 @@ def test_config_with_ok_exclude_regex_passes():
CONFIG_JSON_SCHEMA,
)
validate_config_extra(config)
+
+
[email protected]('config_obj', (
+ [{
+ 'repo': 'local',
+ 'sha': 'foo',
+ 'hooks': [{
+ 'id': 'do_not_commit',
+ 'name': 'Block if "DO NOT COMMIT" is found',
+ 'entry': 'DO NOT COMMIT',
+ 'language': 'pcre',
+ 'files': '^(.*)$',
+ }],
+ }],
+))
+def test_config_with_local_hooks_definition_fails(config_obj):
+ with pytest.raises((
+ jsonschema.exceptions.ValidationError, InvalidConfigError
+ )):
+ jsonschema.validate(config_obj, CONFIG_JSON_SCHEMA)
+ config = apply_defaults(config_obj, CONFIG_JSON_SCHEMA)
+ validate_config_extra(config)
+
+
[email protected]('config_obj', (
+ [{
+ 'repo': 'local',
+ 'hooks': [{
+ 'id': 'arg-per-line',
+ 'name': 'Args per line hook',
+ 'entry': 'bin/hook.sh',
+ 'language': 'script',
+ 'files': '',
+ 'args': ['hello', 'world'],
+ }],
+ }],
+ [{
+ 'repo': 'local',
+ 'hooks': [{
+ 'id': 'arg-per-line',
+ 'name': 'Args per line hook',
+ 'entry': 'bin/hook.sh',
+ 'language': 'script',
+ 'files': '',
+ 'args': ['hello', 'world'],
+ }]
+ }],
+))
+def test_config_with_local_hooks_definition_passes(config_obj):
+ jsonschema.validate(config_obj, CONFIG_JSON_SCHEMA)
+ config = apply_defaults(config_obj, CONFIG_JSON_SCHEMA)
+ validate_config_extra(config)
diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -14,10 +14,12 @@
from pre_commit.commands.run import _has_unmerged_paths
from pre_commit.commands.run import get_changed_files
from pre_commit.commands.run import run
+from pre_commit.ordereddict import OrderedDict
from pre_commit.runner import Runner
from pre_commit.util import cmd_output
from pre_commit.util import cwd
from testing.auto_namedtuple import auto_namedtuple
+from testing.fixtures import add_config_to_repo
from testing.fixtures import make_consuming_repo
@@ -81,7 +83,7 @@ def _test_run(repo, options, expected_outputs, expected_ret, stage):
stage_a_file()
args = _get_opts(**options)
ret, printed = _do_run(repo, args)
- assert ret == expected_ret
+ assert ret == expected_ret, (ret, expected_ret, printed)
for expected_output_part in expected_outputs:
assert expected_output_part in printed
@@ -313,9 +315,7 @@ def test_lots_of_files(mock_out_store_directory, tmpdir_factory):
git_path = make_consuming_repo(tmpdir_factory, 'python_hooks_repo')
with cwd(git_path):
# Override files so we run against them
- with io.open(
- '.pre-commit-config.yaml', 'a+',
- ) as config_file:
+ with io.open('.pre-commit-config.yaml', 'a+') as config_file:
config_file.write(' files: ""\n')
# Write a crap ton of files
@@ -334,3 +334,66 @@ def test_lots_of_files(mock_out_store_directory, tmpdir_factory):
stderr=subprocess.STDOUT,
env=env,
)
+
+
+def test_local_hook_passes(
+ repo_with_passing_hook, mock_out_store_directory,
+):
+ config = OrderedDict((
+ ('repo', 'local'),
+ ('hooks', (OrderedDict((
+ ('id', 'pylint'),
+ ('name', 'PyLint'),
+ ('entry', 'python -m pylint.__main__'),
+ ('language', 'system'),
+ ('files', r'\.py$'),
+ )), OrderedDict((
+ ('id', 'do_not_commit'),
+ ('name', 'Block if "DO NOT COMMIT" is found'),
+ ('entry', 'DO NOT COMMIT'),
+ ('language', 'pcre'),
+ ('files', '^(.*)$'),
+ ))))
+ ))
+ add_config_to_repo(repo_with_passing_hook, config)
+
+ with io.open('dummy.py', 'w') as staged_file:
+ staged_file.write('"""TODO: something"""\n')
+ cmd_output('git', 'add', 'dummy.py')
+
+ _test_run(
+ repo_with_passing_hook,
+ options={},
+ expected_outputs=[''],
+ expected_ret=0,
+ stage=False
+ )
+
+
+def test_local_hook_fails(
+ repo_with_passing_hook, mock_out_store_directory,
+):
+ config = OrderedDict((
+ ('repo', 'local'),
+ ('hooks', [OrderedDict((
+ ('id', 'no-todo'),
+ ('name', 'No TODO'),
+ ('entry', 'grep -iI todo'),
+ ('expected_return_value', 1),
+ ('language', 'system'),
+ ('files', ''),
+ ))])
+ ))
+ add_config_to_repo(repo_with_passing_hook, config)
+
+ with io.open('dummy.py', 'w') as staged_file:
+ staged_file.write('"""TODO: something"""\n')
+ cmd_output('git', 'add', 'dummy.py')
+
+ _test_run(
+ repo_with_passing_hook,
+ options={},
+ expected_outputs=[''],
+ expected_ret=1,
+ stage=False
+ )
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -12,6 +12,7 @@
from pre_commit.clientlib.validate_config import validate_config_extra
from pre_commit.jsonschema_extensions import apply_defaults
from pre_commit.languages.python import PythonEnv
+from pre_commit.ordereddict import OrderedDict
from pre_commit.repository import Repository
from pre_commit.util import cmd_output
from pre_commit.util import cwd
@@ -377,3 +378,22 @@ def test_tags_on_repositories(in_tmpdir, tmpdir_factory, store):
ret = repo_2.run_hook(repo_2.hooks[0][1], ['bar'])
assert ret[0] == 0
assert ret[1] == 'bar\nHello World\n'
+
+
+def test_local_repository():
+ config = OrderedDict((
+ ('repo', 'local'),
+ ('hooks', [OrderedDict((
+ ('id', 'do_not_commit'),
+ ('name', 'Block if "DO NOT COMMIT" is found'),
+ ('entry', 'DO NOT COMMIT'),
+ ('language', 'pcre'),
+ ('files', '^(.*)$'),
+ ))])
+ ))
+ local_repo = Repository.create(config, 'dummy')
+ with pytest.raises(NotImplementedError):
+ local_repo.sha
+ with pytest.raises(NotImplementedError):
+ local_repo.manifest
+ assert len(local_repo.hooks) == 1
diff --git a/tests/runner_test.py b/tests/runner_test.py
--- a/tests/runner_test.py
+++ b/tests/runner_test.py
@@ -5,8 +5,10 @@
import os.path
import pre_commit.constants as C
+from pre_commit.ordereddict import OrderedDict
from pre_commit.runner import Runner
from pre_commit.util import cwd
+from testing.fixtures import add_config_to_repo
from testing.fixtures import git_dir
from testing.fixtures import make_consuming_repo
@@ -52,6 +54,31 @@ def test_repositories(tmpdir_factory, mock_out_store_directory):
assert len(runner.repositories) == 1
+def test_local_hooks(tmpdir_factory, mock_out_store_directory):
+ config = OrderedDict((
+ ('repo', 'local'),
+ ('hooks', (OrderedDict((
+ ('id', 'arg-per-line'),
+ ('name', 'Args per line hook'),
+ ('entry', 'bin/hook.sh'),
+ ('language', 'script'),
+ ('files', ''),
+ ('args', ['hello', 'world']),
+ )), OrderedDict((
+ ('id', 'do_not_commit'),
+ ('name', 'Block if "DO NOT COMMIT" is found'),
+ ('entry', 'DO NOT COMMIT'),
+ ('language', 'pcre'),
+ ('files', '^(.*)$'),
+ ))))
+ ))
+ git_path = git_dir(tmpdir_factory)
+ add_config_to_repo(git_path, config)
+ runner = Runner(git_path)
+ assert len(runner.repositories) == 1
+ assert len(runner.repositories[0].hooks) == 2
+
+
def test_pre_commit_path():
runner = Runner(os.path.join('foo', 'bar'))
expected_path = os.path.join('foo', 'bar', '.git', 'hooks', 'pre-commit')
| Add support for pcre / scripts / system hooks definition in .pre-commit-config.yaml
Everything is in the title :)
_Rationale:_ a `pre-commit` user shouldn't have to set up a git repository to configure a pre-commit check that can be defined in 5 lines or less.
_Example:_ taken from the [do_not_commit](https://github.com/pricematch/pricematch-pre-commit-hooks/blob/master/hooks.yaml) hook:
```
- id: do_not_commit
name: Block if "DO NOT COMMIT" is found
entry: DO NOT COMMIT
language: pcre
files: ^(.*)$
```
_Suggested solutions:_
1. Allow for pcre / scripts / system hooks definition in _.pre-commit-config.yaml_
2. Allow for the `repo` field in _.pre-commit-config.yaml_ to point to a subfolder (of the git repo configured with `pre-commit`) that contains a _hooks.yaml_.
This currently crashes because `pre-commit` expects to find a git repository in the folder pointed to by `repo`.
| I think this is along similar lines of this: https://github.com/pre-commit/pre-commit/issues/173
And is kind-of possible: https://github.com/Yelp/venv-update/blob/a5960acab7101a1e70c57945b2038fef9d005aed/.pre-commit-config.yaml#L15-L22
I'd certainly like to make this easier though :)
Yes ! The `Yelp/venv-update` example is exactly what I wanted !
If you support this, maybe mention it in the docs ?
Although maybe allowing a `repo: local` value would be cleaner than having to point it to an existing dummy github repo.
Yeah that way _works_ currently, I wouldn't call it supported yet since it's kinda clunky and I want a more straightforward way of doing that. That method is unlikely to break though. I imagine the actual implementation will be similar
Ok. Then you can resolve this ticket if you want to track this feature elsewhere.
And thank you for your detailed answers !
I went on and hacked around it: what do you think of the solution described in #223 ?
I'm going to clarify here since I think I'm a bit confused :)
I'd like to make @bukzor's config work when it changes from:
``` yaml
- repo: git://github.com/bukzor/pre-commit-system-hook.git
sha: 64a3e3f0d11f74ccd0498ea3149c8e177dd9989c
hooks:
- id: system
name: PyLint
entry: python -m pylint.__main__
language: system
files: \.py$
```
to
``` yaml
- repo: pre-commit-locally-configured
hooks:
- id: pylint
name: PyLint
entry: python -m pylint.__main__
language: system
files: \.py$
```
I think (but I might be a bit off!) that all we would need to change to support this is:
- Change the config schema to allow for missing sha in the case the repo is our special value
- In the case that our repo is the special value, the repository is a special list which contains "all hooks" (that is to say that anything we configure in our config will match a hook in this special repo)
- Allow the current code which merges the config dict on top of the manifest dict to continue to work (might need it in another place as well)
Does this sound right? (mostly wanted to just get a checklist together so it's clear what the plan is here :D)
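A rough sketch of that special-repo idea, purely illustrative (the class name is made up and this is not the eventual implementation): the hooks come straight from the config entry, so there is no clone and no manifest to merge against.

``` python
# Illustrative only -- hypothetical names, not pre-commit's API.
class LocalRepoSketch(object):
    """A 'repository' whose hooks are defined entirely in the config."""

    def __init__(self, repo_config):
        self.repo_config = repo_config

    @property
    def hooks(self):
        # Every hook configured under the special repo value "exists";
        # the config entry itself plays the role of the manifest.
        return tuple(
            (hook['id'], dict(hook)) for hook in self.repo_config['hooks']
        )
```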
Sounds alright, + adding some tests :)
I won't have time to craft the commit today, but I plan to submit the pull request over the week-end.
Repo: null makes sense to me.
Just don't validate the sha for language: system.
Also: can we put system-style hooks at the top level? It doesn't actually
make sense to nest them under a repo at all.
Just ideas. If it makes things a mess, disregard.
--phone is hard.
On May 7, 2015 6:17 PM, "Anthony Sottile" [email protected] wrote:
> I'm going to clarify here since I think I'm a bit confused :)
>
> I'd like to make @bukzor https://github.com/bukzor's config work when
> it changes from:
> - repo: git://github.com/bukzor/pre-commit-system-hook.git
> sha: 64a3e3f0d11f74ccd0498ea3149c8e177dd9989c
> hooks:
> - id: system
> name: PyLint
> entry: python -m pylint.**main**
> language: system
> files: .py$
>
> to
> - repo: pre-commit-locally-configured
> hooks:
> - id: pylint
> name: PyLint
> entry: python -m pylint.**main**
> language: system
> files: .py$
>
> I think (but I might be a bit off!) that all we would need to change to
> support this is:
> - Change the config schema to allow for missing sha in the case the
> repo is our special value
> - In the case that our repo is the special value, the repository is a
> special list which contains "all hooks" (that is to say that anything we
> configure in our config will match a hook in this special repo)
> - Allow the current code which merges the config dict on top of the
> manifest dict to continue to work (might need it in another place as well)
>
> Does this sound right? (mostly wanted to just get a checklist together so
> it's clear what the plan is here :D)
>
> Repo: null makes sense to me.
Sounds nice & simple, but null / None is often a design smell: it is often better to use an "Optional" construct, a NullObject or the SpecialCase pattern. I'll stick to the magic string constant.
> can we put system-style hooks at the top level?
I don't understand what you meant here. I completely agree it doesn't make sense to put them in a repo, but why put them at the root of the YAML config file hierarchy? I don't quite see what change you are suggesting to the plan @asottile detailed. Could you explain further?
> Also: can we put system-style hooks at the top level?
@bukzor this would entail a breaking change to the config file, I don't really want to do that at this time.
EDIT: null also seems fine to me as well
Is fine. It probably doesn't make sense for the list to be heterogeneous.
--phone is hard.
On May 8, 2015 9:54 AM, "Anthony Sottile" [email protected] wrote:
> Also: can we put system-style hooks at the top level?
> @bukzor https://github.com/bukzor this would entail a breaking change
> to the config file, I don't really want to do that at this time.
>
| 2015-05-10T16:40:06 |
pre-commit/pre-commit | 231 | pre-commit__pre-commit-231 | [
"227"
] | 9515ca06378d74f1e4f8013db2b5230c1f15edaa | diff --git a/pre_commit/clientlib/validate_config.py b/pre_commit/clientlib/validate_config.py
--- a/pre_commit/clientlib/validate_config.py
+++ b/pre_commit/clientlib/validate_config.py
@@ -33,7 +33,7 @@ class InvalidConfigError(FatalError):
'properties': {
'id': {'type': 'string'},
'files': {'type': 'string'},
- 'exclude': {'type': 'string', 'default': '^$'},
+ 'exclude': {'type': 'string'},
'language_version': {'type': 'string'},
'args': {
'type': 'array',
@@ -71,7 +71,7 @@ def validate_config_extra(config):
)
for hook in repo['hooks']:
try_regex(repo, hook['id'], hook.get('files', ''), 'files')
- try_regex(repo, hook['id'], hook['exclude'], 'exclude')
+ try_regex(repo, hook['id'], hook.get('exclude', ''), 'exclude')
load_config = get_validator(
diff --git a/pre_commit/clientlib/validate_manifest.py b/pre_commit/clientlib/validate_manifest.py
--- a/pre_commit/clientlib/validate_manifest.py
+++ b/pre_commit/clientlib/validate_manifest.py
@@ -20,6 +20,7 @@ class InvalidManifestError(ValueError):
'name': {'type': 'string'},
'description': {'type': 'string', 'default': ''},
'entry': {'type': 'string'},
+ 'exclude': {'type': 'string', 'default': '^$'},
'language': {'type': 'string'},
'language_version': {'type': 'string', 'default': 'default'},
'files': {'type': 'string'},
@@ -52,8 +53,14 @@ def validate_files(hook_config):
if not is_regex_valid(hook_config['files']):
raise InvalidManifestError(
'Invalid files regex at {0}: {1}'.format(
- hook_config['id'],
- hook_config['files'],
+ hook_config['id'], hook_config['files'],
+ )
+ )
+
+ if not is_regex_valid(hook_config.get('exclude', '')):
+ raise InvalidManifestError(
+ 'Invalid exclude regex at {0}: {1}'.format(
+ hook_config['id'], hook_config['exclude'],
)
)
| diff --git a/tests/clientlib/validate_config_test.py b/tests/clientlib/validate_config_test.py
--- a/tests/clientlib/validate_config_test.py
+++ b/tests/clientlib/validate_config_test.py
@@ -174,3 +174,23 @@ def test_config_with_local_hooks_definition_passes(config_obj):
jsonschema.validate(config_obj, CONFIG_JSON_SCHEMA)
config = apply_defaults(config_obj, CONFIG_JSON_SCHEMA)
validate_config_extra(config)
+
+
+def test_does_not_contain_defaults():
+ """Due to the way our merging works, if this schema has any defaults they
+ will clobber potentially useful values in the backing manifest. #227
+ """
+ to_process = [(CONFIG_JSON_SCHEMA, ())]
+ while to_process:
+ schema, route = to_process.pop()
+ # Check this value
+ if isinstance(schema, dict):
+ if 'default' in schema:
+ raise AssertionError(
+ 'Unexpected default in schema at {0}'.format(
+ ' => '.join(route),
+ )
+ )
+
+ for key, value in schema.items():
+ to_process.append((value, route + (key,)))
diff --git a/tests/clientlib/validate_manifest_test.py b/tests/clientlib/validate_manifest_test.py
--- a/tests/clientlib/validate_manifest_test.py
+++ b/tests/clientlib/validate_manifest_test.py
@@ -46,6 +46,9 @@ def test_additional_manifest_check_passing(obj):
[{'id': 'a', 'language': 'not a language', 'files': ''}],
[{'id': 'a', 'language': 'python3', 'files': ''}],
[{'id': 'a', 'language': 'python', 'files': 'invalid regex('}],
+ [{'id': 'a', 'language': 'not a language', 'files': ''}],
+ [{'id': 'a', 'language': 'python3', 'files': ''}],
+ [{'id': 'a', 'language': 'python', 'files': '', 'exclude': '('}],
),
)
def test_additional_manifest_failing(obj):
diff --git a/tests/manifest_test.py b/tests/manifest_test.py
--- a/tests/manifest_test.py
+++ b/tests/manifest_test.py
@@ -22,6 +22,7 @@ def test_manifest_contents(manifest):
'args': [],
'description': '',
'entry': 'bin/hook.sh',
+ 'exclude': '^$',
'expected_return_value': 0,
'files': '',
'id': 'bash_hook',
@@ -36,6 +37,7 @@ def test_hooks(manifest):
'args': [],
'description': '',
'entry': 'bin/hook.sh',
+ 'exclude': '^$',
'expected_return_value': 0,
'files': '',
'id': 'bash_hook',
| Bug: base manifest value for 'exclude' is always ignored
I stumbled upon this bug while working on #226: the culprit is [`Repository.hooks`](https://github.com/pre-commit/pre-commit/blob/master/pre_commit/repository.py#L48).
A quick fix for this would be to simply remove the default value from `pre_commit/clientlib/validate_config.py`, but the root cause is that any default value defined for a field in this file will make the corresponding manifest field useless.
Basically here is what happens in `Repository.hooks`:
- all the hooks defined in the current repository are enumerated
- at this stage, a `hook` is a dict closely matching the YAML config file content, **plus** default values for fields not defined in the YAML but having a JSON schema 'default'
- when doing the dict merge, **every** (key, value) pair in `hook` overrides the corresponding manifest entry. This includes a default config value like `exclude: '^$'` overriding a base manifest value like `exclude: '.bak$'`
Hence I suggest either adding a test ensuring there will never be any 'default' defined in `CONFIG_JSON_SCHEMA`, or improving the merge logic.
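A minimal illustration of the clobbering described above (not the actual pre-commit code; the hook id and patterns are invented):

``` python
# What the manifest author wrote in hooks.yaml:
manifest_hook = {'id': 'my-hook', 'exclude': r'\.bak$'}
# What the config side looks like after JSON schema defaults are applied,
# even though the user never wrote `exclude` at all:
config_hook = {'id': 'my-hook', 'exclude': '^$'}

# The config dict is merged on top of the manifest dict...
merged = dict(manifest_hook, **config_hook)
print(merged['exclude'])  # '^$' -- the useful manifest value is lost
```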
| Ah the fix in #185 makes more sense now! I just didn't connect 2 and 2 together.
| 2015-05-18T19:48:14 |
pre-commit/pre-commit | 233 | pre-commit__pre-commit-233 | [
"207"
] | 20c546a7daef67750c4f2bf099a23f8b9219e32a | diff --git a/pre_commit/five.py b/pre_commit/five.py
--- a/pre_commit/five.py
+++ b/pre_commit/five.py
@@ -20,3 +20,11 @@ def n(s):
return s
else:
return s.decode('UTF-8')
+
+
+def to_text(s):
+ return s if isinstance(s, text) else s.decode('UTF-8')
+
+
+def to_bytes(s):
+ return s if isinstance(s, bytes) else s.encode('UTF-8')
diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -7,6 +7,7 @@
import pkg_resources
from pre_commit import color
+from pre_commit import five
from pre_commit.commands.autoupdate import autoupdate
from pre_commit.commands.clean import clean
from pre_commit.commands.install_uninstall import install
@@ -25,6 +26,7 @@
def main(argv=None):
argv = argv if argv is not None else sys.argv[1:]
+ argv = [five.to_text(arg) for arg in argv]
parser = argparse.ArgumentParser()
# http://stackoverflow.com/a/8521644/812183
diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -77,12 +77,8 @@ def get_hook_message(
)
-def sys_stdout_write_wrapper(s, stream=sys.stdout):
- """Python 2.6 chokes on unicode being passed to sys.stdout.write.
+stdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)
- This is an adapter because PY2 is ok with bytes and PY3 requires text.
- """
- assert type(s) is five.text
- if five.PY2: # pragma: no cover (PY2)
- s = s.encode('UTF-8')
- stream.write(s)
+
+def sys_stdout_write_wrapper(s, stream=stdout_byte_stream):
+ stream.write(five.to_bytes(s))
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -105,7 +105,7 @@ def _get_commit_output(
cmd_output('git', 'add', touch_file)
# Don't want to write to home directory
home = home or tmpdir_factory.get()
- env = dict(env_base, **{'PRE_COMMIT_HOME': home})
+ env = dict(env_base, PRE_COMMIT_HOME=home)
return cmd_output(
'git', 'commit', '-m', 'Commit!', '--allow-empty',
# git commit puts pre-commit to stderr
@@ -414,7 +414,7 @@ def test_installed_from_venv(tmpdir_factory):
def _get_push_output(tmpdir_factory):
# Don't want to write to home directory
home = tmpdir_factory.get()
- env = dict(os.environ, **{'PRE_COMMIT_HOME': home})
+ env = dict(os.environ, PRE_COMMIT_HOME=home)
return cmd_output(
'git', 'push', 'origin', 'HEAD:new_branch',
# git commit puts pre-commit to stderr
diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -5,6 +5,7 @@
import os
import os.path
import subprocess
+import sys
import mock
import pytest
@@ -274,6 +275,22 @@ def test_multiple_hooks_same_id(
assert output.count('Bash hook') == 2
+def test_non_ascii_hook_id(
+ repo_with_passing_hook, mock_out_store_directory, tmpdir_factory,
+):
+ with cwd(repo_with_passing_hook):
+ install(Runner(repo_with_passing_hook))
+ # Don't want to write to home directory
+ env = dict(os.environ, PRE_COMMIT_HOME=tmpdir_factory.get())
+ _, stdout, _ = cmd_output(
+ sys.executable, '-m', 'pre_commit.main', 'run', '☃',
+ env=env, retcode=None,
+ )
+ assert 'UnicodeDecodeError' not in stdout
+ # Doesn't actually happen, but a reasonable assertion
+ assert 'UnicodeEncodeError' not in stdout
+
+
def test_stdout_write_bug_py26(
repo_with_failing_hook, mock_out_store_directory, tmpdir_factory,
):
@@ -289,7 +306,7 @@ def test_stdout_write_bug_py26(
install(Runner(repo_with_failing_hook))
# Don't want to write to home directory
- env = dict(os.environ, **{'PRE_COMMIT_HOME': tmpdir_factory.get()})
+ env = dict(os.environ, PRE_COMMIT_HOME=tmpdir_factory.get())
# Have to use subprocess because pytest monkeypatches sys.stdout
_, stdout, _ = cmd_output(
'git', 'commit', '-m', 'Commit!',
@@ -329,7 +346,7 @@ def test_lots_of_files(mock_out_store_directory, tmpdir_factory):
install(Runner(git_path))
# Don't want to write to home directory
- env = dict(os.environ, **{'PRE_COMMIT_HOME': tmpdir_factory.get()})
+ env = dict(os.environ, PRE_COMMIT_HOME=tmpdir_factory.get())
cmd_output(
'git', 'commit', '-m', 'Commit!',
# git commit puts pre-commit to stderr
| Failures when hook ids are non-ascii
```
$ pre-commit run ☃
An unexpected error has occurred: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
Check the log at ~/.pre-commit/pre-commit.log
$ cat ~/.pre-commit/pre-commit.log
An unexpected error has occurred: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
Traceback (most recent call last):
File "/home/asottile/workspace/pre-commit/pre_commit/error_handler.py", line 34, in error_handler
yield
File "/home/asottile/workspace/pre-commit/pre_commit/main.py", line 129, in main
return run(runner, args)
File "/home/asottile/workspace/pre-commit/pre_commit/commands/run.py", line 163, in run
return _run_hook(runner, args, write=write)
File "/home/asottile/workspace/pre-commit/pre_commit/commands/run.py", line 133, in _run_hook
write('No hook with id `{0}`\n'.format(hook_id))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
```
| Same error here, after running a `mvn test` invoking JUnit tests.
I don't have time to write a minimal test case right now sadly.
| 2015-05-21T14:50:50 |
pre-commit/pre-commit | 235 | pre-commit__pre-commit-235 | [
"234"
] | b4bc5e47423635e187d50d8730584d2c8ff06772 | diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -48,6 +48,9 @@ def install(runner, overwrite=False, hooks=False, hook_type='pre-commit'):
hook_path = runner.get_hook_path(hook_type)
legacy_path = hook_path + '.legacy'
+ if not os.path.exists(os.path.dirname(hook_path)):
+ os.makedirs(os.path.dirname(hook_path))
+
# If we have an existing hook, move it to pre-commit.legacy
if (
os.path.exists(hook_path) and
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -5,6 +5,7 @@
import os
import os.path
import re
+import shutil
import subprocess
import sys
@@ -78,6 +79,15 @@ def test_install_pre_commit(tmpdir_factory):
assert pre_push_contents == expected_contents
+def test_install_hooks_directory_not_present(tmpdir_factory):
+ path = git_dir(tmpdir_factory)
+ # Simulate some git clients which don't make .git/hooks #234
+ shutil.rmtree(os.path.join(path, '.git', 'hooks'))
+ runner = Runner(path)
+ install(runner)
+ assert os.path.exists(runner.pre_commit_path)
+
+
def test_uninstall_does_not_blow_up_when_not_there(tmpdir_factory):
path = git_dir(tmpdir_factory)
runner = Runner(path)
| Some versions of git don't create .git/hooks directory
Noticed here: https://github.com/victorlin/bugbuzz-python/pull/1#issuecomment-104971132
| 2015-05-24T03:03:11 |
|
pre-commit/pre-commit | 239 | pre-commit__pre-commit-239 | [
"238"
] | 1c46446427ab0dfa6293221426b855420533ef8d | diff --git a/pre_commit/commands/autoupdate.py b/pre_commit/commands/autoupdate.py
--- a/pre_commit/commands/autoupdate.py
+++ b/pre_commit/commands/autoupdate.py
@@ -8,6 +8,7 @@
import pre_commit.constants as C
from pre_commit.clientlib.validate_config import CONFIG_JSON_SCHEMA
+from pre_commit.clientlib.validate_config import is_local_hooks
from pre_commit.clientlib.validate_config import load_config
from pre_commit.jsonschema_extensions import remove_defaults
from pre_commit.ordereddict import OrderedDict
@@ -67,6 +68,8 @@ def autoupdate(runner):
)
for repo_config in input_configs:
+ if is_local_hooks(repo_config):
+ continue
sys.stdout.write('Updating {0}...'.format(repo_config['repo']))
sys.stdout.flush()
try:
diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -125,9 +125,8 @@ def run_hook(self, hook, file_args):
class LocalRepository(Repository):
- def __init__(self, repo_config, repo_path_getter=None):
- repo_path_getter = None
- super(LocalRepository, self).__init__(repo_config, repo_path_getter)
+ def __init__(self, repo_config):
+ super(LocalRepository, self).__init__(repo_config, None)
@cached_property
def hooks(self):
| diff --git a/testing/fixtures.py b/testing/fixtures.py
--- a/testing/fixtures.py
+++ b/testing/fixtures.py
@@ -35,6 +35,19 @@ def make_repo(tmpdir_factory, repo_source):
return path
+def config_with_local_hooks():
+ return OrderedDict((
+ ('repo', 'local'),
+ ('hooks', [OrderedDict((
+ ('id', 'do_not_commit'),
+ ('name', 'Block if "DO NOT COMMIT" is found'),
+ ('entry', 'DO NOT COMMIT'),
+ ('language', 'pcre'),
+ ('files', '^(.*)$'),
+ ))])
+ ))
+
+
def make_config_from_repo(repo_path, sha=None, hooks=None, check=True):
manifest = load_manifest(os.path.join(repo_path, C.MANIFEST_FILE))
config = OrderedDict((
diff --git a/tests/commands/autoupdate_test.py b/tests/commands/autoupdate_test.py
--- a/tests/commands/autoupdate_test.py
+++ b/tests/commands/autoupdate_test.py
@@ -13,6 +13,9 @@
from pre_commit.util import cmd_output
from pre_commit.util import cwd
from testing.auto_namedtuple import auto_namedtuple
+from testing.fixtures import add_config_to_repo
+from testing.fixtures import config_with_local_hooks
+from testing.fixtures import git_dir
from testing.fixtures import make_config_from_repo
from testing.fixtures import make_repo
from testing.fixtures import write_config
@@ -137,3 +140,10 @@ def test_autoupdate_hook_disappearing_repo(
after = open(C.CONFIG_FILE).read()
assert ret == 1
assert before == after
+
+
+def test_autoupdate_local_hooks(tmpdir_factory):
+ git_path = git_dir(tmpdir_factory)
+ config = config_with_local_hooks()
+ path = add_config_to_repo(git_path, config)
+ assert autoupdate(Runner(path)) == 0
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -12,10 +12,10 @@
from pre_commit.clientlib.validate_config import validate_config_extra
from pre_commit.jsonschema_extensions import apply_defaults
from pre_commit.languages.python import PythonEnv
-from pre_commit.ordereddict import OrderedDict
from pre_commit.repository import Repository
from pre_commit.util import cmd_output
from pre_commit.util import cwd
+from testing.fixtures import config_with_local_hooks
from testing.fixtures import git_dir
from testing.fixtures import make_config_from_repo
from testing.fixtures import make_repo
@@ -404,16 +404,7 @@ def test_tags_on_repositories(in_tmpdir, tmpdir_factory, store):
def test_local_repository():
- config = OrderedDict((
- ('repo', 'local'),
- ('hooks', [OrderedDict((
- ('id', 'do_not_commit'),
- ('name', 'Block if "DO NOT COMMIT" is found'),
- ('entry', 'DO NOT COMMIT'),
- ('language', 'pcre'),
- ('files', '^(.*)$'),
- ))])
- ))
+ config = config_with_local_hooks()
local_repo = Repository.create(config, 'dummy')
with pytest.raises(NotImplementedError):
local_repo.sha
| pre-commit autoupdate fails on `local` hooks repos
```
$ pre-commit autoupdate
Updating [email protected]:pre-commit/pre-commit-hooks...updating 9ce45609a92f648c87b42207410386fd69a5d1e5 -> cf550fcab3f12015f8676b8278b30e1a5bc10e70.
Updating [email protected]:pre-commit/pre-commit...updating 4352d45451296934bc17494073b82bcacca3205c -> 1c46446427ab0dfa6293221426b855420533ef8d.
Updating [email protected]:asottile/reorder_python_imports...updating aeda21eb7df6af8c9f6cd990abb086375c71c953 -> 3d86483455ab5bd06cc1069fdd5ac57be5463f10.
Updating local...An unexpected error has occurred: AttributeError: 'NoneType' object has no attribute 'repo_path'
Check the log at ~/.pre-commit/pre-commit.log
(venv-pre_commit)asottile@work:~/workspace/pre-commit$ cat ~/.pre-commit/pre-commit.log
An unexpected error has occurred: AttributeError: 'NoneType' object has no attribute 'repo_path'
Traceback (most recent call last):
File "/home/asottile/workspace/pre-commit/pre_commit/error_handler.py", line 34, in error_handler
yield
File "/home/asottile/workspace/pre-commit/pre_commit/main.py", line 142, in main
return autoupdate(runner)
File "/home/asottile/workspace/pre-commit/pre_commit/commands/autoupdate.py", line 73, in autoupdate
new_repo_config = _update_repository(repo_config, runner)
File "/home/asottile/workspace/pre-commit/pre_commit/commands/autoupdate.py", line 33, in _update_repository
with cwd(repo.repo_path_getter.repo_path):
AttributeError: 'NoneType' object has no attribute 'repo_path'
(venv-pre_commit)asottile@work:~/workspace/pre-commit$ git diff
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 397ee72..20393a7 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -20,3 +20,10 @@
sha: aeda21eb7df6af8c9f6cd990abb086375c71c953
hooks:
- id: reorder-python-imports
+- repo: local
+ hooks:
+ - id: herp
+ name: Herp
+ entry: echo derp
+ language: system
+ files: ^$
```
| cc @Lucas-C
Damn. I'll get a look asap.
| 2015-06-02T19:44:23 |
pre-commit/pre-commit | 244 | pre-commit__pre-commit-244 | [
"242"
] | 25ebea63ea5e22482cc568c04bf840deb7a8e22e | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -139,6 +139,7 @@ def _has_unstaged_config(runner):
def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):
+ no_stash = args.no_stash or args.all_files or bool(args.files)
# Set up our logging handler
logger.addHandler(LoggingHandler(args.color, write=write))
logger.setLevel(logging.INFO)
@@ -150,7 +151,7 @@ def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):
if bool(args.source) != bool(args.origin):
logger.error('Specify both --origin and --source.')
return 1
- if _has_unstaged_config(runner) and not args.no_stash:
+ if _has_unstaged_config(runner) and not no_stash:
if args.allow_unstaged_config:
logger.warn(
'You have an unstaged config file and have specified the '
@@ -166,8 +167,7 @@ def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):
)
return 1
- # Don't stash if specified or files are specified
- if args.no_stash or args.all_files or args.files:
+ if no_stash:
ctx = noop_context()
else:
ctx = staged_files_only(runner.cmd_runner)
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -433,14 +433,17 @@ def test_allow_unstaged_config_option(
assert ret == 0
-def test_no_allow_unstaged_config_option(
- repo_with_passing_hook, mock_out_store_directory,
-):
- with cwd(repo_with_passing_hook):
+def modify_config(path):
+ with cwd(path):
with io.open('.pre-commit-config.yaml', 'a+') as config_file:
# writing a newline should be relatively harmless to get a change
config_file.write('\n')
+
+def test_no_allow_unstaged_config_option(
+ repo_with_passing_hook, mock_out_store_directory,
+):
+ modify_config(repo_with_passing_hook)
args = _get_opts(allow_unstaged_config=False)
ret, printed = _do_run(repo_with_passing_hook, args)
assert 'Your .pre-commit-config.yaml is unstaged.' in printed
@@ -450,11 +453,25 @@ def test_no_allow_unstaged_config_option(
def test_no_stash_suppresses_allow_unstaged_config_option(
repo_with_passing_hook, mock_out_store_directory,
):
- with cwd(repo_with_passing_hook):
- with io.open('.pre-commit-config.yaml', 'a+') as config_file:
- # writing a newline should be relatively harmless to get a change
- config_file.write('\n')
-
+ modify_config(repo_with_passing_hook)
args = _get_opts(allow_unstaged_config=False, no_stash=True)
ret, printed = _do_run(repo_with_passing_hook, args)
assert 'Your .pre-commit-config.yaml is unstaged.' not in printed
+
+
+def test_all_files_suppresses_allow_unstaged_config_option(
+ repo_with_passing_hook, mock_out_store_directory,
+):
+ modify_config(repo_with_passing_hook)
+ args = _get_opts(all_files=True)
+ ret, printed = _do_run(repo_with_passing_hook, args)
+ assert 'Your .pre-commit-config.yaml is unstaged.' not in printed
+
+
+def test_files_suppresses_allow_unstaged_config_option(
+ repo_with_passing_hook, mock_out_store_directory,
+):
+ modify_config(repo_with_passing_hook)
+ args = _get_opts(files=['.pre-commit-config.yaml'])
+ ret, printed = _do_run(repo_with_passing_hook, args)
+ assert 'Your .pre-commit-config.yaml is unstaged.' not in printed
| Unstaged check should not complain when running --all-files
```
$ pre-commit run --all-files
[ERROR] Your .pre-commit-config.yaml is unstaged.
`git add .pre-commit-config.yaml` to fix this.
Run pre-commit with --allow-unstaged-config to silence this.
```
| 2015-06-15T18:42:48 |
|
pre-commit/pre-commit | 286 | pre-commit__pre-commit-286 | [
"232"
] | 3472f2b3ce972de7b440a387a81c76f67d5539cd | diff --git a/pre_commit/clientlib/validate_manifest.py b/pre_commit/clientlib/validate_manifest.py
--- a/pre_commit/clientlib/validate_manifest.py
+++ b/pre_commit/clientlib/validate_manifest.py
@@ -24,7 +24,6 @@ class InvalidManifestError(ValueError):
'language': {'type': 'string'},
'language_version': {'type': 'string', 'default': 'default'},
'files': {'type': 'string'},
- 'expected_return_value': {'type': 'number', 'default': 0},
'stages': {
'type': 'array',
'default': [],
diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -87,7 +87,7 @@ def _run_single_hook(hook, repo, args, write, skips=frozenset()):
retcode, stdout, stderr = repo.run_hook(hook, filenames)
- if retcode != hook['expected_return_value']:
+ if retcode:
retcode = 1
print_color = color.RED
pass_fail = 'Failed'
| diff --git a/tests/clientlib/validate_manifest_test.py b/tests/clientlib/validate_manifest_test.py
--- a/tests/clientlib/validate_manifest_test.py
+++ b/tests/clientlib/validate_manifest_test.py
@@ -77,7 +77,6 @@ def test_additional_manifest_failing(obj):
'language': 'python',
'language_version': 'python3.3',
'files': r'\.py$',
- 'expected_return_value': 0,
}],
True,
),
diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -476,8 +476,7 @@ def test_local_hook_fails(
('hooks', [OrderedDict((
('id', 'no-todo'),
('name', 'No TODO'),
- ('entry', 'grep -iI todo'),
- ('expected_return_value', 1),
+ ('entry', 'sh -c "! grep -iI todo $@" --'),
('language', 'system'),
('files', ''),
))])
diff --git a/tests/manifest_test.py b/tests/manifest_test.py
--- a/tests/manifest_test.py
+++ b/tests/manifest_test.py
@@ -23,7 +23,6 @@ def test_manifest_contents(manifest):
'description': '',
'entry': 'bin/hook.sh',
'exclude': '^$',
- 'expected_return_value': 0,
'files': '',
'id': 'bash_hook',
'language': 'script',
@@ -39,7 +38,6 @@ def test_hooks(manifest):
'description': '',
'entry': 'bin/hook.sh',
'exclude': '^$',
- 'expected_return_value': 0,
'files': '',
'id': 'bash_hook',
'language': 'script',
| Remove / deprecate expected_return_code
Since we rely heavily on xargs, this feature doesn't really work as expected.
| Damn, that was useful.
To invert the logic of a `grep`, for example.
I don't have a specific use case though, so I won't miss it personally.
Yeah it was actually in the version-negative-one of the project, but is horribly broken for a nontrivial number of files: https://github.com/pre-commit/pre-commit/commit/7bb7f4a4838dbd5d52873187cc9b1b51b83cf0ba
I also ran into this while implementing pcre hooks, and you can see the "correct" way here (pcre are basically just special system hooks): https://github.com/pre-commit/pre-commit/blob/2ec7a34035e3072b2a4d118131fbf3b64c12b2f6/pre_commit/languages/pcre.py#L19-L23
You can see the test specifically tailored to this "bug" for pcre hooks here: https://github.com/pre-commit/pre-commit/blob/2ec7a34035e3072b2a4d118131fbf3b64c12b2f6/tests/repository_test.py#L157-L173
The problem is with the way that xargs works (we'll use -n 1 to simplify the problem, but the same occurs with generalized larger n):
``` sh
# test.sh
# Trivial shell file which executes whatever command is given to it
$1
```
```
$ ./test.sh true; echo $?
0
$ ./test.sh false; echo $?
1
$ echo true false | xargs -n 1 ./test.sh; echo $?
123
$ echo false false | xargs -n 1 ./test.sh; echo $?
123
```
Consider `false` to be like "grep failed to match": it's impossible to distinguish between grep failing to match in only some of the invocations and grep failing to match in all of them.
I could reimplement xargs inside of pre-commit, but I don't want to do that and I don't think I'd get it right in all of the edge cases.
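For reference, a sketch of the per-invocation workaround that batching cannot break (a hypothetical helper mirroring the `sh -c "! grep ..."` pattern the tests switch to): negate inside each invocation, so "a match anywhere" becomes "some invocation exited non-zero", which is the only signal xargs-style batching reports reliably.

``` python
import subprocess

def no_todos(filenames, chunk_size=2):
    # Hypothetical helper, not pre-commit code: emulates
    #   xargs -n <chunk> sh -c '! grep -iI todo "$@"' --
    retcode = 0
    for i in range(0, len(filenames), chunk_size):
        chunk = list(filenames[i:i + chunk_size])
        # grep exits 0 on a match; `!` flips that so a match fails the chunk.
        retcode |= subprocess.call(
            ['sh', '-c', '! grep -iI todo "$@"', '--'] + chunk,
        )
    return retcode
```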
| 2015-11-12T21:51:01 |
pre-commit/pre-commit | 287 | pre-commit__pre-commit-287 | [
"285"
] | 67f6f812c42a81a520c5073e82241fccd5425e0a | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -85,7 +85,13 @@ def _run_single_hook(hook, repo, args, write, skips=frozenset()):
write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))
sys.stdout.flush()
+ diff_before = cmd_output('git', 'diff', retcode=None)
retcode, stdout, stderr = repo.run_hook(hook, filenames)
+ diff_after = cmd_output('git', 'diff', retcode=None)
+
+ # If the hook makes changes, fail the commit
+ if diff_before != diff_after:
+ retcode = 1
if retcode:
retcode = 1
| diff --git a/testing/resources/modified_file_returns_zero_repo/bin/hook.sh b/testing/resources/modified_file_returns_zero_repo/bin/hook.sh
new file mode 100755
--- /dev/null
+++ b/testing/resources/modified_file_returns_zero_repo/bin/hook.sh
@@ -0,0 +1,6 @@
+#!/usr/bin/env bash
+
+for f in $@; do
+ echo modified > "$f"
+ echo "Modified: $f!"
+done
diff --git a/testing/resources/modified_file_returns_zero_repo/bin/hook2.sh b/testing/resources/modified_file_returns_zero_repo/bin/hook2.sh
new file mode 100755
--- /dev/null
+++ b/testing/resources/modified_file_returns_zero_repo/bin/hook2.sh
@@ -0,0 +1,2 @@
+#!/usr/bin/env bash
+echo $@
diff --git a/testing/resources/modified_file_returns_zero_repo/hooks.yaml b/testing/resources/modified_file_returns_zero_repo/hooks.yaml
new file mode 100644
--- /dev/null
+++ b/testing/resources/modified_file_returns_zero_repo/hooks.yaml
@@ -0,0 +1,10 @@
+- id: bash_hook
+ name: Bash hook
+ entry: bin/hook.sh
+ language: script
+ files: ''
+- id: bash_hook2
+ name: Bash hook
+ entry: bin/hook2.sh
+ language: script
+ files: ''
diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -120,6 +120,29 @@ def test_arbitrary_bytes_hook(tempdir_factory, mock_out_store_directory):
_test_run(git_path, {}, (b'\xe2\x98\x83\xb2\n',), 1, True)
+def test_hook_that_modifies_but_returns_zero(
+ tempdir_factory, mock_out_store_directory,
+):
+ git_path = make_consuming_repo(
+ tempdir_factory, 'modified_file_returns_zero_repo',
+ )
+ with cwd(git_path):
+ _test_run(
+ git_path,
+ {},
+ (
+ # The first should fail
+ b'Failed',
+ # With a modified file (the hook's output)
+ b'Modified: foo.py',
+ # The next hook should pass despite the first modifying
+ b'Passed',
+ ),
+ 1,
+ True,
+ )
+
+
@pytest.mark.parametrize(
('options', 'outputs', 'expected_ret', 'stage'),
(
| Make pre-commit consider a hook as "failed" if it modifies files and still (incorrectly?) exits 0
This would allow us to ditch autopep8-wrapper and support a bunch of hooks which refused to be scriptable (yapf, etc.)
| 2015-11-12T23:17:59 |
|
pre-commit/pre-commit | 302 | pre-commit__pre-commit-302 | [
"293"
] | 1cdbe38b5ffb62f91d6ef08925690d3ae1ab54f0 | diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -31,15 +31,18 @@ def in_env(repo_cmd_runner, language_version):
def norm_version(version):
- version = os.path.expanduser(version)
if os.name == 'nt': # pragma: no cover (windows)
- if not distutils.spawn.find_executable(version):
- # expanduser introduces a leading slash
- version = version.strip('\\')
- # The default place for python on windows is:
- # C:\PythonXX\python.exe
- version = r'C:\{0}\python.exe'.format(version.replace('.', ''))
- return version
+ # Try looking up by name
+ if distutils.spawn.find_executable(version):
+ return version
+
+ # If it is in the form pythonx.x search in the default
+ # place on windows
+ if version.startswith('python'):
+ return r'C:\{0}\python.exe'.format(version.replace('.', ''))
+
+ # Otherwise assume it is a path
+ return os.path.expanduser(version)
def install_environment(
diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -12,13 +12,14 @@
try:
if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)
raise OSError('Cannot determine width without TERM')
- COLS = int(
- subprocess.Popen(
- ('tput', 'cols'), stdout=subprocess.PIPE,
- ).communicate()[0] or
- # Default in the case of no terminal
- 80
- )
+ else: # pragma no cover (windows)
+ COLS = int(
+ subprocess.Popen(
+ ('tput', 'cols'), stdout=subprocess.PIPE,
+ ).communicate()[0] or
+ # Default in the case of no terminal
+ 80
+ )
except OSError: # pragma: no cover (windows)
COLS = 80
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -90,7 +90,9 @@ def test_install_hooks_directory_not_present(tempdir_factory):
@xfailif_no_symlink
-def test_install_hooks_dead_symlink(tempdir_factory):
+def test_install_hooks_dead_symlink(
+ tempdir_factory,
+): # pragma: no cover (non-windows)
path = git_dir(tempdir_factory)
os.symlink('/fake/baz', os.path.join(path, '.git', 'hooks', 'pre-commit'))
runner = Runner(path)
@@ -175,6 +177,14 @@ def test_install_idempotent(tempdir_factory):
assert NORMAL_PRE_COMMIT_RUN.match(output)
+def _path_without_us():
+ # Choose a path which *probably* doesn't include us
+ return os.pathsep.join([
+ x for x in os.environ['PATH'].split(os.pathsep)
+ if x.lower() != os.path.dirname(sys.executable).lower()
+ ])
+
+
def test_environment_not_sourced(tempdir_factory):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
@@ -193,7 +203,7 @@ def test_environment_not_sourced(tempdir_factory):
)
ret, stdout, stderr = cmd_output(
'git', 'commit', '--allow-empty', '-m', 'foo',
- env={'HOME': homedir},
+ env={'HOME': homedir, 'PATH': _path_without_us()},
retcode=None,
)
assert ret == 1
@@ -422,6 +432,7 @@ def test_installed_from_venv(tempdir_factory):
tempdir_factory,
env_base={
'HOME': os.path.expanduser('~'),
+ 'PATH': _path_without_us(),
'TERM': os.environ.get('TERM', ''),
# Windows needs this to import `random`
'SYSTEMROOT': os.environ.get('SYSTEMROOT', ''),
diff --git a/tests/languages/python_test.py b/tests/languages/python_test.py
new file mode 100644
--- /dev/null
+++ b/tests/languages/python_test.py
@@ -0,0 +1,18 @@
+from __future__ import absolute_import
+from __future__ import unicode_literals
+
+import os.path
+
+from pre_commit.languages import python
+
+
+def test_norm_version_expanduser():
+ home = os.path.expanduser('~')
+ if os.name == 'nt': # pragma: no cover (nt)
+ path = r'~\python343'
+ expected_path = r'{0}\python343'.format(home)
+ else: # pragma: no cover (non-nt)
+ path = '~/.pyenv/versions/3.4.3/bin/python'
+ expected_path = home + '/.pyenv/versions/3.4.3/bin/python'
+ result = python.norm_version(path)
+ assert result == expected_path
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -342,7 +342,9 @@ def test_additional_python_dependencies_installed(tempdir_factory, store):
@xfailif_windows_no_ruby
@pytest.mark.integration
-def test_additional_ruby_dependencies_installed(tempdir_factory, store):
+def test_additional_ruby_dependencies_installed(
+ tempdir_factory, store,
+): # pragma: no cover (non-windows)
path = make_repo(tempdir_factory, 'ruby_hooks_repo')
config = make_config_from_repo(path)
config['hooks'][0]['additional_dependencies'] = ['thread_safe']
@@ -355,7 +357,9 @@ def test_additional_ruby_dependencies_installed(tempdir_factory, store):
@xfailif_windows_no_node
@pytest.mark.integration
-def test_additional_node_dependencies_installed(tempdir_factory, store):
+def test_additional_node_dependencies_installed(
+ tempdir_factory, store,
+): # pragma: no cover (non-windows)
path = make_repo(tempdir_factory, 'node_hooks_repo')
config = make_config_from_repo(path)
# Careful to choose a small package that's not depped by npm
@@ -481,15 +485,3 @@ def test_local_repository():
with pytest.raises(NotImplementedError):
local_repo.manifest
assert len(local_repo.hooks) == 1
-
-
-def test_norm_version_expanduser(): # pragma: no cover
- home = os.path.expanduser('~')
- if os.name == 'nt':
- path = r'~\python343'
- expected_path = r'C:{0}\python343\python.exe'.format(home)
- else:
- path = '~/.pyenv/versions/3.4.3/bin/python'
- expected_path = home + '/.pyenv/versions/3.4.3/bin/python'
- result = python.norm_version(path)
- assert result == expected_path
| Add option to pass additional dependencies to hooks
I am currently working on implementing this framework and one of the things I am trying to run is eslint. As part of that I have a number of plugins that are in my configuration file. I think that, rather than forcing anyone who is using plugins to create a new hook definition with a corresponding package.json it might be useful to add a global option to pass a list of dependencies in the configuration file.
For instance, something lilke this:
``` yaml
- repo: https://github.com/pre-commit/mirrors-eslint
sha: 135f285caf8e6e886b28c8e98fdff402b69c4490
hooks:
- id: eslint
language_version: '0.12.7'
dependencies: [eslint-plugin-react, eslint-plugin-html]
```
and have those dependencies installed into the generated environment for that language.
I am going to work on implementing this in my forked repo but would like feedback on whether this is a desired feature or any implementation advice on how best to facilitate this.
| Sounds pretty reasonable, how about `additional_dependencies` instead of `dependencies`?
This would actually be a nice way to replace the wacky mirrors repos!
Yeah, that's reasonable. I'll update. I'm currently working on fixing broken tests and adding some to cover the new functionality.
Sounds great :+1:
Via #295
| 2015-11-24T06:29:47 |
pre-commit/pre-commit | 306 | pre-commit__pre-commit-306 | [
"194"
] | 512a6a2c64dda9a74743fec78ff8651953b236c6 | diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -61,7 +61,16 @@ def additional_dependencies(self):
@cached_property
def hooks(self):
- # TODO: merging in manifest dicts is a smell imo
+ for hook in self.repo_config['hooks']:
+ if hook['id'] not in self.manifest.hooks:
+ logger.error(
+ '`{0}` is not present in repository {1}. '
+ 'Typo? Perhaps it is introduced in a newer version? '
+ 'Often `pre-commit autoupdate` fixes this.'.format(
+ hook['id'], self.repo_config['repo'],
+ )
+ )
+ exit(1)
return tuple(
(hook['id'], dict(self.manifest.hooks[hook['id']], **hook))
for hook in self.repo_config['hooks']
| diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -2,6 +2,7 @@
from __future__ import unicode_literals
import io
+import logging
import os
import os.path
import shutil
@@ -485,3 +486,26 @@ def test_local_repository():
with pytest.raises(NotImplementedError):
local_repo.manifest
assert len(local_repo.hooks) == 1
+
+
[email protected]_fixture
+def fake_log_handler():
+ handler = mock.Mock(level=logging.INFO)
+ logger = logging.getLogger('pre_commit')
+ logger.addHandler(handler)
+ yield handler
+ logger.removeHandler(handler)
+
+
+def test_hook_id_not_present(tempdir_factory, store, fake_log_handler):
+ path = make_repo(tempdir_factory, 'script_hooks_repo')
+ config = make_config_from_repo(path)
+ config['hooks'][0]['id'] = 'i-dont-exist'
+ repo = Repository.create(config, store)
+ with pytest.raises(SystemExit):
+ repo.install()
+ assert fake_log_handler.handle.call_args[0][0].msg == (
+ '`i-dont-exist` is not present in repository {0}. '
+ 'Typo? Perhaps it is introduced in a newer version? '
+ 'Often `pre-commit autoupdate` fixes this.'.format(path)
+ )
| Improve error message when attempting to run non-existent hook
Hook id in `/.pre-commit-config.yaml` doesn't exist in the included repository
(From https://github.com/pre-commit/pre-commit-hooks/issues/37)
It should probably suggest updating hooks or checking the spelling of the hookid
| +1 I've been bitten by this :)
| 2015-11-26T07:15:18 |
pre-commit/pre-commit | 310 | pre-commit__pre-commit-310 | [
"309"
] | 6b005cff0d5d4f579be5dbb97102c4fee3b4e39f | diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py
--- a/pre_commit/error_handler.py
+++ b/pre_commit/error_handler.py
@@ -7,7 +7,9 @@
import os.path
import traceback
+from pre_commit import five
from pre_commit.errors import FatalError
+from pre_commit.output import sys_stdout_write_wrapper
from pre_commit.store import Store
@@ -16,15 +18,15 @@ class PreCommitSystemExit(SystemExit):
pass
-def _log_and_exit(msg, exc, formatted, print_fn=print):
- error_msg = '{0}: {1}: {2}'.format(msg, type(exc).__name__, exc)
- print_fn(error_msg)
- print_fn('Check the log at ~/.pre-commit/pre-commit.log')
+def _log_and_exit(msg, exc, formatted, write_fn=sys_stdout_write_wrapper):
+ error_msg = '{0}: {1}: {2}\n'.format(msg, type(exc).__name__, exc)
+ write_fn(error_msg)
+ write_fn('Check the log at ~/.pre-commit/pre-commit.log\n')
store = Store()
store.require_created()
- with io.open(os.path.join(store.directory, 'pre-commit.log'), 'w') as log:
- log.write(error_msg + '\n')
- log.write(formatted + '\n')
+ with io.open(os.path.join(store.directory, 'pre-commit.log'), 'wb') as log:
+ log.write(five.to_bytes(error_msg))
+ log.write(five.to_bytes(formatted) + b'\n')
raise PreCommitSystemExit(1)
| diff --git a/tests/error_handler_test.py b/tests/error_handler_test.py
--- a/tests/error_handler_test.py
+++ b/tests/error_handler_test.py
@@ -1,15 +1,18 @@
+# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import unicode_literals
import io
import os.path
import re
+import sys
import mock
import pytest
from pre_commit import error_handler
from pre_commit.errors import FatalError
+from pre_commit.util import cmd_output
@pytest.yield_fixture
@@ -72,17 +75,17 @@ def test_error_handler_uncaught_error(mocked_log_and_exit):
def test_log_and_exit(mock_out_store_directory):
- mocked_print = mock.Mock()
+ mocked_write = mock.Mock()
with pytest.raises(error_handler.PreCommitSystemExit):
error_handler._log_and_exit(
'msg', FatalError('hai'), "I'm a stacktrace",
- print_fn=mocked_print,
+ write_fn=mocked_write,
)
- printed = '\n'.join(call[0][0] for call in mocked_print.call_args_list)
+ printed = ''.join(call[0][0] for call in mocked_write.call_args_list)
assert printed == (
'msg: FatalError: hai\n'
- 'Check the log at ~/.pre-commit/pre-commit.log'
+ 'Check the log at ~/.pre-commit/pre-commit.log\n'
)
log_file = os.path.join(mock_out_store_directory, 'pre-commit.log')
@@ -92,3 +95,25 @@ def test_log_and_exit(mock_out_store_directory):
'msg: FatalError: hai\n'
"I'm a stacktrace\n"
)
+
+
+def test_error_handler_non_ascii_exception(mock_out_store_directory):
+ with pytest.raises(error_handler.PreCommitSystemExit):
+ with error_handler.error_handler():
+ raise ValueError('☃')
+
+
+def test_error_handler_no_tty(tempdir_factory):
+ output = cmd_output(
+ sys.executable, '-c',
+ 'from __future__ import unicode_literals\n'
+ 'from pre_commit.error_handler import error_handler\n'
+ 'with error_handler():\n'
+ ' raise ValueError("\\u2603")\n',
+ env=dict(os.environ, PRE_COMMIT_HOME=tempdir_factory.get()),
+ retcode=1,
+ )
+ assert output[1].replace('\r', '') == (
+ 'An unexpected error has occurred: ValueError: ☃\n'
+ 'Check the log at ~/.pre-commit/pre-commit.log\n'
+ )
| Non-ascii prints in error handler without tty cause stacktrace
```
23:00:13 style runtests: commands[0] | pre-commit run --all-files
23:00:13 [INFO] Installing environment for [email protected]:mirrors/pre-commit/mirrors-jshint.
23:00:13 [INFO] Once installed this environment will be reused.
23:00:13 [INFO] This may take a few minutes...
23:01:33 Traceback (most recent call last):
23:01:33 File ".tox/style/bin/pre-commit", line 11, in <module>
23:01:33 sys.exit(main())
23:01:33 File ".../.tox/style/local/lib/python2.7/site-packages/pre_commit/main.py", line 157, in main
23:01:33 'Command {0} failed to exit with a returncode'.format(args.command)
23:01:33 File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
23:01:33 self.gen.throw(type, value, traceback)
23:01:33 File ".../.tox/style/local/lib/python2.7/site-packages/pre_commit/error_handler.py", line 41, in error_handler
23:01:33 traceback.format_exc(),
23:01:33 File ".../.tox/style/local/lib/python2.7/site-packages/pre_commit/error_handler.py", line 21, in _log_and_exit
23:01:33 print_fn(error_msg)
23:01:33 UnicodeEncodeError: 'ascii' codec can't encode characters in position 735-737: ordinal not in range(128)
```
| I found a separate bug about writing that same stacktrace to the logfile. I'll be fixing both it seems!
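A small sketch of both failure modes (assuming Python 2 with stdout piped, so there is no tty encoding to fall back on); the byte-stream trick matches the `getattr(sys.stdout, 'buffer', sys.stdout)` change above.

``` python
# -*- coding: utf-8 -*-
import io
import sys

msg = u'An unexpected error has occurred: ValueError: \u2603\n'

# Writing text directly can raise UnicodeEncodeError without a tty, so
# encode explicitly and write bytes to the underlying byte stream.
stream = getattr(sys.stdout, 'buffer', sys.stdout)
stream.write(msg.encode('UTF-8'))

# Same idea for the log: binary mode plus an explicit encode means a
# non-ascii traceback can never trip an implicit ascii encode.
with io.open('pre-commit.log', 'wb') as log:
    log.write(msg.encode('UTF-8'))
```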
| 2015-12-01T16:34:13 |
pre-commit/pre-commit | 315 | pre-commit__pre-commit-315 | [
"314"
] | 97735a3883fb14d30d5d1486c0d833c2988d4db8 | diff --git a/pre_commit/prefixed_command_runner.py b/pre_commit/prefixed_command_runner.py
--- a/pre_commit/prefixed_command_runner.py
+++ b/pre_commit/prefixed_command_runner.py
@@ -7,10 +7,6 @@
from pre_commit.util import cmd_output
-def _replace_cmd(cmd, **kwargs):
- return [part.format(**kwargs) for part in cmd]
-
-
class PrefixedCommandRunner(object):
"""A PrefixedCommandRunner allows you to run subprocess commands with
comand substitution.
@@ -37,7 +33,9 @@ def _create_path_if_not_exists(self):
def run(self, cmd, **kwargs):
self._create_path_if_not_exists()
- replaced_cmd = _replace_cmd(cmd, prefix=self.prefix_dir)
+ replaced_cmd = [
+ part.replace('{prefix}', self.prefix_dir) for part in cmd
+ ]
return cmd_output(*replaced_cmd, __popen=self.__popen, **kwargs)
def path(self, *parts):
| diff --git a/tests/prefixed_command_runner_test.py b/tests/prefixed_command_runner_test.py
--- a/tests/prefixed_command_runner_test.py
+++ b/tests/prefixed_command_runner_test.py
@@ -7,7 +7,6 @@
import pytest
from pre_commit import five
-from pre_commit.prefixed_command_runner import _replace_cmd
from pre_commit.prefixed_command_runner import PrefixedCommandRunner
from pre_commit.util import CalledProcessError
@@ -59,19 +58,6 @@ def makedirs_mock():
return mock.Mock(spec=os.makedirs)
[email protected](('input', 'kwargs', 'expected_output'), (
- ([], {}, []),
- (['foo'], {}, ['foo']),
- ([], {'foo': 'bar'}, []),
- (['{foo}/baz'], {'foo': 'bar'}, ['bar/baz']),
- (['foo'], {'foo': 'bar'}, ['foo']),
- (['foo', '{bar}'], {'bar': 'baz'}, ['foo', 'baz']),
-))
-def test_replace_cmd(input, kwargs, expected_output):
- ret = _replace_cmd(input, **kwargs)
- assert ret == expected_output
-
-
@pytest.mark.parametrize(('input', 'expected_prefix'), (
norm_slash(('.', './')),
norm_slash(('foo', 'foo/')),
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -178,6 +178,22 @@ def test_run_hook_with_spaced_args(tempdir_factory, store):
)
[email protected]
+def test_run_hook_with_curly_braced_arguments(tempdir_factory, store):
+ _test_hook_repo(
+ tempdir_factory, store, 'arg_per_line_hooks_repo',
+ 'arg-per-line',
+ [],
+ b"arg: hi {1}\narg: I'm {a} problem\n",
+ config_kwargs={
+ 'hooks': [{
+ 'id': 'arg-per-line',
+ 'args': ['hi {1}', "I'm {a} problem"],
+ }]
+ },
+ )
+
+
@xfailif_no_pcre_support
@pytest.mark.integration
def test_pcre_hook_no_match(tempdir_factory, store):
| :args seems to break with {} in list.
I am working on a repo with some hooks for my company: https://github.com/marick/pre-commit-hooks
There is a hook that works fine with this `.pre-commit-config.yaml`:
``` yaml
- repo: /Users/marick/src/pre-commit-hooks
sha: d6dee96f56bf9290f7ebb852c4252c50b8f6215d
stages: [commit, push]
hooks:
- id: prohibit-suspicious-patterns
args: ["AKIA[[:alnum]]", --]
```
However, it I change the first arg by adding `{1}`:
``` yaml
args: ["AKIA[[:alnum]]{1}", --]
```
... I get this:
```
prohibit suspicious patterns..................................................................
An unexpected error has occurred: IndexError: tuple index out of range
Check the log at ~/.pre-commit/pre-commit.log
```
The contents of `pre-commit.log`:
```
An unexpected error has occurred: IndexError: tuple index out of range
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/pre_commit/error_handler.py", line 36, in error_handler
yield
File "/usr/local/lib/python2.7/site-packages/pre_commit/main.py", line 150, in main
return run(runner, args)
File "/usr/local/lib/python2.7/site-packages/pre_commit/commands/run.py", line 212, in run
return _run_hooks(repo_hooks, args, write, environ)
File "/usr/local/lib/python2.7/site-packages/pre_commit/commands/run.py", line 136, in _run_hooks
retval |= _run_single_hook(hook, repo, args, write, skips)
File "/usr/local/lib/python2.7/site-packages/pre_commit/commands/run.py", line 89, in _run_single_hook
retcode, stdout, stderr = repo.run_hook(hook, filenames)
File "/usr/local/lib/python2.7/site-packages/pre_commit/repository.py", line 145, in run_hook
self.cmd_runner, hook, file_args,
File "/usr/local/lib/python2.7/site-packages/pre_commit/languages/script.py", line 23, in run_hook
encoding=None,
File "/usr/local/lib/python2.7/site-packages/pre_commit/prefixed_command_runner.py", line 40, in run
replaced_cmd = _replace_cmd(cmd, prefix=self.prefix_dir)
File "/usr/local/lib/python2.7/site-packages/pre_commit/prefixed_command_runner.py", line 11, in _replace_cmd
return [part.format(**kwargs) for part in cmd]
IndexError: tuple index out of range
```
| Ah dang, this has to do with the way we format in the prefix directory.
A workaround for now is to replace `{1}` with `{{1}}`.
I wonder if I should document this somewhere as whatever I choose for substitution it'll be difficult to accept all strings (For example if I use `%s` instead I'll need to look out for args with `%` symbols in them). What are you suggestions?
Oh, I could instead use `.replace(...)` and use some sentinel value instead of `.format(`ting, lemme try that.
| 2015-12-07T01:54:30 |
pre-commit/pre-commit | 316 | pre-commit__pre-commit-316 | [
"307"
] | c24f78b1a5ee03302ac21ed93699efbe3dc1de54 | diff --git a/pre_commit/clientlib/validate_manifest.py b/pre_commit/clientlib/validate_manifest.py
--- a/pre_commit/clientlib/validate_manifest.py
+++ b/pre_commit/clientlib/validate_manifest.py
@@ -23,6 +23,9 @@ class InvalidManifestError(ValueError):
'exclude': {'type': 'string', 'default': '^$'},
'language': {'type': 'string'},
'language_version': {'type': 'string', 'default': 'default'},
+ 'minimum_pre_commit_version': {
+ 'type': 'string', 'default': '0.0.0',
+ },
'files': {'type': 'string'},
'stages': {
'type': 'array',
diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -4,6 +4,7 @@
import shutil
from collections import defaultdict
+import pkg_resources
from cached_property import cached_property
from pre_commit import git
@@ -18,6 +19,10 @@
logger = logging.getLogger('pre_commit')
+_pre_commit_version = pkg_resources.parse_version(
+ pkg_resources.get_distribution('pre-commit').version
+)
+
class Repository(object):
def __init__(self, repo_config, repo_path_getter):
@@ -71,6 +76,18 @@ def hooks(self):
)
)
exit(1)
+ hook_version = pkg_resources.parse_version(
+ self.manifest.hooks[hook['id']]['minimum_pre_commit_version'],
+ )
+ if hook_version > _pre_commit_version:
+ logger.error(
+ 'The hook `{0}` requires pre-commit version {1} but '
+ 'version {2} is installed. '
+ 'Perhaps run `pip install --upgrade pre-commit`.'.format(
+ hook['id'], hook_version, _pre_commit_version,
+ )
+ )
+ exit(1)
return tuple(
(hook['id'], dict(self.manifest.hooks[hook['id']], **hook))
for hook in self.repo_config['hooks']
| diff --git a/testing/fixtures.py b/testing/fixtures.py
--- a/testing/fixtures.py
+++ b/testing/fixtures.py
@@ -1,10 +1,12 @@
from __future__ import absolute_import
from __future__ import unicode_literals
+import contextlib
import io
import os.path
from aspy.yaml import ordered_dump
+from aspy.yaml import ordered_load
import pre_commit.constants as C
from pre_commit.clientlib.validate_config import CONFIG_JSON_SCHEMA
@@ -35,6 +37,17 @@ def make_repo(tempdir_factory, repo_source):
return path
+@contextlib.contextmanager
+def modify_manifest(path):
+ """Modify the manifest yielded by this context to write to hooks.yaml."""
+ manifest_path = os.path.join(path, C.MANIFEST_FILE)
+ manifest = ordered_load(io.open(manifest_path).read())
+ yield manifest
+ with io.open(manifest_path, 'w') as manifest_file:
+ manifest_file.write(ordered_dump(manifest, **C.YAML_DUMP_KWARGS))
+ cmd_output('git', 'commit', '-am', 'update hooks.yaml', cwd=path)
+
+
def config_with_local_hooks():
return OrderedDict((
('repo', 'local'),
diff --git a/tests/manifest_test.py b/tests/manifest_test.py
--- a/tests/manifest_test.py
+++ b/tests/manifest_test.py
@@ -27,6 +27,7 @@ def test_manifest_contents(manifest):
'id': 'bash_hook',
'language': 'script',
'language_version': 'default',
+ 'minimum_pre_commit_version': '0.0.0',
'name': 'Bash hook',
'stages': [],
}]
@@ -42,6 +43,7 @@ def test_hooks(manifest):
'id': 'bash_hook',
'language': 'script',
'language_version': 'default',
+ 'minimum_pre_commit_version': '0.0.0',
'name': 'Bash hook',
'stages': [],
}
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -5,9 +5,11 @@
import logging
import os
import os.path
+import re
import shutil
import mock
+import pkg_resources
import pytest
from pre_commit import five
@@ -24,6 +26,7 @@
from testing.fixtures import git_dir
from testing.fixtures import make_config_from_repo
from testing.fixtures import make_repo
+from testing.fixtures import modify_manifest
from testing.util import skipif_slowtests_false
from testing.util import xfailif_no_pcre_support
from testing.util import xfailif_windows_no_node
@@ -525,3 +528,33 @@ def test_hook_id_not_present(tempdir_factory, store, fake_log_handler):
'Typo? Perhaps it is introduced in a newer version? '
'Often `pre-commit autoupdate` fixes this.'.format(path)
)
+
+
+def test_too_new_version(tempdir_factory, store, fake_log_handler):
+ path = make_repo(tempdir_factory, 'script_hooks_repo')
+ with modify_manifest(path) as manifest:
+ manifest[0]['minimum_pre_commit_version'] = '999.0.0'
+ config = make_config_from_repo(path)
+ repo = Repository.create(config, store)
+ with pytest.raises(SystemExit):
+ repo.install()
+ msg = fake_log_handler.handle.call_args[0][0].msg
+ assert re.match(
+ r'^The hook `bash_hook` requires pre-commit version 999\.0\.0 but '
+ r'version \d+\.\d+\.\d+ is installed. '
+ r'Perhaps run `pip install --upgrade pre-commit`\.$',
+ msg,
+ )
+
+
+@pytest.mark.parametrize(
+ 'version',
+ ('0.1.0', pkg_resources.get_distribution('pre-commit').version),
+)
+def test_versions_ok(tempdir_factory, store, version):
+ path = make_repo(tempdir_factory, 'script_hooks_repo')
+ with modify_manifest(path) as manifest:
+ manifest[0]['minimum_pre_commit_version'] = version
+ config = make_config_from_repo(path)
+ # Should succeed
+ Repository.create(config, store).install()
| Hooks should have a way of specifying minimum pre-commit version to run
Obviously this is a bit of a chicken-and-egg problem until the feature exists, but afterwards this'll be useful.
For instance, the feature added in 0.6.6 (additional dependencies) would be useful to gate against older pre-commit versions where installation would be nonsensical.
This would be a fatal error that would require the user to upgrade pre-commit to use that hook.
I'd like to implement this first, and then change all of the mirror repositories to use the additional-deps approach. This would alleviate the pains in #282 as well by eliminating the hacky mirroring technique that is currently required for node hooks.
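A condensed sketch of the version gate implemented by the patch in this record (the constant name is illustrative, and it assumes pre-commit is installed in the current environment):
```python
import pkg_resources

# What the running pre-commit reports about itself.
PRE_COMMIT_VERSION = pkg_resources.parse_version(
    pkg_resources.get_distribution('pre-commit').version
)


def check_hook_version(hook_id, minimum_pre_commit_version):
    required = pkg_resources.parse_version(minimum_pre_commit_version)
    if required > PRE_COMMIT_VERSION:
        # Fatal: the hook needs a newer pre-commit than is installed.
        raise SystemExit(
            'The hook `{0}` requires pre-commit version {1} but '
            'version {2} is installed.'.format(
                hook_id, required, PRE_COMMIT_VERSION,
            )
        )


check_hook_version('bash_hook', '0.0.0')      # passes silently
# check_hook_version('bash_hook', '999.0.0')  # would exit with the error above
```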
| 2015-12-07T04:43:11 |
|
pre-commit/pre-commit | 319 | pre-commit__pre-commit-319 | [
"311"
] | c1c3f3b571adcd0cf5a8cea7d9d80574c2572c02 | diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -1,12 +1,16 @@
from __future__ import unicode_literals
+import io
+import json
import logging
+import os
import shutil
from collections import defaultdict
import pkg_resources
from cached_property import cached_property
+from pre_commit import five
from pre_commit import git
from pre_commit.clientlib.validate_config import is_local_hooks
from pre_commit.clientlib.validate_manifest import MANIFEST_JSON_SCHEMA
@@ -23,6 +27,9 @@
pkg_resources.get_distribution('pre-commit').version
)
+# Bump when installation changes in a backwards / forwards incompatible way
+INSTALLED_STATE_VERSION = '1'
+
class Repository(object):
def __init__(self, repo_config, repo_path_getter):
@@ -110,14 +117,45 @@ def require_installed(self):
def install(self):
"""Install the hook repository."""
+ def state(language_name, language_version):
+ return {
+ 'additional_dependencies': sorted(
+ self.additional_dependencies[
+ language_name
+ ][language_version],
+ )
+ }
+
+ def state_filename(venv, suffix=''):
+ return self.cmd_runner.path(
+ venv, '.install_state_v' + INSTALLED_STATE_VERSION + suffix,
+ )
+
+ def read_state(venv):
+ if not os.path.exists(state_filename(venv)):
+ return None
+ else:
+ return json.loads(io.open(state_filename(venv)).read())
+
+ def write_state(venv, language_name, language_version):
+ with io.open(
+ state_filename(venv, suffix='staging'), 'w',
+ ) as state_file:
+ state_file.write(five.to_text(json.dumps(
+ state(language_name, language_version),
+ )))
+ # Move the file into place atomically to indicate we've installed
+ os.rename(
+ state_filename(venv, suffix='staging'),
+ state_filename(venv),
+ )
+
def language_is_installed(language_name, language_version):
language = languages[language_name]
- directory = environment_dir(
- language.ENVIRONMENT_DIR, language_version,
- )
+ venv = environment_dir(language.ENVIRONMENT_DIR, language_version)
return (
- directory is None or
- self.cmd_runner.exists(directory, '.installed')
+ venv is None or
+ read_state(venv) == state(language_name, language_version)
)
if not all(
@@ -131,24 +169,23 @@ def language_is_installed(language_name, language_version):
logger.info('This may take a few minutes...')
for language_name, language_version in self.languages:
- language = languages[language_name]
if language_is_installed(language_name, language_version):
continue
- directory = environment_dir(
- language.ENVIRONMENT_DIR, language_version,
- )
+ language = languages[language_name]
+ venv = environment_dir(language.ENVIRONMENT_DIR, language_version)
+
# There's potentially incomplete cleanup from previous runs
# Clean it up!
- if self.cmd_runner.exists(directory):
- shutil.rmtree(self.cmd_runner.path(directory))
+ if self.cmd_runner.exists(venv):
+ shutil.rmtree(self.cmd_runner.path(venv))
language.install_environment(
self.cmd_runner, language_version,
self.additional_dependencies[language_name][language_version],
)
- # Touch the .installed file (atomic) to indicate we've installed
- open(self.cmd_runner.path(directory, '.installed'), 'w').close()
+ # Write our state to indicate we're installed
+ write_state(venv, language_name, language_version)
def run_hook(self, hook, file_args):
"""Run a hook.
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,7 +46,6 @@
'nodeenv>=0.11.1',
'ordereddict',
'pyyaml',
- 'simplejson',
'virtualenv',
],
entry_points={
| diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -360,6 +360,23 @@ def test_additional_python_dependencies_installed(tempdir_factory, store):
assert 'mccabe' in output
+@pytest.mark.integration
+def test_additional_dependencies_roll_forward(tempdir_factory, store):
+ path = make_repo(tempdir_factory, 'python_hooks_repo')
+ config = make_config_from_repo(path)
+ # Run the repo once without additional_dependencies
+ repo = Repository.create(config, store)
+ repo.run_hook(repo.hooks[0][1], [])
+ # Now run it with additional_dependencies
+ config['hooks'][0]['additional_dependencies'] = ['mccabe']
+ repo = Repository.create(config, store)
+ repo.run_hook(repo.hooks[0][1], [])
+ # We should see our additional dependency installed
+ with python.in_env(repo.cmd_runner, 'default') as env:
+ output = env.run('pip freeze -l')[1]
+ assert 'mccabe' in output
+
+
@xfailif_windows_no_ruby
@pytest.mark.integration
def test_additional_ruby_dependencies_installed(
| additional_dependencies isn't "rollback safe"
Using old pre-commit + a hook repo with `additional_dependencies`, it'll happily create the repo without installing the additional dependencies. Upon upgrading to a newer pre-commit, it doesn't know that the additional dependencies aren't installed yet and will happily attempt to run in there (usually causing an executable to not be found). We need some way to signify when these have been installed in order for this to be rollable. A workaround is to `pre-commit clean` when upgrading, but that's less than ideal (and especially confusing).
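A condensed sketch of the direction the accompanying patch takes — record what was installed in a small state file and only treat the environment as installed when that state matches; paths and the state schema are simplified here:
```python
import json
import os


def state_filename(venv, suffix=''):
    return os.path.join(venv, '.install_state_v1' + suffix)


def read_state(venv):
    # Environments created by older pre-commit have no state file at all,
    # so they compare unequal and get their dependencies (re)installed.
    if not os.path.exists(state_filename(venv)):
        return None
    with open(state_filename(venv)) as f:
        return json.load(f)


def write_state(venv, additional_dependencies):
    state = {'additional_dependencies': sorted(additional_dependencies)}
    staging = state_filename(venv, suffix='staging')
    with open(staging, 'w') as f:
        json.dump(state, f)
    # Atomic rename: a crash can never leave a half-written file claiming
    # the environment (and its extra dependencies) are fully installed.
    os.rename(staging, state_filename(venv))
```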
| 2015-12-10T18:36:26 |
|
pre-commit/pre-commit | 321 | pre-commit__pre-commit-321 | [
"213"
] | d6cf62532de9f80c8a359c12867f1a401ea73961 | diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -24,10 +24,18 @@ def get_root():
)
+def get_git_dir(git_root):
+ return os.path.normpath(os.path.join(
+ git_root,
+ cmd_output('git', 'rev-parse', '--git-dir', cwd=git_root)[1].strip(),
+ ))
+
+
def is_in_merge_conflict():
+ git_dir = get_git_dir('.')
return (
- os.path.exists(os.path.join('.git', 'MERGE_MSG')) and
- os.path.exists(os.path.join('.git', 'MERGE_HEAD'))
+ os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and
+ os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))
)
@@ -46,7 +54,7 @@ def get_conflicted_files():
logger.info('Checking merge-conflict files only.')
# Need to get the conflicted files from the MERGE_MSG because they could
# have resolved the conflict by choosing one side or the other
- merge_msg = open(os.path.join('.git', 'MERGE_MSG')).read()
+ merge_msg = open(os.path.join(get_git_dir('.'), 'MERGE_MSG')).read()
merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)
# This will get the rest of the changes made after the merge.
diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -25,6 +25,13 @@
# https://github.com/pre-commit/pre-commit/issues/300
# In git 2.6.3 (maybe others), git exports this while running pre-commit hooks
os.environ.pop('GIT_WORK_TREE', None)
+# In git 1.9.1 (maybe others), git exports these while running pre-commit hooks
+# in submodules. In the general case this causes problems.
+# These are covered by test_install_in_submodule_and_run
+# Causes git clone to clone wrong thing
+os.environ.pop('GIT_DIR', None)
+# Causes 'error invalid object ...' during commit
+os.environ.pop('GIT_INDEX_FILE', None)
def main(argv=None):
diff --git a/pre_commit/runner.py b/pre_commit/runner.py
--- a/pre_commit/runner.py
+++ b/pre_commit/runner.py
@@ -30,6 +30,10 @@ def create(cls):
os.chdir(root)
return cls(root)
+ @cached_property
+ def git_dir(self):
+ return git.get_git_dir(self.git_root)
+
@cached_property
def config_file_path(self):
return os.path.join(self.git_root, C.CONFIG_FILE)
@@ -44,7 +48,7 @@ def repositories(self):
return repositories
def get_hook_path(self, hook_type):
- return os.path.join(self.git_root, '.git', 'hooks', hook_type)
+ return os.path.join(self.git_dir, 'hooks', hook_type)
@cached_property
def pre_commit_path(self):
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -170,6 +170,21 @@ def test_install_pre_commit_and_run(tempdir_factory):
assert NORMAL_PRE_COMMIT_RUN.match(output)
+def test_install_in_submodule_and_run(tempdir_factory):
+ src_path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
+ parent_path = git_dir(tempdir_factory)
+ with cwd(parent_path):
+ cmd_output('git', 'submodule', 'add', src_path, 'sub')
+ cmd_output('git', 'commit', '-am', 'foo')
+
+ sub_pth = os.path.join(parent_path, 'sub')
+ with cwd(sub_pth):
+ assert install(Runner(sub_pth)) == 0
+ ret, output = _get_commit_output(tempdir_factory)
+ assert ret == 0
+ assert NORMAL_PRE_COMMIT_RUN.match(output)
+
+
def test_install_idempotent(tempdir_factory):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -15,6 +15,7 @@
from pre_commit.store import Store
from pre_commit.util import cmd_output
from pre_commit.util import cwd
+from testing.fixtures import git_dir
from testing.fixtures import make_consuming_repo
@@ -40,6 +41,26 @@ def in_tmpdir(tempdir_factory):
yield path
+def _make_conflict():
+ cmd_output('git', 'checkout', 'origin/master', '-b', 'foo')
+ with io.open('conflict_file', 'w') as conflict_file:
+ conflict_file.write('herp\nderp\n')
+ cmd_output('git', 'add', 'conflict_file')
+ with io.open('foo_only_file', 'w') as foo_only_file:
+ foo_only_file.write('foo')
+ cmd_output('git', 'add', 'foo_only_file')
+ cmd_output('git', 'commit', '-m', 'conflict_file')
+ cmd_output('git', 'checkout', 'origin/master', '-b', 'bar')
+ with io.open('conflict_file', 'w') as conflict_file:
+ conflict_file.write('harp\nddrp\n')
+ cmd_output('git', 'add', 'conflict_file')
+ with io.open('bar_only_file', 'w') as bar_only_file:
+ bar_only_file.write('bar')
+ cmd_output('git', 'add', 'bar_only_file')
+ cmd_output('git', 'commit', '-m', 'conflict_file')
+ cmd_output('git', 'merge', 'foo', retcode=None)
+
+
@pytest.yield_fixture
def in_merge_conflict(tempdir_factory):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
@@ -51,26 +72,23 @@ def in_merge_conflict(tempdir_factory):
conflict_path = tempdir_factory.get()
cmd_output('git', 'clone', path, conflict_path)
with cwd(conflict_path):
- cmd_output('git', 'checkout', 'origin/master', '-b', 'foo')
- with io.open('conflict_file', 'w') as conflict_file:
- conflict_file.write('herp\nderp\n')
- cmd_output('git', 'add', 'conflict_file')
- with io.open('foo_only_file', 'w') as foo_only_file:
- foo_only_file.write('foo')
- cmd_output('git', 'add', 'foo_only_file')
- cmd_output('git', 'commit', '-m', 'conflict_file')
- cmd_output('git', 'checkout', 'origin/master', '-b', 'bar')
- with io.open('conflict_file', 'w') as conflict_file:
- conflict_file.write('harp\nddrp\n')
- cmd_output('git', 'add', 'conflict_file')
- with io.open('bar_only_file', 'w') as bar_only_file:
- bar_only_file.write('bar')
- cmd_output('git', 'add', 'bar_only_file')
- cmd_output('git', 'commit', '-m', 'conflict_file')
- cmd_output('git', 'merge', 'foo', retcode=None)
+ _make_conflict()
yield os.path.join(conflict_path)
+@pytest.yield_fixture
+def in_conflicting_submodule(tempdir_factory):
+ git_dir_1 = git_dir(tempdir_factory)
+ git_dir_2 = git_dir(tempdir_factory)
+ with cwd(git_dir_2):
+ cmd_output('git', 'commit', '--allow-empty', '-m', 'init!')
+ with cwd(git_dir_1):
+ cmd_output('git', 'submodule', 'add', git_dir_2, 'sub')
+ with cwd(os.path.join(git_dir_1, 'sub')):
+ _make_conflict()
+ yield
+
+
@pytest.yield_fixture(scope='session', autouse=True)
def dont_write_to_home_directory():
"""pre_commit.store.Store will by default write to the home directory
diff --git a/tests/git_test.py b/tests/git_test.py
--- a/tests/git_test.py
+++ b/tests/git_test.py
@@ -43,6 +43,10 @@ def test_is_in_merge_conflict(in_merge_conflict):
assert git.is_in_merge_conflict() is True
+def test_is_in_merge_conflict_submodule(in_conflicting_submodule):
+ assert git.is_in_merge_conflict() is True
+
+
def test_cherry_pick_conflict(in_merge_conflict):
cmd_output('git', 'merge', '--abort')
foo_ref = cmd_output('git', 'rev-parse', 'foo')[1].strip()
@@ -111,6 +115,11 @@ def test_get_conflicted_files(in_merge_conflict):
assert ret == set(('conflict_file', 'other_file'))
+def test_get_conflicted_files_in_submodule(in_conflicting_submodule):
+ resolve_conflict()
+ assert set(git.get_conflicted_files()) == set(('conflict_file',))
+
+
def test_get_conflicted_files_unstaged_files(in_merge_conflict):
# If they for whatever reason did pre-commit run --no-stash during a
# conflict
diff --git a/tests/runner_test.py b/tests/runner_test.py
--- a/tests/runner_test.py
+++ b/tests/runner_test.py
@@ -7,6 +7,7 @@
import pre_commit.constants as C
from pre_commit.ordereddict import OrderedDict
from pre_commit.runner import Runner
+from pre_commit.util import cmd_output
from pre_commit.util import cwd
from testing.fixtures import add_config_to_repo
from testing.fixtures import git_dir
@@ -79,15 +80,19 @@ def test_local_hooks(tempdir_factory, mock_out_store_directory):
assert len(runner.repositories[0].hooks) == 2
-def test_pre_commit_path():
- runner = Runner(os.path.join('foo', 'bar'))
- expected_path = os.path.join('foo', 'bar', '.git', 'hooks', 'pre-commit')
+def test_pre_commit_path(in_tmpdir):
+ path = os.path.join('foo', 'bar')
+ cmd_output('git', 'init', path)
+ runner = Runner(path)
+ expected_path = os.path.join(path, '.git', 'hooks', 'pre-commit')
assert runner.pre_commit_path == expected_path
def test_pre_push_path():
- runner = Runner(os.path.join('foo', 'bar'))
- expected_path = os.path.join('foo', 'bar', '.git', 'hooks', 'pre-push')
+ path = os.path.join('foo', 'bar')
+ cmd_output('git', 'init', path)
+ runner = Runner(path)
+ expected_path = os.path.join(path, '.git', 'hooks', 'pre-push')
assert runner.pre_push_path == expected_path
| Does not work within submodules
I'm getting:
```
An unexpected error has occurred: NotADirectoryError: [Errno 20] Not a directory: '/home/quentin/chef-repo/cookbooks/ssmtp-cookbook/.git/hooks/pre-commit'
```
chef-repo is my primary repository and ssmtp-cookbook a git submodule of that.
**ssmtp-cookbook/.git file contents:**
```
gitdir: ../../.git/modules/cookbooks/ssmtp-cookbook
```
| What is your pwd?
ssmtp-cookbook
Well you're not really supposed to develop inside of submodules (they aren't quite full git repos) but I guess we can support this workflow. The fix is a regression test and to mkdirp the hooks directory I'd imagine. A workaround is to manually make the hooks directory yourself.
Ah I think I'm wrong here -- in newer git, submodules have an entirely separate git directory from the repository. This means our trick of searching upwards for the git dir won't work. Apparently there is `git rev-parse --git-dir` which will get us what we want maybe?
I concur, `git rev-parse --git-dir` does yield the correct path to the git directory, including in my current submodule configuration:
```
$ git rev-parse --git-dir
/home/quentin/chef-repo/.git/modules/cookbooks/ssmtp-cookbook
```
Good find!
(oops)
So I took an initial look into this; it's slightly more difficult than I initially thought, since we (erroneously) depend on the git directory existing at the top level of the repository for several things (such as where to chdir right before running hooks). Therefore this needs to be split into two concepts: git-dir and git-root.
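A condensed sketch of the git-dir half of that split, as in the accompanying patch (shown here with plain `subprocess` instead of pre-commit's own command helpers):
```python
import os
import subprocess


def get_git_dir(git_root='.'):
    # `--git-dir` may be printed relative to git_root (e.g.
    # ../../.git/modules/cookbooks/ssmtp-cookbook), so normalize against it.
    out = subprocess.check_output(
        ('git', 'rev-parse', '--git-dir'), cwd=git_root,
    ).decode().strip()
    return os.path.normpath(os.path.join(git_root, out))


# Inside a submodule this resolves to the parent's .git/modules/... directory,
# which is where the hooks/ directory actually lives.
print(get_git_dir('.'))
```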
| 2015-12-18T22:23:24 |
pre-commit/pre-commit | 326 | pre-commit__pre-commit-326 | [
"322"
] | c16479b94a8e6d45008758bca7ad09b3a2923e56 | diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -22,16 +22,6 @@
# to install packages to the wrong place. We don't want anything to deal with
# pyvenv
os.environ.pop('__PYVENV_LAUNCHER__', None)
-# https://github.com/pre-commit/pre-commit/issues/300
-# In git 2.6.3 (maybe others), git exports this while running pre-commit hooks
-os.environ.pop('GIT_WORK_TREE', None)
-# In git 1.9.1 (maybe others), git exports these while running pre-commit hooks
-# in submodules. In the general case this causes problems.
-# These are covered by test_install_in_submodule_and_run
-# Causes git clone to clone wrong thing
-os.environ.pop('GIT_DIR', None)
-# Causes 'error invalid object ...' during commit
-os.environ.pop('GIT_INDEX_FILE', None)
def main(argv=None):
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -14,6 +14,7 @@
from pre_commit.util import clean_path_on_failure
from pre_commit.util import cmd_output
from pre_commit.util import cwd
+from pre_commit.util import no_git_env
logger = logging.getLogger('pre_commit')
@@ -114,9 +115,11 @@ def clone(self, url, sha):
dir = tempfile.mkdtemp(prefix='repo', dir=self.directory)
with clean_path_on_failure(dir):
- cmd_output('git', 'clone', '--no-checkout', url, dir)
+ cmd_output(
+ 'git', 'clone', '--no-checkout', url, dir, env=no_git_env(),
+ )
with cwd(dir):
- cmd_output('git', 'reset', sha, '--hard')
+ cmd_output('git', 'reset', sha, '--hard', env=no_git_env())
# Update our db with the created repo
with sqlite3.connect(self.db_path) as db:
diff --git a/pre_commit/util.py b/pre_commit/util.py
--- a/pre_commit/util.py
+++ b/pre_commit/util.py
@@ -71,6 +71,20 @@ def shell_escape(arg):
return "'" + arg.replace("'", "'\"'\"'".strip()) + "'"
+def no_git_env():
+ # Too many bugs dealing with environment variables and GIT:
+ # https://github.com/pre-commit/pre-commit/issues/300
+ # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running
+ # pre-commit hooks
+ # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE
+ # while running pre-commit hooks in submodules.
+ # GIT_DIR: Causes git clone to clone wrong thing
+ # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit
+ return dict(
+ (k, v) for k, v in os.environ.items() if not k.startswith('GIT_')
+ )
+
+
@contextlib.contextmanager
def tarfile_open(*args, **kwargs):
"""Compatibility layer because python2.6"""
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -133,7 +133,7 @@ def _get_commit_output(
home = home or tempdir_factory.get()
env = dict(env_base, PRE_COMMIT_HOME=home)
return cmd_output(
- 'git', 'commit', '-m', 'Commit!', '--allow-empty',
+ 'git', 'commit', '-am', 'Commit!', '--allow-empty',
# git commit puts pre-commit to stderr
stderr=subprocess.STDOUT,
env=env,
@@ -175,7 +175,7 @@ def test_install_in_submodule_and_run(tempdir_factory):
parent_path = git_dir(tempdir_factory)
with cwd(parent_path):
cmd_output('git', 'submodule', 'add', src_path, 'sub')
- cmd_output('git', 'commit', '-am', 'foo')
+ cmd_output('git', 'commit', '-m', 'foo')
sub_pth = os.path.join(parent_path, 'sub')
with cwd(sub_pth):
@@ -185,6 +185,23 @@ def test_install_in_submodule_and_run(tempdir_factory):
assert NORMAL_PRE_COMMIT_RUN.match(output)
+def test_commit_am(tempdir_factory):
+ """Regression test for #322."""
+ path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
+ with cwd(path):
+ # Make an unstaged change
+ open('unstaged', 'w').close()
+ cmd_output('git', 'add', '.')
+ cmd_output('git', 'commit', '-m', 'foo')
+ with io.open('unstaged', 'w') as foo_file:
+ foo_file.write('Oh hai')
+
+ assert install(Runner(path)) == 0
+
+ ret, output = _get_commit_output(tempdir_factory)
+ assert ret == 0
+
+
def test_install_idempotent(tempdir_factory):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -377,6 +377,7 @@ def test_additional_dependencies_roll_forward(tempdir_factory, store):
assert 'mccabe' in output
+@skipif_slowtests_false
@xfailif_windows_no_ruby
@pytest.mark.integration
def test_additional_ruby_dependencies_installed(
@@ -392,6 +393,7 @@ def test_additional_ruby_dependencies_installed(
assert 'thread_safe' in output
+@skipif_slowtests_false
@xfailif_windows_no_node
@pytest.mark.integration
def test_additional_node_dependencies_installed(
| pre-commit fails with .git/index.lock
pre-commit itself seems to be causing this lock, since it's gone after pre-commit ends.
```
$ git commit -a --amend
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /nail/home/rkwills/.pre-commit/patch1450735701.
An unexpected error has occurred: CalledProcessError: Command: ['git', 'checkout', '--', '.']
Return code: 128
Expected return code: 0
Output: (none)
Errors:
fatal: Unable to create '/nail/home/rkwills/trees/yelp/pip-faster/.git/index.lock': File exists.
If no other git process is currently running, this probably means a
git process crashed in this repository earlier. Make sure no other git
process is running and remove the file manually to continue.
Check the log at ~/.pre-commit/pre-commit.log
[Mon 12-21 02:08:21PM]$ pip install pre-commit --upgrade
Requirement already up-to-date: pre-commit in ./venv-venv_update/lib/python2.7/site-packages
Cleaning up...
$ ls -l /nail/home/rkwills/trees/yelp/pip-faster/.git/index.lock
ls: cannot access /nail/home/rkwills/trees/yelp/pip-faster/.git/index.lock: No such file or directory
```
| note: removing the environ.pop('GIT_INDEX_FILE') fixes it for us.
Downgrading to 0.7.0 worked as a workaround, but now we can't reproduce on the latest version :S
This is reproducible:
```
git clone [email protected]:pre-commit/pre-commit
cd pre-commit
echo >> Makefile
make venv
./venv-pre_commit/bin/pre-commit install
git commit -a -m foo
```
Useful output:
```
$ git commit -a -m foo
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/asottile/.pre-commit/patch1450834913.
An unexpected error has occurred: CalledProcessError: Command: ['git', 'checkout', '--', '.']
Return code: 128
Expected return code: 0
Output: (none)
Errors:
fatal: Unable to create '/tmp/foo/pre-commit/.git/index.lock': File exists.
If no other git process is currently running, this probably means a
git process crashed in this repository earlier. Make sure no other git
process is running and remove the file manually to continue.
Check the log at ~/.pre-commit/pre-commit.log
```
I'm going to play with the submodule change and see if I can't work around it...
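A condensed sketch of the workaround that eventually landed (the `no_git_env()` helper in the patch above), shown with plain `subprocess`:
```python
import os
import subprocess


def no_git_env():
    # While running hooks, git exports GIT_DIR / GIT_INDEX_FILE /
    # GIT_WORK_TREE; if they leak into our own git subprocesses they point
    # git at the wrong repository or index, so drop everything GIT_*.
    return {k: v for k, v in os.environ.items() if not k.startswith('GIT_')}


def clone(url, dest):
    subprocess.check_call(
        ('git', 'clone', '--no-checkout', url, dest), env=no_git_env(),
    )
```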
| 2015-12-23T02:39:07 |
pre-commit/pre-commit | 335 | pre-commit__pre-commit-335 | [
"334"
] | 1dbcfe3adbdde132fe03254fe66e4f731545fde0 | diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -5,8 +5,6 @@
import os
import sys
-import virtualenv
-
from pre_commit.languages import helpers
from pre_commit.util import clean_path_on_failure
from pre_commit.util import shell_escape
@@ -15,13 +13,22 @@
ENVIRONMENT_DIR = 'py_env'
+def bin_dir(venv):
+ """On windows there's a different directory for the virtualenv"""
+ if os.name == 'nt': # pragma: no cover (windows)
+ return os.path.join(venv, 'Scripts')
+ else:
+ return os.path.join(venv, 'bin')
+
+
class PythonEnv(helpers.Environment):
@property
def env_prefix(self):
- return ". '{{prefix}}{0}activate' &&".format(
- virtualenv.path_locations(
+ return ". '{{prefix}}{0}{1}activate' &&".format(
+ bin_dir(
helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)
- )[-1].rstrip(os.sep) + os.sep,
+ ),
+ os.sep,
)
| Latest virtualenv breaks pre-commit
See also #299
Failure looks like:
```
17:00:19 hookid: sort-simple-yaml
17:00:19
17:00:19 bash: /nail/home/push/.pre-commit/reposkzFrD//tmp/tmp.cEk6TCoZOS/srv-configs/py_env-default/bin/activate: No such file or directory
```
```
$ pip install virtualenv --upgrade
Downloading/unpacking virtualenv
Downloading virtualenv-14.0.0-py2.py3-none-any.whl (1.8MB): 1.8MB downloaded
Installing collected packages: virtualenv
Successfully installed virtualenv
Cleaning up...
$ python
Python 2.6.7 (r267:88850, Dec 2 2011, 20:27:26)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import virtualenv
>>> virtualenv.path_locations('foo')
('/nail/home/asottile/foo', '/nail/home/asottile/foo/lib/python2.6', '/nail/home/asottile/foo/include/python2.6', '/nail/home/asottile/foo/bin')
>>>
$ pip install virtualenv==1.11.5
Downloading/unpacking virtualenv==1.11.5
Downloading virtualenv-1.11.5.tar.gz (1.8MB): 1.8MB downloaded
Running setup.py (path:/nail/home/asottile/venv/build/virtualenv/setup.py) egg_info for package virtualenv
warning: no previously-included files matching '*' found under directory 'docs/_templates'
warning: no previously-included files matching '*' found under directory 'docs/_build'
Installing collected packages: virtualenv
Found existing installation: virtualenv 14.0.0
Uninstalling virtualenv:
Successfully uninstalled virtualenv
Running setup.py install for virtualenv
warning: no previously-included files matching '*' found under directory 'docs/_templates'
warning: no previously-included files matching '*' found under directory 'docs/_build'
Installing virtualenv script to /nail/home/asottile/venv/bin
Installing virtualenv-2.6 script to /nail/home/asottile/venv/bin
Successfully installed virtualenv
Cleaning up...
$ python
Python 2.6.7 (r267:88850, Dec 2 2011, 20:27:26)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import virtualenv
>>> virtualenv.path_locations('foo')
('foo', 'foo/lib/python2.6', 'foo/include/python2.6', 'foo/bin')
>>>
```
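The patch at the top of this record drops the `virtualenv.path_locations` call entirely and computes the scripts directory itself; a condensed sketch:
```python
import os


def bin_dir(venv):
    # virtualenv puts scripts under Scripts\ on Windows and bin/ elsewhere;
    # computing this ourselves avoids depending on path_locations(), whose
    # return value changed between virtualenv 1.x and 14.0.0.
    return os.path.join(venv, 'Scripts' if os.name == 'nt' else 'bin')


activate = os.path.join(bin_dir('py_env-default'), 'activate')
print(activate)  # py_env-default/bin/activate on Linux/macOS
```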
| 2016-01-20T02:08:53 |
||
pre-commit/pre-commit | 346 | pre-commit__pre-commit-346 | [
"199"
] | 139744582b1c8425d73f11a5896a4865dee84c5f | diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -1,27 +1,15 @@
from __future__ import unicode_literals
-import os
-import subprocess
import sys
+from backports.shutil_get_terminal_size import get_terminal_size
+
from pre_commit import color
from pre_commit import five
-
# TODO: smell: import side-effects
-try:
- if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)
- raise OSError('Cannot determine width without TERM')
- else: # pragma no cover (windows)
- COLS = int(
- subprocess.Popen(
- ('tput', 'cols'), stdout=subprocess.PIPE,
- ).communicate()[0] or
- # Default in the case of no terminal
- 80
- )
-except OSError: # pragma: no cover (windows)
- COLS = 80
+# TODO: https://github.com/chrippa/backports.shutil_get_terminal_size/issues/4
+COLS = get_terminal_size().columns or 80
def get_hook_message(
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,6 +41,7 @@
install_requires=[
'argparse',
'aspy.yaml',
+ 'backports.shutil_get_terminal_size',
'cached-property',
'jsonschema',
'nodeenv>=0.11.1',
| Windows: Terminal width support
We detect terminal width in unixlikes by running `tput cols`. This works fine for those platforms but doesn't work well for windows. Maybe find a package which does this logic for us and depend on that.
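On Python 3 the same idea is available in the standard library (the patch in this record uses the `backports.shutil_get_terminal_size` package so it also works on Python 2); a minimal sketch:
```python
import shutil

# Falls back to 80x24 when there is no attached terminal (e.g. in CI),
# so no TERM / tput handling is needed and it works on Windows too.
COLS = shutil.get_terminal_size(fallback=(80, 24)).columns or 80
print(COLS)
```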
| 2016-02-21T04:56:37 |
||
pre-commit/pre-commit | 370 | pre-commit__pre-commit-370 | [
"369"
] | 749b8406953e53afe3c9b8b665bc27b9f86731bb | diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py
--- a/pre_commit/languages/ruby.py
+++ b/pre_commit/languages/ruby.py
@@ -19,16 +19,18 @@
def get_env_patch(venv, language_version):
- return (
+ patches = (
('GEM_HOME', os.path.join(venv, 'gems')),
('RBENV_ROOT', venv),
- ('RBENV_VERSION', language_version),
('PATH', (
os.path.join(venv, 'gems', 'bin'), os.pathsep,
os.path.join(venv, 'shims'), os.pathsep,
os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),
)),
)
+ if language_version != 'default':
+ patches += (('RBENV_VERSION', language_version),)
+ return patches
@contextlib.contextmanager
| Ruby hooks failing with rbenv installed
Pre-commit has been failing for the past few weeks.
https://gist.github.com/ThatGerber/d6533155848076b25e5e0d5cb02e20eb
Seems to be an issue with the ruby (rbenv) environment.
Tried running `pre-commit clean && pre-commit` but it returns the same issue. Setting `rbenv global 2.2.4` and `rbenv shell 2.2.4` doesn't help either.
| indeed, seems broken for unspecified versions -- unfortunately we didn't notice this because we force a newer version than the one our (ancient) OS supplies.
I'll get a fix for this (and hopefully a test to prevent this from regressing in the future!)
Thanks. Let me know if I can assist in any way.
Actually, I'm having some issues reproducing :(
Can you supply the following (might make it easier for me to reproduce):
`which rbenv`
`env | grep -Ei '(ruby|rbenv|rvm)'`
@ThatGerber ^
perhaps also the version of rbenv you're using as well :)
https://gist.github.com/ThatGerber/18a0f90b5b709ea05e67edfdec9f4ba4
Here you go.
Ah I can reproduce with the following (in a docker container)
```
2 apt-get update
3 apt-get install git libssl-dev libreadline-dev nano virtualenv python
4 git clone http://github.com/rbenv/rbenv ~/.rbenv
5 cd ~/.rbenv/
7 git checkout 9fdce5d
13 cd ~/.rbenv/
15 apt-get install build-essential
16 apt-get install curl
17 ./src/configure
18 make -C src
# Add the rbenv stuff to bashrc
26 nano ~/.bashrc
27 . ~/.bashrc
29 git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
33 apt-get install libyaml-dev
36 rbenv install 2.2.4
38 rbenv global 2.2.4
...
2 cd ~
3 virtualenv venv
4 . venv/bin/activate
6 apt-get install python-dev python3-dev -y
7 pip install pre-commit
8 git clone git://github.com/pre-commit/demo-repo
9 cd demo-repo/
# Remove all the hooks except for the ruby one
10 nano .pre-commit-config.yaml
13 pre-commit run --all-files
```
And then:
```
(venv) root@1fcbf03f029a:~/demo-repo# pre-commit run --all-files
[INFO] Initializing environment for git://github.com/pre-commit/mirrors-scss-lint.
[INFO] Installing environment for git://github.com/pre-commit/mirrors-scss-lint.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: Command: (u'/bin/bash', u'/root/.rbenv/shims/gem', 'build', '__fake_gem.gemspec')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
rbenv: version `default' is not installed (set by RBENV_VERSION environment variable)
```
This patch seems to fix it \o/
``` diff
--- a/pre_commit/languages/ruby.py
+++ b/pre_commit/languages/ruby.py
@@ -19,16 +19,18 @@ ENVIRONMENT_DIR = 'rbenv'
def get_env_patch(venv, language_version):
- return (
+ patches = (
('GEM_HOME', os.path.join(venv, 'gems')),
('RBENV_ROOT', venv),
- ('RBENV_VERSION', language_version),
('PATH', (
os.path.join(venv, 'gems', 'bin'), os.pathsep,
os.path.join(venv, 'shims'), os.pathsep,
os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),
)),
)
+ if language_version != 'default':
+ patches += (('RBENV_VERSION', language_version),)
+ return patches
@contextlib.contextmanager
```
| 2016-05-17T15:24:37 |
|
pre-commit/pre-commit | 372 | pre-commit__pre-commit-372 | [
"371"
] | cd03f78d08cb04c9f1cda23cf271f046b1703af7 | diff --git a/pre_commit/parse_shebang.py b/pre_commit/parse_shebang.py
--- a/pre_commit/parse_shebang.py
+++ b/pre_commit/parse_shebang.py
@@ -12,6 +12,10 @@
printable = frozenset(string.printable)
+class ExecutableNotFoundError(OSError):
+ pass
+
+
def parse_bytesio(bytesio):
"""Parse the shebang from a file opened for reading binary."""
if bytesio.read(2) != b'#!':
@@ -70,7 +74,9 @@ def normexe(orig_exe):
if os.sep not in orig_exe:
exe = find_executable(orig_exe)
if exe is None:
- raise OSError('Executable {0} not found'.format(orig_exe))
+ raise ExecutableNotFoundError(
+ 'Executable `{0}` not found'.format(orig_exe),
+ )
return exe
else:
return orig_exe
diff --git a/pre_commit/util.py b/pre_commit/util.py
--- a/pre_commit/util.py
+++ b/pre_commit/util.py
@@ -181,23 +181,26 @@ def cmd_output(*cmd, **kwargs):
for key, value in kwargs.pop('env', {}).items()
) or None
- cmd = parse_shebang.normalize_cmd(cmd)
-
- popen_kwargs.update(kwargs)
- proc = __popen(cmd, **popen_kwargs)
- stdout, stderr = proc.communicate()
- if encoding is not None and stdout is not None:
- stdout = stdout.decode(encoding)
- if encoding is not None and stderr is not None:
- stderr = stderr.decode(encoding)
- returncode = proc.returncode
+ try:
+ cmd = parse_shebang.normalize_cmd(cmd)
+ except parse_shebang.ExecutableNotFoundError as e:
+ returncode, stdout, stderr = (-1, e.args[0].encode('UTF-8'), b'')
+ else:
+ popen_kwargs.update(kwargs)
+ proc = __popen(cmd, **popen_kwargs)
+ stdout, stderr = proc.communicate()
+ if encoding is not None and stdout is not None:
+ stdout = stdout.decode(encoding)
+ if encoding is not None and stderr is not None:
+ stderr = stderr.decode(encoding)
+ returncode = proc.returncode
if retcode is not None and retcode != returncode:
raise CalledProcessError(
returncode, cmd, retcode, output=(stdout, stderr),
)
- return proc.returncode, stdout, stderr
+ return returncode, stdout, stderr
def rmtree(path):
| diff --git a/testing/resources/not_found_exe/hooks.yaml b/testing/resources/not_found_exe/hooks.yaml
new file mode 100644
--- /dev/null
+++ b/testing/resources/not_found_exe/hooks.yaml
@@ -0,0 +1,5 @@
+- id: not-found-exe
+ name: Not found exe
+ entry: i-dont-exist-lol
+ language: system
+ files: ''
diff --git a/tests/parse_shebang_test.py b/tests/parse_shebang_test.py
--- a/tests/parse_shebang_test.py
+++ b/tests/parse_shebang_test.py
@@ -108,7 +108,7 @@ def test_find_executable_path_ext(in_tmpdir):
def test_normexe_does_not_exist():
with pytest.raises(OSError) as excinfo:
parse_shebang.normexe('i-dont-exist-lol')
- assert excinfo.value.args == ('Executable i-dont-exist-lol not found',)
+ assert excinfo.value.args == ('Executable `i-dont-exist-lol` not found',)
def test_normexe_already_full_path():
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -164,6 +164,16 @@ def test_system_hook_with_spaces(tempdir_factory, store):
)
[email protected]
+def test_missing_executable(tempdir_factory, store):
+ _test_hook_repo(
+ tempdir_factory, store, 'not_found_exe',
+ 'not-found-exe', ['/dev/null'],
+ b'Executable `i-dont-exist-lol` not found',
+ expected_return_code=1,
+ )
+
+
@pytest.mark.integration
def test_run_a_script_hook(tempdir_factory, store):
_test_hook_repo(
| Not-found executable crashes framework
This was introduced with the new exe logic in 0.8.0
Here's a simple reproduction:
``` yaml
- repo: local
hooks:
- id: test
name: test
language: system
entry: i-dont-exist-lol
files: '\.py$'
```
```
$ pre-commit run --all-files
test.....................................................................An unexpected error has occurred: OSError: Executable i-dont-exist-lol not found
Check the log at ~/.pre-commit/pre-commit.log
```
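A condensed sketch of how the accompanying patch turns this into an ordinary hook failure instead of a crash (`shutil.which` stands in here for pre-commit's own executable lookup):
```python
import shutil


class ExecutableNotFoundError(OSError):
    pass


def normexe(exe):
    found = shutil.which(exe)
    if found is None:
        raise ExecutableNotFoundError('Executable `{0}` not found'.format(exe))
    return found


def run_hook_cmd(exe):
    try:
        normexe(exe)
    except ExecutableNotFoundError as exc:
        # Surface the message as a failing hook result (-1) rather than
        # letting the OSError escape and kill the whole run.
        return -1, exc.args[0]
    return 0, ''


print(run_hook_cmd('i-dont-exist-lol'))
# (-1, 'Executable `i-dont-exist-lol` not found')
```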
| 2016-05-20T20:23:17 |
|
pre-commit/pre-commit | 376 | pre-commit__pre-commit-376 | [
"374"
] | 6654fee5f9c40b4483f30d44a5ccda70b238b3ce | diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -69,7 +69,11 @@ def get_conflicted_files():
@memoize_by_cwd
def get_staged_files():
- return cmd_output('git', 'diff', '--staged', '--name-only')[1].splitlines()
+ return cmd_output(
+ 'git', 'diff', '--staged', '--name-only',
+ # Everything except for D
+ '--diff-filter=ACMRTUXB'
+ )[1].splitlines()
@memoize_by_cwd
| diff --git a/tests/git_test.py b/tests/git_test.py
--- a/tests/git_test.py
+++ b/tests/git_test.py
@@ -33,6 +33,16 @@ def test_get_root_not_git_dir(tempdir_factory):
git.get_root()
+def test_get_staged_files_deleted(tempdir_factory):
+ path = git_dir(tempdir_factory)
+ with cwd(path):
+ open('test', 'a').close()
+ cmd_output('git', 'add', 'test')
+ cmd_output('git', 'commit', '-m', 'foo', '--allow-empty')
+ cmd_output('git', 'rm', '--cached', 'test')
+ assert git.get_staged_files() == []
+
+
def test_is_not_in_merge_conflict(tempdir_factory):
path = git_dir(tempdir_factory)
with cwd(path):
| Newly gitignored files (that still exist on disk) are linted
(they should not be)
| 2016-05-25T15:45:51 |
|
pre-commit/pre-commit | 387 | pre-commit__pre-commit-387 | [
"386"
] | 99edd0d5c95298b9689013bafca6cf505e21f370 | diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -78,7 +78,9 @@ def hooks(self):
logger.error(
'`{0}` is not present in repository {1}. '
'Typo? Perhaps it is introduced in a newer version? '
- 'Often `pre-commit autoupdate` fixes this.'.format(
+ 'Often you can fix this by removing the hook, running '
+ '`pre-commit autoupdate`, '
+ 'and then adding the hook.'.format(
hook['id'], self.repo_config['repo'],
)
)
| diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -556,7 +556,9 @@ def test_hook_id_not_present(tempdir_factory, store, fake_log_handler):
assert fake_log_handler.handle.call_args[0][0].msg == (
'`i-dont-exist` is not present in repository {0}. '
'Typo? Perhaps it is introduced in a newer version? '
- 'Often `pre-commit autoupdate` fixes this.'.format(path)
+ 'Often you can fix this by removing the hook, '
+ 'running `pre-commit autoupdate`, '
+ 'and then adding the hook.'.format(path)
)
| do not recommend `pre-commit autoupdate` on failure of `pre-commit autoupdate`
It would be preferable to recommend something that has a chance of fixing the problem.
Instructions to reproduce are in #385.
| So `autoupdate` is useful to get you into a working state, however the directions should be:
- Remove the hook, run autoupdate, and add the hook back.
See also: https://github.com/pre-commit/pre-commit/pull/368#issue-153987791
| 2016-06-25T15:15:15 |
pre-commit/pre-commit | 399 | pre-commit__pre-commit-399 | [
"263"
] | ea05189c28084e42014b20eae3a1130ea14c9d93 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,6 @@
classifiers=[
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 2',
- 'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
@@ -46,9 +45,6 @@
'pyyaml',
'virtualenv',
],
- extras_require={
- ':python_version=="2.6"': ['argparse', 'ordereddict'],
- },
entry_points={
'console_scripts': [
'pre-commit = pre_commit.main:main',
| Drop python2.6?
Is it worth attempting to continue to support python2.6?
| Is it a lot of work to keep 2.6 considering we're doing 2.7? We still have a lot of projects at Yelp on 2.6, I imagine other companies/projects still need to support it too.
There's no reason pre-commit itself needs to run on Python 2.6, right? It's the hooks themselves that may need to support older code (e.g. if they parse the Python AST).
It's sometimes convenient to install into the same virtualenv, but it's straightforward to use tox with a different Python version.
Yeah but it's slightly awkward if you're not a python shop/project and you're stuck on lucid / old debian / old osx / old etc. hmmmmm
To be fair, lucid is unsupported, squeeze is only supported until February (last Debian release with 2.6), and I'm pretty sure OS X has basically no support period :p
But yeah, point taken.
My 2 cents, support 2.6 until a test fails on that release. At that point, kill it. Anybody using Python 2.6 for active development is in a bad way. We can all concede that there are Python 2.6 codebases still living, but they're not being actively developed - and the only purpose of this tool is to support active development.
@adamn: it seems you assume there that the only way in which Python 2.6 becomes a burden for `pre-commit` is when tests break on it. In addition to untested failure modes that aren't caught that way, being Python 2.6 compatible also has an impact on the code and tooling (e.g., `pip` availability) itself in the first place. Breaking loose from Python 2.6 could be an advantage. So, I would say we should take a broader perspective on the ups and downs of keeping Python 2.6 compat.
I would like to add to your point about living code bases that projects can just keep using older versions of pre-commit that are compatible. I.e., support for platforms for legacy reasons shouldn't really be an argument here. Only if Python 2.6-based projects were still expecting to use new features in pre-commit would this be an issue, but as you wrote, it is a dead platform.
@sanmai-NL Funny, I was trying to be diplomatic about dropping Python 2.6 in order to gently nudge people towards that. I'm a huge fan of just dropping it now though - that would be great!
Our tests are now failing due to flake8 dropping python2.6. I believe this is enough reason to drop python2.6 support.
0.8.x will be the last versions that support python2.6, starting in 0.9.0 python2.7+ will only be supported
| 2016-08-18T14:32:40 |
|
pre-commit/pre-commit | 400 | pre-commit__pre-commit-400 | [
"397"
] | f11338ccfa612e36a6c1f2dc688080ec08fd66b0 | diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -45,7 +45,7 @@ def staged_files_only(cmd_runner):
finally:
# Try to apply the patch we saved
try:
- cmd_runner.run(['git', 'apply', patch_filename])
+ cmd_runner.run(('git', 'apply', patch_filename), encoding=None)
except CalledProcessError:
logger.warning(
'Stashed changes conflicted with hook auto-fixes... '
@@ -55,7 +55,7 @@ def staged_files_only(cmd_runner):
# by hooks.
# Roll back the changes made by hooks.
cmd_runner.run(['git', 'checkout', '--', '.'])
- cmd_runner.run(['git', 'apply', patch_filename])
+ cmd_runner.run(('git', 'apply', patch_filename), encoding=None)
logger.info('Restored changes from {0}.'.format(patch_filename))
else:
# There weren't any staged files so we don't need to do anything
| diff --git a/tests/staged_files_only_test.py b/tests/staged_files_only_test.py
--- a/tests/staged_files_only_test.py
+++ b/tests/staged_files_only_test.py
@@ -280,3 +280,28 @@ def test_stage_non_utf8_changes(foo_staged, cmd_runner):
with staged_files_only(cmd_runner):
_test_foo_state(foo_staged)
_test_foo_state(foo_staged, contents, 'AM', encoding='latin-1')
+
+
+def test_non_utf8_conflicting_diff(foo_staged, cmd_runner):
+ """Regression test for #397"""
+ # The trailing whitespace is important here, this triggers git to produce
+ # an error message which looks like:
+ #
+ # ...patch1471530032:14: trailing whitespace.
+ # [[unprintable character]][[space character]]
+ # error: patch failed: foo:1
+ # error: foo: patch does not apply
+ #
+ # Previously, the error message (though discarded immediately) was being
+ # decoded with the UTF-8 codec (causing a crash)
+ contents = 'ú \n'
+ with io.open('foo', 'w', encoding='latin-1') as foo_file:
+ foo_file.write(contents)
+
+ _test_foo_state(foo_staged, contents, 'AM', encoding='latin-1')
+ with staged_files_only(cmd_runner):
+ _test_foo_state(foo_staged)
+ # Create a conflicting diff that will need to be rolled back
+ with io.open('foo', 'w') as foo_file:
+ foo_file.write('')
+ _test_foo_state(foo_staged, contents, 'AM', encoding='latin-1')
| Stashed changes lost if hook fails with non-UTF-8 diff containing trailing whitespace
Hi,
A colleague almost lost all the changes she was working on after launching a `git commit` (with zero file added) and `pre-commit` crashing without restoring its [patch](https://github.com/pre-commit/pre-commit/blob/master/pre_commit/staged_files_only.py#L15).
Here is the terminal message she got:
```
[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...
An unexpected error has occurred: CalledProcessError: Command: ['git', 'apply', 'C:\\Users\\toto\\.pre-commit\\patch1471341002']
```
This seems very similar to a past solved issue:
https://github.com/pre-commit/pre-commit/issues/176
I think it had to do with CRLF conversion.
I'm going to try to reproduce this.
| Strange! In the worst case the patch files remain around. Looking forward to the reproduction. Going to tag windows for now
Follow-up in https://github.com/pre-commit/pre-commit-hooks/issues/134
There's actually two bugs here it seems, I'm going to reopen this.
I think this is the actual actionable one, though it's not entirely clear why it's happening yet!
| 2016-08-18T14:37:30 |
pre-commit/pre-commit | 407 | pre-commit__pre-commit-407 | [
"401"
] | 0a810249e3120315474efeb17a24ed0398982d63 | diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py
--- a/pre_commit/languages/ruby.py
+++ b/pre_commit/languages/ruby.py
@@ -4,6 +4,7 @@
import io
import os.path
import shutil
+import tarfile
from pre_commit.envcontext import envcontext
from pre_commit.envcontext import Var
@@ -11,7 +12,6 @@
from pre_commit.util import CalledProcessError
from pre_commit.util import clean_path_on_failure
from pre_commit.util import resource_filename
-from pre_commit.util import tarfile_open
from pre_commit.xargs import xargs
@@ -46,7 +46,7 @@ def in_env(repo_cmd_runner, language_version):
def _install_rbenv(repo_cmd_runner, version='default'):
directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
- with tarfile_open(resource_filename('rbenv.tar.gz')) as tf:
+ with tarfile.open(resource_filename('rbenv.tar.gz')) as tf:
tf.extractall(repo_cmd_runner.path('.'))
shutil.move(
repo_cmd_runner.path('rbenv'), repo_cmd_runner.path(directory),
@@ -55,11 +55,11 @@ def _install_rbenv(repo_cmd_runner, version='default'):
# Only install ruby-build if the version is specified
if version != 'default':
# ruby-download
- with tarfile_open(resource_filename('ruby-download.tar.gz')) as tf:
+ with tarfile.open(resource_filename('ruby-download.tar.gz')) as tf:
tf.extractall(repo_cmd_runner.path(directory, 'plugins'))
# ruby-build
- with tarfile_open(resource_filename('ruby-build.tar.gz')) as tf:
+ with tarfile.open(resource_filename('ruby-build.tar.gz')) as tf:
tf.extractall(repo_cmd_runner.path(directory, 'plugins'))
activate_path = repo_cmd_runner.path(directory, 'bin', 'activate')
diff --git a/pre_commit/make_archives.py b/pre_commit/make_archives.py
--- a/pre_commit/make_archives.py
+++ b/pre_commit/make_archives.py
@@ -3,12 +3,12 @@
from __future__ import unicode_literals
import os.path
+import tarfile
from pre_commit import five
from pre_commit.util import cmd_output
from pre_commit.util import cwd
from pre_commit.util import rmtree
-from pre_commit.util import tarfile_open
from pre_commit.util import tmpdir
@@ -53,7 +53,7 @@ def make_archive(name, repo, ref, destdir):
# runtime
rmtree(os.path.join(tempdir, '.git'))
- with tarfile_open(five.n(output_path), 'w|gz') as tf:
+ with tarfile.open(five.n(output_path), 'w|gz') as tf:
tf.add(tempdir, name)
return output_path
diff --git a/pre_commit/util.py b/pre_commit/util.py
--- a/pre_commit/util.py
+++ b/pre_commit/util.py
@@ -8,7 +8,6 @@
import shutil
import stat
import subprocess
-import tarfile
import tempfile
import pkg_resources
@@ -82,16 +81,6 @@ def no_git_env():
)
[email protected]
-def tarfile_open(*args, **kwargs):
- """Compatibility layer because python2.6"""
- tf = tarfile.open(*args, **kwargs)
- try:
- yield tf
- finally:
- tf.close()
-
-
@contextlib.contextmanager
def tmpdir():
"""Contextmanager to create a temporary directory. It will be cleaned up
| diff --git a/tests/make_archives_test.py b/tests/make_archives_test.py
--- a/tests/make_archives_test.py
+++ b/tests/make_archives_test.py
@@ -2,6 +2,7 @@
from __future__ import unicode_literals
import os.path
+import tarfile
import mock
import pytest
@@ -9,7 +10,6 @@
from pre_commit import make_archives
from pre_commit.util import cmd_output
from pre_commit.util import cwd
-from pre_commit.util import tarfile_open
from testing.fixtures import git_dir
from testing.util import get_head_sha
from testing.util import skipif_slowtests_false
@@ -41,7 +41,7 @@ def test_make_archive(tempdir_factory):
extract_dir = tempdir_factory.get()
# Extract the tar
- with tarfile_open(archive_path) as tf:
+ with tarfile.open(archive_path) as tf:
tf.extractall(extract_dir)
# Verify the contents of the tar
| Remove python2.6 tarfile_open compatibility
| See #399
| 2016-08-31T23:25:02 |
pre-commit/pre-commit | 420 | pre-commit__pre-commit-420 | [
"419"
] | bbf1f62ed686a3e321280703a227fbd957e76151 | diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -71,6 +71,8 @@ def install_environment(
]
if version != 'default':
venv_cmd.extend(['-p', norm_version(version)])
+ else:
+ venv_cmd.extend(['-p', os.path.realpath(sys.executable)])
repo_cmd_runner.run(venv_cmd)
with in_env(repo_cmd_runner, version):
helpers.run_setup_cmd(
| Not working on macOS Sierra?
Attempting to utilize a collection of hooks from the default repo, I get the following:
```
An unexpected error has occurred: CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
Return code: 100
Expected return code: 0
Output:
Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3
Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python
ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning
ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
ERROR: virtualenv is not compatible with this system or executable
Errors: (none)
Traceback (most recent call last):
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/error_handler.py", line 47, in error_handler
yield
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/main.py", line 157, in main
return run(runner, args)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py", line 212, in run
return _run_hooks(repo_hooks, args, write, environ)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/staged_files_only.py", line 63, in staged_files_only
yield
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py", line 195, in run
repo_hooks = list(get_repo_hooks(runner))
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py", line 141, in get_repo_hooks
for repo in runner.repositories:
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/runner.py", line 47, in repositories
repository.require_installed()
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py", line 117, in require_installed
self.install()
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py", line 187, in install
self.additional_dependencies[language_name][language_version],
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py", line 78, in install_environment
('pip', 'install', '.') + additional_dependencies,
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py", line 58, in clean_path_on_failure
yield
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py", line 74, in install_environment
repo_cmd_runner.run(venv_cmd)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/prefixed_command_runner.py", line 39, in run
return cmd_output(*replaced_cmd, __popen=self.__popen, **kwargs)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py", line 189, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
pre_commit.util.CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
Return code: 100
Expected return code: 0
Output:
Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3
Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python
ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning
ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
ERROR: virtualenv is not compatible with this system or executable
Errors: (none)
```
Using the Python.org-provided Python 3.5 installer package. I'm already within a virtual environment at the point of execution.
| Thanks for the report!
Does `virtualenv venv` give the same error message? What version of virtualenv do you have installed alongside pre-commit?
Ahah! I am able to reproduce this outside pre-commit (on el capitan no less), though I think I can fix it inside pre-commit. I'll cook up a feature branch! It seems to be caused by the mixing of stdlib `venv` (previously `pyvenv`) and the slightly more maintained `virtualenv` project.
Here's my minimal reproduction btw:
```
$ python3.5 -m venv venv35
$ . venv35/bin/activate
(venv35) $ pip install virtualenv
Collecting virtualenv
Using cached virtualenv-15.0.3-py2.py3-none-any.whl
Installing collected packages: virtualenv
Successfully installed virtualenv-15.0.3
You are using pip version 8.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
(venv35) $ virtualenv venv2
Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'
Overwriting /Users/asottile/workspace/pre-commit/venv2/lib/python3.5/orig-prefix.txt with new content
New python executable in /Users/asottile/workspace/pre-commit/venv2/bin/python3.5
Not overwriting existing python script /Users/asottile/workspace/pre-commit/venv2/bin/python (you must use /Users/asottile/workspace/pre-commit/venv2/bin/python3.5)
ERROR: The executable /Users/asottile/workspace/pre-commit/venv2/bin/python3.5 is not functioning
ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/asottile/workspace/pre-commit/venv2')
ERROR: virtualenv is not compatible with this system or executable
```
However, I can work around this it seems:
```
(venv35) $ virtualenv venv2 -p "$(python -c 'import os, sys; print(os.path.realpath(sys.executable))')"
Running virtualenv with interpreter /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5
Using base prefix '/Library/Frameworks/Python.framework/Versions/3.5'
Overwriting /Users/asottile/workspace/pre-commit/venv2/lib/python3.5/orig-prefix.txt with new content
New python executable in /Users/asottile/workspace/pre-commit/venv2/bin/python3.5
Not overwriting existing python script /Users/asottile/workspace/pre-commit/venv2/bin/python (you must use /Users/asottile/workspace/pre-commit/venv2/bin/python3.5)
Installing setuptools, pip, wheel...done.
```
Awesome, thanks! Apologies for not noticing your initial reply (got lost in a bunch of Travis traffic after I pushed updates to way, way too many repos and branches at the same time…)
With a virtualenv installed system-wide (`/usr/local/bin/virtualenv`, 15.0.3, running under macOS-provided Python 2.7) and virtualenv installed within my venv (`$VIRTUAL_ENV/bin/virtualenv`, 15.0.3, running under venv-provided 3.5.2 from official Python.org `.pkg`) running `virtualenv foo` does, in fact, explode gloriously.
Edited to add: running `virtualenv foo` outside of my existing venv works fine. (Defaulting to `--python=python2.7`.) Explicitly running `virtualenv --python=python3.5 foo` also works fine when outside of an existing environment.
Makes sense! I recognize you from contributions on dahlia/libsass-python :)
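For reference, a minimal standalone sketch of the interpreter-resolution trick used in the workaround above and in the patch at the top of this record; the function name is made up, only `os.path.realpath(sys.executable)` comes from the actual change.
```python
import os.path
import sys


def base_interpreter():
    # Inside a stdlib `venv`, sys.executable points at the environment's stub
    # (e.g. venv35/bin/python3); realpath resolves it back to the framework
    # interpreter that virtualenv can safely be re-run with via `-p`.
    return os.path.realpath(sys.executable)


print(base_interpreter())
```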
| 2016-10-23T23:44:11 |
|
pre-commit/pre-commit | 427 | pre-commit__pre-commit-427 | [
"425"
] | 4f73a743780299b2138bd01a2239b1ba11bc7efd | diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -73,7 +73,7 @@ def install_environment(
venv_cmd.extend(['-p', norm_version(version)])
else:
venv_cmd.extend(['-p', os.path.realpath(sys.executable)])
- repo_cmd_runner.run(venv_cmd)
+ repo_cmd_runner.run(venv_cmd, cwd='/')
with in_env(repo_cmd_runner, version):
helpers.run_setup_cmd(
repo_cmd_runner,
| diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -81,6 +81,20 @@ def test_python_hook_args_with_spaces(tempdir_factory, store):
)
+@pytest.mark.integration
+def test_python_hook_weird_setup_cfg(tempdir_factory, store):
+ path = git_dir(tempdir_factory)
+ with cwd(path):
+ with io.open('setup.cfg', 'w') as setup_cfg:
+ setup_cfg.write('[install]\ninstall_scripts=/usr/sbin\n')
+
+ _test_hook_repo(
+ tempdir_factory, store, 'python_hooks_repo',
+ 'foo', [os.devnull],
+ b"['" + five.to_bytes(os.devnull) + b"']\nHello World\n"
+ )
+
+
@pytest.mark.integration
def test_switch_language_versions_doesnt_clobber(tempdir_factory, store):
# We're using the python3 repo because it prints the python version
 | setup.cfg prevents pre-commit from installing
For some reason I have a setup.cfg file in the root directory of my repo for my app where the parameter **install_scripts** is set to **/usr/sbin**. This prevents pre-commit from setting itself up and makes it crash.
Here is a repro in a fresh git repository containing only a **setup.cfg** file and a **.pre-commit-config.yaml** (taken from the [install guide](http://pre-commit.com/#install)):
<pre>
$ mkdir repro; cd repro
$ git init
Initialized empty Git repository in /home/wilfried/repro/.git/
$ pre-commit clean
Cleaned /home/wilfried/.pre-commit.
$ pre-commit install
pre-commit installed at /home/wilfried/repro/.git/hooks/pre-commit
$ cat setup.cfg
[install]
install_scripts=/usr/sbin
$ cat .pre-commit-config.yaml
- repo: git://github.com/pre-commit/pre-commit-hooks
sha: v0.4.2
hooks:
- id: trailing-whitespace
</pre>
Now, with those two files set up, I try to run a simple `pre-commit run`, which tries to initialize the virtualenv.
<pre>
$ pre-commit run --all-files
[INFO] Initializing environment for git://github.com/pre-commit/pre-commit-hooks.
[INFO] Installing environment for git://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: Command: ('/usr/bin/python', '-m', 'virtualenv', '/home/wilfried/.pre-commit/repoaXLSIv/py_env-default', '-p', '/usr/bin/python2.7')
Return code: 1
Expected return code: 0
Output:
New python executable in /home/wilfried/.pre-commit/repoaXLSIv/py_env-default/bin/python2.7
Also creating executable in /home/wilfried/.pre-commit/repoaXLSIv/py_env-default/bin/python
Installing setuptools, pip, wheel...
Complete output from command /home/wilfried/.pre-...efault/bin/python2.7 - setuptools pip wheel:
...Installing setuptools, pip, wheel...done.
Running virtualenv with interpreter /usr/bin/python2.7
Errors:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 2327, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 711, in main
symlink=options.symlink)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 944, in create_environment
download=download,
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 900, in install_wheel
call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=SCRIPT)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 792, in call_subprocess
logger.notify('\n'.join(all_output) + '\n----------------------------------------')
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 199, in notify
self.log(self.NOTIFY, msg, *args, **kw)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 231, in log
consumer.write(rendered+'\n')
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 2254: ordinal not in range(128)
Check the log at ~/.pre-commit/pre-commit.log
</pre>
You'll find the content on pre-commit.log on [pastebin](http://pastebin.com/Ls61EQDj).
Now, if I comment out the install_scripts parameter, everything works fine:
<pre>
$ cat setup.cfg
[install]
#install_scripts=/usr/sbin
$ pre-commit clean
Cleaned /home/wilfried/.pre-commit.
$ pre-commit run --all-files
[INFO] Initializing environment for git://github.com/pre-commit/pre-commit-hooks.
[INFO] Installing environment for git://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
Trim Trailing Whitespace.............................(no files to check)Skipped
</pre>
I'm running on a linux mint 18, with python 2.7.12 and pre-commit 0.9.2
<pre>
$ python --version
Python 2.7.12
$ pre-commit --version
pre-commit 0.9.2
</pre>
Let me know if you need anything else.
| Very strange! I'll poke at this, I think I have an idea for a fix already :)
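For context, a rough sketch of the shape of the eventual fix (the patch at the top of this record): create the hook's virtualenv with the working directory set to `/` so that the project's own setup.cfg (`[install] install_scripts=/usr/sbin`) can't influence the bootstrap. `make_hook_env` and the direct `subprocess` call are stand-ins, not pre-commit's real command runner API.
```python
import subprocess
import sys


def make_hook_env(env_path):
    subprocess.check_call(
        (sys.executable, '-m', 'virtualenv', env_path),
        # Don't run from the project directory, otherwise distutils/pip pick
        # up the repo's setup.cfg while installing setuptools/pip/wheel.
        cwd='/',
    )
```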
| 2016-11-07T16:59:02 |
pre-commit/pre-commit | 436 | pre-commit__pre-commit-436 | [
"354"
] | d17daf7fd18469f888ec50222a11d4d0a7b3a278 | diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -4,6 +4,7 @@
import logging
import os.path
import re
+import sys
from pre_commit.errors import FatalError
from pre_commit.util import CalledProcessError
@@ -102,3 +103,26 @@ def wrapper(include_expr, exclude_expr):
get_staged_files_matching = get_files_matching(get_staged_files)
get_all_files_matching = get_files_matching(get_all_files)
get_conflicted_files_matching = get_files_matching(get_conflicted_files)
+
+
+def check_for_cygwin_mismatch():
+ """See https://github.com/pre-commit/pre-commit/issues/354"""
+ if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)
+ is_cygwin_python = sys.platform == 'cygwin'
+ toplevel = cmd_output('git', 'rev-parse', '--show-toplevel')[1]
+ is_cygwin_git = toplevel.startswith('/')
+
+ if is_cygwin_python ^ is_cygwin_git:
+ exe_type = {True: '(cygwin)', False: '(windows)'}
+ logger.warn(
+ 'pre-commit has detected a mix of cygwin python / git\n'
+ 'This combination is not supported, it is likely you will '
+ 'receive an error later in the program.\n'
+ 'Make sure to use cygwin git+python while using cygwin\n'
+ 'These can be installed through the cygwin installer.\n'
+ ' - python {}\n'
+ ' - git {}\n'.format(
+ exe_type[is_cygwin_python],
+ exe_type[is_cygwin_git],
+ )
+ )
diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -152,6 +152,7 @@ def main(argv=None):
with error_handler():
add_logging_handler(args.color)
+ git.check_for_cygwin_mismatch()
runner = Runner.create()
if args.command == 'install':
| Warn when mismatching cygwin git/python
See #352 and #353 for how this can manifest itself
| - cygwin python can be detected using `sys.platform == 'cygwin'`
- cygwin git can be detected using `check_output(('git', 'rev-parse', '--show-toplevel')).decode('UTF-8').startswith('/')` (maybe there's a better way?)
I think the code that would need to be added to verify this is:
```python
if sys.platform in ('win32', 'cygwin'): # pragma no cover (windows only)
is_cygwin_python = sys.platform == 'cygwin'
is_cygwin_git = cmd_output('git', 'rev-parse', '--show-toplevel')[1].startswith('/')
if is_cygwin_python ^ is_cygwin_git:
logger.warn(...)
```
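A small illustration of why the `startswith('/')` heuristic works; the example paths are hypothetical, but cygwin git reports a POSIX-style toplevel while native Windows git reports a drive-letter path.
```python
def looks_like_cygwin_git(toplevel):
    # cygwin git: '/cygdrive/c/Users/me/repo' -> True
    # native git: 'C:/Users/me/repo'          -> False
    return toplevel.startswith('/')


for example in ('/cygdrive/c/Users/me/repo', 'C:/Users/me/repo'):
    print(example, looks_like_cygwin_git(example))
```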
| 2016-11-26T23:03:19 |
|
pre-commit/pre-commit | 438 | pre-commit__pre-commit-438 | [
"437"
] | 88f9f76e4895e0a331a138e3b61971c3f5ec2980 | diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -152,8 +152,8 @@ def main(argv=None):
with error_handler():
add_logging_handler(args.color)
- git.check_for_cygwin_mismatch()
runner = Runner.create()
+ git.check_for_cygwin_mismatch()
if args.command == 'install':
return install(
| diff --git a/tests/main_test.py b/tests/main_test.py
--- a/tests/main_test.py
+++ b/tests/main_test.py
@@ -7,6 +7,7 @@
import pytest
from pre_commit import main
+from pre_commit.error_handler import PreCommitSystemExit
from pre_commit.util import cwd
from testing.auto_namedtuple import auto_namedtuple
@@ -142,3 +143,16 @@ def test_help_cmd_in_empty_directory(
mock.call(['help', 'run']),
mock.call(['run', '--help']),
])
+
+
+def test_expected_fatal_error_no_git_repo(
+ tempdir_factory, cap_out, mock_out_store_directory,
+):
+ with cwd(tempdir_factory.get()):
+ with pytest.raises(PreCommitSystemExit):
+ main.main([])
+ assert cap_out.get() == (
+ 'An error has occurred: FatalError: git failed. '
+ 'Is it installed, and are you in a Git repository directory?\n'
+ 'Check the log at ~/.pre-commit/pre-commit.log\n'
+ )
| cygwin python checking should happen after setup code
Regressed as part of #436
Before:
```
$ ./pre-commit/venv_cygwin_python/bin/pre-commit clean
An error has occurred: FatalError: git failed. Is it installed, and are you in a Git repository directory?
Check the log at ~/.pre-commit/pre-commit.log
```
Current master:
```
$ ./pre-commit/venv_cygwin_python/bin/pre-commit clean
An unexpected error has occurred: CalledProcessError: Command: ('/usr/bin/git', 'rev-parse', '--show-toplevel')
Return code: 128
Expected return code: 0
Output: (none)
Errors:
fatal: Not a git repository (or any of the parent directories): .git
Check the log at ~/.pre-commit/pre-commit.log
```
| 2016-11-26T23:18:06 |
|
pre-commit/pre-commit | 460 | pre-commit__pre-commit-460 | [
"456"
] | 8837cfa7ffcc419216d4e01392cee0f1ceee9c88 | diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -82,12 +82,16 @@ def install(runner, overwrite=False, hooks=False, hook_type='pre-commit'):
# If they requested we install all of the hooks, do so.
if hooks:
- for repository in runner.repositories:
- repository.require_installed()
+ install_hooks(runner)
return 0
+def install_hooks(runner):
+ for repository in runner.repositories:
+ repository.require_installed()
+
+
def uninstall(runner, hook_type='pre-commit'):
"""Uninstall the pre-commit hooks."""
hook_path = runner.get_hook_path(hook_type)
diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -12,6 +12,7 @@
from pre_commit.commands.autoupdate import autoupdate
from pre_commit.commands.clean import clean
from pre_commit.commands.install_uninstall import install
+from pre_commit.commands.install_uninstall import install_hooks
from pre_commit.commands.install_uninstall import uninstall
from pre_commit.commands.run import run
from pre_commit.error_handler import error_handler
@@ -78,6 +79,17 @@ def main(argv=None):
default='pre-commit',
)
+ install_hooks_parser = subparsers.add_parser(
+ 'install-hooks',
+ help=(
+ 'Install hook environemnts for all environemnts in the config '
+ 'file. You may find `pre-commit install --install-hooks` more '
+ 'useful.'
+ ),
+ )
+ _add_color_option(install_hooks_parser)
+ _add_config_option(install_hooks_parser)
+
uninstall_parser = subparsers.add_parser(
'uninstall', help='Uninstall the pre-commit script.',
)
@@ -171,6 +183,8 @@ def main(argv=None):
runner, overwrite=args.overwrite, hooks=args.install_hooks,
hook_type=args.hook_type,
)
+ elif args.command == 'install-hooks':
+ return install_hooks(runner)
elif args.command == 'uninstall':
return uninstall(runner, hook_type=args.hook_type)
elif args.command == 'clean':
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -13,6 +13,7 @@
import pre_commit.constants as C
from pre_commit.commands.install_uninstall import IDENTIFYING_HASH
from pre_commit.commands.install_uninstall import install
+from pre_commit.commands.install_uninstall import install_hooks
from pre_commit.commands.install_uninstall import is_our_pre_commit
from pre_commit.commands.install_uninstall import is_previous_pre_commit
from pre_commit.commands.install_uninstall import PREVIOUS_IDENTIFYING_HASHES
@@ -460,6 +461,20 @@ def test_installs_hooks_with_hooks_True(
assert PRE_INSTALLED.match(output)
+def test_install_hooks_command(tempdir_factory, mock_out_store_directory):
+ path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
+ with cwd(path):
+ runner = Runner(path, C.CONFIG_FILE)
+ install(runner)
+ install_hooks(runner)
+ ret, output = _get_commit_output(
+ tempdir_factory, pre_commit_home=mock_out_store_directory,
+ )
+
+ assert ret == 0
+ assert PRE_INSTALLED.match(output)
+
+
def test_installed_from_venv(tempdir_factory):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
diff --git a/tests/main_test.py b/tests/main_test.py
--- a/tests/main_test.py
+++ b/tests/main_test.py
@@ -15,17 +15,19 @@
@pytest.yield_fixture
def mock_commands():
with mock.patch.object(main, 'autoupdate') as autoupdate_mock:
- with mock.patch.object(main, 'clean') as clean_mock:
+ with mock.patch.object(main, 'install_hooks') as install_hooks_mock:
with mock.patch.object(main, 'install') as install_mock:
with mock.patch.object(main, 'uninstall') as uninstall_mock:
with mock.patch.object(main, 'run') as run_mock:
- yield auto_namedtuple(
- autoupdate_mock=autoupdate_mock,
- clean_mock=clean_mock,
- install_mock=install_mock,
- uninstall_mock=uninstall_mock,
- run_mock=run_mock,
- )
+ with mock.patch.object(main, 'clean') as clean_mock:
+ yield auto_namedtuple(
+ autoupdate_mock=autoupdate_mock,
+ clean_mock=clean_mock,
+ install_mock=install_mock,
+ install_hooks_mock=install_hooks_mock,
+ uninstall_mock=uninstall_mock,
+ run_mock=run_mock,
+ )
class CalledExit(Exception):
@@ -121,6 +123,12 @@ def test_run_command(mock_commands):
assert_only_one_mock_called(mock_commands)
+def test_install_hooks_command(mock_commands):
+ main.main(('install-hooks',))
+ assert mock_commands.install_hooks_mock.call_count == 1
+ assert_only_one_mock_called(mock_commands)
+
+
def test_no_commands_run_command(mock_commands):
main.main([])
assert mock_commands.run_mock.call_count == 1
| Hooks pre-fetching
Hi,
I want to suggest introducing a command for downloading all required hooks ahead of time. The use case is simple:
1. I pack everything needed for testing into a container
2. I deploy that into CI
3. It gets built and the tests are run
I'd like to separate fetching of the hooks and pre-installing them into a build step. This would allow me to use caching for containers, saving time for tests.
P.S. Perhaps an argument should be added to the `run` command so that it won't try downloading anything when executing tests.
| We use `pre-commit install --install-hooks` to accomplish this for our jobs, does this work for you?
Thanks for the suggestion. It doesn't look like it installs all the dependencies into the `$HOME/.pre-commit/` env the way `pre-commit run` does.
Can you provide some additional output? The two commands run the same code:
## install --install-hooks
- https://github.com/pre-commit/pre-commit/blob/8837cfa7ffcc419216d4e01392cee0f1ceee9c88/pre_commit/commands/install_uninstall.py#L86
## run
- https://github.com/pre-commit/pre-commit/blob/8837cfa7ffcc419216d4e01392cee0f1ceee9c88/pre_commit/commands/run.py#L159
- https://github.com/pre-commit/pre-commit/blob/8837cfa7ffcc419216d4e01392cee0f1ceee9c88/pre_commit/runner.py#L41-L46
You are right, there was no output, because the hooks were installed before that.
Now I've faced another problem: whenever I run `pre-commit run --all-files` it tries upgrading them.
I need some way to prevent that. Any ideas?
It shouldn't, can you show some output or something reproducible?
ah I took a peek at your [.pre-commit-config.yaml](https://github.com/GDG-Ukraine/gdg.org.ua/blob/c7f5c91e326ec7933f936c926928808ddfe0fde7/.pre-commit-config.yaml) and it's probably because you're using unsupported `master` as `sha`. You can read more about why this doesn't quite work the way you want it to: https://github.com/pre-commit/pre-commit/issues/158#issuecomment-54103765
The suggested workflow for reproducibility is to use a sha or tag in that field (and periodically upgrade using `pre-commit autoupdate`). Actually I think `pre-commit autoupdate` will fix your current situation automatically
Oh, I see. Thanks for the explanation.
My goal is to avoid having git installed in the container at runtime. Normally I have it as a build dependency only. But pre-commit runs git for its checks:
```
An error has occurred: FatalError: git failed. Is it installed, and are you in a Git repository directory?
Check the log at ~/.pre-commit/pre-commit.log
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/pre_commit/git.py", line 20, in get_root
return cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()
File "/usr/local/lib/python3.5/dist-packages/pre_commit/util.py", line 188, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
pre_commit.util.CalledProcessError: Command: ('git', 'rev-parse', '--show-toplevel')
Return code: 1
Expected return code: 0
Output:
Executable `git` not found
Errors: (none)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/pre_commit/error_handler.py", line 47, in error_handler
yield
File "/usr/local/lib/python3.5/dist-packages/pre_commit/main.py", line 166, in main
runner = Runner.create(args.config)
File "/usr/local/lib/python3.5/dist-packages/pre_commit/runner.py", line 28, in create
root = git.get_root()
File "/usr/local/lib/python3.5/dist-packages/pre_commit/git.py", line 23, in get_root
'git failed. Is it installed, and are you in a Git repository '
pre_commit.errors.FatalError: git failed. Is it installed, and are you in a Git repository directory?
```
Any chance to turn the check off completely? Given that all hooks will already be downloaded and set up at build time.
Ah, pre-commit uses git to both find the top level of the repository and acquire a file list to run hooks against. The failure here is actually while pre-commit is detecting the top level
In other words, `git` is an essential runtime dependency even when the hooks are already preinstalled. I think this becomes wontfix?
Well, if it cannot easily be replaced with plain Python code, then I think yes. You may close this issue.
Yeah it'd have to reimplement git internals which I'm not really interested in doing :)
@asottile It seems we need to re-open the question.
The project workdir in the container is just the repo's folder from the host machine, mounted in RW mode. So when pre-commit changes the repo's git hook, the hook starts pointing at the pre-commit inside the container, which breaks the hook for the host machine.
Any ideas? Probably introducing some extra option would help.
This is an inherent problem with prefix installation. If you're using shebangs of executables then the paths need to be the same inside and outside the container. There's not really much that can be done to make them work at different paths since both the environment setup and installation are handled externally (through virtualenv, setuptools, and pip).
If possible, I'd suggest mounting at the same path as you installed the hooks and it should work
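A tiny illustration of the prefix problem described above: installed hook scripts start with an absolute shebang to the environment's interpreter (the example path is borrowed from the log earlier in this thread), so the same path has to exist inside the container. The helper below is purely illustrative.
```python
import os.path


def shebang_target_exists(script_path):
    # e.g. '#!/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3'
    with open(script_path, 'rb') as f:
        first_line = f.readline().strip()
    if not first_line.startswith(b'#!'):
        return True  # not a shebang script, nothing to break
    parts = first_line[2:].split()
    if not parts:
        return True
    # The interpreter path must resolve at the mount point used at runtime.
    return os.path.exists(parts[0].decode())
```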
Well, I just don't want it to change `.git/hooks/pre-commit` while running inside the container. So I would prefer some `--no-not-change-git-hook` CLI argument or the like.
I'll reopen for now, mostly because I'm curious about your workflow. Can you give me a comprehensive list of commands that you're running and whether they're inside / outside of the container, as well as the docker command you're running (or an approximation of it); I'm mostly interested in the mounts. With this I can hopefully suggest either an existing workflow which accomplishes what you want, or perhaps a feature if I think it's necessary to do what you want.
For development I use [vagga](https://vagga.readthedocs.io/en/latest/), a containerization tool without daemons. Like docker, it also builds and runs LXC containers. But, unlike docker, it runs everything in userspace and isn't meant to be used in production (there's a supervisor, [lithos](https://lithos.readthedocs.io/en/latest/), plus systemd-nspawn and lots of other tools for that).
The config is:
```yaml
containers:
mysql:
setup:
- !Alpine v3.4
- !Install
- mariadb
- mariadb-client
- !EnsureDir /data
- !EnsureDir /run
environ:
DB_DATABASE: gdg
DB_USERNAME: mysql
DB_PASSWORD: mysql
DB_HOST: 127.0.0.1
DB_PORT: 3307
DB_DATA_DIR: /data
volumes:
/data: !Persistent
name: mysql-data
init-command: _init-mysql
/run: !Tmpfs
mode: 0o766
subdirs:
mysqld:
app:
setup:
- !Ubuntu xenial
- !UbuntuUniverse
- &app-build-deps !BuildDeps
- git
- mercurial
- python3.5-dev
- !Install
- ca-certificates
- python3.5
- !PipConfig
dependencies: true
python-exe: python3.5
- !Depends setup.py
- !Py3Requirements requirements/dev.txt
- !NpmInstall [bower]
- !Sh bower install --allow-root
- !EnsureDir /mnt/db_host
environ-file: /work/.env
test:
setup:
- !Container app
- !BuildDeps *app-build-deps
- !Depends .pre-commit-config.yaml
- !Py3Requirements requirements/test.txt
- !Py3Requirements requirements/test-env.txt
# Git is needed for pre-commit in runtime. Ref:
# github.com/pre-commit/pre-commit/issues/456#issuecomment-269653630
- !Install [git]
# Shadow git-hooks dir, so that pre-commit won't break git hooks in host
# Ref: github.com/pre-commit/pre-commit/issues/456#issuecomment-269856503
- !Sh mount -t tmpfs none /work/.git/hooks
- !Sh HOME=/root pre-commit install --install-hooks
environ:
# Ref:
# github.com/pre-commit/pre-commit-hooks/pull/161#issuecomment-269662841
LANG: en_US.UTF-8
BLUEBERRYPY_CONFIG: "{}"
NOSE_TESTCONFIG_AUTOLOAD_YAML: "config/test/app.yml"
commands:
_init-mysql: !Command
description: Initialize mysql database
container: mysql
run: |
mysql_install_db --datadir=$DB_DATA_DIR
mysqld_safe --user=root --datadir=$DB_DATA_DIR \
--bind-address=$DB_HOST --port=$DB_PORT \
--no-auto-restart --no-watch
while [ ! -S /run/mysqld/mysqld.sock ]; do
sleep .2
done # wait for server to be ready
mysqladmin create $DB_DATABASE
mysql -e "CREATE USER '$DB_USERNAME'@'localhost' IDENTIFIED BY '$DB_PASSWORD';"
mysql -e "GRANT ALL PRIVILEGES ON $DB_DATABASE.* TO '$DB_USERNAME'@'localhost';"
mysql -e "FLUSH PRIVILEGES;"
clean-db: !Command
description: Cleanup mysql database
container: mysql
run: |
mysql_install_db --datadir=$DB_DATA_DIR
mysqld_safe --user=root --datadir=$DB_DATA_DIR \
--bind-address=$DB_HOST --port=$DB_PORT \
--no-auto-restart --no-watch
while [ ! -S /run/mysqld/mysqld.sock ]; do
sleep .2
done # wait for server to be ready
mysqladmin -f drop "$DB_DATABASE"
mysqladmin create "$DB_DATABASE"
mysql -e "GRANT ALL PRIVILEGES ON $DB_DATABASE.* TO '$DB_USERNAME'@'localhost';"
mysql -e "FLUSH PRIVILEGES;"
blueberrypy: !Command
description: |
Run blueberrypy command (you have to provide command and arguments by yourself)
container: app
run: [ blueberrypy ]
mysql: !Command
description: Run RDBMS shell
container: mysql
run: |
mysqld_safe --user=root --datadir=$DB_DATA_DIR \
--bind-address=$DB_HOST --port=$DB_PORT \
--no-auto-restart --no-watch
while [ ! -S /run/mysqld/mysqld.sock ]; do
sleep .2
done
mysql -D $DB_DATABASE
run: !Supervise
description: Run application in development mode
mode: stop-on-failure
children:
run-app: !Command
container: app
run: |
touch /work/.dbcreation # Create lock file
while [ -f /work/.dbcreation ]; do # Acquire lock
sleep .2
done
current_version=$(alembic -c config/alembic.ini -x environment=dev current)
head_version=$(alembic -c config/alembic.ini -x environment=dev heads)
if [ "${current_version}" != "${head_version}" ]; then
alembic -c config/alembic.ini -x environment=dev upgrade head
fi
if [ -z "${current_version}" ]; then
load_gdg_fixtures "$DATABASE_URL" src/GDGUkraine/fixtures/fixtures.yaml || exit 1
fi
blueberrypy serve -b 0.0.0.0:8080
run-db: !Command
container: mysql
run: |
mysqld_safe --user=root --datadir=$DB_DATA_DIR \
--bind-address=$DB_HOST --port=$DB_PORT \
--no-auto-restart --no-watch
while [ ! -S /run/mysqld/mysqld.sock ]; do
sleep .2
done # wait for server to be ready
rm -f /work/.dbcreation # Release lock
while :; do # Emulate infinite loop
sleep 1d;
done
lint: !Command
description: Run linters for gdg.org.ua project
container: test
run: pre-commit run --all-files
'py.test': !Command
description: Run tests for gdg.org.ua project
container: test
run: [py.test, --cov, -v]
test: !Command
description: Run tests for gdg.org.ua project
container: test
run: py.test --cov -v src/tests/
```
So when I run `vagga lint` it:
1) builds `app` and `test` containers *if they don't exist or any of their dependencies in project changed (`!Depend` or `requirements*.txt` etc.)*
1.0) during the build my current directory (with config, which is a repo dir) is being mounted as `/work`, rw
1.1) at this stage all needed dependencies are being installed
1.2) during the build I hacked around the issue and shadowed `/work/.git/hooks` with `tmpfs`, but it doesn't look like the right thing to do
1.3) `HOME=/root pre-commit install --install-hooks` installs the hooks into a dir that will be `$HOME` at runtime
2) executes `pre-commit run --all-files` command inside of `test` container
2.0) during the run my project dir is also mounted as `/work`, rw
I'm doing `git commit`, which runs pre-commit outside of the container; that is why I don't want the build process to influence any git configuration on the host.
ok. In that case, I'll add a `pre-commit install-hooks` cmdline, I think that's probably the cleanest way to do this | 2017-01-04T15:54:43 |
pre-commit/pre-commit | 472 | pre-commit__pre-commit-472 | [
"471"
] | ad8eb93af4db3e08e8cc988b05a9996942443f36 | diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py
--- a/pre_commit/languages/ruby.py
+++ b/pre_commit/languages/ruby.py
@@ -22,6 +22,7 @@ def get_env_patch(venv, language_version): # pragma: windows no cover
patches = (
('GEM_HOME', os.path.join(venv, 'gems')),
('RBENV_ROOT', venv),
+ ('BUNDLE_IGNORE_CONFIG', '1'),
('PATH', (
os.path.join(venv, 'gems', 'bin'), os.pathsep,
os.path.join(venv, 'shims'), os.pathsep,
| diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -207,6 +207,30 @@ def test_run_versioned_ruby_hook(tempdir_factory, store):
)
+@skipif_slowtests_false
+@xfailif_windows_no_ruby
+@pytest.mark.integration
+def test_run_ruby_hook_with_disable_shared_gems(
+ tempdir_factory,
+ store,
+ tmpdir,
+):
+ """Make sure a Gemfile in the project doesn't interfere."""
+ tmpdir.join('Gemfile').write('gem "lol_hai"')
+ tmpdir.join('.bundle').mkdir()
+ tmpdir.join('.bundle', 'config').write(
+ 'BUNDLE_DISABLE_SHARED_GEMS: true\n'
+ 'BUNDLE_PATH: vendor/gem\n'
+ )
+ with cwd(tmpdir.strpath):
+ _test_hook_repo(
+ tempdir_factory, store, 'ruby_versioned_hooks_repo',
+ 'ruby_hook',
+ ['/dev/null'],
+ b'2.1.5\nHello world from a ruby hook\n',
+ )
+
+
@pytest.mark.integration
def test_system_hook_with_spaces(tempdir_factory, store):
_test_hook_repo(
| Can't use ruby hooks with "BUNDLE_DISABLE_SHARED_GEMS: true" in .bundle/config
I have a repo with a `.bundle/config` file with these contents:
```yaml
BUNDLE_DISABLE_SHARED_GEMS: true
BUNDLE_PATH: vendor/gem
```
And a `Gemfile` with these contents:
```ruby
gem 'lol_hai'
```
I can't use any Ruby hooks in this repo; I get an error like this:
```
/nail/home/ckuehl/.pre-commit/repobarlh9c4/rbenv-1.9.3-p551/versions/1.9.3-p551/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:315:in `to_specs': Could not find '__fake_gem' (>= 0) among 0 total gem(s) (Gem::LoadError)
Checked in 'GEM_PATH=/nail/tmp/tmp.jPQDWVcTGz/pre-commit-bug/vendor/gem/ruby/1.9.1', execute `gem env` for more information
from /nail/home/ckuehl/.pre-commit/repobarlh9c4/rbenv-1.9.3-p551/versions/1.9.3-p551/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:324:in `to_spec'
from /nail/home/ckuehl/.pre-commit/repobarlh9c4/rbenv-1.9.3-p551/versions/1.9.3-p551/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_gem.rb:58:in `gem'
from /nail/home/ckuehl/.pre-commit/repobarlh9c4/rbenv-1.9.3-p551/gems/bin/puppet-validate:22:in `<main>'
from /nail/home/ckuehl/.pre-commit/repobarlh9c4/rbenv-1.9.3-p551/gems/bin/ruby_executable_hooks:15:in `eval'
from /nail/home/ckuehl/.pre-commit/repobarlh9c4/rbenv-1.9.3-p551/gems/bin/ruby_executable_hooks:15:in `<main>'
```
Interesting bit is: `GEM_PATH=/nail/tmp/tmp.jPQDWVcTGz/pre-commit-bug/vendor/gem/ruby/1.9.1`
That doesn't look right (it's a path in my project).
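For context, a standalone sketch of the kind of environment patch the fix above applies before running a ruby hook (pre-commit's real implementation lives in `pre_commit/languages/ruby.py` and its env-context helper; treat this as an illustration only):
```python
import contextlib
import os


@contextlib.contextmanager
def ruby_hook_env(venv):
    patch = {
        'GEM_HOME': os.path.join(venv, 'gems'),
        'RBENV_ROOT': venv,
        # The one-line fix: make bundler ignore the project's .bundle/config
        # (BUNDLE_PATH / BUNDLE_DISABLE_SHARED_GEMS) entirely.
        'BUNDLE_IGNORE_CONFIG': '1',
    }
    saved = {key: os.environ.get(key) for key in patch}
    os.environ.update(patch)
    try:
        yield
    finally:
        for key, value in saved.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value
```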
Here's a failing test:
```patch
commit 260f981ae8cdf1c6b1f796dda5cf56811ed237d3 (HEAD -> gemfile-in-root, origin/gemfile-in-root)
Author: Chris Kuehl <[email protected]>
AuthorDate: Mon Jan 23 19:39:47 2017 -0800
Commit: Chris Kuehl <[email protected]>
CommitDate: Mon Jan 23 19:59:28 2017 -0800
Add failing test for BUNDLE_DISABLE_SHARED_GEMS
diff --git a/tests/repository_test.py b/tests/repository_test.py
index b7ce8dd..203852c 100644
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -207,6 +207,30 @@ def test_run_versioned_ruby_hook(tempdir_factory, store):
)
+@skipif_slowtests_false
+@xfailif_windows_no_ruby
+@pytest.mark.integration
+def test_run_ruby_hook_with_disable_shared_gems(
+ tempdir_factory,
+ store,
+ tmpdir,
+):
+ """Make sure a Gemfile in the project doesn't interfere."""
+ tmpdir.join('Gemfile').write('gem "lol_hai"')
+ tmpdir.join('.bundle').mkdir()
+ tmpdir.join('.bundle', 'config').write(
+ 'BUNDLE_DISABLE_SHARED_GEMS: true\n'
+ 'BUNDLE_PATH: vendor/gem\n'
+ )
+ with cwd(tmpdir.strpath):
+ _test_hook_repo(
+ tempdir_factory, store, 'ruby_versioned_hooks_repo',
+ 'ruby_hook',
+ ['/dev/null'],
+ b'2.1.5\nHello world from a ruby hook\n',
+ )
+
+
@pytest.mark.integration
def test_system_hook_with_spaces(tempdir_factory, store):
_test_hook_repo(
```
| 2017-01-24T05:24:04 |
|
pre-commit/pre-commit | 478 | pre-commit__pre-commit-478 | [
"477"
] | 3986db81ae35758daa870dd602e42bbe754d2521 | diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py
--- a/pre_commit/languages/docker.py
+++ b/pre_commit/languages/docker.py
@@ -43,12 +43,14 @@ def build_docker_image(repo_cmd_runner, **kwargs): # pragma: windows no cover
pull = kwargs.pop('pull')
assert not kwargs, kwargs
cmd = (
- 'docker', 'build', '.',
+ 'docker', 'build',
'--tag', docker_tag(repo_cmd_runner),
'--label', PRE_COMMIT_LABEL,
)
if pull:
cmd += ('--pull',)
+ # This must come last for old versions of docker. See #477
+ cmd += ('.',)
helpers.run_setup_cmd(repo_cmd_runner, cmd)
| `docker build` argument order is invalid on old versions of Docker
We do: `docker build . --tag thing --label thing`
But this produces an error on Docker 1.11.2:
```
ckuehl@dev4-uswest1cdevc:~/proj/pre-commit$ docker build . --tag thing --label thing
docker: "build" requires 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
```
The path needs to go at the end on 1.11.2, but it works on 1.13.0 as-is. We should probably just change the order of the arguments to make every version happy.
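A minimal sketch of the reordering the patch above makes, keeping every option before the build context so that both docker 1.11.x and 1.13.x parse the command; the tag and label values here are placeholders.
```python
def build_cmd(tag, pull=False):
    cmd = ('docker', 'build', '--tag', tag, '--label', 'PRE_COMMIT')
    if pull:
        cmd += ('--pull',)
    # The PATH argument must come last for old versions of docker.
    return cmd + ('.',)


print(' '.join(build_cmd('thing', pull=True)))
```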
| 2017-01-27T22:22:29 |
||
pre-commit/pre-commit | 493 | pre-commit__pre-commit-493 | [
"491"
] | 927f471a6cadc84d46e53f2cb53f28ef81ac631b | diff --git a/pre_commit/constants.py b/pre_commit/constants.py
--- a/pre_commit/constants.py
+++ b/pre_commit/constants.py
@@ -17,6 +17,8 @@
# Bump when installation changes in a backwards / forwards incompatible way
INSTALLED_STATE_VERSION = '1'
+# Bump when modifying `empty_template`
+LOCAL_REPO_VERSION = '1'
VERSION = pkg_resources.get_distribution('pre-commit').version
VERSION_PARSED = pkg_resources.parse_version(VERSION)
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -9,6 +9,7 @@
from cached_property import cached_property
+import pre_commit.constants as C
from pre_commit.prefixed_command_runner import PrefixedCommandRunner
from pre_commit.util import clean_path_on_failure
from pre_commit.util import cmd_output
@@ -129,7 +130,7 @@ def make_local(self, deps):
def make_local_strategy(directory):
copy_tree_to_path(resource_filename('empty_template'), directory)
return self._new_repo(
- 'local:{}'.format(','.join(sorted(deps))), 'N/A',
+ 'local:{}'.format(','.join(sorted(deps))), C.LOCAL_REPO_VERSION,
make_local_strategy,
)
| Encode some sort of "version" for language-local repositories
Without this, they'll never get upgraded if fixes are made in the pre-commit empty template
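A sketch of how a version constant can be folded into the cache key for local repos, so that bumping it invalidates previously-built environments; the function name is made up, only the `'local:...'` key shape and `LOCAL_REPO_VERSION` come from the patch above.
```python
LOCAL_REPO_VERSION = '1'  # bump when the empty_template changes


def local_repo_key(deps):
    # The store keys repos by (repo, ref); using the version constant as the
    # "ref" means a bump forces a fresh local environment to be built.
    return ('local:{}'.format(','.join(sorted(deps))), LOCAL_REPO_VERSION)


print(local_repo_key({'flake8', 'pep8-naming'}))
```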
| 2017-02-16T20:17:49 |
||
pre-commit/pre-commit | 501 | pre-commit__pre-commit-501 | [
"253"
] | 0ece39c484e512d36cb5b9570713967a1ec056a9 | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -3,6 +3,7 @@
import logging
import os
+import subprocess
import sys
from pre_commit import color
@@ -152,6 +153,13 @@ def _run_hooks(repo_hooks, args, environ):
retval = 0
for repo, hook in repo_hooks:
retval |= _run_single_hook(hook, repo, args, skips, cols)
+ if (
+ retval and
+ args.show_diff_on_failure and
+ subprocess.call(('git', 'diff', '--quiet')) != 0
+ ):
+ print('All changes made by hooks:')
+ subprocess.call(('git', 'diff'))
return retval
diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -149,6 +149,10 @@ def main(argv=None):
'--hook-stage', choices=('commit', 'push'), default='commit',
help='The stage during which the hook is fired e.g. commit or push.',
)
+ run_parser.add_argument(
+ '--show-diff-on-failure', action='store_true',
+ help='When hooks fail, run `git diff` directly afterward.',
+ )
run_mutex_group = run_parser.add_mutually_exclusive_group(required=False)
run_mutex_group.add_argument(
'--all-files', '-a', action='store_true', default=False,
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -58,6 +58,7 @@ def _get_opts(
source='',
allow_unstaged_config=False,
hook_stage='commit',
+ show_diff_on_failure=False,
):
# These are mutually exclusive
assert not (all_files and files)
@@ -67,11 +68,12 @@ def _get_opts(
color=color,
verbose=verbose,
hook=hook,
- hook_stage=hook_stage,
no_stash=no_stash,
origin=origin,
source=source,
allow_unstaged_config=allow_unstaged_config,
+ hook_stage=hook_stage,
+ show_diff_on_failure=show_diff_on_failure,
)
@@ -151,6 +153,23 @@ def test_hook_that_modifies_but_returns_zero(
)
+def test_show_diff_on_failure(
+ capfd, cap_out, tempdir_factory, mock_out_store_directory,
+):
+ git_path = make_consuming_repo(
+ tempdir_factory, 'modified_file_returns_zero_repo',
+ )
+ with cwd(git_path):
+ stage_a_file('bar.py')
+ _test_run(
+ cap_out, git_path, {'show_diff_on_failure': True},
+ # we're only testing the output after running
+ (), 1, True,
+ )
+ out, _ = capfd.readouterr()
+ assert 'diff --git' in out
+
+
@pytest.mark.parametrize(
('options', 'outputs', 'expected_ret', 'stage'),
(
| Add option to print diff if hooks failed and files were changed.
I'm currently debugging a pre-commit failure in only one Python version that happens on Travis.
Not sure if it is out of scope of the project, but it would be handy to have some kind of `--print-diff` option that simply prints a diff of the files on failure. It would generally only be useful in CI when you can't just go back and inspect what changed.
Currently all I see in Travis logs is
```
Reordering imports in pre_commit/main.py
Reordering imports in pre_commit/clientlib/validate_base.py
Reordering imports in tests/main_test.py
```
which doesn't really tell me what changed or give me useful hints on tracking down the problem.
As an example, here's the actual diff. It's clear now what actually happened and why this only happened on Python 2.6:
```
diff --git a/pre_commit/clientlib/validate_base.py b/pre_commit/clientlib/validate_base.py
index 707bdde..af18725 100644
--- a/pre_commit/clientlib/validate_base.py
+++ b/pre_commit/clientlib/validate_base.py
@@ -1,11 +1,11 @@
from __future__ import print_function
from __future__ import unicode_literals
-import argparse
import os.path
import re
import sys
+import argparse
import jsonschema
import jsonschema.exceptions
import yaml
```
| Would running `git diff` after all the hooks satisfy this?
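A sketch of the approach this suggests (and the patch above implements): after the hooks have run, if something failed and the working tree changed, dump `git diff` so CI logs show exactly what the hooks modified. The function wrapper is illustrative; the `git` invocations mirror the patch.
```python
import subprocess


def maybe_show_diff(hooks_retval, show_diff_on_failure):
    # `git diff --quiet` exits nonzero when the working tree has changes.
    tree_changed = subprocess.call(('git', 'diff', '--quiet')) != 0
    if hooks_retval and show_diff_on_failure and tree_changed:
        print('All changes made by hooks:')
        subprocess.call(('git', 'diff'))
```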
| 2017-02-25T18:15:28 |
pre-commit/pre-commit | 517 | pre-commit__pre-commit-517 | [
"198"
] | b63748f5572682e7bf8b30b12d67b293b2eae3dc | diff --git a/pre_commit/color.py b/pre_commit/color.py
--- a/pre_commit/color.py
+++ b/pre_commit/color.py
@@ -1,7 +1,15 @@
from __future__ import unicode_literals
+import os
import sys
+if os.name == 'nt': # pragma: no cover (windows)
+ from pre_commit.color_windows import enable_virtual_terminal_processing
+ try:
+ enable_virtual_terminal_processing()
+ except WindowsError:
+ pass
+
RED = '\033[41m'
GREEN = '\033[42m'
YELLOW = '\033[43;30m'
diff --git a/pre_commit/color_windows.py b/pre_commit/color_windows.py
new file mode 100644
--- /dev/null
+++ b/pre_commit/color_windows.py
@@ -0,0 +1,49 @@
+from __future__ import unicode_literals
+
+from ctypes import POINTER
+from ctypes import windll
+from ctypes import WinError
+from ctypes import WINFUNCTYPE
+from ctypes.wintypes import BOOL
+from ctypes.wintypes import DWORD
+from ctypes.wintypes import HANDLE
+
+STD_OUTPUT_HANDLE = -11
+ENABLE_VIRTUAL_TERMINAL_PROCESSING = 4
+
+
+def bool_errcheck(result, func, args):
+ if not result:
+ raise WinError()
+ return args
+
+
+GetStdHandle = WINFUNCTYPE(HANDLE, DWORD)(
+ ("GetStdHandle", windll.kernel32),
+ ((1, "nStdHandle"), )
+)
+
+GetConsoleMode = WINFUNCTYPE(BOOL, HANDLE, POINTER(DWORD))(
+ ("GetConsoleMode", windll.kernel32),
+ ((1, "hConsoleHandle"), (2, "lpMode"))
+)
+GetConsoleMode.errcheck = bool_errcheck
+
+SetConsoleMode = WINFUNCTYPE(BOOL, HANDLE, DWORD)(
+ ("SetConsoleMode", windll.kernel32),
+ ((1, "hConsoleHandle"), (1, "dwMode"))
+)
+SetConsoleMode.errcheck = bool_errcheck
+
+
+def enable_virtual_terminal_processing():
+ """As of Windows 10, the Windows console supports (some) ANSI escape
+ sequences, but it needs to be enabled using `SetConsoleMode` first.
+
+ More info on the escape sequences supported:
+ https://msdn.microsoft.com/en-us/library/windows/desktop/mt638032(v=vs.85).aspx
+
+ """
+ stdout = GetStdHandle(STD_OUTPUT_HANDLE)
+ flags = GetConsoleMode(stdout)
+ SetConsoleMode(stdout, flags | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
| Windows: Color support
On unixlike platforms we support coloring via ANSI escapes. Windows supports coloring somehow, figure out how that works and implement it.
| 2017-04-01T18:39:25 |
||
pre-commit/pre-commit | 529 | pre-commit__pre-commit-529 | [
"525"
] | 1be4e4f82e31336fa5fca096c962c72ac0041537 | diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -1,6 +1,7 @@
from __future__ import unicode_literals
import argparse
+import logging
import os
import sys
@@ -20,6 +21,8 @@
from pre_commit.runner import Runner
+logger = logging.getLogger('pre_commit')
+
# https://github.com/pre-commit/pre-commit/issues/217
# On OSX, making a virtualenv using pyvenv at . causes `virtualenv` and `pip`
# to install packages to the wrong place. We don't want anything to deal with
@@ -117,7 +120,14 @@ def main(argv=None):
_add_color_option(autoupdate_parser)
_add_config_option(autoupdate_parser)
autoupdate_parser.add_argument(
- '--tags-only', action='store_true', help='Update to tags only.',
+ '--tags-only', action='store_true', help='LEGACY: for compatibility',
+ )
+ autoupdate_parser.add_argument(
+ '--bleeding-edge', action='store_true',
+ help=(
+ 'Update to the bleeding edge of `master` instead of the latest '
+ 'tagged version (the default behavior).'
+ ),
)
run_parser = subparsers.add_parser('run', help='Run hooks.')
@@ -209,7 +219,9 @@ def main(argv=None):
elif args.command == 'clean':
return clean(runner)
elif args.command == 'autoupdate':
- return autoupdate(runner, args.tags_only)
+ if args.tags_only:
+ logger.warning('--tags-only is the default')
+ return autoupdate(runner, tags_only=not args.bleeding_edge)
elif args.command == 'run':
return run(runner, args)
elif args.command == 'sample-config':
| diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -186,3 +186,12 @@ def cap_out():
with mock.patch.object(output, 'write', write):
with mock.patch.object(output, 'write_line', write_line):
yield Fixture(stream)
+
+
+@pytest.yield_fixture
+def fake_log_handler():
+ handler = mock.Mock(level=logging.INFO)
+ logger = logging.getLogger('pre_commit')
+ logger.addHandler(handler)
+ yield handler
+ logger.removeHandler(handler)
diff --git a/tests/main_test.py b/tests/main_test.py
--- a/tests/main_test.py
+++ b/tests/main_test.py
@@ -127,3 +127,8 @@ def test_expected_fatal_error_no_git_repo(
'Is it installed, and are you in a Git repository directory?\n'
'Check the log at ~/.pre-commit/pre-commit.log\n'
)
+
+
+def test_warning_on_tags_only(mock_commands, cap_out):
+ main.main(('autoupdate', '--tags-only'))
+ assert '--tags-only is the default' in cap_out.get()
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -2,7 +2,6 @@
from __future__ import unicode_literals
import io
-import logging
import os.path
import re
import shutil
@@ -680,15 +679,6 @@ def test_local_python_repo(store):
assert ret[1].replace(b'\r\n', b'\n') == b"['filename']\nHello World\n"
-@pytest.yield_fixture
-def fake_log_handler():
- handler = mock.Mock(level=logging.INFO)
- logger = logging.getLogger('pre_commit')
- logger.addHandler(handler)
- yield handler
- logger.removeHandler(handler)
-
-
def test_hook_id_not_present(tempdir_factory, store, fake_log_handler):
path = make_repo(tempdir_factory, 'script_hooks_repo')
config = make_config_from_repo(path)
| [RFC] Make the default of `pre-commit autoupdate` use `--tags-only`?
I find `--tags-only` to be much better than the default.
My proposal:
- Make the `--tags-only` behaviour the default behaviour
- Make `--tags-only` a noop argument which produces a warning and does the default
- Add a `--bleeding-edge` which does the current default behaviour
@chriskuehl thoughts?
| +1, I've been using `--tags-only` pretty exclusively lately. | 2017-04-29T21:32:56 |
pre-commit/pre-commit | 530 | pre-commit__pre-commit-530 | [
"499"
] | 5d43b05bd375473004ce48cd61307690572d423b | diff --git a/pre_commit/clientlib.py b/pre_commit/clientlib.py
--- a/pre_commit/clientlib.py
+++ b/pre_commit/clientlib.py
@@ -53,6 +53,7 @@ def _make_argparser(filenames_help):
'^$',
),
schema.Optional('language_version', schema.check_string, 'default'),
+ schema.OptionalNoDefault('log_file', schema.check_string),
schema.Optional('minimum_pre_commit_version', schema.check_string, '0'),
schema.Optional('stages', schema.check_array(schema.check_string), []),
)
diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -121,7 +121,10 @@ def _run_single_hook(hook, repo, args, skips, cols):
for out in (stdout, stderr):
assert type(out) is bytes, type(out)
if out.strip():
- output.write_line(out.strip())
+ output.write_line(
+ out.strip(),
+ logfile_name=hook.get('log_file'),
+ )
output.write_line()
return retcode
diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -71,8 +71,17 @@ def write(s, stream=stdout_byte_stream):
stream.flush()
-def write_line(s=None, stream=stdout_byte_stream):
- if s is not None:
- stream.write(five.to_bytes(s))
- stream.write(b'\n')
- stream.flush()
+def write_line(s=None, stream=stdout_byte_stream, logfile_name=None):
+ def output_streams():
+ yield stream
+ try:
+ with open(logfile_name, 'ab') as logfile:
+ yield logfile
+ except (TypeError, IOError):
+ pass
+
+ for output_stream in output_streams():
+ if s is not None:
+ output_stream.write(five.to_bytes(s))
+ output_stream.write(b'\n')
+ output_stream.flush()
| diff --git a/testing/resources/logfile_repo/.pre-commit-hooks.yaml b/testing/resources/logfile_repo/.pre-commit-hooks.yaml
new file mode 100644
--- /dev/null
+++ b/testing/resources/logfile_repo/.pre-commit-hooks.yaml
@@ -0,0 +1,6 @@
+- id: logfile test hook
+ name: Logfile test hook
+ entry: bin/hook.sh
+ language: script
+ files: .
+ log_file: test.log
diff --git a/testing/resources/logfile_repo/bin/hook.sh b/testing/resources/logfile_repo/bin/hook.sh
new file mode 100755
--- /dev/null
+++ b/testing/resources/logfile_repo/bin/hook.sh
@@ -0,0 +1,5 @@
+#!/usr/bin/env bash
+echo "This is STDOUT output"
+echo "This is STDERR output" 1>&2
+
+exit 1
diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -211,6 +211,35 @@ def test_run(
)
+def test_run_output_logfile(
+ cap_out,
+ tempdir_factory,
+ mock_out_store_directory,
+):
+
+ expected_output = (
+ b'This is STDOUT output\n',
+ b'This is STDERR output\n',
+ )
+
+ git_path = make_consuming_repo(tempdir_factory, 'logfile_repo')
+ with cwd(git_path):
+ _test_run(
+ cap_out,
+ git_path, {},
+ expected_output,
+ expected_ret=1,
+ stage=True
+ )
+ logfile_path = os.path.join(git_path, 'test.log')
+ assert os.path.exists(logfile_path)
+ with open(logfile_path, 'rb') as logfile:
+ logfile_content = logfile.readlines()
+
+ for expected_output_part in expected_output:
+ assert expected_output_part in logfile_content
+
+
def test_always_run(
cap_out, repo_with_passing_hook, mock_out_store_directory,
):
| A config option to store hook output into a file
Hi, this is a feature request and if people would like it then I will submit a pull request.
What I would like is to be able to specify in the config an output file on a per hook basis. When the hook is run, it will output to that file (as well as to STDOUT as currently).
The reason I want this is because I want to run pre-commit on Jenkins and I want the reports of certain tools to be stored in files so that they can be easily consumed by Jenkins.
| This sounds reasonable.
You _can_ hack it into the current scheme but it's not pretty nor very portable (without first-class support):
```yaml
- repo: https://github.com/pre-commit/pre-commit-hooks
sha: v0.7.1
hooks:
- id: flake8
entry: bash -c -o pipefail 'exec flake8 $@ |& tee -a flake8.log' --
```
To verify:
```
# important, since we use `-a` to append to the log (since pre-commit will invoke 1..many times)
rm flake8.log
pre-commit run flake8 --all-files || true # if you want to demonstrate a failure writing to the log
cat flake8.log
```
For me (after breaking a file) this looks like this:
```
$ rm flake8.log
$ pre-commit run flake8 --all-files
Flake8...................................................................Failed
hookid: flake8
setup.py:1:1: F401 'os' imported but unused
$ cat flake8.log
setup.py:1:1: F401 'os' imported but unused
```
As for first class, I think if you want to implement this I'd accept something like this:
```yaml
- id: flake8
log_file: flake8.log
```
Note that pre-commit combines stdout and stderr, so you'd get combined stdout / stderr into that log file.
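A rough sketch of the first-class behaviour being discussed: write the hook's output to the console and, when a `log_file` is configured, append the same bytes to that file. This mirrors the shape of the patch above rather than the exact pre-commit API.
```python
import sys


def write_line(s, logfile_name=None):
    data = s.encode('UTF-8') + b'\n'
    sys.stdout.buffer.write(data)
    sys.stdout.flush()
    if logfile_name is not None:
        # Append, since a single run may invoke the hook several times.
        with open(logfile_name, 'ab') as logfile:
            logfile.write(data)
```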
| 2017-05-04T07:25:15 |
pre-commit/pre-commit | 537 | pre-commit__pre-commit-537 | [
"536"
] | e3b14c35f782ed464e3f96b44e8509048187689f | diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -48,10 +48,10 @@ def is_in_merge_conflict():
def parse_merge_msg_for_conflicts(merge_msg):
# Conflicted files start with tabs
return [
- line.lstrip('#').strip()
+ line.lstrip(b'#').strip().decode('UTF-8')
for line in merge_msg.splitlines()
# '#\t' for git 2.4.1
- if line.startswith(('\t', '#\t'))
+ if line.startswith((b'\t', b'#\t'))
]
@@ -60,7 +60,7 @@ def get_conflicted_files():
logger.info('Checking merge-conflict files only.')
# Need to get the conflicted files from the MERGE_MSG because they could
# have resolved the conflict by choosing one side or the other
- merge_msg = open(os.path.join(get_git_dir('.'), 'MERGE_MSG')).read()
+ merge_msg = open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb').read()
merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)
# This will get the rest of the changes made after the merge.
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -1,3 +1,4 @@
+# -*- coding: UTF-8 -*-
from __future__ import absolute_import
from __future__ import unicode_literals
@@ -190,6 +191,18 @@ def test_commit_am(tempdir_factory):
assert ret == 0
+def test_unicode_merge_commit_message(tempdir_factory):
+ path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
+ with cwd(path):
+ assert install(Runner(path, C.CONFIG_FILE)) == 0
+ cmd_output('git', 'checkout', 'master', '-b', 'foo')
+ cmd_output('git', 'commit', '--allow-empty', '-m', 'branch2')
+ cmd_output('git', 'checkout', 'master')
+ cmd_output('git', 'merge', 'foo', '--no-ff', '--no-commit', '-m', '☃')
+ # Used to crash
+ cmd_output('git', 'commit', '--no-edit')
+
+
def test_install_idempotent(tempdir_factory):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
diff --git a/tests/git_test.py b/tests/git_test.py
--- a/tests/git_test.py
+++ b/tests/git_test.py
@@ -142,8 +142,8 @@ def test_get_conflicted_files_unstaged_files(in_merge_conflict):
assert ret == {'conflict_file'}
-MERGE_MSG = "Merge branch 'foo' into bar\n\nConflicts:\n\tconflict_file\n"
-OTHER_MERGE_MSG = MERGE_MSG + '\tother_conflict_file\n'
+MERGE_MSG = b"Merge branch 'foo' into bar\n\nConflicts:\n\tconflict_file\n"
+OTHER_MERGE_MSG = MERGE_MSG + b'\tother_conflict_file\n'
@pytest.mark.parametrize(
| Unicode error: python 2 + merge conflict + non-ascii commit message
The important part of the stack:
```
File "...python2.7/site-packages/pre_commit/commands/run.py", line 52, in get_filenames
return getter(include_expr, exclude_expr)
File "...python2.7/site-packages/pre_commit/util.py", line 46, in wrapper
ret = wrapper._cache[key] = func(*args)
File "...python2.7/site-packages/pre_commit/git.py", line 98, in wrapper
for filename in all_file_list_strategy()
File "...python2.7/site-packages/pre_commit/util.py", line 46, in wrapper
ret = wrapper._cache[key] = func(*args)
File "...python2.7/site-packages/pre_commit/git.py", line 64, in get_conflicted_files
merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)
File "...python2.7/site-packages/pre_commit/git.py", line 54, in parse_merge_msg_for_conflicts
if line.startswith(('\t', '#\t'))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 37: ordinal not in range(128)
```
An easy fix: https://github.com/pre-commit/pre-commit/blob/e3b14c35f782ed464e3f96b44e8509048187689f/pre_commit/git.py#L63
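The shape of that fix, as a standalone sketch: read MERGE_MSG as bytes and decode each conflicted filename explicitly, so python 2's implicit ascii decoding never touches the non-ascii commit message. The helper below paraphrases the patch above.
```python
import os.path


def conflicted_files_from_merge_msg(git_dir):
    with open(os.path.join(git_dir, 'MERGE_MSG'), 'rb') as f:
        merge_msg = f.read()
    return [
        line.lstrip(b'#').strip().decode('UTF-8')
        for line in merge_msg.splitlines()
        # conflicted files start with '\t' ('#\t' for git 2.4.1)
        if line.startswith((b'\t', b'#\t'))
    ]
```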
| 2017-05-10T19:53:23 |
|
pre-commit/pre-commit | 540 | pre-commit__pre-commit-540 | [
"533"
] | e5c9d3614bc4191105e776cddb127aa1edf1ae63 | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -57,7 +57,7 @@ def get_filenames(args, include_expr, exclude_expr):
def _run_single_hook(hook, repo, args, skips, cols):
- filenames = get_filenames(args, hook['files'], hook['exclude'])
+ filenames = get_filenames(args, hook.get('files', ''), hook['exclude'])
if hook['id'] in skips:
output.write(get_hook_message(
_hook_msg_start(hook, args.verbose),
| always_run + no `files` still crashes on `KeyError: files`
I was under the impression `files` was completely ignored for `always_run`, I guess not!
Here's a small reproduction:
```yaml
- repo: local
hooks:
- id: foo
name: foo
always_run: true
entry: bash -c 'echo hello && exit 1'
language: system
```
```
$ ./venv-pre_commit/bin/pre-commit run foo
An unexpected error has occurred: KeyError: u'files'
Check the log at ~/.pre-commit/pre-commit.log
```
```
$ cat ~/.pre-commit/pre-commit.log
An unexpected error has occurred: KeyError: u'files'
Traceback (most recent call last):
File "/tmp/foo/pre-commit/pre_commit/error_handler.py", line 48, in error_handler
yield
File "/tmp/foo/pre-commit/pre_commit/main.py", line 226, in main
return run(runner, args)
File "/tmp/foo/pre-commit/pre_commit/commands/run.py", line 235, in run
return _run_hooks(repo_hooks, args, environ)
File "/tmp/foo/pre-commit/pre_commit/commands/run.py", line 155, in _run_hooks
retval |= _run_single_hook(hook, repo, args, skips, cols)
File "/tmp/foo/pre-commit/pre_commit/commands/run.py", line 60, in _run_single_hook
filenames = get_filenames(args, hook['files'], hook['exclude'])
KeyError: u'files'
```
| 2017-05-31T02:11:29 |
||
pre-commit/pre-commit | 546 | pre-commit__pre-commit-546 | [
"545"
] | 3874636f8f71aed8c2ab472db828475a549d1479 | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -32,7 +32,8 @@ def _hook_msg_start(hook, verbose):
def get_changed_files(new, old):
return cmd_output(
- 'git', 'diff', '--name-only', '{}...{}'.format(old, new),
+ 'git', 'diff', '--no-ext-diff', '--name-only',
+ '{}...{}'.format(old, new),
)[1].splitlines()
@@ -85,12 +86,16 @@ def _run_single_hook(hook, repo, args, skips, cols):
))
sys.stdout.flush()
- diff_before = cmd_output('git', 'diff', retcode=None, encoding=None)
+ diff_before = cmd_output(
+ 'git', 'diff', '--no-ext-diff', retcode=None, encoding=None,
+ )
retcode, stdout, stderr = repo.run_hook(
hook,
tuple(filenames) if hook['pass_filenames'] else (),
)
- diff_after = cmd_output('git', 'diff', retcode=None, encoding=None)
+ diff_after = cmd_output(
+ 'git', 'diff', '--no-ext-diff', retcode=None, encoding=None,
+ )
file_modifications = diff_before != diff_after
@@ -159,10 +164,10 @@ def _run_hooks(repo_hooks, args, environ):
if (
retval and
args.show_diff_on_failure and
- subprocess.call(('git', 'diff', '--quiet')) != 0
+ subprocess.call(('git', 'diff', '--quiet', '--no-ext-diff')) != 0
):
print('All changes made by hooks:')
- subprocess.call(('git', 'diff'))
+ subprocess.call(('git', 'diff', '--no-ext-diff'))
return retval
@@ -179,7 +184,10 @@ def _has_unmerged_paths(runner):
def _has_unstaged_config(runner):
retcode, _, _ = runner.cmd_runner.run(
- ('git', 'diff', '--exit-code', runner.config_file_path),
+ (
+ 'git', 'diff', '--no-ext-diff', '--exit-code',
+ runner.config_file_path,
+ ),
retcode=None,
)
# be explicit, other git errors don't mean it has an unstaged config.
diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -68,7 +68,8 @@ def get_conflicted_files():
# this will also include the conflicted files
tree_hash = cmd_output('git', 'write-tree')[1].strip()
merge_diff_filenames = cmd_output(
- 'git', 'diff', '-m', tree_hash, 'HEAD', 'MERGE_HEAD', '--name-only',
+ 'git', 'diff', '--no-ext-diff',
+ '-m', tree_hash, 'HEAD', 'MERGE_HEAD', '--name-only',
)[1].splitlines()
return set(merge_conflict_filenames) | set(merge_diff_filenames)
@@ -76,7 +77,7 @@ def get_conflicted_files():
@memoize_by_cwd
def get_staged_files():
return cmd_output(
- 'git', 'diff', '--staged', '--name-only',
+ 'git', 'diff', '--staged', '--name-only', '--no-ext-diff',
# Everything except for D
'--diff-filter=ACMRTUXB'
)[1].splitlines()
diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -21,10 +21,10 @@ def staged_files_only(cmd_runner):
"""
# Determine if there are unstaged files
retcode, diff_stdout_binary, _ = cmd_runner.run(
- [
+ (
'git', 'diff', '--ignore-submodules', '--binary', '--exit-code',
'--no-color', '--no-ext-diff',
- ],
+ ),
retcode=None,
encoding=None,
)
| git diff enhancement to add --no-ext-diff
Hi,
Is there a way, when running the command
```
pre-commit run --all-files
```
to use `git diff --no-ext-diff` instead of `git diff`?
Maybe by using an environment variable to add this extra `--no-ext-diff` parameter in the script https://github.com/pre-commit/pre-commit/blob/master/pre_commit/git.py?
My issue comes from the fact that I am using meld in my ~/.gitconfig
```
[diff]
external = /usr/local/bin/git-diff.sh
[push]
default = simple
[merge]
tool = meld
renamelimit = 10000
```
less /usr/local/bin/git-diff.sh
```
#!/bin/bash
meld "$2" "$5" > /dev/null 2>&1
```
So every time `pre-commit run --all-files` runs, I get a meld popup that I must close.
It is annoying...
Feel free to help me to find the right code and I might find some time to do a pull request...
Best regards,
Alban
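To make the request concrete, here is a hedged sketch (not pre-commit's actual implementation) of collecting a diff with `--no-ext-diff`, which makes git skip the configured external driver so the meld wrapper above is never launched:
```python
import subprocess


def unstaged_diff_without_external_tool():
    # --no-ext-diff makes git ignore diff.external / GIT_EXTERNAL_DIFF,
    # so no external tool (e.g. meld) is spawned while reading the diff.
    return subprocess.check_output(
        ('git', 'diff', '--no-ext-diff', '--no-color'),
    ).decode('UTF-8')


if __name__ == '__main__':
    print(unstaged_diff_without_external_tool())
```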
| Woops! Yeah I keep tracking down these places as well :)
You can find all of them by running `git grep "'diff'"`
```
$ git grep "'diff'" -- pre_commit
pre_commit/commands/run.py: 'git', 'diff', '--name-only', '{}...{}'.format(old, new),
pre_commit/commands/run.py: diff_before = cmd_output('git', 'diff', retcode=None, encoding=None)
pre_commit/commands/run.py: diff_after = cmd_output('git', 'diff', retcode=None, encoding=None)
pre_commit/commands/run.py: subprocess.call(('git', 'diff', '--quiet')) != 0
pre_commit/commands/run.py: subprocess.call(('git', 'diff'))
pre_commit/commands/run.py: ('git', 'diff', '--exit-code', runner.config_file_path),
pre_commit/git.py: 'git', 'diff', '-m', tree_hash, 'HEAD', 'MERGE_HEAD', '--name-only',
pre_commit/git.py: 'git', 'diff', '--staged', '--name-only',
pre_commit/staged_files_only.py: 'git', 'diff', '--ignore-submodules', '--binary', '--exit-code',
``` | 2017-06-09T15:34:27 |
|
pre-commit/pre-commit | 566 | pre-commit__pre-commit-566 | [
"389"
] | dd182fb42e0820d25ca865249d8c86fe98a0c8ec | diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -56,16 +56,21 @@ def install(
with io.open(hook_path, 'w') as pre_commit_file_obj:
if hook_type == 'pre-push':
- with io.open(resource_filename('pre-push-tmpl')) as fp:
- pre_push_contents = fp.read()
+ with io.open(resource_filename('pre-push-tmpl')) as f:
+ hook_specific_contents = f.read()
+ elif hook_type == 'commit-msg':
+ with io.open(resource_filename('commit-msg-tmpl')) as f:
+ hook_specific_contents = f.read()
+ elif hook_type == 'pre-commit':
+ hook_specific_contents = ''
else:
- pre_push_contents = ''
+ raise AssertionError('Unknown hook type: {}'.format(hook_type))
skip_on_missing_conf = 'true' if skip_on_missing_conf else 'false'
contents = io.open(resource_filename('hook-tmpl')).read().format(
sys_executable=sys.executable,
hook_type=hook_type,
- pre_push=pre_push_contents,
+ hook_specific=hook_specific_contents,
skip_on_missing_conf=skip_on_missing_conf,
)
pre_commit_file_obj.write(contents)
diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -58,6 +58,9 @@ def get_filenames(args, include_expr, exclude_expr):
getter = git.get_files_matching(
lambda: get_changed_files(args.origin, args.source),
)
+ elif args.hook_stage == 'commit-msg':
+ def getter(*_):
+ return (args.commit_msg_filename,)
elif args.files:
getter = git.get_files_matching(lambda: args.files)
elif args.all_files:
diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -76,7 +76,7 @@ def main(argv=None):
),
)
install_parser.add_argument(
- '-t', '--hook-type', choices=('pre-commit', 'pre-push'),
+ '-t', '--hook-type', choices=('pre-commit', 'pre-push', 'commit-msg'),
default='pre-commit',
)
install_parser.add_argument(
@@ -149,6 +149,10 @@ def main(argv=None):
'--source', '-s',
help="The remote branch's commit_id when using `git push`.",
)
+ run_parser.add_argument(
+ '--commit-msg-filename',
+ help='Filename to check when running during `commit-msg`',
+ )
run_parser.add_argument(
'--allow-unstaged-config', default=False, action='store_true',
help=(
@@ -157,7 +161,8 @@ def main(argv=None):
),
)
run_parser.add_argument(
- '--hook-stage', choices=('commit', 'push'), default='commit',
+ '--hook-stage', choices=('commit', 'push', 'commit-msg'),
+ default='commit',
help='The stage during which the hook is fired e.g. commit or push.',
)
run_parser.add_argument(
| diff --git a/testing/fixtures.py b/testing/fixtures.py
--- a/testing/fixtures.py
+++ b/testing/fixtures.py
@@ -23,8 +23,7 @@
def git_dir(tempdir_factory):
path = tempdir_factory.get()
- with cwd(path):
- cmd_output('git', 'init')
+ cmd_output('git', 'init', path)
return path
diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -56,7 +56,7 @@ def test_install_pre_commit(tempdir_factory):
expected_contents = io.open(pre_commit_script).read().format(
sys_executable=sys.executable,
hook_type='pre-commit',
- pre_push='',
+ hook_specific='',
skip_on_missing_conf='false',
)
assert pre_commit_contents == expected_contents
@@ -71,7 +71,7 @@ def test_install_pre_commit(tempdir_factory):
expected_contents = io.open(pre_commit_script).read().format(
sys_executable=sys.executable,
hook_type='pre-push',
- pre_push=pre_push_template_contents,
+ hook_specific=pre_push_template_contents,
skip_on_missing_conf='false',
)
assert pre_push_contents == expected_contents
@@ -118,10 +118,11 @@ def test_uninstall(tempdir_factory):
def _get_commit_output(tempdir_factory, touch_file='foo', **kwargs):
+ commit_msg = kwargs.pop('commit_msg', 'Commit!')
open(touch_file, 'a').close()
cmd_output('git', 'add', touch_file)
return cmd_output_mocked_pre_commit_home(
- 'git', 'commit', '-am', 'Commit!', '--allow-empty',
+ 'git', 'commit', '-am', commit_msg, '--allow-empty',
# git commit puts pre-commit to stderr
stderr=subprocess.STDOUT,
retcode=None,
@@ -560,6 +561,24 @@ def test_pre_push_integration_empty_push(tempdir_factory):
assert retc == 0
+def test_commit_msg_integration_failing(commit_msg_repo, tempdir_factory):
+ install(Runner(commit_msg_repo, C.CONFIG_FILE), hook_type='commit-msg')
+ retc, out = _get_commit_output(tempdir_factory)
+ assert retc == 1
+ assert out.startswith('Must have "Signed off by:"...')
+ assert out.strip().endswith('...Failed')
+
+
+def test_commit_msg_integration_passing(commit_msg_repo, tempdir_factory):
+ install(Runner(commit_msg_repo, C.CONFIG_FILE), hook_type='commit-msg')
+ msg = 'Hi\nSigned off by: me, lol'
+ retc, out = _get_commit_output(tempdir_factory, commit_msg=msg)
+ assert retc == 0
+ first_line = out.splitlines()[0]
+ assert first_line.startswith('Must have "Signed off by:"...')
+ assert first_line.endswith('...Passed')
+
+
def test_install_disallow_mising_config(tempdir_factory):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -60,6 +60,7 @@ def _get_opts(
allow_unstaged_config=False,
hook_stage='commit',
show_diff_on_failure=False,
+ commit_msg_filename='',
):
# These are mutually exclusive
assert not (all_files and files)
@@ -75,6 +76,7 @@ def _get_opts(
allow_unstaged_config=allow_unstaged_config,
hook_stage=hook_stage,
show_diff_on_failure=show_diff_on_failure,
+ commit_msg_filename=commit_msg_filename,
)
@@ -572,40 +574,7 @@ def test_lots_of_files(mock_out_store_directory, tempdir_factory):
)
[email protected](
- (
- 'hook_stage', 'stage_for_first_hook', 'stage_for_second_hook',
- 'expected_output',
- ),
- (
- ('push', ['commit'], ['commit'], [b'', b'']),
- (
- 'push', ['commit', 'push'], ['commit', 'push'],
- [b'hook 1', b'hook 2'],
- ),
- ('push', [], [], [b'hook 1', b'hook 2']),
- ('push', [], ['commit'], [b'hook 1', b'']),
- ('push', ['push'], ['commit'], [b'hook 1', b'']),
- ('push', ['commit'], ['push'], [b'', b'hook 2']),
- (
- 'commit', ['commit', 'push'], ['commit', 'push'],
- [b'hook 1', b'hook 2'],
- ),
- ('commit', ['commit'], ['commit'], [b'hook 1', b'hook 2']),
- ('commit', [], [], [b'hook 1', b'hook 2']),
- ('commit', [], ['commit'], [b'hook 1', b'hook 2']),
- ('commit', ['push'], ['commit'], [b'', b'hook 2']),
- ('commit', ['commit'], ['push'], [b'hook 1', b'']),
- ),
-)
-def test_local_hook_for_stages(
- cap_out,
- repo_with_passing_hook, mock_out_store_directory,
- stage_for_first_hook,
- stage_for_second_hook,
- hook_stage,
- expected_output,
-):
+def test_push_hook(cap_out, repo_with_passing_hook, mock_out_store_directory):
config = OrderedDict((
('repo', 'local'),
(
@@ -613,37 +582,61 @@ def test_local_hook_for_stages(
OrderedDict((
('id', 'flake8'),
('name', 'hook 1'),
- ('entry', 'python -m flake8.__main__'),
+ ('entry', "'{}' -m flake8".format(sys.executable)),
('language', 'system'),
- ('files', r'\.py$'),
- ('stages', stage_for_first_hook),
- )), OrderedDict((
+ ('types', ['python']),
+ ('stages', ['commit']),
+ )),
+ OrderedDict((
('id', 'do_not_commit'),
('name', 'hook 2'),
('entry', 'DO NOT COMMIT'),
('language', 'pcre'),
- ('files', '^(.*)$'),
- ('stages', stage_for_second_hook),
+ ('types', ['text']),
+ ('stages', ['push']),
)),
),
),
))
add_config_to_repo(repo_with_passing_hook, config)
- with io.open('dummy.py', 'w') as staged_file:
- staged_file.write('"""TODO: something"""\n')
+ open('dummy.py', 'a').close()
cmd_output('git', 'add', 'dummy.py')
_test_run(
cap_out,
repo_with_passing_hook,
- {'hook_stage': hook_stage},
- expected_outputs=expected_output,
+ {'hook_stage': 'commit'},
+ expected_outputs=[b'hook 1'],
+ expected_ret=0,
+ stage=False,
+ )
+
+ _test_run(
+ cap_out,
+ repo_with_passing_hook,
+ {'hook_stage': 'push'},
+ expected_outputs=[b'hook 2'],
expected_ret=0,
stage=False,
)
+def test_commit_msg_hook(cap_out, commit_msg_repo, mock_out_store_directory):
+ filename = '.git/COMMIT_EDITMSG'
+ with io.open(filename, 'w') as f:
+ f.write('This is the commit message')
+
+ _test_run(
+ cap_out,
+ commit_msg_repo,
+ {'hook_stage': 'commit-msg', 'commit_msg_filename': filename},
+ expected_outputs=[b'Must have "Signed off by:"', b'Failed'],
+ expected_ret=1,
+ stage=False,
+ )
+
+
def test_local_hook_passes(
cap_out, repo_with_passing_hook, mock_out_store_directory,
):
@@ -654,7 +647,7 @@ def test_local_hook_passes(
OrderedDict((
('id', 'flake8'),
('name', 'flake8'),
- ('entry', 'python -m flake8.__main__'),
+ ('entry', "'{}' -m flake8".format(sys.executable)),
('language', 'system'),
('files', r'\.py$'),
)), OrderedDict((
diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -1,6 +1,7 @@
from __future__ import absolute_import
from __future__ import unicode_literals
+import collections
import functools
import io
import logging
@@ -20,6 +21,7 @@
from pre_commit.util import cwd
from testing.fixtures import git_dir
from testing.fixtures import make_consuming_repo
+from testing.fixtures import write_config
@pytest.yield_fixture
@@ -92,6 +94,29 @@ def in_conflicting_submodule(tempdir_factory):
yield
[email protected]
+def commit_msg_repo(tempdir_factory):
+ path = git_dir(tempdir_factory)
+ config = collections.OrderedDict((
+ ('repo', 'local'),
+ (
+ 'hooks',
+ [collections.OrderedDict((
+ ('id', 'must-have-signoff'),
+ ('name', 'Must have "Signed off by:"'),
+ ('entry', 'grep -q "Signed off by:"'),
+ ('language', 'system'),
+ ('stages', ['commit-msg']),
+ ))],
+ ),
+ ))
+ write_config(path, config)
+ with cwd(path):
+ cmd_output('git', 'add', '.')
+ cmd_output('git', 'commit', '-m', 'add hooks')
+ yield path
+
+
@pytest.yield_fixture(autouse=True, scope='session')
def dont_write_to_home_directory():
"""pre_commit.store.Store will by default write to the home directory
@@ -170,7 +195,7 @@ def __init__(self, stream):
def get_bytes(self):
"""Get the output as-if no encoding occurred"""
data = self._stream.data.getvalue()
- self._stream = io.BytesIO()
+ self._stream.data.truncate(0)
return data
def get(self):
| Support for commit-msg
Totally not trying to be snarky here-- the project is called _pre-commit_, after all-- but is there a reason commit-msg hooks are not supported?
Being able to validate commit messages for appropriate issue-tracking ids, length, etc is important for a lot of us and having to put together another mechanism to deal with it or utilize another framework alongside pre-commit seems non-DRY-y.
I would put this as a "pro" for overcommit (see #384) but I'd really like to see it in pre-commit, as overcommit's performance in Windows is... subpar. (More of a ruby issue than overcommit.)
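For concreteness, a hypothetical example of the kind of check being asked for. git's `commit-msg` hook receives the path of the commit message file as its first argument; the issue-id pattern below is purely illustrative and not tied to any real policy:
```python
import re
import sys


def main(argv=None):
    # git invokes the commit-msg hook with the path to the message file.
    msg_path = (argv if argv is not None else sys.argv[1:])[0]
    with open(msg_path) as f:
        message = f.read()
    # Hypothetical policy: require a JIRA-style issue id somewhere.
    if not re.search(r'\b[A-Z]+-\d+\b', message):
        print('commit message must reference an issue id (e.g. PROJ-123)')
        return 1
    return 0


if __name__ == '__main__':
    raise SystemExit(main())
```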
| We're already supporting other hooks (`pre-push`) and would accept a PR if one were to make one :)
There's already a way to configure hooks to only run in specific stages (currently just `commit` or `push`: http://pre-commit.com/#confining-hooks-to-run-at-certain-stages). There's also a way to configure hooks to always run independent of files changed `always_run`: http://pre-commit.com/#plugins
I imagine to implement `commit-msg` you'd simply need to teach pre-commit about how to make the file for it and how to call a hook with the right stuff.
Ok cool yeah I saw the `pre-push` option and figured there would be a similar way to "hook" into the other stages.
If I get some time in the next few days I'll do some spelunking :microscope:
Ok I started a fork for these changes making some [initial progress](https://github.com/zaps/pre-commit/commit/159e04290dbff2a0d3b3af37186a60477ebeeb33) getting `commit-msg` in on the fun. (It's totally not optimal atm because TDD 🌴 )
One question I had was, in terms of testing the additional hook stage (ie adding something like `stages: [commit-msg,push]` in `.pre-commit-config.yaml`), how is that done?
I see a dictionary-type concoction in [run_test.py](https://github.com/pre-commit/pre-commit/blob/master/tests/commands/run_test.py#L434-L452) but I'm not quite sure what the format of it is. Can you give a quick rundown and maybe throw in a sample line for my purposes?
Thanks!
You know, I'm not actually too sure about the test you linked! It actually seems to be doing a lot of useless work and I'll probably go about factoring it out.
The tests that check the integration of the different hook types (strangely) live in tests/commands/install_uninstall_test.py (well not that strange actually, the reason they're there is in order to test that the installation is successful you kind of have to do a full integration run). I'd probably try from that angle first? Let me know if you need additional direction :)
Ok I think I'm with you now -- I'll start sticking tests for commit-msg in `install_uninstall_test.py`, do a full integration run each time and see what blows up.
Thanks again
@zaps have you made more progress on this? I'd be interested in `commit-msg` hooks.
I'm also interested in `commit-msg` hooks!
Looking for this too; maybe I'll switch to overcommit or other tools that support this feature.
pre-commit/pre-commit-hooks/issues/193
I have a branch which adds `commit-msg` support: #566 | 2017-07-22T22:18:05 |
pre-commit/pre-commit | 575 | pre-commit__pre-commit-575 | [
"570"
] | ce7481f75b3ece0d6d88a04f62a4c51665e0efb8 | diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -11,6 +11,16 @@
logger = logging.getLogger('pre_commit')
+def _git_apply(cmd_runner, patch):
+ args = ('apply', '--whitespace=nowarn', patch)
+ try:
+ cmd_runner.run(('git',) + args, encoding=None)
+ except CalledProcessError:
+ # Retry with autocrlf=false -- see #570
+ cmd = ('git', '-c', 'core.autocrlf=false') + args
+ cmd_runner.run(cmd, encoding=None)
+
+
@contextlib.contextmanager
def staged_files_only(cmd_runner):
"""Clear any unstaged changes from the git working directory inside this
@@ -46,10 +56,7 @@ def staged_files_only(cmd_runner):
finally:
# Try to apply the patch we saved
try:
- cmd_runner.run(
- ('git', 'apply', '--whitespace=nowarn', patch_filename),
- encoding=None,
- )
+ _git_apply(cmd_runner, patch_filename)
except CalledProcessError:
logger.warning(
'Stashed changes conflicted with hook auto-fixes... '
@@ -59,10 +66,7 @@ def staged_files_only(cmd_runner):
# by hooks.
# Roll back the changes made by hooks.
cmd_runner.run(('git', 'checkout', '--', '.'))
- cmd_runner.run(
- ('git', 'apply', patch_filename, '--whitespace=nowarn'),
- encoding=None,
- )
+ _git_apply(cmd_runner, patch_filename)
logger.info('Restored changes from {}.'.format(patch_filename))
else:
# There weren't any staged files so we don't need to do anything
| diff --git a/tests/staged_files_only_test.py b/tests/staged_files_only_test.py
--- a/tests/staged_files_only_test.py
+++ b/tests/staged_files_only_test.py
@@ -354,3 +354,17 @@ def test_crlf(in_git_dir, cmd_runner, crlf_before, crlf_after, autocrlf):
def test_whitespace_errors(in_git_dir, cmd_runner):
cmd_output('git', 'config', '--local', 'apply.whitespace', 'error')
test_crlf(in_git_dir, cmd_runner, True, True, 'true')
+
+
+def test_autocrlf_commited_crlf(in_git_dir, cmd_runner):
+ """Regression test for #570"""
+ cmd_output('git', 'config', '--local', 'core.autocrlf', 'false')
+ _write(b'1\r\n2\r\n')
+ cmd_output('git', 'add', 'foo')
+ cmd_output('git', 'commit', '-m', 'Check in crlf')
+
+ cmd_output('git', 'config', '--local', 'core.autocrlf', 'true')
+ _write(b'1\r\n2\r\n\r\n\r\n\r\n')
+
+ with staged_files_only(cmd_runner):
+ assert_no_diff()
| git unadd changes lost if hook fails on windows
```
D:\CubeadProjects\devops [test +0 ~2 -0 | +0 ~1 -0 !]> git cm "asd"
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to C:\Users\56929\.pre-commit\patch1501482991.
run pylint...............................................................Failed
hookid: python-pylint
************* Module install
C: 10, 0: Exactly one space required around assignment
a=1
^ (bad-whitespace)
C: 46, 0: Line too long (108/100) (line-too-long)
W: 39, 4: Unused variable 'stylelint_root' (unused-variable)
W: 37, 4: Unused variable 'node_root' (unused-variable)
W: 24, 8: Unused variable 'checks' (unused-variable)
[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...
An unexpected error has occurred: CalledProcessError: Command: ('C:\\Program Files\\Git\\mingw64\\libexec\\git-core\\git.exe', 'apply', 'C:\\Users\\56929\\.pre-commit\\patch1501483011')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: patch failed: svnchecker_stylelint_support/checks/Stylelint.py:20
error: svnchecker_stylelint_support/checks/Stylelint.py: patch does not apply
Check the log at ~/.pre-commit/pre-commit.log
```
### ~/.pre-commit/pre-commit.log
```
An unexpected error has occurred: CalledProcessError: Command: ('C:\\Program Files\\Git\\mingw64\\libexec\\git-core\\git.exe', 'apply', 'C:\\Users\\56929\\.pre-commit\\patch1501483011')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: patch failed: svnchecker_stylelint_support/checks/Stylelint.py:20
error: svnchecker_stylelint_support/checks/Stylelint.py: patch does not apply
Traceback (most recent call last):
File "c:\python27\lib\site-packages\pre_commit\error_handler.py", line 48, in error_handler
yield
File "c:\python27\lib\site-packages\pre_commit\main.py", line 231, in main
return run(runner, args)
File "c:\python27\lib\site-packages\pre_commit\commands\run.py", line 273, in run
return _run_hooks(repo_hooks, args, environ)
File "c:\python27\lib\contextlib.py", line 24, in __exit__
self.gen.next()
File "c:\python27\lib\site-packages\pre_commit\staged_files_only.py", line 58, in staged_files_only
cmd_runner.run(('git', 'apply', patch_filename), encoding=None)
File "c:\python27\lib\site-packages\pre_commit\prefixed_command_runner.py", line 38, in run
return cmd_output(*replaced_cmd, __popen=self.__popen, **kwargs)
File "c:\python27\lib\site-packages\pre_commit\util.py", line 189, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
CalledProcessError: Command: ('C:\\Program Files\\Git\\mingw64\\libexec\\git-core\\git.exe', 'apply', 'C:\\Users\\56929\\.pre-commit\\patch1501483011')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: patch failed: svnchecker_stylelint_support/checks/Stylelint.py:20
error: svnchecker_stylelint_support/checks/Stylelint.py: patch does not apply
```
Then, I open the patch file. (C:\\Users\\56929\\.pre-commit\\patch1501483011),it looks like
```diff
diff --git a/svnchecker_stylelint_support/checks/Stylelint.py b/svnchecker_stylelint_support/checks/Stylelint.py
index 4422b4d..f85ecb1 100644
--- a/svnchecker_stylelint_support/checks/Stylelint.py
+++ b/svnchecker_stylelint_support/checks/Stylelint.py
@@ -20,3 +20,5 @@ def run(transaction, config):
return ('{}\n{}'.format(stdoutdata, stderrdata), 1)^M
^M
return ("", 0)^M
^M
^M
^M
```
| Strange, that patch doesn't look like a patch at all! I recently fixed something in this code section (during I think 0.14.2), can you include your version information (both for `pre-commit` and for `git`)?
Other information that may be useful:
- `~/.gitconfig`
- `.git/config`
- what the change actually was
I have a sneaking suspicion that one or more of the configuration changes caused one of the following to happen:
- `git checkout -- .` didn't actually remove changes
- `git diff` didn't actually create a patch but at the same time exited nonzero
fwiw, with the default installation of `git` on windows, this is working for me [and is rather heavily tested](https://github.com/pre-commit/pre-commit/blob/454e0f213a6a26a64b98b28ed20e0d08171c62a8/tests/staged_files_only_test.py) also in ci with appveyor.
I'm curious to see what settings cause this to happen (I'll be trying some combinations myself)
Also the output of the following commands would be useful for debugging
```
git diff --ignore-submodules --binary --exit-code --no-color --no-ext-diff
# with changes made, and with nothing you're concerned about losing, does the following remove all changes to your working directory
git checkout -- .
```
[this pr](https://github.com/pre-commit/pre-commit/pull/571) may help with this issue as it switches to the more-lower-level commands
```
D:\CubeadProjects\devops [test]> git version
git version 2.13.2.windows.1
D:\CubeadProjects\devops [test]> pre-commit -V
pre-commit 0.15.4
```
### local git config
```ini
[core]
repositoryformatversion = 0
filemode = false
bare = false
logallrefupdates = true
symlinks = false
ignorecase = true
autocrlf = true
whitespace = fix
[branch "master"]
[branch "devops_cubead_dev"]
[remote "origin"]
url = [email protected]:CubeADGroup/devops.git
fetch = +refs/heads/*:refs/remotes/origin/*
[user]
name = RunningToTheEdgeOfTheWorld
[branch "master"]
remote = origin
merge = refs/heads/master
```
### global git config
```ini
[filter "lfs"]
process = git-lfs filter-process
required = true
clean = git-lfs clean -- %f
smudge = git-lfs smudge -- %f
[user]
name = ZJ
[user]
email = [email protected]
[gui]
recentrepo = D:/CubeadProjects/omms
[alias]
s = status
cm = commit -m
co = checkout
a = add
b = branch
p = push
d = diff
dt = difftool
[core]
autocrlf = true
[diff]
tool = gvimdiff
[difftool]
prompt = No
[apply]
whitespace = nowarn
```
```
D:\CubeadProjects\devops [test +0 ~1 -0 | +0 ~1 -0 !]> git status
On branch test
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: svnchecker_stylelint_support/install.py
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: svnchecker_stylelint_support/checks/Stylelint.py
D:\CubeadProjects\devops [test +0 ~1 -0 | +0 ~1 -0 !]> git diff --ignore-submodules --binary --exit-code --no-color --no-ext-diff
diff --git a/svnchecker_stylelint_support/checks/Stylelint.py b/svnchecker_stylelint_support/checks/Stylelint.py
index 4422b4d..83fe854 100644
--- a/svnchecker_stylelint_support/checks/Stylelint.py
+++ b/svnchecker_stylelint_support/checks/Stylelint.py
@@ -20,3 +20,6 @@ def run(transaction, config):
return ('{}\n{}'.format(stdoutdata, stderrdata), 1)
return ("", 0)
+
+
+
```
That's super helpful! I should be able to reproduce this now and get a fix for you :)
hint:
When I delete the global config `whitespace = nowarn`, I get a "trailing whitespace" error.
Interesting, I can trigger a fatal error by configuring as follows:
```ini
[apply]
whitespace = error
```
So there's *something* to fix at least, I'll make a patch for that and see if it fixes your error (**crosses fingers**)
Thank you. I also found that I cannot apply any patch (using --ignore-submodules --binary --exit-code --no-color --no-ext-diff, or --color=never); I get "error: unrecognized input".
I think it may be a git bug.
If you're using powershell [it might be because of this](https://stackoverflow.com/questions/13675782/git-shell-in-windows-patchs-default-character-encoding-is-ucs-2-little-endian) -- though that shouldn't affect pre-commit since there aren't any shells anywhere.
Here's the patch I've written for `whitespace.error`: https://github.com/pre-commit/pre-commit/pull/574
Can you try the latest master and see if that patch helped?
I am trying.
it doesn't work. but I found the problem.
Whichever whitespace setting I use (cr-at-eol or true), the git diff output written to a file has CRLF line separators,
so in staged_files_only.py, line 39:
patch_file.write(diff_stdout_binary.replace('\r', ''))
With that change it worked.
Hmm, there has to be some commandline option to make that happen correctly -- I'll look at this in the morning (I still can't get it to reproduce under test!)
OK! I can finally reproduce this.
Here's my minimal reproduction (also reproducing on linux, but I don't think anyone uses `autocrlf=true` on linux):
### Script
```bash
#!/bin/bash
set -ex
rm -rf foo
git init foo
cd foo
# Commit crlf into repository
git config --local core.autocrlf false
python3 -c 'open("foo", "wb").write(b"1\r\n2\r\n")'
git add foo
git commit -m "Initial commit with crlf"
# Change whitespace mode to autocrlf, "commit lf, checkout crlf"
git config --local core.autocrlf true
python3 -c 'open("foo", "wb").write(b"1\r\n2\r\n\r\n\r\n\r\n")'
# Run pre-commit
head -4 ~/workspace/pre-commit/.pre-commit-config.yaml > .pre-commit-config.yaml
~/workspace/pre-commit/venv*/bin/pre-commit
```
### Output
```
+ rm -rf foo
+ git init foo
Initialized empty Git repository in /tmp/foo/.git/
+ cd foo
+ git config --local core.autocrlf false
+ python3 -c 'open("foo", "wb").write(b"1\r\n2\r\n")'
+ git add foo
+ git commit -m 'Initial commit with crlf'
[master (root-commit) 6644acc] Initial commit with crlf
1 file changed, 2 insertions(+)
create mode 100644 foo
+ git config --local core.autocrlf true
+ python3 -c 'open("foo", "wb").write(b"1\r\n2\r\n\r\n\r\n\r\n")'
+ head -4 /home/asottile/workspace/pre-commit/.pre-commit-config.yaml
+ /home/asottile/workspace/pre-commit/venv/bin/pre-commit
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/asottile/.pre-commit/patch1501609946.
Trim Trailing Whitespace.............................(no files to check)Skipped
[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...
An unexpected error has occurred: CalledProcessError: Command: ('/usr/bin/git', 'apply', '/home/asottile/.pre-commit/patch1501609946', '--whitespace=nowarn')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: patch failed: foo:1
error: foo: patch does not apply
Check the log at ~/.pre-commit/pre-commit.log
```
The best approach I can come up with is to *temporarily* set `core.autocrlf = false` when applying patches. Replacing `\r` characters out of a patch will likely break other things, and using `git apply --ignore-whitespace` causes it to incorrectly modify other line endings.
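In code, that retry amounts to something like the following, a simplified sketch of what the patch at the top of this entry implements, using plain `subprocess` instead of pre-commit's command runner:
```python
import subprocess


def apply_patch(patch_path):
    args = ('apply', '--whitespace=nowarn', patch_path)
    try:
        subprocess.check_call(('git',) + args)
    except subprocess.CalledProcessError:
        # Retry with autocrlf disabled so the CRLF bytes stored in the
        # patch are applied as-is instead of being re-converted.
        subprocess.check_call(('git', '-c', 'core.autocrlf=false') + args)
```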
I'll try and make a patch for this -- I also think this is probably a **bug** in git (a patch generated by `git diff-index --patch` cannot be applied by `git apply`) | 2017-08-01T18:50:44 |
pre-commit/pre-commit | 578 | pre-commit__pre-commit-578 | [
"455"
] | a3f7b408abae0f170587c524e688be51cc944065 | diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py
--- a/pre_commit/languages/node.py
+++ b/pre_commit/languages/node.py
@@ -17,10 +17,11 @@
def get_env_patch(venv): # pragma: windows no cover
+ config = os.path.join(venv, 'bin') if sys.platform == 'cygwin' else venv
return (
('NODE_VIRTUAL_ENV', venv),
- ('NPM_CONFIG_PREFIX', venv),
- ('npm_config_prefix', venv),
+ ('NPM_CONFIG_PREFIX', config),
+ ('npm_config_prefix', config),
('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),
('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),
)
| nodeenv try to download non existing tar.gz prebuilt under Cygwin
Hi,
Strange issue: I suspect a recent change broke this as it used to work last week, on another Windows computer with Cygwin.
Bug reproduction: `pre-commit run` using e.g. https://github.com/Lucas-C/pre-commit-hooks-html v1.1.0
`pre-commit` executes the following command under the hood, a command that also fails if I execute it manually:
```
nodeenv --prebuilt /cygdrive/c/Users/admin/.pre-commit/repoYHJ85q/node_env-default
```
The error is the following:
```
urllib2.HTTPError: HTTP Error 404: Not Found
```
The `tar.gz` it tries to install is https://nodejs.org/dist/v7.2.1/node-v7.2.1-cygwin_nt-6.1-x64.tar.gz, which does not exist. My guess is that `nodeenv` should use the Windows prebuilts instead: https://nodejs.org/dist/v7.2.1/node-v7.2.1-win-x64.zip This is because `platform.system()` is used: https://github.com/ekalinin/nodeenv/blob/master/nodeenv.py#L503
I'm going to ask for help on the https://github.com/ekalinin/nodeenv project, but do you have any hint at what the root cause could be here ?
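As a rough illustration of the suspected root cause (simplified; the real URL construction lives in nodeenv, and the architecture suffix is hard-coded here as an assumption): under Cygwin, `platform.system()` reports a `CYGWIN_NT-*` value rather than `Windows`, which produces a download URL that nodejs.org never publishes:
```python
import platform

version = '7.2.1'
# Under Cygwin this is something like 'CYGWIN_NT-6.1'; lowercased it ends up
# in the archive name instead of a platform nodejs.org actually ships.
system = platform.system().lower()
url = 'https://nodejs.org/dist/v{0}/node-v{0}-{1}-x64.tar.gz'.format(
    version, system,
)
print(url)
```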
| I'm actually surprised cygwin worked before, nothing seems changed in the last couple weeks in `nodeenv` and I don't see any changes to the files provided by nodejs either. To be honest, I didn't know nodeenv even worked on windows/cygwin: #200
I think you're probably right about the source of the issue. Another solution might be to have the nodejs guys compile against cygwin though it's a difficult platform to target (since the packaging moves like arch).
You're right, I must have mixed up things, it probably never worked.
I think there's no need to keep this ticket open, I'm tracking this in https://github.com/ekalinin/nodeenv/issues/178
Would it be possible to use the latest nodeenv version to fix this, please? :)
https://github.com/ekalinin/nodeenv/releases/tag/1.2.0
Yeah! pre-commit doesn't limit to a specific version of nodeenv so you should be able to just `pip install --upgrade nodeenv`
Great, thanks :)
Bad news: NodeJS hooks still do not work under Cygwin :(
To be more precise, currently dependencies are not installed in the dedicated node env, but globally
The issue comes from the `NPM_CONFIG_PREFIX` / `npm_config_prefix` environment variables:
https://github.com/pre-commit/pre-commit/blob/master/pre_commit/languages/node.py#L22-L23
Among the fixes I made in `nodeenv` to add support for Cygwin, I had to alter those 2 env variables:
https://github.com/ekalinin/nodeenv/blob/master/nodeenv.py#L850
Are you OK if I send a PR adding a `if cygwin` conditional in `node.py:get_env_patch` ?
seems fine to me :) | 2017-08-04T08:48:46 |
|
pre-commit/pre-commit | 592 | pre-commit__pre-commit-592 | [
"587"
] | 7139a47c1ca968a2699e467279677fa77ad68aae | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -217,7 +217,7 @@ def _has_unstaged_config(runner):
def run(runner, args, environ=os.environ):
- no_stash = args.no_stash or args.all_files or bool(args.files)
+ no_stash = args.all_files or bool(args.files)
# Check if we have unresolved merge conflict files and fail fast.
if _has_unmerged_paths(runner):
@@ -227,20 +227,11 @@ def run(runner, args, environ=os.environ):
logger.error('Specify both --origin and --source.')
return 1
if _has_unstaged_config(runner) and not no_stash:
- if args.allow_unstaged_config:
- logger.warn(
- 'You have an unstaged config file and have specified the '
- '--allow-unstaged-config option.\n'
- 'Note that your config will be stashed before the config is '
- 'parsed unless --no-stash is specified.',
- )
- else:
- logger.error(
- 'Your .pre-commit-config.yaml is unstaged.\n'
- '`git add .pre-commit-config.yaml` to fix this.\n'
- 'Run pre-commit with --allow-unstaged-config to silence this.',
- )
- return 1
+ logger.error(
+ 'Your .pre-commit-config.yaml is unstaged.\n'
+ '`git add .pre-commit-config.yaml` to fix this.\n',
+ )
+ return 1
# Expose origin / source as environment variables for hooks to consume
if args.origin and args.source:
diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -135,10 +135,6 @@ def main(argv=None):
_add_color_option(run_parser)
_add_config_option(run_parser)
run_parser.add_argument('hook', nargs='?', help='A single hook-id to run')
- run_parser.add_argument(
- '--no-stash', default=False, action='store_true',
- help='Use this option to prevent auto stashing of unstaged files.',
- )
run_parser.add_argument(
'--verbose', '-v', action='store_true', default=False,
)
@@ -154,13 +150,6 @@ def main(argv=None):
'--commit-msg-filename',
help='Filename to check when running during `commit-msg`',
)
- run_parser.add_argument(
- '--allow-unstaged-config', default=False, action='store_true',
- help=(
- 'Allow an unstaged config to be present. Note that this will '
- 'be stashed before parsing unless --no-stash is specified.'
- ),
- )
run_parser.add_argument(
'--hook-stage', choices=('commit', 'push', 'commit-msg'),
default='commit',
@@ -173,7 +162,7 @@ def main(argv=None):
run_mutex_group = run_parser.add_mutually_exclusive_group(required=False)
run_mutex_group.add_argument(
'--all-files', '-a', action='store_true', default=False,
- help='Run on all the files in the repo. Implies --no-stash.',
+ help='Run on all the files in the repo.',
)
run_mutex_group.add_argument(
'--files', nargs='*', default=[],
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -54,10 +54,8 @@ def _get_opts(
color=False,
verbose=False,
hook=None,
- no_stash=False,
origin='',
source='',
- allow_unstaged_config=False,
hook_stage='commit',
show_diff_on_failure=False,
commit_msg_filename='',
@@ -70,10 +68,8 @@ def _get_opts(
color=color,
verbose=verbose,
hook=hook,
- no_stash=no_stash,
origin=origin,
source=source,
- allow_unstaged_config=allow_unstaged_config,
hook_stage=hook_stage,
show_diff_on_failure=show_diff_on_failure,
commit_msg_filename=commit_msg_filename,
@@ -332,38 +328,6 @@ def test_origin_source_error_msg(
assert warning_msg not in printed
[email protected](
- ('no_stash', 'all_files', 'expect_stash'),
- (
- (True, True, False),
- (True, False, False),
- (False, True, False),
- (False, False, True),
- ),
-)
-def test_no_stash(
- cap_out,
- repo_with_passing_hook,
- no_stash,
- all_files,
- expect_stash,
- mock_out_store_directory,
-):
- stage_a_file()
- # Make unstaged changes
- with open('foo.py', 'w') as foo_file:
- foo_file.write('import os\n')
-
- args = _get_opts(no_stash=no_stash, all_files=all_files)
- ret, printed = _do_run(cap_out, repo_with_passing_hook, args)
- assert ret == 0
- warning_msg = b'[WARNING] Unstaged files detected.'
- if expect_stash:
- assert warning_msg in printed
- else:
- assert warning_msg not in printed
-
-
@pytest.mark.parametrize(('output', 'expected'), (('some', True), ('', False)))
def test_has_unmerged_paths(output, expected):
mock_runner = mock.Mock()
@@ -715,37 +679,19 @@ def modified_config_repo(repo_with_passing_hook):
yield repo_with_passing_hook
-def test_allow_unstaged_config_option(
+def test_error_with_unstaged_config(
cap_out, modified_config_repo, mock_out_store_directory,
):
- args = _get_opts(allow_unstaged_config=True)
- ret, printed = _do_run(cap_out, modified_config_repo, args)
- expected = (
- b'You have an unstaged config file and have specified the '
- b'--allow-unstaged-config option.'
- )
- assert expected in printed
- assert ret == 0
-
-
-def test_no_allow_unstaged_config_option(
- cap_out, modified_config_repo, mock_out_store_directory,
-):
- args = _get_opts(allow_unstaged_config=False)
+ args = _get_opts()
ret, printed = _do_run(cap_out, modified_config_repo, args)
assert b'Your .pre-commit-config.yaml is unstaged.' in printed
assert ret == 1
@pytest.mark.parametrize(
- 'opts',
- (
- {'allow_unstaged_config': False, 'no_stash': True},
- {'all_files': True},
- {'files': [C.CONFIG_FILE]},
- ),
+ 'opts', ({'all_files': True}, {'files': [C.CONFIG_FILE]}),
)
-def test_unstaged_message_suppressed(
+def test_no_unstaged_error_with_all_files_or_files(
cap_out, modified_config_repo, mock_out_store_directory, opts,
):
args = _get_opts(**opts)
diff --git a/tests/git_test.py b/tests/git_test.py
--- a/tests/git_test.py
+++ b/tests/git_test.py
@@ -137,8 +137,7 @@ def test_get_conflicted_files_in_submodule(in_conflicting_submodule):
def test_get_conflicted_files_unstaged_files(in_merge_conflict):
- # If they for whatever reason did pre-commit run --no-stash during a
- # conflict
+ """This case no longer occurs, but it is a useful test nonetheless"""
resolve_conflict()
# Make unstaged file.
| Deprecate and remove some (useless?) options
I find the following don't really have any good use cases (and don't come up in normal day-to-day) and are undocumented beyond `--help`. **I'm proposing removing these options**:
## `pre-commit run --no-stash`
This disables the auto-stashing of files when running -- though this is already the case for `pre-commit run --all-files` and `pre-commit run --files ...`.
The behaviour of `--no-stash` (without using `--no-stash`) can be achieved via `git diff --name-only | xargs pre-commit run --files`
It was added [along with the avoiding behaviour](https://github.com/pre-commit/pre-commit/pull/80) in the same pull request. I want to say this was my first idea for "fixing" the [original problem](https://github.com/pre-commit/pre-commit/issues/68) and then I forgot to undo it.
## `pre-commit run --allow-unstaged-config`
This (unfortunately) collides with `pre-commit run --all` (prefix match of `pre-commit run --all-files`) so I've wanted to get rid of it anyway.
This allows one to run with an unstaged configuration, which then (in most cases) gets the changes swiped out from under you and causes confusing situations where the hooks that are run aren't what was on disk at the time of running. The warning that's printed when doing this also explains this.
This was [originally my idea](https://github.com/pre-commit/pre-commit/issues/157#issuecomment-99080756) but now I think we can just do without the option at all -- requiring the pre-commit configuration to be staged when running pre-commit.
| 2017-08-23T17:24:54 |
|
pre-commit/pre-commit | 600 | pre-commit__pre-commit-600 | [
"599"
] | 4aa787db19980593c0f73711f7133b495c346da6 | diff --git a/pre_commit/file_lock.py b/pre_commit/file_lock.py
--- a/pre_commit/file_lock.py
+++ b/pre_commit/file_lock.py
@@ -12,18 +12,22 @@
_region = 0xffff
@contextlib.contextmanager
- def _locked(fileno):
- while True:
- try:
- msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)
- except OSError as e:
- # Locking violation. Returned when the _LK_LOCK or _LK_RLCK
- # flag is specified and the file cannot be locked after 10
- # attempts.
- if e.errno != errno.EDEADLOCK:
- raise
- else:
- break
+ def _locked(fileno, blocked_cb):
+ try:
+ msvcrt.locking(fileno, msvcrt.LK_NBLCK, _region)
+ except IOError:
+ blocked_cb()
+ while True:
+ try:
+ msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)
+ except IOError as e:
+ # Locking violation. Returned when the _LK_LOCK or _LK_RLCK
+ # flag is specified and the file cannot be locked after 10
+ # attempts.
+ if e.errno != errno.EDEADLOCK:
+ raise
+ else:
+ break
try:
yield
@@ -38,8 +42,12 @@ def _locked(fileno):
import fcntl
@contextlib.contextmanager
- def _locked(fileno):
- fcntl.flock(fileno, fcntl.LOCK_EX)
+ def _locked(fileno, blocked_cb):
+ try:
+ fcntl.flock(fileno, fcntl.LOCK_EX | fcntl.LOCK_NB)
+ except IOError:
+ blocked_cb()
+ fcntl.flock(fileno, fcntl.LOCK_EX)
try:
yield
finally:
@@ -47,7 +55,7 @@ def _locked(fileno):
@contextlib.contextmanager
-def lock(path):
+def lock(path, blocked_cb):
with open(path, 'a+') as f:
- with _locked(f.fileno()):
+ with _locked(f.fileno(), blocked_cb):
yield
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -47,10 +47,11 @@ def __init__(self, directory=None):
self.directory = directory
@contextlib.contextmanager
- def exclusive_lock(self, quiet=False):
- if not quiet:
+ def exclusive_lock(self):
+ def blocked_cb(): # pragma: no cover (tests are single-process)
logger.info('Locking pre-commit directory')
- with file_lock.lock(os.path.join(self.directory, '.lock')):
+
+ with file_lock.lock(os.path.join(self.directory, '.lock'), blocked_cb):
yield
def _write_readme(self):
@@ -89,7 +90,7 @@ def _create(self):
if os.path.exists(self.db_path):
return
- with self.exclusive_lock(quiet=True):
+ with self.exclusive_lock():
# Another process may have already completed this work
if os.path.exists(self.db_path): # pragma: no cover (race)
return
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -142,8 +142,7 @@ def _get_commit_output(tempdir_factory, touch_file='foo', **kwargs):
NORMAL_PRE_COMMIT_RUN = re.compile(
- r'^\[INFO\] Locking pre-commit directory\r?\n'
- r'\[INFO\] Initializing environment for .+\.\r?\n'
+ r'^\[INFO\] Initializing environment for .+\.\r?\n'
r'Bash hook\.+Passed\r?\n'
r'\[master [a-f0-9]{7}\] Commit!\r?\n' +
FILES_CHANGED +
@@ -255,8 +254,7 @@ def test_environment_not_sourced(tempdir_factory):
FAILING_PRE_COMMIT_RUN = re.compile(
- r'^\[INFO\] Locking pre-commit directory\r?\n'
- r'\[INFO\] Initializing environment for .+\.\r?\n'
+ r'^\[INFO\] Initializing environment for .+\.\r?\n'
r'Failing hook\.+Failed\r?\n'
r'hookid: failing_hook\r?\n'
r'\r?\n'
@@ -334,7 +332,6 @@ def test_install_existing_hook_no_overwrite_idempotent(tempdir_factory):
FAIL_OLD_HOOK = re.compile(
r'fail!\r?\n'
- r'\[INFO\] Locking pre-commit directory\r?\n'
r'\[INFO\] Initializing environment for .+\.\r?\n'
r'Bash hook\.+Passed\r?\n',
)
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -559,8 +559,8 @@ def test_reinstall(tempdir_factory, store, log_info_mock):
config = make_config_from_repo(path)
repo = Repository.create(config, store)
repo.require_installed()
- # We print some logging during clone (2) + install (4)
- assert log_info_mock.call_count == 6
+ # We print some logging during clone (1) + install (3)
+ assert log_info_mock.call_count == 4
log_info_mock.reset_mock()
# Reinstall with same repo should not trigger another install
repo.require_installed()
diff --git a/tests/store_test.py b/tests/store_test.py
--- a/tests/store_test.py
+++ b/tests/store_test.py
@@ -88,7 +88,7 @@ def test_clone(store, tempdir_factory, log_info_mock):
ret = store.clone(path, sha)
# Should have printed some stuff
- assert log_info_mock.call_args_list[1][0][0].startswith(
+ assert log_info_mock.call_args_list[0][0][0].startswith(
'Initializing environment for ',
)
| "Locking pre-commit directory" should only print if waiting for a lock
Otherwise this is just useless console noise
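The patch above achieves this by attempting a non-blocking lock first and only logging when that fails; here is a POSIX-only sketch of the same pattern (the real code also covers Windows via `msvcrt.locking`):
```python
import contextlib
import fcntl


@contextlib.contextmanager
def lock(path, blocked_cb):
    with open(path, 'a+') as f:
        try:
            # Uncontended case: acquire without blocking and stay silent.
            fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            # Only now is the message worth printing: we really are waiting.
            blocked_cb()
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
        try:
            yield
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
```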
| 2017-09-04T18:29:52 |
|
pre-commit/pre-commit | 602 | pre-commit__pre-commit-602 | [
"562"
] | ef8347cf2dedfd818f7a170f8461131fb75223dc | diff --git a/pre_commit/commands/clean.py b/pre_commit/commands/clean.py
--- a/pre_commit/commands/clean.py
+++ b/pre_commit/commands/clean.py
@@ -8,7 +8,9 @@
def clean(runner):
- if os.path.exists(runner.store.directory):
- rmtree(runner.store.directory)
- output.write_line('Cleaned {}.'.format(runner.store.directory))
+ legacy_path = os.path.expanduser('~/.pre-commit')
+ for directory in (runner.store.directory, legacy_path):
+ if os.path.exists(directory):
+ rmtree(directory)
+ output.write_line('Cleaned {}.'.format(directory))
return 0
diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py
--- a/pre_commit/error_handler.py
+++ b/pre_commit/error_handler.py
@@ -28,10 +28,11 @@ def _log_and_exit(msg, exc, formatted):
_to_bytes(exc), b'\n',
))
output.write(error_msg)
- output.write_line('Check the log at ~/.pre-commit/pre-commit.log')
store = Store()
store.require_created()
- with open(os.path.join(store.directory, 'pre-commit.log'), 'wb') as log:
+ log_path = os.path.join(store.directory, 'pre-commit.log')
+ output.write_line('Check the log at {}'.format(log_path))
+ with open(log_path, 'wb') as log:
output.write(error_msg, stream=log)
output.write_line(formatted, stream=log)
raise SystemExit(1)
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -29,9 +29,9 @@ def _get_default_directory():
`Store.get_default_directory` can be mocked in tests and
`_get_default_directory` can be tested.
"""
- return os.environ.get(
- 'PRE_COMMIT_HOME',
- os.path.join(os.path.expanduser('~'), '.pre-commit'),
+ return os.environ.get('PRE_COMMIT_HOME') or os.path.join(
+ os.environ.get('XDG_CACHE_HOME') or os.path.expanduser('~/.cache'),
+ 'pre-commit',
)
| diff --git a/tests/commands/clean_test.py b/tests/commands/clean_test.py
--- a/tests/commands/clean_test.py
+++ b/tests/commands/clean_test.py
@@ -2,18 +2,35 @@
import os.path
+import mock
+import pytest
+
from pre_commit.commands.clean import clean
from pre_commit.util import rmtree
-def test_clean(runner_with_mocked_store):
[email protected](autouse=True)
+def fake_old_dir(tempdir_factory):
+ fake_old_dir = tempdir_factory.get()
+
+ def _expanduser(path, *args, **kwargs):
+ assert path == '~/.pre-commit'
+ return fake_old_dir
+
+ with mock.patch.object(os.path, 'expanduser', side_effect=_expanduser):
+ yield fake_old_dir
+
+
+def test_clean(runner_with_mocked_store, fake_old_dir):
+ assert os.path.exists(fake_old_dir)
assert os.path.exists(runner_with_mocked_store.store.directory)
clean(runner_with_mocked_store)
+ assert not os.path.exists(fake_old_dir)
assert not os.path.exists(runner_with_mocked_store.store.directory)
def test_clean_empty(runner_with_mocked_store):
- """Make sure clean succeeds when we the directory doesn't exist."""
+ """Make sure clean succeeds when the directory doesn't exist."""
rmtree(runner_with_mocked_store.store.directory)
assert not os.path.exists(runner_with_mocked_store.store.directory)
clean(runner_with_mocked_store)
diff --git a/tests/error_handler_test.py b/tests/error_handler_test.py
--- a/tests/error_handler_test.py
+++ b/tests/error_handler_test.py
@@ -81,12 +81,12 @@ def test_log_and_exit(cap_out, mock_out_store_directory):
)
printed = cap_out.get()
+ log_file = os.path.join(mock_out_store_directory, 'pre-commit.log')
assert printed == (
'msg: FatalError: hai\n'
- 'Check the log at ~/.pre-commit/pre-commit.log\n'
+ 'Check the log at {}\n'.format(log_file)
)
- log_file = os.path.join(mock_out_store_directory, 'pre-commit.log')
assert os.path.exists(log_file)
contents = io.open(log_file).read()
assert contents == (
@@ -102,6 +102,7 @@ def test_error_handler_non_ascii_exception(mock_out_store_directory):
def test_error_handler_no_tty(tempdir_factory):
+ pre_commit_home = tempdir_factory.get()
output = cmd_output_mocked_pre_commit_home(
sys.executable, '-c',
'from __future__ import unicode_literals\n'
@@ -110,8 +111,10 @@ def test_error_handler_no_tty(tempdir_factory):
' raise ValueError("\\u2603")\n',
retcode=1,
tempdir_factory=tempdir_factory,
+ pre_commit_home=pre_commit_home,
)
+ log_file = os.path.join(pre_commit_home, 'pre-commit.log')
assert output[1].replace('\r', '') == (
'An unexpected error has occurred: ValueError: ☃\n'
- 'Check the log at ~/.pre-commit/pre-commit.log\n'
+ 'Check the log at {}\n'.format(log_file)
)
diff --git a/tests/main_test.py b/tests/main_test.py
--- a/tests/main_test.py
+++ b/tests/main_test.py
@@ -2,6 +2,7 @@
from __future__ import unicode_literals
import argparse
+import os.path
import mock
import pytest
@@ -121,10 +122,11 @@ def test_expected_fatal_error_no_git_repo(
with cwd(tempdir_factory.get()):
with pytest.raises(SystemExit):
main.main([])
+ log_file = os.path.join(mock_out_store_directory, 'pre-commit.log')
assert cap_out.get() == (
'An error has occurred: FatalError: git failed. '
'Is it installed, and are you in a Git repository directory?\n'
- 'Check the log at ~/.pre-commit/pre-commit.log\n'
+ 'Check the log at {}\n'.format(log_file)
)
diff --git a/tests/store_test.py b/tests/store_test.py
--- a/tests/store_test.py
+++ b/tests/store_test.py
@@ -29,7 +29,15 @@ def test_our_session_fixture_works():
def test_get_default_directory_defaults_to_home():
# Not we use the module level one which is not mocked
ret = _get_default_directory()
- assert ret == os.path.join(os.path.expanduser('~'), '.pre-commit')
+ assert ret == os.path.join(os.path.expanduser('~/.cache'), 'pre-commit')
+
+
+def test_adheres_to_xdg_specification():
+ with mock.patch.dict(
+ os.environ, {'XDG_CACHE_HOME': '/tmp/fakehome'},
+ ):
+ ret = _get_default_directory()
+ assert ret == os.path.join('/tmp/fakehome', 'pre-commit')
def test_uses_environment_variable_when_present():
| pre-commit should meet the XDG Base Directory Specification
The XDG Base Directory Specification is quite common now. Just run `ls ~/.cache ~/.config ~/.local` to see it in use.
I think `~/.pre-commit` should be moved to `$XDG_CACHE_HOME` or `$HOME/.cache`
https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
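The patch above resolves the cache location roughly like this, shown here as a standalone sketch of the same lookup order (explicit override first, then `$XDG_CACHE_HOME`, then the spec's `~/.cache` fallback):
```python
import os
import os.path


def default_cache_directory():
    # PRE_COMMIT_HOME wins, then $XDG_CACHE_HOME, then ~/.cache per the spec.
    return os.environ.get('PRE_COMMIT_HOME') or os.path.join(
        os.environ.get('XDG_CACHE_HOME') or os.path.expanduser('~/.cache'),
        'pre-commit',
    )


if __name__ == '__main__':
    print(default_cache_directory())
```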
| Pull requests welcome, note that this also has to work well for windows which doesn't adhere to XDG.
I've been hesitant to move the directory as it'd be a bit awkward for current users (though perhaps `clean` can just be updated to remove both paths for some amount of time?)
> Pull requests welcome, note that this also has to work well for windows which doesn't adhere to XDG.
Is there any documentation for pre-commit on Windows?
Is it supposed to run on:
- the [official Python release for Windows](https://www.python.org/downloads/release/python-362/);
- Cygwin;
- [WSL](https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux);
- all of the above?
On which Windows version is it supposed to run?
How do you proceed to test it on Windows? Do you use a VM or something?
> I've been hesitant to move the directory as it'd be a bit awkward for current users (though perhaps `clean` can just be updated to remove both paths for some amount of time?)
I understand. But it won't break anything since it would just re-cache all the needed stuff, right? We can imagine three scenarios if `~/.pre-commit` already exists:
1. Move the existing `~/.pre-commit` to `$XDG_CACHE_HOME` or `$HOME/.cache`, with or without a warning (see `2.`);
2. Don't move the existing `~/.pre-commit`, but display a warning before re-caching;
3. Just re-cache the needed parts without displaying any message (and maybe delete the existing `~/.pre-commit`) ¯\\\_(ツ)\_/¯
Well, it has already been done, in a sense, with the transition from `hooks.yaml` to `.pre-commit-hooks.yaml`.
@nagromc If you ^F windows on http://pre-commit.com you can see the current support matrix
It is tested automatically through appveyor -- my desktop at home also runs windows, so I often do more-difficult debugging on that.
As for the three solutions above:
1. Isn't feasible -- virtualenvs are not relocatable (the shebangs contain full paths to the interpreter)
2. A possibility, though...
3. sounds best :)
OK I'll give it a try when I have time. Didn't know about AppVeyor.
And for the third solution, should we delete `~/.pre-commit`?
I think just leave it alone for the most part (unless `pre-commit clean` is run, we can include it in there for some amount of time and then after some amount of time pretend like it never existed) | 2017-09-04T21:51:16 |
pre-commit/pre-commit | 605 | pre-commit__pre-commit-605 | [
"603"
] | 8b14c6c5ae24757b0f6f60979a9ffe06039c298f | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -36,13 +36,6 @@ def _hook_msg_start(hook, verbose):
)
-def get_changed_files(new, old):
- return cmd_output(
- 'git', 'diff', '--no-ext-diff', '--name-only',
- '{}...{}'.format(old, new),
- )[1].splitlines()
-
-
def filter_filenames_by_types(filenames, types, exclude_types):
types, exclude_types = frozenset(types), frozenset(exclude_types)
ret = []
@@ -56,7 +49,7 @@ def filter_filenames_by_types(filenames, types, exclude_types):
def get_filenames(args, include_expr, exclude_expr):
if args.origin and args.source:
getter = git.get_files_matching(
- lambda: get_changed_files(args.origin, args.source),
+ lambda: git.get_changed_files(args.origin, args.source),
)
elif args.hook_stage == 'commit-msg':
def getter(*_):
diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -15,6 +15,14 @@
logger = logging.getLogger('pre_commit')
+def zsplit(s):
+ s = s.strip('\0')
+ if s:
+ return s.split('\0')
+ else:
+ return []
+
+
def get_root():
try:
return cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()
@@ -67,25 +75,32 @@ def get_conflicted_files():
# If they resolved the merge conflict by choosing a mesh of both sides
# this will also include the conflicted files
tree_hash = cmd_output('git', 'write-tree')[1].strip()
- merge_diff_filenames = cmd_output(
- 'git', 'diff', '--no-ext-diff',
- '-m', tree_hash, 'HEAD', 'MERGE_HEAD', '--name-only',
- )[1].splitlines()
+ merge_diff_filenames = zsplit(cmd_output(
+ 'git', 'diff', '--name-only', '--no-ext-diff', '-z',
+ '-m', tree_hash, 'HEAD', 'MERGE_HEAD',
+ )[1])
return set(merge_conflict_filenames) | set(merge_diff_filenames)
@memoize_by_cwd
def get_staged_files():
- return cmd_output(
- 'git', 'diff', '--staged', '--name-only', '--no-ext-diff',
+ return zsplit(cmd_output(
+ 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',
# Everything except for D
'--diff-filter=ACMRTUXB',
- )[1].splitlines()
+ )[1])
@memoize_by_cwd
def get_all_files():
- return cmd_output('git', 'ls-files')[1].splitlines()
+ return zsplit(cmd_output('git', 'ls-files', '-z')[1])
+
+
+def get_changed_files(new, old):
+ return zsplit(cmd_output(
+ 'git', 'diff', '--name-only', '--no-ext-diff', '-z',
+ '{}...{}'.format(old, new),
+ )[1])
def get_files_matching(all_file_list_strategy):
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -15,7 +15,6 @@
from pre_commit.commands.run import _compute_cols
from pre_commit.commands.run import _get_skips
from pre_commit.commands.run import _has_unmerged_paths
-from pre_commit.commands.run import get_changed_files
from pre_commit.commands.run import run
from pre_commit.runner import Runner
from pre_commit.util import cmd_output
@@ -501,18 +500,6 @@ def test_hook_install_failure(mock_out_store_directory, tempdir_factory):
assert '☃'.encode('UTF-8') + '²'.encode('latin1') in stdout
-def test_get_changed_files():
- files = get_changed_files(
- '78c682a1d13ba20e7cb735313b9314a74365cd3a',
- '3387edbb1288a580b37fe25225aa0b856b18ad1a',
- )
- assert files == ['CHANGELOG.md', 'setup.py']
-
- # files changed in source but not in origin should not be returned
- files = get_changed_files('HEAD~10', 'HEAD')
- assert files == []
-
-
def test_lots_of_files(mock_out_store_directory, tempdir_factory):
# windows xargs seems to have a bug, here's a regression test for
# our workaround
diff --git a/tests/git_test.py b/tests/git_test.py
--- a/tests/git_test.py
+++ b/tests/git_test.py
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import unicode_literals
@@ -162,3 +163,63 @@ def test_get_conflicted_files_unstaged_files(in_merge_conflict):
def test_parse_merge_msg_for_conflicts(input, expected_output):
ret = git.parse_merge_msg_for_conflicts(input)
assert ret == expected_output
+
+
+def test_get_changed_files():
+ files = git.get_changed_files(
+ '78c682a1d13ba20e7cb735313b9314a74365cd3a',
+ '3387edbb1288a580b37fe25225aa0b856b18ad1a',
+ )
+ assert files == ['CHANGELOG.md', 'setup.py']
+
+ # files changed in source but not in origin should not be returned
+ files = git.get_changed_files('HEAD~10', 'HEAD')
+ assert files == []
+
+
[email protected](
+ ('s', 'expected'),
+ (
+ ('foo\0bar\0', ['foo', 'bar']),
+ ('foo\0', ['foo']),
+ ('', []),
+ ('foo', ['foo']),
+ ),
+)
+def test_zsplit(s, expected):
+ assert git.zsplit(s) == expected
+
+
[email protected]
+def non_ascii_repo(tmpdir):
+ repo = tmpdir.join('repo').ensure_dir()
+ with repo.as_cwd():
+ cmd_output('git', 'init', '.')
+ cmd_output('git', 'commit', '--allow-empty', '-m', 'initial commit')
+ repo.join('интервью').ensure()
+ cmd_output('git', 'add', '.')
+ cmd_output('git', 'commit', '--allow-empty', '-m', 'initial commit')
+ yield repo
+
+
+def test_all_files_non_ascii(non_ascii_repo):
+ ret = git.get_all_files()
+ assert ret == ['интервью']
+
+
+def test_staged_files_non_ascii(non_ascii_repo):
+ non_ascii_repo.join('интервью').write('hi')
+ cmd_output('git', 'add', '.')
+ assert git.get_staged_files() == ['интервью']
+
+
+def test_changed_files_non_ascii(non_ascii_repo):
+ ret = git.get_changed_files('HEAD', 'HEAD^')
+ assert ret == ['интервью']
+
+
+def test_get_conflicted_files_non_ascii(in_merge_conflict):
+ open('интервью', 'a').close()
+ cmd_output('git', 'add', '.')
+ ret = git.get_conflicted_files()
+ assert ret == {'conflict_file', 'интервью'}
| Non-English directories are skipped
Looks like pre-commit skips non-English directories. Please take a look at those:
* https://github.com/GolangShow/golangshow.com/commit/df0529b104739ba8d1de9182d37dfb80bbddf679
* https://travis-ci.org/GolangShow/golangshow.com/builds/271948939
A relevant bit from the full Travis build output:
```
$ pre-commit run --all-files
[…]
Trim Trailing Whitespace.................................................Failed
hookid: trailing-whitespace
Files were modified by this hook. Additional output:
Fixing public/episode/2017/04-14-096/index.html
Fixing public/episode/2016/03-31-050/index.html
[…]
Fixing public/episode/2016/09-01-072/index.html
Fixing public/episode/2015/12-29-036/index.html
The command "pre-commit run --all-files" exited with 1.
$ git status
HEAD detached at df0529b
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: "public/categories/\320\263\320\276\321\201\321\202\320\270/index.html"
modified: "public/categories/\320\263\320\276\321\201\321\202\320\270/index.xml"
modified: "public/categories/\320\270\320\275\321\202\320\265\321\200\320\262\321\214\321\216/index.html"
modified: "public/categories/\320\270\320\275\321\202\320\265\321\200\320\262\321\214\321\216/index.xml"
modified: "public/categories/\320\270\321\202\320\276\320\263\320\270/index.html"
modified: "public/categories/\320\270\321\202\320\276\320\263\320\270/index.xml"
modified: "public/categories/\320\272\320\260\320\274\320\270\320\275/index.html"
modified: "public/categories/\320\272\320\260\320\274\320\270\320\275/index.xml"
modified: "public/categories/\320\275\320\276\320\262\320\276\321\201\321\202\320\270/index.html"
modified: "public/categories/\320\275\320\276\320\262\320\276\321\201\321\202\320\270/index.xml"
modified: public/js/highlight.pack.js
modified: public/page/1/index.html
no changes added to commit (use "git add" and/or "git commit -a")
```
Those directories in `public/categories/` are `гости`, `итоги`, `камин`, `новости`, and `интервью`.
| Thanks for the report! I should be able to reproduce and fix this pretty easily, let me see if there's a workaround to get you going quicker first :)
Ah definitely a bug! It has to do with our usage of `git ls-files` and should be easy to fix.
A workaround is to bypass the bug in `--all-files` (though not pretty): `git ls-files -z | xargs -0 pre-commit run --files`
I'll get a fix out for you asap! | 2017-09-05T15:06:06 |
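The patch above fixes this by asking git for NUL-delimited output (`-z`) and splitting on `\0`, so non-ASCII paths come back as-is rather than in the escaped form shown by `git status`. A standalone demonstration of that parsing, reusing the `zsplit()` helper from the patch (run it inside any git checkout):

```python
import subprocess


def zsplit(s):
    # same idea as zsplit() in the patch: strip and split on NUL separators
    s = s.strip('\0')
    return s.split('\0') if s else []


if __name__ == '__main__':
    out = subprocess.check_output(('git', 'ls-files', '-z')).decode('utf-8')
    for filename in zsplit(out):
        print(filename)
```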
pre-commit/pre-commit | 617 | pre-commit__pre-commit-617 | [
"281"
] | 3cc5aa023ed1068eca4f631fa29db112b8d1fc5e | diff --git a/pre_commit/clientlib.py b/pre_commit/clientlib.py
--- a/pre_commit/clientlib.py
+++ b/pre_commit/clientlib.py
@@ -130,6 +130,7 @@ def validate_manifest_main(argv=None):
'Config', None,
schema.RequiredRecurse('repos', schema.Array(CONFIG_REPO_DICT)),
+ schema.Optional('exclude', schema.check_regex, '^$'),
schema.Optional('fail_fast', schema.check_bool, False),
)
diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -3,6 +3,7 @@
import logging
import os
+import re
import subprocess
import sys
@@ -36,7 +37,19 @@ def _hook_msg_start(hook, verbose):
)
-def filter_filenames_by_types(filenames, types, exclude_types):
+def _filter_by_include_exclude(filenames, include, exclude):
+ include_re, exclude_re = re.compile(include), re.compile(exclude)
+ return {
+ filename for filename in filenames
+ if (
+ include_re.search(filename) and
+ not exclude_re.search(filename) and
+ os.path.lexists(filename)
+ )
+ }
+
+
+def _filter_by_types(filenames, types, exclude_types):
types, exclude_types = frozenset(types), frozenset(exclude_types)
ret = []
for filename in filenames:
@@ -46,34 +59,15 @@ def filter_filenames_by_types(filenames, types, exclude_types):
return tuple(ret)
-def get_filenames(args, include_expr, exclude_expr):
- if args.origin and args.source:
- getter = git.get_files_matching(
- lambda: git.get_changed_files(args.origin, args.source),
- )
- elif args.hook_stage == 'commit-msg':
- def getter(*_):
- return (args.commit_msg_filename,)
- elif args.files:
- getter = git.get_files_matching(lambda: args.files)
- elif args.all_files:
- getter = git.get_all_files_matching
- elif git.is_in_merge_conflict():
- getter = git.get_conflicted_files_matching
- else:
- getter = git.get_staged_files_matching
- return getter(include_expr, exclude_expr)
-
-
SKIPPED = 'Skipped'
NO_FILES = '(no files to check)'
-def _run_single_hook(hook, repo, args, skips, cols):
- filenames = get_filenames(args, hook['files'], hook['exclude'])
- filenames = filter_filenames_by_types(
- filenames, hook['types'], hook['exclude_types'],
- )
+def _run_single_hook(filenames, hook, repo, args, skips, cols):
+ include, exclude = hook['files'], hook['exclude']
+ filenames = _filter_by_include_exclude(filenames, include, exclude)
+ types, exclude_types = hook['types'], hook['exclude_types']
+ filenames = _filter_by_types(filenames, types, exclude_types)
if hook['id'] in skips:
output.write(get_hook_message(
_hook_msg_start(hook, args.verbose),
@@ -169,13 +163,30 @@ def _compute_cols(hooks, verbose):
return max(cols, 80)
+def _all_filenames(args):
+ if args.origin and args.source:
+ return git.get_changed_files(args.origin, args.source)
+ elif args.hook_stage == 'commit-msg':
+ return (args.commit_msg_filename,)
+ elif args.files:
+ return args.files
+ elif args.all_files:
+ return git.get_all_files()
+ elif git.is_in_merge_conflict():
+ return git.get_conflicted_files()
+ else:
+ return git.get_staged_files()
+
+
def _run_hooks(config, repo_hooks, args, environ):
"""Actually run the hooks."""
skips = _get_skips(environ)
cols = _compute_cols([hook for _, hook in repo_hooks], args.verbose)
+ filenames = _all_filenames(args)
+ filenames = _filter_by_include_exclude(filenames, '', config['exclude'])
retval = 0
for repo, hook in repo_hooks:
- retval |= _run_single_hook(hook, repo, args, skips, cols)
+ retval |= _run_single_hook(filenames, hook, repo, args, skips, cols)
if retval and config['fail_fast']:
break
if (
diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -1,15 +1,12 @@
from __future__ import unicode_literals
-import functools
import logging
import os.path
-import re
import sys
from pre_commit.errors import FatalError
from pre_commit.util import CalledProcessError
from pre_commit.util import cmd_output
-from pre_commit.util import memoize_by_cwd
logger = logging.getLogger('pre_commit')
@@ -63,7 +60,6 @@ def parse_merge_msg_for_conflicts(merge_msg):
]
-@memoize_by_cwd
def get_conflicted_files():
logger.info('Checking merge-conflict files only.')
# Need to get the conflicted files from the MERGE_MSG because they could
@@ -82,7 +78,6 @@ def get_conflicted_files():
return set(merge_conflict_filenames) | set(merge_diff_filenames)
-@memoize_by_cwd
def get_staged_files():
return zsplit(cmd_output(
'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',
@@ -91,7 +86,6 @@ def get_staged_files():
)[1])
-@memoize_by_cwd
def get_all_files():
return zsplit(cmd_output('git', 'ls-files', '-z')[1])
@@ -103,29 +97,6 @@ def get_changed_files(new, old):
)[1])
-def get_files_matching(all_file_list_strategy):
- @functools.wraps(all_file_list_strategy)
- @memoize_by_cwd
- def wrapper(include_expr, exclude_expr):
- include_regex = re.compile(include_expr)
- exclude_regex = re.compile(exclude_expr)
- return {
- filename
- for filename in all_file_list_strategy()
- if (
- include_regex.search(filename) and
- not exclude_regex.search(filename) and
- os.path.lexists(filename)
- )
- }
- return wrapper
-
-
-get_staged_files_matching = get_files_matching(get_staged_files)
-get_all_files_matching = get_files_matching(get_all_files)
-get_conflicted_files_matching = get_files_matching(get_conflicted_files)
-
-
def check_for_cygwin_mismatch():
"""See https://github.com/pre-commit/pre-commit/issues/354"""
if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -12,6 +12,7 @@
import pre_commit.constants as C
from pre_commit.commands.install_uninstall import install
from pre_commit.commands.run import _compute_cols
+from pre_commit.commands.run import _filter_by_include_exclude
from pre_commit.commands.run import _get_skips
from pre_commit.commands.run import _has_unmerged_paths
from pre_commit.commands.run import run
@@ -25,6 +26,7 @@
from testing.fixtures import modify_config
from testing.fixtures import read_config
from testing.util import cmd_output_mocked_pre_commit_home
+from testing.util import xfailif_no_symlink
@pytest.yield_fixture
@@ -181,6 +183,21 @@ def test_exclude_types_hook_repository(
assert b'exe' not in printed
+def test_global_exclude(cap_out, tempdir_factory, mock_out_store_directory):
+ git_path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
+ with cwd(git_path):
+ with modify_config() as config:
+ config['exclude'] = '^foo.py$'
+ open('foo.py', 'a').close()
+ open('bar.py', 'a').close()
+ cmd_output('git', 'add', '.')
+ ret, printed = _do_run(cap_out, git_path, _get_opts(verbose=True))
+ assert ret == 0
+ # Does not contain foo.py since it was excluded
+ expected = b'hookid: bash_hook\n\nbar.py\nHello World\n\n'
+ assert printed.endswith(expected)
+
+
def test_show_diff_on_failure(
capfd, cap_out, tempdir_factory, mock_out_store_directory,
):
@@ -744,3 +761,45 @@ def test_fail_fast(
ret, printed = _do_run(cap_out, repo_with_failing_hook, _get_opts())
# it should have only run one hook
assert printed.count(b'Failing hook') == 1
+
+
[email protected]
+def some_filenames():
+ return (
+ '.pre-commit-hooks.yaml',
+ 'pre_commit/main.py',
+ 'pre_commit/git.py',
+ 'im_a_file_that_doesnt_exist.py',
+ )
+
+
+def test_include_exclude_base_case(some_filenames):
+ ret = _filter_by_include_exclude(some_filenames, '', '^$')
+ assert ret == {
+ '.pre-commit-hooks.yaml',
+ 'pre_commit/main.py',
+ 'pre_commit/git.py',
+ }
+
+
+@xfailif_no_symlink
+def test_matches_broken_symlink(tmpdir): # pramga: no cover (non-windows)
+ with tmpdir.as_cwd():
+ os.symlink('does-not-exist', 'link')
+ ret = _filter_by_include_exclude({'link'}, '', '^$')
+ assert ret == {'link'}
+
+
+def test_include_exclude_total_match(some_filenames):
+ ret = _filter_by_include_exclude(some_filenames, r'^.*\.py$', '^$')
+ assert ret == {'pre_commit/main.py', 'pre_commit/git.py'}
+
+
+def test_include_exclude_does_search_instead_of_match(some_filenames):
+ ret = _filter_by_include_exclude(some_filenames, r'\.yaml$', '^$')
+ assert ret == {'.pre-commit-hooks.yaml'}
+
+
+def test_include_exclude_exclude_removes_files(some_filenames):
+ ret = _filter_by_include_exclude(some_filenames, '', r'\.py$')
+ assert ret == {'.pre-commit-hooks.yaml'}
diff --git a/tests/git_test.py b/tests/git_test.py
--- a/tests/git_test.py
+++ b/tests/git_test.py
@@ -11,7 +11,6 @@
from pre_commit.util import cmd_output
from pre_commit.util import cwd
from testing.fixtures import git_dir
-from testing.util import xfailif_no_symlink
def test_get_root_at_root(tempdir_factory):
@@ -66,56 +65,6 @@ def test_cherry_pick_conflict(in_merge_conflict):
assert git.is_in_merge_conflict() is False
[email protected]
-def get_files_matching_func():
- def get_filenames():
- return (
- '.pre-commit-hooks.yaml',
- 'pre_commit/main.py',
- 'pre_commit/git.py',
- 'im_a_file_that_doesnt_exist.py',
- )
-
- return git.get_files_matching(get_filenames)
-
-
-def test_get_files_matching_base(get_files_matching_func):
- ret = get_files_matching_func('', '^$')
- assert ret == {
- '.pre-commit-hooks.yaml',
- 'pre_commit/main.py',
- 'pre_commit/git.py',
- }
-
-
-@xfailif_no_symlink
-def test_matches_broken_symlink(tmpdir): # pragma: no cover (non-windwos)
- with tmpdir.as_cwd():
- os.symlink('does-not-exist', 'link')
- func = git.get_files_matching(lambda: ('link',))
- assert func('', '^$') == {'link'}
-
-
-def test_get_files_matching_total_match(get_files_matching_func):
- ret = get_files_matching_func('^.*\\.py$', '^$')
- assert ret == {'pre_commit/main.py', 'pre_commit/git.py'}
-
-
-def test_does_search_instead_of_match(get_files_matching_func):
- ret = get_files_matching_func('\\.yaml$', '^$')
- assert ret == {'.pre-commit-hooks.yaml'}
-
-
-def test_does_not_include_deleted_fileS(get_files_matching_func):
- ret = get_files_matching_func('exist.py', '^$')
- assert ret == set()
-
-
-def test_exclude_removes_files(get_files_matching_func):
- ret = get_files_matching_func('', '\\.py$')
- assert ret == {'.pre-commit-hooks.yaml'}
-
-
def resolve_conflict():
with open('conflict_file', 'w') as conflicted_file:
conflicted_file.write('herp\nderp\n')
| Global configuration of files / exclude
A top-level entry of `files` and `exclude`, used as the default by all hooks that do not define those options themselves, would be really useful.
| Unfortunately, this suffers from the same problem as #240 in that the top level is currently a list and not a dictionary
| 2017-09-10T05:04:30 |
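The patch above does this for `exclude`: the full file list is filtered once with the top-level pattern, then each hook applies its own `files` / `exclude` on top. A condensed sketch of that composition (`_filter_by_include_exclude` is lifted from the patch, minus its check that each file still exists on disk; the filenames are made up):

```python
import re


def _filter_by_include_exclude(filenames, include, exclude):
    include_re, exclude_re = re.compile(include), re.compile(exclude)
    return {
        f for f in filenames
        if include_re.search(f) and not exclude_re.search(f)
    }


filenames = {'docs/conf.py', 'vendor/lib.py', 'pre_commit/main.py'}
# top-level `exclude:` from the config, applied once to everything
filenames = _filter_by_include_exclude(filenames, '', r'^vendor/')
# one hook's own `files:` / `exclude:`
print(sorted(_filter_by_include_exclude(filenames, r'\.py$', '^$')))
# ['docs/conf.py', 'pre_commit/main.py']
```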
pre-commit/pre-commit | 622 | pre-commit__pre-commit-622 | [
"621"
] | 773a817f7fa300c5561e7d27ff6a67b11c261fc5 | diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -8,6 +8,7 @@
from pre_commit.util import CalledProcessError
from pre_commit.util import cmd_output
+from pre_commit.util import mkdirp
logger = logging.getLogger('pre_commit')
@@ -43,6 +44,7 @@ def staged_files_only(patch_dir):
'Stashing unstaged files to {}.'.format(patch_filename),
)
# Save the current unstaged changes as a patch
+ mkdirp(patch_dir)
with io.open(patch_filename, 'wb') as patch_file:
patch_file.write(diff_stdout_binary)
| diff --git a/tests/staged_files_only_test.py b/tests/staged_files_only_test.py
--- a/tests/staged_files_only_test.py
+++ b/tests/staged_files_only_test.py
@@ -75,6 +75,15 @@ def test_foo_something_unstaged(foo_staged, patch_dir):
_test_foo_state(foo_staged, 'herp\nderp\n', 'AM')
+def test_does_not_crash_patch_dir_does_not_exist(foo_staged, patch_dir):
+ with io.open(foo_staged.foo_filename, 'w') as foo_file:
+ foo_file.write('hello\nworld\n')
+
+ shutil.rmtree(patch_dir)
+ with staged_files_only(patch_dir):
+ pass
+
+
def test_something_unstaged_ext_diff_tool(foo_staged, patch_dir, tmpdir):
diff_tool = tmpdir.join('diff-tool.sh')
diff_tool.write('#!/usr/bin/env bash\necho "$@"\n')
| Unstaged files + never ran pre-commit => "No such file or directory: .../.cache/pre-commit/patch..."
```
$ pre-commit run
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/asottile/.cache/pre-commit/patch1505686307.
An unexpected error has occurred: IOError: [Errno 2] No such file or directory: '/home/asottile/.cache/pre-commit/patch1505686307'
Check the log at /home/asottile/.cache/pre-commit/pre-commit.log
```
Stacktrace:
```python
Traceback (most recent call last):
File "/home/asottile/workspace/pre-commit/pre_commit/error_handler.py", line 44, in error_handler
yield
File "/home/asottile/workspace/pre-commit/pre_commit/main.py", line 231, in main
return run(runner, args)
File "/home/asottile/workspace/pre-commit/pre_commit/commands/run.py", line 249, in run
with ctx:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/home/asottile/workspace/pre-commit/pre_commit/staged_files_only.py", line 46, in staged_files_only
with io.open(patch_filename, 'wb') as patch_file:
IOError: [Errno 2] No such file or directory: '/home/asottile/.cache/pre-commit/patch1505686307'
```
| Originally from #620 | 2017-09-17T22:23:10 |
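The fix above amounts to creating the cache directory before writing the patch file. A standalone sketch of that guard (`mkdirp` here is only a stand-in for `pre_commit.util.mkdirp`, and the argument names are illustrative):

```python
import errno
import io
import os


def mkdirp(path):
    # create the directory tree, tolerating the case where it already exists
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise


def save_patch(patch_dir, patch_filename, diff_bytes):
    mkdirp(patch_dir)
    with io.open(os.path.join(patch_dir, patch_filename), 'wb') as f:
        f.write(diff_bytes)
```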
pre-commit/pre-commit | 624 | pre-commit__pre-commit-624 | [
"623"
] | f4595dce8cddd4192e0d4b9e29e2701a9d4169d7 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,11 +29,7 @@
packages=find_packages(exclude=('tests*', 'testing*')),
package_data={
'pre_commit': [
- 'resources/hook-tmpl',
- 'resources/pre-push-tmpl',
- 'resources/rbenv.tar.gz',
- 'resources/ruby-build.tar.gz',
- 'resources/ruby-download.tar.gz',
+ 'resources/*',
'resources/empty_template/*',
'resources/empty_template/.npmignore',
],
| commit-msg stage does not work
Everything works as expected when running just `pre-commit install`: the hooks work.
But when running `pre-commit install -t commit-msg`, an `IOError` happens, since the template could not be found.
Here's the detailed information.
## Env
- `python2.7`
- `pipenv 7.3.7`
- `pre-commit 1.1.1`
Actually tested with both `python2` and `python3`.
## Configuration
```yaml
- repo: local
hooks:
- id: gitlint
name: gitlint
entry: "bash -c 'gitlint lint'"
language: system
stages: [commit-msg]
- id: pytest
name: pytest
entry: "bash -c 'python manage.py test'"
language: system
- id: safety
name: safety
entry: "bash -c 'safety check'"
language: system
```
## Output
```
» pre-commit install -t commit-msg
Running in migration mode with existing hooks at /Users/sobolev/Desktop/test/.git/hooks/commit-msg.legacy
Use -f to use only pre-commit.
An unexpected error has occurred: IOError: [Errno 2] No such file or directory: '/Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources/commit-msg-tmpl'
Check the log at /Users/sobolev/.cache/pre-commit/pre-commit.log
```
When I look for `commit-msg-tmpl` with `ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources`, that's what I see:
```
(test-p4WySO70) ~/Desktop/test master ✗ ✚ 2 ⚡
» ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources
empty_template pre-push-tmpl ruby-build.tar.gz
hook-tmpl rbenv.tar.gz ruby-download.tar.gz
```
| 2017-09-20T12:30:19 |
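With the change above, the packaging block ships everything under `resources/` instead of an allow-list that has to be kept in sync whenever a new template (like `commit-msg-tmpl`) is added. Trimmed to the relevant bits, the resulting `setup()` call looks like this:

```python
from setuptools import find_packages, setup

setup(
    name='pre_commit',
    packages=find_packages(exclude=('tests*', 'testing*')),
    package_data={
        'pre_commit': [
            'resources/*',
            'resources/empty_template/*',
            'resources/empty_template/.npmignore',
        ],
    },
)
```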
||
pre-commit/pre-commit | 630 | pre-commit__pre-commit-630 | [
"404"
] | 3a7806ea30507dbfba6571260210420a62f8022d | diff --git a/pre_commit/languages/all.py b/pre_commit/languages/all.py
--- a/pre_commit/languages/all.py
+++ b/pre_commit/languages/all.py
@@ -5,6 +5,7 @@
from pre_commit.languages import golang
from pre_commit.languages import node
from pre_commit.languages import pcre
+from pre_commit.languages import pygrep
from pre_commit.languages import python
from pre_commit.languages import ruby
from pre_commit.languages import script
@@ -54,6 +55,7 @@
'golang': golang,
'node': node,
'pcre': pcre,
+ 'pygrep': pygrep,
'python': python,
'ruby': ruby,
'script': script,
diff --git a/pre_commit/languages/pygrep.py b/pre_commit/languages/pygrep.py
new file mode 100644
--- /dev/null
+++ b/pre_commit/languages/pygrep.py
@@ -0,0 +1,59 @@
+from __future__ import absolute_import
+from __future__ import unicode_literals
+
+import argparse
+import re
+import sys
+
+from pre_commit import output
+from pre_commit.languages import helpers
+from pre_commit.xargs import xargs
+
+
+ENVIRONMENT_DIR = None
+get_default_version = helpers.basic_get_default_version
+healthy = helpers.basic_healthy
+install_environment = helpers.no_install
+
+
+def _process_filename_by_line(pattern, filename):
+ retv = 0
+ with open(filename, 'rb') as f:
+ for line_no, line in enumerate(f, start=1):
+ if pattern.search(line):
+ retv = 1
+ output.write('{}:{}:'.format(filename, line_no))
+ output.write_line(line.rstrip(b'\r\n'))
+ return retv
+
+
+def run_hook(repo_cmd_runner, hook, file_args):
+ exe = (sys.executable, '-m', __name__)
+ exe += tuple(hook['args']) + (hook['entry'],)
+ return xargs(exe, file_args)
+
+
+def main(argv=None):
+ parser = argparse.ArgumentParser(
+ description=(
+ 'grep-like finder using python regexes. Unlike grep, this tool '
+ 'returns nonzero when it finds a match and zero otherwise. The '
+ 'idea here being that matches are "problems".'
+ ),
+ )
+ parser.add_argument('-i', '--ignore-case', action='store_true')
+ parser.add_argument('pattern', help='python regex pattern.')
+ parser.add_argument('filenames', nargs='*')
+ args = parser.parse_args(argv)
+
+ flags = re.IGNORECASE if args.ignore_case else 0
+ pattern = re.compile(args.pattern.encode(), flags)
+
+ retv = 0
+ for filename in args.filenames:
+ retv |= _process_filename_by_line(pattern, filename)
+ return retv
+
+
+if __name__ == '__main__':
+ exit(main())
diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -202,8 +202,8 @@ class LocalRepository(Repository):
def _cmd_runner_from_deps(self, language_name, deps):
"""local repositories have a cmd runner per hook"""
language = languages[language_name]
- # pcre / script / system / docker_image do not have environments so
- # they work out of the current directory
+ # pcre / pygrep / script / system / docker_image do not have
+ # environments so they work out of the current directory
if language.ENVIRONMENT_DIR is None:
return PrefixedCommandRunner(git.get_root())
else:
| diff --git a/testing/fixtures.py b/testing/fixtures.py
--- a/testing/fixtures.py
+++ b/testing/fixtures.py
@@ -73,7 +73,7 @@ def config_with_local_hooks():
('id', 'do_not_commit'),
('name', 'Block if "DO NOT COMMIT" is found'),
('entry', 'DO NOT COMMIT'),
- ('language', 'pcre'),
+ ('language', 'pygrep'),
('files', '^(.*)$'),
))],
),
diff --git a/testing/resources/pcre_hooks_repo/.pre-commit-hooks.yaml b/testing/resources/pcre_hooks_repo/.pre-commit-hooks.yaml
deleted file mode 100644
--- a/testing/resources/pcre_hooks_repo/.pre-commit-hooks.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-- id: regex-with-quotes
- name: Regex with quotes
- entry: "foo'bar"
- language: pcre
- files: ''
-- id: other-regex
- name: Other regex
- entry: ^\[INFO\]
- language: pcre
- files: ''
-- id: regex-with-grep-args
- name: Regex with grep extra arguments
- entry: foo.+bar
- language: pcre
- files: ''
- args: [-i]
diff --git a/tests/languages/pygrep_test.py b/tests/languages/pygrep_test.py
new file mode 100644
--- /dev/null
+++ b/tests/languages/pygrep_test.py
@@ -0,0 +1,40 @@
+from __future__ import absolute_import
+from __future__ import unicode_literals
+
+import pytest
+
+from pre_commit.languages import pygrep
+
+
[email protected]
+def some_files(tmpdir):
+ tmpdir.join('f1').write_binary(b'foo\nbar\n')
+ tmpdir.join('f2').write_binary(b'[INFO] hi\n')
+ tmpdir.join('f3').write_binary(b"with'quotes\n")
+ with tmpdir.as_cwd():
+ yield
+
+
[email protected]('some_files')
[email protected](
+ ('pattern', 'expected_retcode', 'expected_out'),
+ (
+ ('baz', 0, ''),
+ ('foo', 1, 'f1:1:foo\n'),
+ ('bar', 1, 'f1:2:bar\n'),
+ (r'(?i)\[info\]', 1, 'f2:1:[INFO] hi\n'),
+ ("h'q", 1, "f3:1:with'quotes\n"),
+ ),
+)
+def test_main(some_files, cap_out, pattern, expected_retcode, expected_out):
+ ret = pygrep.main((pattern, 'f1', 'f2', 'f3'))
+ out = cap_out.get()
+ assert ret == expected_retcode
+ assert out == expected_out
+
+
+def test_ignore_case(some_files, cap_out):
+ ret = pygrep.main(('--ignore-case', 'info', 'f1', 'f2', 'f3'))
+ out = cap_out.get()
+ assert ret == 1
+ assert out == 'f2:1:[INFO] hi\n'
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -1,6 +1,7 @@
from __future__ import absolute_import
from __future__ import unicode_literals
+import collections
import io
import os.path
import re
@@ -36,6 +37,10 @@
from testing.util import xfailif_windows_no_ruby
+def _norm_out(b):
+ return b.replace(b'\r\n', b'\n')
+
+
def _test_hook_repo(
tempdir_factory,
store,
@@ -54,7 +59,7 @@ def _test_hook_repo(
]
ret = repo.run_hook(hook_dict, args)
assert ret[0] == expected_return_code
- assert ret[1].replace(b'\r\n', b'\n') == expected
+ assert _norm_out(ret[1]) == expected
@pytest.mark.integration
@@ -114,7 +119,7 @@ def run_on_version(version, expected_output):
]
ret = repo.run_hook(hook_dict, [])
assert ret[0] == 0
- assert ret[1].replace(b'\r\n', b'\n') == expected_output
+ assert _norm_out(ret[1]) == expected_output
run_on_version('python3.4', b'3.4\n[]\nHello World\n')
run_on_version('python3.5', b'3.5\n[]\nHello World\n')
@@ -277,25 +282,6 @@ def test_missing_executable(tempdir_factory, store):
)
[email protected]
-def test_missing_pcre_support(tempdir_factory, store):
- orig_find_executable = parse_shebang.find_executable
-
- def no_grep(exe, **kwargs):
- if exe == pcre.GREP:
- return None
- else:
- return orig_find_executable(exe, **kwargs)
-
- with mock.patch.object(parse_shebang, 'find_executable', no_grep):
- _test_hook_repo(
- tempdir_factory, store, 'pcre_hooks_repo',
- 'regex-with-quotes', ['/dev/null'],
- 'Executable `{}` not found'.format(pcre.GREP).encode('UTF-8'),
- expected_return_code=1,
- )
-
-
@pytest.mark.integration
def test_run_a_script_hook(tempdir_factory, store):
_test_hook_repo(
@@ -330,85 +316,88 @@ def test_run_hook_with_curly_braced_arguments(tempdir_factory, store):
)
-@xfailif_no_pcre_support
[email protected]
-def test_pcre_hook_no_match(tempdir_factory, store):
- path = git_dir(tempdir_factory)
- with cwd(path):
- with io.open('herp', 'w') as herp:
- herp.write('foo')
-
- with io.open('derp', 'w') as derp:
- derp.write('bar')
-
- _test_hook_repo(
- tempdir_factory, store, 'pcre_hooks_repo',
- 'regex-with-quotes', ['herp', 'derp'], b'',
- )
-
- _test_hook_repo(
- tempdir_factory, store, 'pcre_hooks_repo',
- 'other-regex', ['herp', 'derp'], b'',
- )
-
+def _make_grep_repo(language, entry, store, args=()):
+ config = collections.OrderedDict((
+ ('repo', 'local'),
+ (
+ 'hooks', [
+ collections.OrderedDict((
+ ('id', 'grep-hook'),
+ ('name', 'grep-hook'),
+ ('language', language),
+ ('entry', entry),
+ ('args', args),
+ ('types', ['text']),
+ )),
+ ],
+ ),
+ ))
+ repo = Repository.create(config, store)
+ (_, hook), = repo.hooks
+ return repo, hook
-@xfailif_no_pcre_support
[email protected]
-def test_pcre_hook_matching(tempdir_factory, store):
- path = git_dir(tempdir_factory)
- with cwd(path):
- with io.open('herp', 'w') as herp:
- herp.write("\nherpfoo'bard\n")
- with io.open('derp', 'w') as derp:
- derp.write('[INFO] information yo\n')
[email protected]
+def greppable_files(tmpdir):
+ with tmpdir.as_cwd():
+ cmd_output('git', 'init', '.')
+ tmpdir.join('f1').write_binary(b"hello'hi\nworld\n")
+ tmpdir.join('f2').write_binary(b'foo\nbar\nbaz\n')
+ tmpdir.join('f3').write_binary(b'[WARN] hi\n')
+ yield tmpdir
- _test_hook_repo(
- tempdir_factory, store, 'pcre_hooks_repo',
- 'regex-with-quotes', ['herp', 'derp'], b"herp:2:herpfoo'bard\n",
- expected_return_code=1,
- )
- _test_hook_repo(
- tempdir_factory, store, 'pcre_hooks_repo',
- 'other-regex', ['herp', 'derp'], b'derp:1:[INFO] information yo\n',
- expected_return_code=1,
- )
+class TestPygrep(object):
+ language = 'pygrep'
+ def test_grep_hook_matching(self, greppable_files, store):
+ repo, hook = _make_grep_repo(self.language, 'ello', store)
+ ret, out, _ = repo.run_hook(hook, ('f1', 'f2', 'f3'))
+ assert ret == 1
+ assert _norm_out(out) == b"f1:1:hello'hi\n"
-@xfailif_no_pcre_support
[email protected]
-def test_pcre_hook_case_insensitive_option(tempdir_factory, store):
- path = git_dir(tempdir_factory)
- with cwd(path):
- with io.open('herp', 'w') as herp:
- herp.write('FoOoOoObar\n')
+ def test_grep_hook_case_insensitive(self, greppable_files, store):
+ repo, hook = _make_grep_repo(self.language, 'ELLO', store, args=['-i'])
+ ret, out, _ = repo.run_hook(hook, ('f1', 'f2', 'f3'))
+ assert ret == 1
+ assert _norm_out(out) == b"f1:1:hello'hi\n"
- _test_hook_repo(
- tempdir_factory, store, 'pcre_hooks_repo',
- 'regex-with-grep-args', ['herp'], b'herp:1:FoOoOoObar\n',
- expected_return_code=1,
- )
+ @pytest.mark.parametrize('regex', ('nope', "foo'bar", r'^\[INFO\]'))
+ def test_grep_hook_not_matching(self, regex, greppable_files, store):
+ repo, hook = _make_grep_repo(self.language, regex, store)
+ ret, out, _ = repo.run_hook(hook, ('f1', 'f2', 'f3'))
+ assert (ret, out) == (0, b'')
@xfailif_no_pcre_support
[email protected]
-def test_pcre_many_files(tempdir_factory, store):
- # This is intended to simulate lots of passing files and one failing file
- # to make sure it still fails. This is not the case when naively using
- # a system hook with `grep -H -n '...'` and expected_return_code=1.
- path = git_dir(tempdir_factory)
- with cwd(path):
- with io.open('herp', 'w') as herp:
- herp.write('[INFO] info\n')
-
- _test_hook_repo(
- tempdir_factory, store, 'pcre_hooks_repo',
- 'other-regex',
- ['/dev/null'] * 15000 + ['herp'],
- b'herp:1:[INFO] info\n',
- expected_return_code=1,
- )
+class TestPCRE(TestPygrep):
+ """organized as a class for xfailing pcre"""
+ language = 'pcre'
+
+ def test_pcre_hook_many_files(self, greppable_files, store):
+ # This is intended to simulate lots of passing files and one failing
+ # file to make sure it still fails. This is not the case when naively
+ # using a system hook with `grep -H -n '...'`
+ repo, hook = _make_grep_repo('pcre', 'ello', store)
+ ret, out, _ = repo.run_hook(hook, (os.devnull,) * 15000 + ('f1',))
+ assert ret == 1
+ assert _norm_out(out) == b"f1:1:hello'hi\n"
+
+ def test_missing_pcre_support(self, greppable_files, store):
+ orig_find_executable = parse_shebang.find_executable
+
+ def no_grep(exe, **kwargs):
+ if exe == pcre.GREP:
+ return None
+ else:
+ return orig_find_executable(exe, **kwargs)
+
+ with mock.patch.object(parse_shebang, 'find_executable', no_grep):
+ repo, hook = _make_grep_repo('pcre', 'ello', store)
+ ret, out, _ = repo.run_hook(hook, ('f1', 'f2', 'f3'))
+ assert ret == 1
+ expected = 'Executable `{}` not found'.format(pcre.GREP).encode()
+ assert out == expected
def _norm_pwd(path):
@@ -703,7 +692,7 @@ def test_local_python_repo(store):
(_, hook), = repo.hooks
ret = repo.run_hook(hook, ('filename',))
assert ret[0] == 0
- assert ret[1].replace(b'\r\n', b'\n') == b"['filename']\nHello World\n"
+ assert _norm_out(ret[1]) == b"['filename']\nHello World\n"
def test_hook_id_not_present(tempdir_factory, store, fake_log_handler):
diff --git a/tests/runner_test.py b/tests/runner_test.py
--- a/tests/runner_test.py
+++ b/tests/runner_test.py
@@ -70,7 +70,7 @@ def test_local_hooks(tempdir_factory, mock_out_store_directory):
('id', 'do_not_commit'),
('name', 'Block if "DO NOT COMMIT" is found'),
('entry', 'DO NOT COMMIT'),
- ('language', 'pcre'),
+ ('language', 'pygrep'),
('files', '^(.*)$'),
)),
),
@@ -105,7 +105,7 @@ def test_local_hooks_alt_config(tempdir_factory, mock_out_store_directory):
('id', 'do_not_commit'),
('name', 'Block if "DO NOT COMMIT" is found'),
('entry', 'DO NOT COMMIT'),
- ('language', 'pcre'),
+ ('language', 'pygrep'),
('files', '^(.*)$'),
)),
),
| Expose `negate` and deprecate pcre hooks
See https://github.com/pre-commit/pre-commit/blob/b05cc4077e621cec637b22d6ad2b80ee84749f97/pre_commit/xargs.py#L48
I imagine something like this:
``` yaml
- id: test
language: system
negate_returncode: true
entry: grep -P herp
```
Instead of the equivalent pcre hook
| Relying on standard Unix system commands won't make `pre-commit` very Windows-friendly.
That wouldn't be a regression, pcre hooks run `grep -P` on windows right now: https://github.com/pre-commit/pre-commit/blob/8837cfa7ffcc419216d4e01392cee0f1ceee9c88/pre_commit/languages/pcre.py#L9
Most of the reason I want to make pcre less special is due to the complexity maintained here and some oddities about cygwin compiled binaries (our tests for pcre are xfailed on windows despite it providing a grep binary which responds to -P and for the most part works)
I understand. But maybe we could emulate `grep` in Python? I think that would not mean adding much code, and it would be much more robust and testable.
pcre hooks are pretty powerful -- I've written a few of them (usually hooks specific to certain projects) because they're cheap and easy (and I probably wouldn't have taken the time to write them if it was harder than just an entry in `.pre-commit-config.yaml`). I think they're a pretty cool feature.
Would a simple Python implementation that basically runs `re.search` on the file contents be enough? It's not quite pcre (so, probably a breaking change), but I think that'd be preferable to dropping regex matching entirely.
Yeah that seems fine -- that can probably be added in the same patch.
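That is essentially what the patch above introduces as the `pygrep` language. A trimmed-down, standalone version of the idea -- scan files line by line with a Python regex and exit nonzero on any match, since matches are "problems":

```python
from __future__ import print_function

import re
import sys


def main(pattern, filenames):
    regex = re.compile(pattern.encode())
    retv = 0
    for filename in filenames:
        with open(filename, 'rb') as f:
            for line_no, line in enumerate(f, start=1):
                if regex.search(line):
                    retv = 1
                    text = line.rstrip(b'\r\n').decode('utf-8', 'replace')
                    print('{}:{}:{}'.format(filename, line_no, text))
    return retv


if __name__ == '__main__':
    sys.exit(main(sys.argv[1], sys.argv[2:]))
```

Invoked as, say, `python grep_like.py 'DO NOT COMMIT' file1 file2 ...`, which is roughly how the real hook is wired up through `xargs` in the patch.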
| 2017-09-23T01:05:50 |
pre-commit/pre-commit | 680 | pre-commit__pre-commit-680 | [
"679"
] | 0628df535b47ee503efbda2b777254f1d7b7f4bc | diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -156,6 +156,21 @@ def clone_strategy(directory):
def make_local(self, deps):
def make_local_strategy(directory):
copy_tree_to_path(resource_filename('empty_template'), directory)
+
+ env = no_git_env()
+ name, email = 'pre-commit', '[email protected]'
+ env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name
+ env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email
+
+ # initialize the git repository so it looks more like cloned repos
+ def _git_cmd(*args):
+ cmd_output('git', '-C', directory, *args, env=env)
+
+ _git_cmd('init', '.')
+ _git_cmd('config', 'remote.origin.url', '<<unknown>>')
+ _git_cmd('add', '.')
+ _git_cmd('commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')
+
return self._new_repo(
'local:{}'.format(','.join(sorted(deps))), C.LOCAL_REPO_VERSION,
make_local_strategy,
| diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -541,6 +541,24 @@ def test_additional_golang_dependencies_installed(
assert 'hello' in binaries
+def test_local_golang_additional_dependencies(store):
+ config = {
+ 'repo': 'local',
+ 'hooks': [{
+ 'id': 'hello',
+ 'name': 'hello',
+ 'entry': 'hello',
+ 'language': 'golang',
+ 'additional_dependencies': ['github.com/golang/example/hello'],
+ }],
+ }
+ repo = Repository.create(config, store)
+ (_, hook), = repo.hooks
+ ret = repo.run_hook(hook, ('filename',))
+ assert ret[0] == 0
+ assert _norm_out(ret[1]) == b"Hello, Go examples!\n"
+
+
def test_reinstall(tempdir_factory, store, log_info_mock):
path = make_repo(tempdir_factory, 'python_hooks_repo')
config = make_config_from_repo(path)
| Crash on `local`-only `golang` repositories
While investigating: https://github.com/pre-commit/pre-commit-hooks/issues/255
Using this configuration:
```yaml
repos:
- repo: local
hooks:
- id: talisman
name: talisman
entry: talisman -githook pre-commit
pass_filenames: false
types: [text]
language: golang
additional_dependencies: [github.com/thoughtworks/talisman]
```
```
$ pre-commit run --all-files
[INFO] Initializing environment for local:github.com/thoughtworks/talisman.
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: Command: ('/usr/bin/git', 'config', 'remote.origin.url')
Return code: 1
Expected return code: 0
Output: (none)
Errors: (none)
Check the log at /home/asottile/.cache/pre-commit/pre-commit.log
```
```
An unexpected error has occurred: CalledProcessError: Command: ('/usr/bin/git', 'config', 'remote.origin.url')
Return code: 1
Expected return code: 0
Output: (none)
Errors: (none)
Traceback (most recent call last):
File "/tmp/wat/venv/local/lib/python2.7/site-packages/pre_commit/error_handler.py", line 47, in error_handler
yield
File "/tmp/wat/venv/local/lib/python2.7/site-packages/pre_commit/main.py", line 259, in main
return run(runner, args)
File "/tmp/wat/venv/local/lib/python2.7/site-packages/pre_commit/commands/run.py", line 256, in run
repo.require_installed()
File "/tmp/wat/venv/local/lib/python2.7/site-packages/pre_commit/repository.py", line 202, in require_installed
_install_all(self._venvs, self.repo_config['repo'], self.store)
File "/tmp/wat/venv/local/lib/python2.7/site-packages/pre_commit/repository.py", line 102, in _install_all
language.install_environment(cmd_runner, version, deps)
File "/tmp/wat/venv/local/lib/python2.7/site-packages/pre_commit/languages/golang.py", line 60, in install_environment
remote = git.get_remote_url(repo_cmd_runner.path())
File "/tmp/wat/venv/local/lib/python2.7/site-packages/pre_commit/git.py", line 41, in get_remote_url
ret = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)[1]
File "/tmp/wat/venv/local/lib/python2.7/site-packages/pre_commit/util.py", line 188, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
CalledProcessError: Command: ('/usr/bin/git', 'config', 'remote.origin.url')
Return code: 1
Expected return code: 0
Output: (none)
Errors: (none)
```
| 2018-01-09T17:46:44 |
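The fix makes the `local` template directory itself a git repository with a placeholder origin, so that `golang`'s `git config remote.origin.url` lookup succeeds. A standalone approximation of the initialization added in `Store.make_local` above (the helper name and the committer identity are illustrative):

```python
import os
import subprocess


def init_local_template(directory):
    # any fixed identity works here; it just lets the commit succeed even
    # when the machine has no global git config
    env = dict(
        os.environ,
        GIT_AUTHOR_NAME='pre-commit',
        GIT_AUTHOR_EMAIL='[email protected]',
        GIT_COMMITTER_NAME='pre-commit',
        GIT_COMMITTER_EMAIL='[email protected]',
    )

    def git(*args):
        subprocess.check_call(('git', '-C', directory) + args, env=env)

    git('init', '.')
    # placeholder remote -- enough for `git config remote.origin.url`
    git('config', 'remote.origin.url', '<<unknown>>')
    git('add', '.')
    git('commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')
```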
|
pre-commit/pre-commit | 685 | pre-commit__pre-commit-685 | [
"200"
] | 577f1093cf62502fbbd81ef770d2e29fba1e9253 | diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py
--- a/pre_commit/languages/node.py
+++ b/pre_commit/languages/node.py
@@ -7,6 +7,7 @@
from pre_commit.envcontext import envcontext
from pre_commit.envcontext import Var
from pre_commit.languages import helpers
+from pre_commit.languages.python import bin_dir
from pre_commit.util import clean_path_on_failure
from pre_commit.util import cmd_output
from pre_commit.xargs import xargs
@@ -17,10 +18,17 @@
healthy = helpers.basic_healthy
-def get_env_patch(venv): # pragma: windows no cover
+def _envdir(prefix, version):
+ directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
+ return prefix.path(directory)
+
+
+def get_env_patch(venv):
if sys.platform == 'cygwin': # pragma: no cover
_, win_venv, _ = cmd_output('cygpath', '-w', venv)
install_prefix = r'{}\bin'.format(win_venv.strip())
+ elif sys.platform == 'win32': # pragma: no cover
+ install_prefix = bin_dir(venv)
else:
install_prefix = venv
return (
@@ -28,29 +36,26 @@ def get_env_patch(venv): # pragma: windows no cover
('NPM_CONFIG_PREFIX', install_prefix),
('npm_config_prefix', install_prefix),
('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),
- ('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),
+ ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
)
@contextlib.contextmanager
-def in_env(prefix, language_version): # pragma: windows no cover
- envdir = prefix.path(
- helpers.environment_dir(ENVIRONMENT_DIR, language_version),
- )
- with envcontext(get_env_patch(envdir)):
+def in_env(prefix, language_version):
+ with envcontext(get_env_patch(_envdir(prefix, language_version))):
yield
-def install_environment(
- prefix, version, additional_dependencies,
-): # pragma: windows no cover
+def install_environment(prefix, version, additional_dependencies):
additional_dependencies = tuple(additional_dependencies)
assert prefix.exists('package.json')
- directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
+ envdir = _envdir(prefix, version)
- env_dir = prefix.path(directory)
- with clean_path_on_failure(env_dir):
- cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', env_dir]
+ # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath
+ if sys.platform == 'win32': # pragma: no cover
+ envdir = '\\\\?\\' + os.path.normpath(envdir)
+ with clean_path_on_failure(envdir):
+ cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', envdir]
if version != 'default':
cmd.extend(['-n', version])
cmd_output(*cmd)
@@ -62,6 +67,6 @@ def install_environment(
)
-def run_hook(prefix, hook, file_args): # pragma: windows no cover
+def run_hook(prefix, hook, file_args):
with in_env(prefix, hook['language_version']):
return xargs(helpers.to_cmd(hook), file_args)
| diff --git a/testing/resources/node_0_11_8_hooks_repo/.pre-commit-hooks.yaml b/testing/resources/node_0_11_8_hooks_repo/.pre-commit-hooks.yaml
deleted file mode 100644
--- a/testing/resources/node_0_11_8_hooks_repo/.pre-commit-hooks.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-- id: node-11-8-hook
- name: Node 0.11.8 hook
- entry: node-11-8-hook
- language: node
- language_version: 0.11.8
- files: \.js$
diff --git a/testing/resources/node_0_11_8_hooks_repo/package.json b/testing/resources/node_0_11_8_hooks_repo/package.json
deleted file mode 100644
--- a/testing/resources/node_0_11_8_hooks_repo/package.json
+++ /dev/null
@@ -1,5 +0,0 @@
-{
- "name": "node-11-8-hook",
- "version": "0.0.1",
- "bin": {"node-11-8-hook": "./bin/main.js"}
-}
diff --git a/testing/resources/node_versioned_hooks_repo/.pre-commit-hooks.yaml b/testing/resources/node_versioned_hooks_repo/.pre-commit-hooks.yaml
new file mode 100644
--- /dev/null
+++ b/testing/resources/node_versioned_hooks_repo/.pre-commit-hooks.yaml
@@ -0,0 +1,6 @@
+- id: versioned-node-hook
+ name: Versioned node hook
+ entry: versioned-node-hook
+ language: node
+ language_version: 9.3.0
+ files: \.js$
diff --git a/testing/resources/node_0_11_8_hooks_repo/bin/main.js b/testing/resources/node_versioned_hooks_repo/bin/main.js
similarity index 100%
rename from testing/resources/node_0_11_8_hooks_repo/bin/main.js
rename to testing/resources/node_versioned_hooks_repo/bin/main.js
diff --git a/testing/resources/node_versioned_hooks_repo/package.json b/testing/resources/node_versioned_hooks_repo/package.json
new file mode 100644
--- /dev/null
+++ b/testing/resources/node_versioned_hooks_repo/package.json
@@ -0,0 +1,5 @@
+{
+ "name": "versioned-node-hook",
+ "version": "0.0.1",
+ "bin": {"versioned-node-hook": "./bin/main.js"}
+}
diff --git a/testing/util.py b/testing/util.py
--- a/testing/util.py
+++ b/testing/util.py
@@ -1,6 +1,7 @@
from __future__ import unicode_literals
import os.path
+import sys
import pytest
@@ -42,9 +43,21 @@ def cmd_output_mocked_pre_commit_home(*args, **kwargs):
reason='Ruby support not yet implemented on windows.',
)
-xfailif_windows_no_node = pytest.mark.xfail(
- os.name == 'nt',
- reason='Node support not yet implemented on windows.',
+
+def broken_deep_listdir(): # pragma: no cover (platform specific)
+ if sys.platform != 'win32':
+ return False
+ try:
+ os.listdir(str('\\\\?\\') + os.path.abspath(str('.')))
+ except OSError:
+ return True
+ else:
+ return False
+
+
+xfailif_broken_deep_listdir = pytest.mark.xfail(
+ broken_deep_listdir(),
+ reason='Node on windows requires deep listdir',
)
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -31,8 +31,8 @@
from testing.util import get_resource_path
from testing.util import skipif_cant_run_docker
from testing.util import skipif_cant_run_swift
+from testing.util import xfailif_broken_deep_listdir
from testing.util import xfailif_no_pcre_support
-from testing.util import xfailif_windows_no_node
from testing.util import xfailif_windows_no_ruby
@@ -186,7 +186,7 @@ def test_run_a_docker_image_hook(tempdir_factory, store, hook_id):
)
-@xfailif_windows_no_node
+@xfailif_broken_deep_listdir
@pytest.mark.integration
def test_run_a_node_hook(tempdir_factory, store):
_test_hook_repo(
@@ -195,12 +195,12 @@ def test_run_a_node_hook(tempdir_factory, store):
)
-@xfailif_windows_no_node
+@xfailif_broken_deep_listdir
@pytest.mark.integration
def test_run_versioned_node_hook(tempdir_factory, store):
_test_hook_repo(
- tempdir_factory, store, 'node_0_11_8_hooks_repo',
- 'node-11-8-hook', [os.devnull], b'v0.11.8\nHello World\n',
+ tempdir_factory, store, 'node_versioned_hooks_repo',
+ 'versioned-node-hook', [os.devnull], b'v9.3.0\nHello World\n',
)
@@ -505,7 +505,7 @@ def test_additional_ruby_dependencies_installed(
assert 'tins' in output
-@xfailif_windows_no_node
+@xfailif_broken_deep_listdir
@pytest.mark.integration
def test_additional_node_dependencies_installed(
tempdir_factory, store,
| Windows: Node Support
This involves solving this ticket: https://github.com/ekalinin/nodeenv/issues/53
I've already started some work on this
| Track https://github.com/ekalinin/nodeenv/issues/140 for progress
Hi, I would appreciate more detail about what is broken and why it relates to `nodeenv`.
I found an apparent bug with node on Windows which may not be related to this bug. I also found a solution which I will explain below.
To summarise, the cmd given to `__popen` is the wrong cmd, i.e. the wrong exe is found by pre-commit.
I am using `pre-commit 0.12.2` and the `eslint` mirror (http://github.com/pre-commit/mirrors-eslint), `v3.9.1`.
I installed `eslint` itself through `npm` and it resides in my `node_modules` directory within my project folder. `npm` is `3.10.10` and `eslint` is `v3.15.0`.
The relevant block of my `.pre-commit-config.yaml` looks like this:
```
- repo: git://github.com/pre-commit/mirrors-eslint
sha: 'v3.9.1'
hooks:
- id: eslint
language: system
```
I invoked `eslint` like this from the windows command prompt (same thing happens from gitbash):
`pre-commit run --run-all eslint`
Note that I am in my python `virtualenv` and `node_modules/.bin` is in my PATH.
The result was a `FileNotFoundError`, raised from `util.cmd_output()` at the line `proc = __popen(cmd, **popen_kwargs)`.
The value of `cmd` was `('\\bin\\sh', 'C:\\<path to project>\\node_modules\\.bin\\eslint', '<js file for linting>', '<another js file for linting')`.
In my `node_modules` directory, I actually have the following files: `eslint` and `eslint.cmd`. Looking at the source code of `parse_shebang.find_executable()` I could see that there was a candidate list of filenames that it attempted to match, based on the executable file extensions of the Windows environment. The first item in the list was `eslint` without any extension, so it always matched.
In order to make `eslint.cmd` match inside `parse_shebang.find_executable()`, I altered the code.
With the altered code, the value of `cmd` was like this: `('C:\\<path to project>\\node_modules\\.bin\\eslint', '<js file for linting>', '<another js file for linting>')`.
When I subsequently invoked `eslint` as before, it worked in both the Windows cmd prompt and Git Bash.
I will submit a pull request for this shortly. I assume the new logic for matching files will not cause other problems.
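For illustration only -- this is not pre-commit's actual `parse_shebang` code -- the change described above boils down to trying the `PATHEXT`-suffixed candidates before the bare name, so that `eslint.cmd` wins over the extensionless `eslint` wrapper that Windows cannot execute directly:

```python
import os


def find_executable(exe, path=None):
    # hypothetical sketch: prefer eslint.cmd / eslint.exe / ... over `eslint`
    path = path if path is not None else os.environ.get('PATH', '')
    exts = [e for e in os.environ.get('PATHEXT', '').split(os.pathsep) if e]
    candidates = [exe + ext for ext in exts] + [exe]
    for directory in path.split(os.pathsep):
        for candidate in candidates:
            full = os.path.join(directory, candidate)
            if os.path.isfile(full) and os.access(full, os.X_OK):
                return full
    return None
```

On non-Windows machines `PATHEXT` is empty, so this degrades to a plain `which`-style lookup.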
A lot to parse but I'll try my best :)
So this refers to first-class support for node projects -- the goal of pre-commit is that you don't need any dependencies other than installing pre-commit. It can then handle creating isolated node / ruby / python / etc. environments without requiring the user to globally install things. Proper node support (from pre-commit's perspective) is to make `language: node` work without dependencies.
As for your hook configuration, you're not actually depending on the node environment that pre-commit may create so you may find a `local` configuration more convenient: http://pre-commit.com/#repository-local-hooks. Something like this
```yaml
- repo: local
hooks:
- id: eslint
name: eslint
entry: eslint
language: system
files: \.js$
```
As for the PR, yeah that sounds fine. I think windows also does a similar prioritization.
Thanks for the reply and explanation. Thanks for telling me about local hooks! I missed them in the docs and they would work much better for me.
neat, nodeenv has supported windows for a while. _Unfortunately_ our tests choke on this a bit:
```
E Errors:
E * Install prebuilt node (8.1.3) ..... done.
E You do not have sufficient privilege to perform this operation.
E You do not have sufficient privilege to perform this operation.
E Error: Failed to create nodejs.exe link
E * Install npm.js (latest) ... Traceback (most recent call last):
E File "c:\python27\Lib\runpy.py", line 162, in _run_module_as_main
E "__main__", fname, loader, pkg_name)
E File "c:\python27\Lib\runpy.py", line 72, in _run_code
E exec code in run_globals
E File "C:\Users\Anthony\Desktop\git\pre-commit\venv\lib\site-packages\nodeenv.py", line 1277, in <modul
e>
E main()
E File "C:\Users\Anthony\Desktop\git\pre-commit\venv\lib\site-packages\nodeenv.py", line 1028, in main
E create_environment(env_dir, opt)
E File "C:\Users\Anthony\Desktop\git\pre-commit\venv\lib\site-packages\nodeenv.py", line 872, in create_
environment
E instfunc(env_dir, src_dir, opt)
E File "C:\Users\Anthony\Desktop\git\pre-commit\venv\lib\site-packages\nodeenv.py", line 742, in install
_npm_win
E zipf.extractall(src_dir)
E File "c:\python27\Lib\zipfile.py", line 1053, in extractall
E self.extract(zipinfo, path, pwd)
E File "c:\python27\Lib\zipfile.py", line 1041, in extract
E return self._extract_member(member, path, pwd)
E File "c:\python27\Lib\zipfile.py", line 1092, in _extract_member
E os.mkdir(targetpath)
E WindowsError: [Error 206] The filename or extension is too long: 'C:\\temp\\a\\test_run_a_node_hook0\\0\
\.pre-commit\\repogflpmz\\node_env-default\\src\\npm-latest\\node_modules\\pacote\\node_modules\\make-fetch-happen\\node
_modules\\http-proxy-agent\\node_modules\\agent-base\\node_modules\\es6-promisify\\node_modules\\es6-promise\\dist'
```
I'm wondering whether picking a shorter tempdir will make it work better. Also wondering if this *just works* given my home directory depth.
Yeah even with significantly limiting the path depth, I'm still having [issues](https://github.com/ekalinin/nodeenv/issues/190)
See also https://github.com/npm/npm/issues/17663
Here's the new issue: https://github.com/npm/npm/issues/18978
Shouldn't enabling long paths help with this? Spoiler: it doesn't.

I would hope so, but I've also seen that even with the setting enabled it still doesn't work (and I don't have a good reason for _why_) (maybe needs a system restart to take?)
I'm going to try poking this again with the reproduction I made in npm/npm#18978 and see what registry settings I can tweak.
Perhaps it's as simple as a missing `"\\?\"` prefix...I'm not a windows dev; guessing based on [msft docs](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396). | 2018-01-12T06:54:04 |
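That is where the `node.py` change above landed: on Windows the environment directory is normalized and prefixed with `\\?\` before being handed to nodeenv/npm, opting in to extended-length paths. A minimal illustration (the example directory is made up):

```python
import os
import sys


def extended_length(path):
    # \\?\ opts Windows APIs in to extended-length paths, sidestepping the
    # legacy 260-character MAX_PATH limit; other platforms are untouched
    if sys.platform == 'win32':
        return '\\\\?\\' + os.path.normpath(os.path.abspath(path))
    return path


envdir = r'C:\Users\example\.cache\pre-commit\repoabcdef\node_env-default'
print(extended_length(envdir))
```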
pre-commit/pre-commit | 687 | pre-commit__pre-commit-687 | [
"634"
] | 65f60e25930a4979a4571e41f320b81f622b2556 | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -67,6 +67,15 @@ def _run_single_hook(filenames, hook, repo, args, skips, cols):
filenames = _filter_by_include_exclude(filenames, include, exclude)
types, exclude_types = hook['types'], hook['exclude_types']
filenames = _filter_by_types(filenames, types, exclude_types)
+
+ if hook['language'] == 'pcre':
+ logger.warning(
+ '`{}` (from {}) uses the deprecated pcre language.\n'
+ 'The pcre language is scheduled for removal in pre-commit 2.x.\n'
+ 'The pygrep language is a more portable (and usually drop-in) '
+ 'replacement.'.format(hook['id'], repo.repo_config['repo']),
+ )
+
if hook['id'] in skips:
output.write(get_hook_message(
_hook_msg_start(hook, args.verbose),
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -529,7 +529,7 @@ def test_push_hook(cap_out, repo_with_passing_hook, mock_out_store_directory):
('id', 'do_not_commit'),
('name', 'hook 2'),
('entry', 'DO NOT COMMIT'),
- ('language', 'pcre'),
+ ('language', 'pygrep'),
('types', ['text']),
('stages', ['push']),
)),
@@ -592,7 +592,7 @@ def test_local_hook_passes(
('id', 'do_not_commit'),
('name', 'Block if "DO NOT COMMIT" is found'),
('entry', 'DO NOT COMMIT'),
- ('language', 'pcre'),
+ ('language', 'pygrep'),
('files', '^(.*)$'),
)),
),
@@ -645,6 +645,35 @@ def test_local_hook_fails(
)
+def test_pcre_deprecation_warning(
+ cap_out, repo_with_passing_hook, mock_out_store_directory,
+):
+ config = OrderedDict((
+ ('repo', 'local'),
+ (
+ 'hooks', [OrderedDict((
+ ('id', 'pcre-hook'),
+ ('name', 'pcre-hook'),
+ ('language', 'pcre'),
+ ('entry', '.'),
+ ))],
+ ),
+ ))
+ add_config_to_repo(repo_with_passing_hook, config)
+
+ _test_run(
+ cap_out,
+ repo_with_passing_hook,
+ opts={},
+ expected_outputs=[
+ b'[WARNING] `pcre-hook` (from local) uses the deprecated '
+ b'pcre language.',
+ ],
+ expected_ret=0,
+ stage=False,
+ )
+
+
def test_meta_hook_passes(
cap_out, repo_with_passing_hook, mock_out_store_directory,
):
| Deprecate `pcre` language
Now that pygrep (#630) is a much more portable alternative, pcre is unnecessary and should be deprecated.
A deprecation warning should be issued when loading a configuration containing `language: pcre` and should point the consumer in the right direction to correcting it (either by suggesting a pull request, or by indicating they should modify their `local` configuration).
The `pcre` language will likely be removed in `pre-commit==2.0.0`
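For readers weighing the replacement: pygrep behaves roughly like a portable, pure-Python grep over the files pre-commit passes in, failing when the pattern is found, which is why it is usually a drop-in substitute for pcre hooks. A hedged sketch of that idea (an illustration only, not pre-commit's actual implementation):
```python
import re
import sys


def pygrep_like(pattern, filenames):
    """Return 1 (failure) if `pattern` matches any line of any file."""
    regex = re.compile(pattern.encode())
    retv = 0
    for filename in filenames:
        with open(filename, 'rb') as f:
            for lineno, line in enumerate(f, 1):
                if regex.search(line):
                    print('{}:{}: {}'.format(filename, lineno, pattern))
                    retv = 1
    return retv


if __name__ == '__main__':
    sys.exit(pygrep_like(sys.argv[1], sys.argv[2:]))
```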
| 2018-01-14T01:29:11 |
|
pre-commit/pre-commit | 711 | pre-commit__pre-commit-711 | [
"590"
] | 29715c9268dc866facf0b8a9cbe21d218d948a7b | diff --git a/pre_commit/commands/autoupdate.py b/pre_commit/commands/autoupdate.py
--- a/pre_commit/commands/autoupdate.py
+++ b/pre_commit/commands/autoupdate.py
@@ -33,9 +33,9 @@ def _update_repo(repo_config, runner, tags_only):
Args:
repo_config - A config for a repository
"""
- repo = Repository.create(repo_config, runner.store)
+ repo_path = runner.store.clone(repo_config['repo'], repo_config['sha'])
- with cwd(repo._repo_path):
+ with cwd(repo_path):
cmd_output('git', 'fetch')
tag_cmd = ('git', 'describe', 'origin/master', '--tags')
if tags_only:
@@ -57,7 +57,7 @@ def _update_repo(repo_config, runner, tags_only):
new_repo = Repository.create(new_config, runner.store)
# See if any of our hooks were deleted with the new commits
- hooks = {hook['id'] for hook in repo.repo_config['hooks']}
+ hooks = {hook['id'] for hook in repo_config['hooks']}
hooks_missing = hooks - (hooks & set(new_repo.manifest_hooks))
if hooks_missing:
raise RepositoryCannotBeUpdatedError(
diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -7,7 +7,6 @@
import pipes
import shutil
import sys
-from collections import defaultdict
import pkg_resources
from cached_property import cached_property
@@ -149,22 +148,11 @@ def create(cls, config, store):
else:
return cls(config, store)
- @cached_property
- def _repo_path(self):
- return self.store.clone(
- self.repo_config['repo'], self.repo_config['sha'],
- )
-
- @cached_property
- def _prefix(self):
- return Prefix(self._repo_path)
-
- def _prefix_from_deps(self, language_name, deps):
- return self._prefix
-
@cached_property
def manifest_hooks(self):
- manifest_path = os.path.join(self._repo_path, C.MANIFEST_FILE)
+ repo, sha = self.repo_config['repo'], self.repo_config['sha']
+ repo_path = self.store.clone(repo, sha)
+ manifest_path = os.path.join(repo_path, C.MANIFEST_FILE)
return {hook['id']: hook for hook in load_manifest(manifest_path)}
@cached_property
@@ -185,21 +173,25 @@ def hooks(self):
for hook in self.repo_config['hooks']
)
- @cached_property
+ def _prefix_from_deps(self, language_name, deps):
+ repo, sha = self.repo_config['repo'], self.repo_config['sha']
+ return Prefix(self.store.clone(repo, sha, deps))
+
def _venvs(self):
- deps_dict = defaultdict(_UniqueList)
- for _, hook in self.hooks:
- deps_dict[(hook['language'], hook['language_version'])].update(
- hook['additional_dependencies'],
- )
ret = []
- for (language, version), deps in deps_dict.items():
- ret.append((self._prefix, language, version, deps))
+ for _, hook in self.hooks:
+ language = hook['language']
+ version = hook['language_version']
+ deps = hook['additional_dependencies']
+ ret.append((
+ self._prefix_from_deps(language, deps),
+ language, version, deps,
+ ))
return tuple(ret)
def require_installed(self):
if not self.__installed:
- _install_all(self._venvs, self.repo_config['repo'], self.store)
+ _install_all(self._venvs(), self.repo_config['repo'], self.store)
self.__installed = True
def run_hook(self, hook, file_args):
@@ -237,19 +229,6 @@ def hooks(self):
for hook in self.repo_config['hooks']
)
- @cached_property
- def _venvs(self):
- ret = []
- for _, hook in self.hooks:
- language = hook['language']
- version = hook['language_version']
- deps = hook['additional_dependencies']
- ret.append((
- self._prefix_from_deps(language, deps),
- language, version, deps,
- ))
- return tuple(ret)
-
class MetaRepository(LocalRepository):
@cached_property
@@ -303,14 +282,3 @@ def hooks(self):
(hook['id'], _hook(self.manifest_hooks[hook['id']], hook))
for hook in self.repo_config['hooks']
)
-
-
-class _UniqueList(list):
- def __init__(self):
- self._set = set()
-
- def update(self, obj):
- for item in obj:
- if item not in self._set:
- self._set.add(item)
- self.append(item)
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -72,9 +72,9 @@ def _write_sqlite_db(self):
with contextlib.closing(sqlite3.connect(tmpfile)) as db:
db.executescript(
'CREATE TABLE repos ('
- ' repo CHAR(255) NOT NULL,'
- ' ref CHAR(255) NOT NULL,'
- ' path CHAR(255) NOT NULL,'
+ ' repo TEXT NOT NULL,'
+ ' ref TEXT NOT NULL,'
+ ' path TEXT NOT NULL,'
' PRIMARY KEY (repo, ref)'
');',
)
@@ -101,15 +101,17 @@ def require_created(self):
self._create()
self.__created = True
- def _new_repo(self, repo, ref, make_strategy):
+ def _new_repo(self, repo, ref, deps, make_strategy):
self.require_created()
+ if deps:
+ repo = '{}:{}'.format(repo, ','.join(sorted(deps)))
def _get_result():
# Check if we already exist
with sqlite3.connect(self.db_path) as db:
result = db.execute(
'SELECT path FROM repos WHERE repo = ? AND ref = ?',
- [repo, ref],
+ (repo, ref),
).fetchone()
if result:
return result[0]
@@ -137,7 +139,7 @@ def _get_result():
)
return directory
- def clone(self, repo, ref):
+ def clone(self, repo, ref, deps=()):
"""Clone the given url and checkout the specific ref."""
def clone_strategy(directory):
cmd_output(
@@ -151,7 +153,7 @@ def clone_strategy(directory):
env=no_git_env(),
)
- return self._new_repo(repo, ref, clone_strategy)
+ return self._new_repo(repo, ref, deps, clone_strategy)
def make_local(self, deps):
def make_local_strategy(directory):
@@ -172,8 +174,7 @@ def _git_cmd(*args):
_git_cmd('commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')
return self._new_repo(
- 'local:{}'.format(','.join(sorted(deps))), C.LOCAL_REPO_VERSION,
- make_local_strategy,
+ 'local', C.LOCAL_REPO_VERSION, deps, make_local_strategy,
)
@cached_property
| diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -165,12 +165,6 @@ def log_info_mock():
yield mck
[email protected]
-def log_warning_mock():
- with mock.patch.object(logging.getLogger('pre_commit'), 'warning') as mck:
- yield mck
-
-
class FakeStream(object):
def __init__(self):
self.data = io.BytesIO()
diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -433,7 +433,7 @@ def test_venvs(tempdir_factory, store):
path = make_repo(tempdir_factory, 'python_hooks_repo')
config = make_config_from_repo(path)
repo = Repository.create(config, store)
- venv, = repo._venvs
+ venv, = repo._venvs()
assert venv == (mock.ANY, 'python', python.get_default_version(), [])
@@ -443,50 +443,33 @@ def test_additional_dependencies(tempdir_factory, store):
config = make_config_from_repo(path)
config['hooks'][0]['additional_dependencies'] = ['pep8']
repo = Repository.create(config, store)
- venv, = repo._venvs
+ venv, = repo._venvs()
assert venv == (mock.ANY, 'python', python.get_default_version(), ['pep8'])
@pytest.mark.integration
-def test_additional_dependencies_duplicated(
- tempdir_factory, store, log_warning_mock,
-):
- path = make_repo(tempdir_factory, 'ruby_hooks_repo')
- config = make_config_from_repo(path)
- deps = ['thread_safe', 'tins', 'thread_safe']
- config['hooks'][0]['additional_dependencies'] = deps
- repo = Repository.create(config, store)
- venv, = repo._venvs
- assert venv == (mock.ANY, 'ruby', 'default', ['thread_safe', 'tins'])
-
-
[email protected]
-def test_additional_python_dependencies_installed(tempdir_factory, store):
+def test_additional_dependencies_roll_forward(tempdir_factory, store):
path = make_repo(tempdir_factory, 'python_hooks_repo')
- config = make_config_from_repo(path)
- config['hooks'][0]['additional_dependencies'] = ['mccabe']
- repo = Repository.create(config, store)
- repo.require_installed()
- with python.in_env(repo._prefix, 'default'):
- output = cmd_output('pip', 'freeze', '-l')[1]
- assert 'mccabe' in output
+ config1 = make_config_from_repo(path)
+ repo1 = Repository.create(config1, store)
+ repo1.require_installed()
+ (prefix1, _, version1, _), = repo1._venvs()
+ with python.in_env(prefix1, version1):
+ assert 'mccabe' not in cmd_output('pip', 'freeze', '-l')[1]
[email protected]
-def test_additional_dependencies_roll_forward(tempdir_factory, store):
- path = make_repo(tempdir_factory, 'python_hooks_repo')
- config = make_config_from_repo(path)
- # Run the repo once without additional_dependencies
- repo = Repository.create(config, store)
- repo.require_installed()
- # Now run it with additional_dependencies
- config['hooks'][0]['additional_dependencies'] = ['mccabe']
- repo = Repository.create(config, store)
- repo.require_installed()
- # We should see our additional dependency installed
- with python.in_env(repo._prefix, 'default'):
- output = cmd_output('pip', 'freeze', '-l')[1]
- assert 'mccabe' in output
+ # Make another repo with additional dependencies
+ config2 = make_config_from_repo(path)
+ config2['hooks'][0]['additional_dependencies'] = ['mccabe']
+ repo2 = Repository.create(config2, store)
+ repo2.require_installed()
+ (prefix2, _, version2, _), = repo2._venvs()
+ with python.in_env(prefix2, version2):
+ assert 'mccabe' in cmd_output('pip', 'freeze', '-l')[1]
+
+ # should not have affected original
+ with python.in_env(prefix1, version1):
+ assert 'mccabe' not in cmd_output('pip', 'freeze', '-l')[1]
@xfailif_windows_no_ruby
@@ -499,7 +482,8 @@ def test_additional_ruby_dependencies_installed(
config['hooks'][0]['additional_dependencies'] = ['thread_safe', 'tins']
repo = Repository.create(config, store)
repo.require_installed()
- with ruby.in_env(repo._prefix, 'default'):
+ (prefix, _, version, _), = repo._venvs()
+ with ruby.in_env(prefix, version):
output = cmd_output('gem', 'list', '--local')[1]
assert 'thread_safe' in output
assert 'tins' in output
@@ -516,7 +500,8 @@ def test_additional_node_dependencies_installed(
config['hooks'][0]['additional_dependencies'] = ['lodash']
repo = Repository.create(config, store)
repo.require_installed()
- with node.in_env(repo._prefix, 'default'):
+ (prefix, _, version, _), = repo._venvs()
+ with node.in_env(prefix, version):
output = cmd_output('npm', 'ls', '-g')[1]
assert 'lodash' in output
@@ -532,7 +517,8 @@ def test_additional_golang_dependencies_installed(
config['hooks'][0]['additional_dependencies'] = deps
repo = Repository.create(config, store)
repo.require_installed()
- binaries = os.listdir(repo._prefix.path(
+ (prefix, _, _, _), = repo._venvs()
+ binaries = os.listdir(prefix.path(
helpers.environment_dir(golang.ENVIRONMENT_DIR, 'default'), 'bin',
))
# normalize for windows
@@ -598,8 +584,9 @@ class MyKeyboardInterrupt(KeyboardInterrupt):
repo.run_hook(hook, [])
# Should have made an environment, however this environment is broken!
- envdir = 'py_env-{}'.format(python.get_default_version())
- assert repo._prefix.exists(envdir)
+ (prefix, _, version, _), = repo._venvs()
+ envdir = 'py_env-{}'.format(version)
+ assert prefix.exists(envdir)
# However, it should be perfectly runnable (reinstall after botched
# install)
@@ -616,8 +603,8 @@ def test_invalidated_virtualenv(tempdir_factory, store):
# Simulate breaking of the virtualenv
repo.require_installed()
- version = python.get_default_version()
- libdir = repo._prefix.path('py_env-{}'.format(version), 'lib', version)
+ (prefix, _, version, _), = repo._venvs()
+ libdir = prefix.path('py_env-{}'.format(version), 'lib', version)
paths = [
os.path.join(libdir, p) for p in ('site.py', 'site.pyc', '__pycache__')
]
| Same repo but different additional dependencies
I have two Git projects `A` and `B`, and both use the same [pre-commit repo](https://github.com/coldnight/pre-commit-pylint), but with different additional dependencies:
`.pre-commit-config.yaml` in `A`:
```yaml
- repo: [email protected]:coldnight/pre-commit-pylint.git
sha: 630e2662aabf3236fc62460b163d613c4bd1cfbc
hooks:
- id: pylint-py3k
- id: pylint-score-limit
args:
- --limit=8.5
- --rcfile=./.pylintrc
additional_dependencies:
- enum34; python_version<='3.4'
- mock
```
`.pre-commit-config.yaml` in `B`:
```yaml
- repo: [email protected]:coldnight/pre-commit-pylint.git
sha: 630e2662aabf3236fc62460b163d613c4bd1cfbc
hooks:
- id: pylint-py3k
- id: pylint-score-limit
args:
- --limit=8.5
- --rcfile=./.pylintrc
additional_dependencies:
- enum34; python_version<='3.4'
- requests
```
Here is my problem:
1. First I run `pre-commit` in project `A`, and the environment gets installed
2. Then I run `pre-commit` in project `B`, and the environment gets installed again
3. When I go back to project `A` and run `pre-commit`, the environment installed in step 1 has been removed by step 2 and needs yet another installation (hugely slow!)
Any ideas for this? Could different projects be supported by giving them different home directories?
| Ah yes this makes sense based on how the key for a repository is configured.
It currently creates one git repository for each `(repo, ref)` pair and then validates install state inside that cloned repository. A workaround would be to use a *slightly* different repository name or ref.
There's some [recently added code](https://github.com/pre-commit/pre-commit/blob/7139a47c1ca968a2699e467279677fa77ad68aae/pre_commit/store.py#L133-L139) for local repositories that can probably be borrowed here such that different `additional_dependencies` repositories get their own directories. Notice that here, the key becomes `('local:sorted,joined,deps', 'N/A')` -- perhaps something similar can be done for normal repositories as well!
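A stripped-down sketch of that keying idea (the function name here is made up for illustration; the eventual change lives in `store._new_repo`, as this record's patch shows). Folding the sorted `additional_dependencies` into the lookup key gives each repo-plus-dependencies combination its own cached environment:
```python
def repo_key(repo, ref, deps=()):
    # Mirror of the idea in the patch: (repo, ref, ['mock']) and
    # (repo, ref, ['requests']) must map to different cached directories.
    if deps:
        repo = '{}:{}'.format(repo, ','.join(sorted(deps)))
    return (repo, ref)


assert (repo_key('[email protected]:x/y', 'rev', ['mock', 'enum34'])
        != repo_key('[email protected]:x/y', 'rev', ['requests', 'enum34']))
```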
An aside: pylint is a bit of a beast to get working with pre-commit as most of its checks aren't static analysis but *dynamic* analysis (it needs to be able to import your code to check it). pylint often works better as a `local` / `system` / `script` hook which takes advantage of the currently activated virtualenv (and then has the ability to import and inspect your code directly).
Thanks for your reply!
---
> An aside: pylint is a bit of a beast to get working with pre-commit as most of its checks aren't static analysis but dynamic analysis (it needs to be able to import your code to check it)
I agree with that.
---
> pylint often works better as a local / system / script hook which takes advantage of the currently activated virtualenv (and then has the ability to import and inspect your code directly).
I will give it a try.
> pylint often works better as a local / system / script hook which takes advantage of the currently activated virtualenv (and then has the ability to import and inspect your code directly).
It works. Thanks very much. If there is no plan for this, please let me know and I will close this issue.
I do plan to fix this in a better way at some point so I'll keep this issue around -- thanks for pointing it out and I'm glad the workaround works well for you :) | 2018-02-24T22:53:14 |
pre-commit/pre-commit | 713 | pre-commit__pre-commit-713 | [
"658"
] | 8abfb37fdf8eac15940cfdeccb0a39f47b53df62 | diff --git a/pre_commit/commands/autoupdate.py b/pre_commit/commands/autoupdate.py
--- a/pre_commit/commands/autoupdate.py
+++ b/pre_commit/commands/autoupdate.py
@@ -106,7 +106,7 @@ def _write_new_config_file(path, output):
f.write(to_write)
-def autoupdate(runner, tags_only, repo=None):
+def autoupdate(runner, tags_only, repos=()):
"""Auto-update the pre-commit config to the latest versions of repos."""
migrate_config(runner, quiet=True)
retv = 0
@@ -120,7 +120,7 @@ def autoupdate(runner, tags_only, repo=None):
is_local_repo(repo_config) or
is_meta_repo(repo_config) or
# Skip updating any repo_configs that aren't for the specified repo
- repo and repo != repo_config['repo']
+ repos and repo_config['repo'] not in repos
):
output_repos.append(repo_config)
continue
diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -168,7 +168,8 @@ def main(argv=None):
),
)
autoupdate_parser.add_argument(
- '--repo', help='Only update this repository.',
+ '--repo', dest='repos', action='append', metavar='REPO',
+ help='Only update this repository -- may be specified multiple times.',
)
migrate_config_parser = subparsers.add_parser(
@@ -251,7 +252,7 @@ def main(argv=None):
return autoupdate(
runner,
tags_only=not args.bleeding_edge,
- repo=args.repo,
+ repos=args.repos,
)
elif args.command == 'migrate-config':
return migrate_config(runner)
| diff --git a/tests/commands/autoupdate_test.py b/tests/commands/autoupdate_test.py
--- a/tests/commands/autoupdate_test.py
+++ b/tests/commands/autoupdate_test.py
@@ -138,7 +138,7 @@ def test_autoupdate_out_of_date_repo_with_correct_repo_name(
runner = Runner('.', C.CONFIG_FILE)
before = open(C.CONFIG_FILE).read()
repo_name = 'file://{}'.format(out_of_date_repo.path)
- ret = autoupdate(runner, tags_only=False, repo=repo_name)
+ ret = autoupdate(runner, tags_only=False, repos=(repo_name,))
after = open(C.CONFIG_FILE).read()
assert ret == 0
assert before != after
@@ -158,7 +158,7 @@ def test_autoupdate_out_of_date_repo_with_wrong_repo_name(
runner = Runner('.', C.CONFIG_FILE)
before = open(C.CONFIG_FILE).read()
# It will not update it, because the name doesn't match
- ret = autoupdate(runner, tags_only=False, repo='wrong_repo_name')
+ ret = autoupdate(runner, tags_only=False, repos=('wrong_repo_name',))
after = open(C.CONFIG_FILE).read()
assert ret == 0
assert before == after
| Autoupdate, select, multiple repositories
Brought up from #657
As noted by @asottile, we can do
`pre-commit autoupdate --repo repo1 --repo repo2 --repo repo3`
to update multiple repositories.
That can be implemented pretty easily with `action='append'` -- and changing the same parts as #657
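A self-contained sketch of that `action='append'` behaviour; the option and destination names follow the patch for this record, everything else is illustrative:
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    '--repo', dest='repos', action='append', metavar='REPO',
    help='Only update this repository -- may be specified multiple times.',
)

# Repeated `--repo` flags accumulate into a list; omitting the flag leaves None.
assert parser.parse_args(['--repo', 'a', '--repo', 'b']).repos == ['a', 'b']
assert parser.parse_args([]).repos is None
```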
| 2018-02-24T23:43:02 |
|
pre-commit/pre-commit | 718 | pre-commit__pre-commit-718 | [
"663"
] | ac3a37d1a0e3575bddf23fd9babf6e56202b2988 | diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -2,15 +2,19 @@
from __future__ import unicode_literals
import io
+import logging
import os.path
import sys
from pre_commit import output
+from pre_commit.util import cmd_output
from pre_commit.util import make_executable
from pre_commit.util import mkdirp
from pre_commit.util import resource_filename
+logger = logging.getLogger(__name__)
+
# This is used to identify the hook file we install
PRIOR_HASHES = (
'4d9958c90bc262f47553e2c073f14cfe',
@@ -36,6 +40,13 @@ def install(
skip_on_missing_conf=False,
):
"""Install the pre-commit hooks."""
+ if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():
+ logger.error(
+ 'Cowardly refusing to install hooks with `core.hooksPath` set.\n'
+ 'hint: `git config --unset-all core.hooksPath`',
+ )
+ return 1
+
hook_path = runner.get_hook_path(hook_type)
legacy_path = hook_path + '.legacy'
| diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -66,6 +66,14 @@ def test_install_hooks_directory_not_present(tempdir_factory):
assert os.path.exists(runner.pre_commit_path)
+def test_install_refuses_core_hookspath(tempdir_factory):
+ path = git_dir(tempdir_factory)
+ with cwd(path):
+ cmd_output('git', 'config', '--local', 'core.hooksPath', 'hooks')
+ runner = Runner(path, C.CONFIG_FILE)
+ assert install(runner)
+
+
@xfailif_no_symlink
def test_install_hooks_dead_symlink(
tempdir_factory,
| Handle when `core.hooksPath` is set?
As we found in https://github.com/pre-commit/pre-commit-hooks/issues/250, pre-commit (despite being installed) will be silently skipped if `core.hooksPath` is set.
A few options:
- during `pre-commit install`, check this variable and warn
- "" but error
- install into the directory at `core.hooksPath` (but it may be outside the working dir? probably not the best idea to write to it)
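A minimal sketch of the first option above (detect a configured `core.hooksPath` during `pre-commit install` and refuse with a hint). It shells out to git directly; the real fix goes through pre-commit's `cmd_output` helper instead, as the patch for this record shows:
```python
import subprocess


def hooks_path_is_set():
    """Return True if `git config core.hooksPath` has a non-empty value."""
    proc = subprocess.run(
        ('git', 'config', 'core.hooksPath'),
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
    )
    # git exits nonzero when the key is unset; treat that as "not set".
    return proc.returncode == 0 and bool(proc.stdout.strip())


if hooks_path_is_set():
    print(
        'Cowardly refusing to install hooks with `core.hooksPath` set.\n'
        'hint: `git config --unset-all core.hooksPath`',
    )
```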
| Or a prompt when running `pre-commit install` along the lines of:
> We noticed that you have set a different directory for the git hooks via the `core.hooksPath` config. Do you want us to remove it so `pre-commit` will work or install it in the specified directory? | 2018-03-03T23:24:53 |
pre-commit/pre-commit | 720 | pre-commit__pre-commit-720 | [
"719"
] | 4088f55ee6919e2566c4573512fa3c745199cff0 | diff --git a/pre_commit/clientlib.py b/pre_commit/clientlib.py
--- a/pre_commit/clientlib.py
+++ b/pre_commit/clientlib.py
@@ -35,10 +35,7 @@ def _make_argparser(filenames_help):
cfgv.Required('id', cfgv.check_string),
cfgv.Required('name', cfgv.check_string),
cfgv.Required('entry', cfgv.check_string),
- cfgv.Required(
- 'language',
- cfgv.check_and(cfgv.check_string, cfgv.check_one_of(all_languages)),
- ),
+ cfgv.Required('language', cfgv.check_one_of(all_languages)),
cfgv.Optional(
'files', cfgv.check_and(cfgv.check_string, cfgv.check_regex), '',
@@ -59,7 +56,7 @@ def _make_argparser(filenames_help):
cfgv.Optional('language_version', cfgv.check_string, 'default'),
cfgv.Optional('log_file', cfgv.check_string, ''),
cfgv.Optional('minimum_pre_commit_version', cfgv.check_string, '0'),
- cfgv.Optional('stages', cfgv.check_array(cfgv.check_string), []),
+ cfgv.Optional('stages', cfgv.check_array(cfgv.check_one_of(C.STAGES)), []),
cfgv.Optional('verbose', cfgv.check_bool, False),
)
MANIFEST_SCHEMA = cfgv.Array(MANIFEST_HOOK_DICT)
diff --git a/pre_commit/constants.py b/pre_commit/constants.py
--- a/pre_commit/constants.py
+++ b/pre_commit/constants.py
@@ -20,3 +20,6 @@
VERSION = pkg_resources.get_distribution('pre-commit').version
VERSION_PARSED = pkg_resources.parse_version(VERSION)
+
+# `manual` is not invoked by any installed git hook. See #719
+STAGES = ('commit', 'commit-msg', 'manual', 'push')
diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -70,9 +70,8 @@ def _add_run_options(parser):
help='Filename to check when running during `commit-msg`',
)
parser.add_argument(
- '--hook-stage', choices=('commit', 'push', 'commit-msg'),
- default='commit',
- help='The stage during which the hook is fired e.g. commit or push.',
+ '--hook-stage', choices=C.STAGES, default='commit',
+ help='The stage during which the hook is fired. One of %(choices)s',
)
parser.add_argument(
'--show-diff-on-failure', action='store_true',
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -529,52 +529,37 @@ def test_lots_of_files(mock_out_store_directory, tempdir_factory):
)
-def test_push_hook(cap_out, repo_with_passing_hook, mock_out_store_directory):
+def test_stages(cap_out, repo_with_passing_hook, mock_out_store_directory):
config = OrderedDict((
('repo', 'local'),
(
- 'hooks', (
- OrderedDict((
- ('id', 'flake8'),
- ('name', 'hook 1'),
- ('entry', "'{}' -m flake8".format(sys.executable)),
- ('language', 'system'),
- ('types', ['python']),
- ('stages', ['commit']),
- )),
- OrderedDict((
- ('id', 'do_not_commit'),
- ('name', 'hook 2'),
- ('entry', 'DO NOT COMMIT'),
- ('language', 'pygrep'),
- ('types', ['text']),
- ('stages', ['push']),
- )),
+ 'hooks', tuple(
+ {
+ 'id': 'do-not-commit-{}'.format(i),
+ 'name': 'hook {}'.format(i),
+ 'entry': 'DO NOT COMMIT',
+ 'language': 'pygrep',
+ 'stages': [stage],
+ }
+ for i, stage in enumerate(('commit', 'push', 'manual'), 1)
),
),
))
add_config_to_repo(repo_with_passing_hook, config)
- open('dummy.py', 'a').close()
- cmd_output('git', 'add', 'dummy.py')
-
- _test_run(
- cap_out,
- repo_with_passing_hook,
- {'hook_stage': 'commit'},
- expected_outputs=[b'hook 1'],
- expected_ret=0,
- stage=False,
- )
+ stage_a_file()
- _test_run(
- cap_out,
- repo_with_passing_hook,
- {'hook_stage': 'push'},
- expected_outputs=[b'hook 2'],
- expected_ret=0,
- stage=False,
- )
+ def _run_for_stage(stage):
+ args = run_opts(hook_stage=stage)
+ ret, printed = _do_run(cap_out, repo_with_passing_hook, args)
+ assert not ret, (ret, printed)
+ # this test should only run one hook
+ assert printed.count(b'hook ') == 1
+ return printed
+
+ assert _run_for_stage('commit').startswith(b'hook 1...')
+ assert _run_for_stage('push').startswith(b'hook 2...')
+ assert _run_for_stage('manual').startswith(b'hook 3...')
def test_commit_msg_hook(cap_out, commit_msg_repo, mock_out_store_directory):
diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -180,7 +180,8 @@ def __init__(self, stream):
def get_bytes(self):
"""Get the output as-if no encoding occurred"""
data = self._stream.data.getvalue()
- self._stream.data.truncate(0)
+ self._stream.data.seek(0)
+ self._stream.data.truncate()
return data
def get(self):
| Question: always_skip unless called explicitly
Is there a way to add a hook to `.pre-commit-config.yaml` but have it always skip unless it's called explicitly? Granted, this is a weird use-case where I'm basically just trying to benefit from `pre-commit`'s package management abilities :)
So for example, while doing `git commit` or `pre-commit run` it would be marked as `Skipped` but would run when called like this:
```
pre-commit run <hook>
```
For context, I'm trying to simplify setting up [prmd](https://github.com/interagent/prmd) (a Ruby package) on Python repos. This already works (as part of a script)
`pre-commit run --verbose prmd --files <json>`
where
```
- repo: /path/to/mirrors-prmd
sha: v0.11.7
hooks:
- id: prmd
entry: prmd render
args:
- --template=main/api/doc/templates
```
but now I would like to prevent this hook from running in all other instances :)
As I said, weird use-case... maybe I just need to go to sleep now 😴
| Totally get this usecase 😆
I think there should be a better way to do this but I'll tell you how I currently do this and give some additional ideas and then a feature idea :)
### abusing hook stages
Note: I personally don't use pre-push so if I don't want a hook to run normally I configure it as follows:
```yaml
- id: dont-run-normally
stages: [push]
```
That way during `commit` it doesn't normally run. However when I want to manually run it I can do: `pre-commit run dont-run-normally --hook-stage push` to make it run.
(woops! turns out `--hook-stage` is undocumented on [pre-commit.com](https://pre-commit.com/#pre-commit-run)!)
### try-repo
Another idea is to use [`pre-commit try-repo /path/to/repo`](https://pre-commit.com/#pre-commit-try-repo), probably not the best when you have to remember the incantation each time though :)
### rubyvenv
For ruby specifically, I've been slowly (haven't really made progress in a long time) trying to factor out the pre-commit ruby support into a separate package (also so I can attempt to tackle #201, the oldest currently-open issue). There's an incomplete "ruby virtualenv"-ish tool that works fine for a lot of use cases: https://github.com/asottile/rubyvenv -- it needs a lot of polishing, needs to support _building_ ruby, and still needs windows support somehow, but otherwise it is functional with prebuilt rubies.
### how I think this should work -- abusing hook stages for greater good!
I'd like to make the following work, does this sound reasonable?
the config:
```yaml
- id: dont-run-normally
stages: [manual]
```
the invocation: `pre-commit run dont-run-normally --hook-stage manual`
thoughts?
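For what it's worth, the selection rule this proposal leans on is tiny. A hedged sketch mirroring the stage filter in `commands/run.py` (simplified hook dicts, not the exact code): a hook with no `stages` runs everywhere, while `stages: [manual]` only matches an explicit `--hook-stage manual`:
```python
STAGES = ('commit', 'commit-msg', 'manual', 'push')


def selected(hook, hook_stage):
    # No declared stages means "run at every stage"; otherwise the requested
    # stage must be listed -- so a manual-only hook never fires from a
    # normal `git commit`.
    return not hook['stages'] or hook_stage in hook['stages']


assert selected({'stages': []}, 'commit')
assert not selected({'stages': ['manual']}, 'commit')
assert selected({'stages': ['manual']}, 'manual')
```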
Thanks @asottile !
I like the idea of `stages: [manual]` 💯 | 2018-03-07T20:42:13 |
pre-commit/pre-commit | 724 | pre-commit__pre-commit-724 | [
"723"
] | 179f11bdce471869b3e2363843e813e22dd916e3 | diff --git a/pre_commit/commands/autoupdate.py b/pre_commit/commands/autoupdate.py
--- a/pre_commit/commands/autoupdate.py
+++ b/pre_commit/commands/autoupdate.py
@@ -34,17 +34,17 @@ def _update_repo(repo_config, runner, tags_only):
"""
repo_path = runner.store.clone(repo_config['repo'], repo_config['rev'])
- cmd_output('git', '-C', repo_path, 'fetch')
- tag_cmd = ('git', '-C', repo_path, 'describe', 'origin/master', '--tags')
+ cmd_output('git', 'fetch', cwd=repo_path)
+ tag_cmd = ('git', 'describe', 'origin/master', '--tags')
if tags_only:
tag_cmd += ('--abbrev=0',)
else:
tag_cmd += ('--exact',)
try:
- rev = cmd_output(*tag_cmd)[1].strip()
+ rev = cmd_output(*tag_cmd, cwd=repo_path)[1].strip()
except CalledProcessError:
- tag_cmd = ('git', '-C', repo_path, 'rev-parse', 'origin/master')
- rev = cmd_output(*tag_cmd)[1].strip()
+ tag_cmd = ('git', 'rev-parse', 'origin/master')
+ rev = cmd_output(*tag_cmd, cwd=repo_path)[1].strip()
# Don't bother trying to update if our rev is the same
if rev == repo_config['rev']:
diff --git a/pre_commit/make_archives.py b/pre_commit/make_archives.py
--- a/pre_commit/make_archives.py
+++ b/pre_commit/make_archives.py
@@ -41,7 +41,7 @@ def make_archive(name, repo, ref, destdir):
with tmpdir() as tempdir:
# Clone the repository to the temporary directory
cmd_output('git', 'clone', repo, tempdir)
- cmd_output('git', '-C', tempdir, 'checkout', ref)
+ cmd_output('git', 'checkout', ref, cwd=tempdir)
# We don't want the '.git' directory
# It adds a bunch of size to the archive and we don't use it at
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -144,7 +144,7 @@ def clone_strategy(directory):
env = no_git_env()
def _git_cmd(*args):
- return cmd_output('git', '-C', directory, *args, env=env)
+ return cmd_output('git', *args, cwd=directory, env=env)
_git_cmd('clone', '--no-checkout', repo, '.')
_git_cmd('reset', ref, '--hard')
@@ -163,7 +163,7 @@ def make_local_strategy(directory):
# initialize the git repository so it looks more like cloned repos
def _git_cmd(*args):
- cmd_output('git', '-C', directory, *args, env=env)
+ cmd_output('git', *args, cwd=directory, env=env)
_git_cmd('init', '.')
_git_cmd('config', 'remote.origin.url', '<<unknown>>')
| diff --git a/testing/fixtures.py b/testing/fixtures.py
--- a/testing/fixtures.py
+++ b/testing/fixtures.py
@@ -29,8 +29,8 @@ def git_dir(tempdir_factory):
def make_repo(tempdir_factory, repo_source):
path = git_dir(tempdir_factory)
copy_tree_to_path(get_resource_path(repo_source), path)
- cmd_output('git', '-C', path, 'add', '.')
- cmd_output('git', '-C', path, 'commit', '-m', 'Add hooks')
+ cmd_output('git', 'add', '.', cwd=path)
+ cmd_output('git', 'commit', '-m', 'Add hooks', cwd=path)
return path
@@ -114,15 +114,14 @@ def write_config(directory, config, config_file=C.CONFIG_FILE):
def add_config_to_repo(git_path, config, config_file=C.CONFIG_FILE):
write_config(git_path, config, config_file=config_file)
- cmd_output('git', '-C', git_path, 'add', config_file)
- cmd_output('git', '-C', git_path, 'commit', '-m', 'Add hooks config')
+ cmd_output('git', 'add', config_file, cwd=git_path)
+ cmd_output('git', 'commit', '-m', 'Add hooks config', cwd=git_path)
return git_path
def remove_config_from_repo(git_path, config_file=C.CONFIG_FILE):
- os.unlink(os.path.join(git_path, config_file))
- cmd_output('git', '-C', git_path, 'add', config_file)
- cmd_output('git', '-C', git_path, 'commit', '-m', 'Remove hooks config')
+ cmd_output('git', 'rm', config_file, cwd=git_path)
+ cmd_output('git', 'commit', '-m', 'Remove hooks config', cwd=git_path)
return git_path
diff --git a/tests/commands/autoupdate_test.py b/tests/commands/autoupdate_test.py
--- a/tests/commands/autoupdate_test.py
+++ b/tests/commands/autoupdate_test.py
@@ -62,12 +62,12 @@ def test_autoupdate_old_revision_broken(
path = make_repo(tempdir_factory, 'python_hooks_repo')
config = make_config_from_repo(path, check=False)
- cmd_output('git', '-C', path, 'mv', C.MANIFEST_FILE, 'nope.yaml')
- cmd_output('git', '-C', path, 'commit', '-m', 'simulate old repo')
+ cmd_output('git', 'mv', C.MANIFEST_FILE, 'nope.yaml', cwd=path)
+ cmd_output('git', 'commit', '-m', 'simulate old repo', cwd=path)
# Assume this is the revision the user's old repository was at
rev = git.head_rev(path)
- cmd_output('git', '-C', path, 'mv', 'nope.yaml', C.MANIFEST_FILE)
- cmd_output('git', '-C', path, 'commit', '-m', 'move hooks file')
+ cmd_output('git', 'mv', 'nope.yaml', C.MANIFEST_FILE, cwd=path)
+ cmd_output('git', 'commit', '-m', 'move hooks file', cwd=path)
update_rev = git.head_rev(path)
config['rev'] = rev
@@ -86,7 +86,7 @@ def out_of_date_repo(tempdir_factory):
original_rev = git.head_rev(path)
# Make a commit
- cmd_output('git', '-C', path, 'commit', '--allow-empty', '-m', 'foo')
+ cmd_output('git', 'commit', '--allow-empty', '-m', 'foo', cwd=path)
head_rev = git.head_rev(path)
yield auto_namedtuple(
@@ -221,7 +221,7 @@ def test_loses_formatting_when_not_detectable(
@pytest.fixture
def tagged_repo(out_of_date_repo):
- cmd_output('git', '-C', out_of_date_repo.path, 'tag', 'v1.2.3')
+ cmd_output('git', 'tag', 'v1.2.3', cwd=out_of_date_repo.path)
yield out_of_date_repo
@@ -240,8 +240,7 @@ def test_autoupdate_tagged_repo(
@pytest.fixture
def tagged_repo_with_more_commits(tagged_repo):
- cmd = ('git', '-C', tagged_repo.path, 'commit', '--allow-empty', '-mfoo')
- cmd_output(*cmd)
+ cmd_output('git', 'commit', '--allow-empty', '-mfoo', cwd=tagged_repo.path)
yield tagged_repo
@@ -268,8 +267,8 @@ def hook_disappearing_repo(tempdir_factory):
get_resource_path('manifest_without_foo.yaml'),
os.path.join(path, C.MANIFEST_FILE),
)
- cmd_output('git', '-C', path, 'add', '.')
- cmd_output('git', '-C', path, 'commit', '-m', 'Remove foo')
+ cmd_output('git', 'add', '.', cwd=path)
+ cmd_output('git', 'commit', '-m', 'Remove foo', cwd=path)
yield auto_namedtuple(path=path, original_rev=original_rev)
diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -161,8 +161,8 @@ def test_install_pre_commit_and_run_custom_path(tempdir_factory):
def test_install_in_submodule_and_run(tempdir_factory):
src_path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
parent_path = git_dir(tempdir_factory)
- cmd_output('git', '-C', parent_path, 'submodule', 'add', src_path, 'sub')
- cmd_output('git', '-C', parent_path, 'commit', '-m', 'foo')
+ cmd_output('git', 'submodule', 'add', src_path, 'sub', cwd=parent_path)
+ cmd_output('git', 'commit', '-m', 'foo', cwd=parent_path)
sub_pth = os.path.join(parent_path, 'sub')
with cwd(sub_pth):
diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -69,8 +69,8 @@ def _make_conflict():
def in_merge_conflict(tempdir_factory):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
open(os.path.join(path, 'dummy'), 'a').close()
- cmd_output('git', '-C', path, 'add', 'dummy')
- cmd_output('git', '-C', path, 'commit', '-m', 'Add config.')
+ cmd_output('git', 'add', 'dummy', cwd=path)
+ cmd_output('git', 'commit', '-m', 'Add config.', cwd=path)
conflict_path = tempdir_factory.get()
cmd_output('git', 'clone', path, conflict_path)
@@ -83,8 +83,8 @@ def in_merge_conflict(tempdir_factory):
def in_conflicting_submodule(tempdir_factory):
git_dir_1 = git_dir(tempdir_factory)
git_dir_2 = git_dir(tempdir_factory)
- cmd_output('git', '-C', git_dir_2, 'commit', '--allow-empty', '-minit!')
- cmd_output('git', '-C', git_dir_1, 'submodule', 'add', git_dir_2, 'sub')
+ cmd_output('git', 'commit', '--allow-empty', '-minit!', cwd=git_dir_2)
+ cmd_output('git', 'submodule', 'add', git_dir_2, 'sub', cwd=git_dir_1)
with cwd(os.path.join(git_dir_1, 'sub')):
_make_conflict()
yield
diff --git a/tests/make_archives_test.py b/tests/make_archives_test.py
--- a/tests/make_archives_test.py
+++ b/tests/make_archives_test.py
@@ -17,14 +17,14 @@ def test_make_archive(tempdir_factory):
git_path = git_dir(tempdir_factory)
# Add a files to the git directory
open(os.path.join(git_path, 'foo'), 'a').close()
- cmd_output('git', '-C', git_path, 'add', '.')
- cmd_output('git', '-C', git_path, 'commit', '-m', 'foo')
+ cmd_output('git', 'add', '.', cwd=git_path)
+ cmd_output('git', 'commit', '-m', 'foo', cwd=git_path)
# We'll use this rev
head_rev = git.head_rev(git_path)
# And check that this file doesn't exist
open(os.path.join(git_path, 'bar'), 'a').close()
- cmd_output('git', '-C', git_path, 'add', '.')
- cmd_output('git', '-C', git_path, 'commit', '-m', 'bar')
+ cmd_output('git', 'add', '.', cwd=git_path)
+ cmd_output('git', 'commit', '-m', 'bar', cwd=git_path)
# Do the thing
archive_path = make_archives.make_archive(
diff --git a/tests/staged_files_only_test.py b/tests/staged_files_only_test.py
--- a/tests/staged_files_only_test.py
+++ b/tests/staged_files_only_test.py
@@ -199,7 +199,7 @@ def submodule_with_commits(tempdir_factory):
def checkout_submodule(rev):
- cmd_output('git', '-C', 'sub', 'checkout', rev)
+ cmd_output('git', 'checkout', rev, cwd='sub')
@pytest.fixture
| Older versions of git no longer supported
Since #714 I am no longer able to run `pre-commit` on RHEL7 hosts. The version of `git` provided by the OS vendor does not support the `-C` option.
Was this intentional? If not, would you consider re-adding support for older versions?
My environment:
```
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
$ git --version
git version 1.8.3.1
```
I understand that version of `git` is 5+ years old, but it is the latest version provided by RedHat. If nothing else, the documentation should state which version of `git` is required.
Thanks
| Ooh bummer, didn't intend to remove old git support. I'll see how feasible it is to restore that.
fwiw, `-C` was added in git 1.9, it's not much work to add back 1.8 support (though I'm not sure how I'll test it) -- let me prototype what that looks like :) | 2018-03-12T20:12:11 |
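The eventual fix (see the patch for this record) swaps `git -C <dir> ...`, which needs git >= 1.9, for running the same command with a working directory, which git 1.8 also understands. A hedged plain-`subprocess` sketch of the same idea:
```python
import subprocess


def git_output(*args, cwd=None):
    """Run a git command in `cwd` without relying on `-C` (git >= 1.9 only)."""
    return subprocess.check_output(('git',) + args, cwd=cwd)


# Equivalent to `git -C /path/to/repo rev-parse HEAD` but works on git 1.8:
# git_output('rev-parse', 'HEAD', cwd='/path/to/repo')
```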
pre-commit/pre-commit | 756 | pre-commit__pre-commit-756 | [
"755"
] | 97fb49a533de9a378d20f0a41e79df118362e534 | diff --git a/pre_commit/languages/python_venv.py b/pre_commit/languages/python_venv.py
--- a/pre_commit/languages/python_venv.py
+++ b/pre_commit/languages/python_venv.py
@@ -1,14 +1,46 @@
from __future__ import unicode_literals
+import os.path
+
from pre_commit.languages import python
+from pre_commit.util import CalledProcessError
from pre_commit.util import cmd_output
ENVIRONMENT_DIR = 'py_venv'
+def orig_py_exe(exe): # pragma: no cover (platform specific)
+ """A -mvenv virtualenv made from a -mvirtualenv virtualenv installs
+ packages to the incorrect location. Attempt to find the _original_ exe
+ and invoke `-mvenv` from there.
+
+ See:
+ - https://github.com/pre-commit/pre-commit/issues/755
+ - https://github.com/pypa/virtualenv/issues/1095
+ - https://bugs.python.org/issue30811
+ """
+ try:
+ prefix_script = 'import sys; print(sys.real_prefix)'
+ _, prefix, _ = cmd_output(exe, '-c', prefix_script)
+ prefix = prefix.strip()
+ except CalledProcessError:
+ # not created from -mvirtualenv
+ return exe
+
+ if os.name == 'nt':
+ expected = os.path.join(prefix, 'python.exe')
+ else:
+ expected = os.path.join(prefix, 'bin', os.path.basename(exe))
+
+ if os.path.exists(expected):
+ return expected
+ else:
+ return exe
+
+
def make_venv(envdir, python):
- cmd_output(python, '-mvenv', envdir, cwd='/')
+ cmd_output(orig_py_exe(python), '-mvenv', envdir, cwd='/')
get_default_version = python.get_default_version
| venv tests break virtualenv's `pip` when run from a `-mvirtualenv` virtualenv
Here's a reproduction, not exactly sure what's happening here:
```
$ tox -e py36 -r --notest
GLOB sdist-make: /home/asottile/workspace/pre-commit/setup.py
py36 create: /home/asottile/workspace/pre-commit/.tox/py36
py36 installdeps: -rrequirements-dev.txt
py36 inst: /home/asottile/workspace/pre-commit/.tox/dist/pre_commit-1.10.0.zip
py36 installed: You are using pip version 9.0.1, however version 10.0.1 is available.,You should consider upgrading via the 'pip install --upgrade pip' command.,aspy.yaml==1.1.1,atomicwrites==1.1.5,attrs==18.1.0,cached-property==1.4.2,cfgv==1.0.0,coverage==4.5.1,flake8==3.5.0,identify==1.0.18,mccabe==0.6.1,mock==2.0.0,more-itertools==4.2.0,nodeenv==1.3.0,pbr==4.0.3,pluggy==0.6.0,-e [email protected]:pre-commit/pre-commit@97fb49a533de9a378d20f0a41e79df118362e534#egg=pre_commit,py==1.5.3,pycodestyle==2.3.1,pyflakes==1.6.0,pytest==3.6.0,pytest-env==0.6.2,PyYAML==3.12,six==1.11.0,toml==0.9.4,virtualenv==16.0.0
___________________________________ summary ____________________________________
py36: skipped tests
congratulations :)
$ head -1 .tox/py36/bin/pip
#!/home/asottile/workspace/pre-commit/.tox/py36/bin/python3.6
$ .tox/py36/bin/pytest tests -k venv
============================= test session starts ==============================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/asottile/workspace/pre-commit, inifile: tox.ini
plugins: env-0.6.2
collected 500 items / 492 deselected
tests/repository_test.py .. [ 25%]
tests/commands/install_uninstall_test.py . [ 37%]
tests/languages/all_test.py ..... [100%]
=================== 8 passed, 492 deselected in 4.12 seconds ===================
$ head -1 .tox/py36/bin/pip
#!/home/asottile/workspace/pre-commit/.tox/py36/bin/python3.6
$ tox -e py36 -- tests -k venv
GLOB sdist-make: /home/asottile/workspace/pre-commit/setup.py
py36 inst-nodeps: /home/asottile/workspace/pre-commit/.tox/dist/pre_commit-1.10.0.zip
py36 installed: You are using pip version 9.0.1, however version 10.0.1 is available.,You should consider upgrading via the 'pip install --upgrade pip' command.,aspy.yaml==1.1.1,atomicwrites==1.1.5,attrs==18.1.0,cached-property==1.4.2,cfgv==1.0.0,coverage==4.5.1,flake8==3.5.0,identify==1.0.18,mccabe==0.6.1,mock==2.0.0,more-itertools==4.2.0,nodeenv==1.3.0,pbr==4.0.3,pluggy==0.6.0,pre-commit==1.10.0,py==1.5.3,pycodestyle==2.3.1,pyflakes==1.6.0,pytest==3.6.0,pytest-env==0.6.2,PyYAML==3.12,six==1.11.0,toml==0.9.4,virtualenv==16.0.0
py36 runtests: PYTHONHASHSEED='93802395'
py36 runtests: commands[0] | coverage erase
py36 runtests: commands[1] | coverage run -m pytest tests -k venv
============================= test session starts ==============================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/asottile/workspace/pre-commit, inifile: tox.ini
plugins: env-0.6.2
collected 500 items / 492 deselected
tests/repository_test.py .. [ 25%]
tests/commands/install_uninstall_test.py . [ 37%]
tests/languages/all_test.py ..... [100%]
=================== 8 passed, 492 deselected in 4.32 seconds ===================
py36 runtests: commands[2] | coverage report --fail-under 99
Name Stmts Miss Branch BrPart Cover Missing
---------------------------------------------------------------------------------------------
...
17 files skipped due to complete coverage.
ERROR: InvocationError: '/home/asottile/workspace/pre-commit/.tox/py36/bin/coverage report --fail-under 99'
___________________________________ summary ____________________________________
ERROR: py36: commands failed
$ head -1 .tox/py36/bin/pip
#!/tmp/pytest-of-asottile/pytest-3/test_python_venv0/0/.pre-commit/repo5xcuq11q/py_venv-python3.6/bin/python3.6
```
| Ok, I can factor `tox` out of the situation:
```
$ . .tox/py36/bin/activate
(py36) $ pytest tests -k venv
============================= test session starts ==============================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/asottile/workspace/pre-commit, inifile: tox.ini
plugins: env-0.6.2
collected 500 items / 492 deselected
tests/repository_test.py .. [ 25%]
tests/commands/install_uninstall_test.py . [ 37%]
tests/languages/all_test.py ..... [100%]
=================== 8 passed, 492 deselected in 4.19 seconds ===================
(py36) $ head -1 .tox/py36/bin/pip
#!/tmp/pytest-of-asottile/pytest-4/test_python_venv0/0/.pre-commit/reposmsza1fi/py_venv-python3.6/bin/python3.6
```
@ojii any ideas here? (`-mvenv` :spider:s)
This seems to be the minimal reproduction:
```
#!/usr/bin/env bash
set -euxo pipefail
tox -e py36 -r --notest
PATH=$PWD/.tox/py36/bin:$PATH
.tox/py36/bin/pytest tests/repository_test.py -k test_python_venv
head -1 .tox/py36/bin/pip
```
It seems making a `-mvenv` virtualenv from a `-mvirtualenv` virtualenv just doesn't work great:
```
$ virtualenv venv -ppython3.6
Running virtualenv with interpreter /usr/bin/python3.6
Using base prefix '/usr'
New python executable in /tmp/t/venv/bin/python3.6
Also creating executable in /tmp/t/venv/bin/python
Installing setuptools, pip, wheel...done.
$ . venv/bin/activate
(venv) $ python3.6 -m venv venv3
(venv) $ head -1 $(which pip)
#!/tmp/t/venv/bin/python3.6
(venv) $ ls venv
bin include lib pip-selfcheck.json
(venv) $ head -n1 venv*/bin/pip
#!/tmp/t/venv/bin/python3.6
(venv) $ ls venv
venv/ venv3/
(venv) $ ls venv3/bin/
activate activate.csh activate.fish python python3 python3.6
(venv) $ ls -al venv3/bin/python3.6
lrwxrwxrwx 1 asottile asottile 25 May 28 11:36 venv3/bin/python3.6 -> /tmp/t/venv/bin/python3.6
```
- pypa/virtualenv issue: https://github.com/pypa/virtualenv/issues/1095
- bpo issue: https://bugs.python.org/issue30811 | 2018-05-28T22:16:08 |
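The fix that came out of this is the `orig_py_exe` helper in this record's patch: ask the interpreter for `sys.real_prefix` (which `-mvirtualenv` environments set) and re-invoke `-mvenv` from the original executable. A hedged sketch, assuming a POSIX `bin/` layout:
```python
import os.path
import subprocess
import sys


def original_python(exe=sys.executable):
    # If `exe` was made by -mvirtualenv, sys.real_prefix points back at the
    # interpreter it was created from; otherwise keep `exe` unchanged.
    script = 'import sys; print(getattr(sys, "real_prefix", ""))'
    prefix = subprocess.check_output((exe, '-c', script)).decode().strip()
    if not prefix:
        return exe
    candidate = os.path.join(prefix, 'bin', os.path.basename(exe))
    return candidate if os.path.exists(candidate) else exe
```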
|
pre-commit/pre-commit | 785 | pre-commit__pre-commit-785 | [
"782"
] | 6853f4aa4c8d7e411839bacc66876baea443186a | diff --git a/pre_commit/parse_shebang.py b/pre_commit/parse_shebang.py
--- a/pre_commit/parse_shebang.py
+++ b/pre_commit/parse_shebang.py
@@ -42,16 +42,21 @@ def find_executable(exe, _environ=None):
return None
-def normexe(orig_exe):
- if os.sep not in orig_exe:
- exe = find_executable(orig_exe)
+def normexe(orig):
+ def _error(msg):
+ raise ExecutableNotFoundError('Executable `{}` {}'.format(orig, msg))
+
+ if os.sep not in orig and (not os.altsep or os.altsep not in orig):
+ exe = find_executable(orig)
if exe is None:
- raise ExecutableNotFoundError(
- 'Executable `{}` not found'.format(orig_exe),
- )
+ _error('not found')
return exe
+ elif not os.access(orig, os.X_OK):
+ _error('not found')
+ elif os.path.isdir(orig):
+ _error('is a directory')
else:
- return orig_exe
+ return orig
def normalize_cmd(cmd):
| diff --git a/tests/parse_shebang_test.py b/tests/parse_shebang_test.py
--- a/tests/parse_shebang_test.py
+++ b/tests/parse_shebang_test.py
@@ -85,6 +85,22 @@ def test_normexe_does_not_exist():
assert excinfo.value.args == ('Executable `i-dont-exist-lol` not found',)
+def test_normexe_does_not_exist_sep():
+ with pytest.raises(OSError) as excinfo:
+ parse_shebang.normexe('./i-dont-exist-lol')
+ assert excinfo.value.args == ('Executable `./i-dont-exist-lol` not found',)
+
+
+def test_normexe_is_a_directory(tmpdir):
+ with tmpdir.as_cwd():
+ tmpdir.join('exe').ensure_dir()
+ exe = os.path.join('.', 'exe')
+ with pytest.raises(OSError) as excinfo:
+ parse_shebang.normexe(exe)
+ msg, = excinfo.value.args
+ assert msg == 'Executable `{}` is a directory'.format(exe)
+
+
def test_normexe_already_full_path():
assert parse_shebang.normexe(sys.executable) == sys.executable
@@ -107,14 +123,14 @@ def test_normalize_cmd_PATH():
def test_normalize_cmd_shebang(in_tmpdir):
- python = distutils.spawn.find_executable('python')
- path = write_executable(python.replace(os.sep, '/'))
+ python = distutils.spawn.find_executable('python').replace(os.sep, '/')
+ path = write_executable(python)
assert parse_shebang.normalize_cmd((path,)) == (python, path)
def test_normalize_cmd_PATH_shebang_full_path(in_tmpdir):
- python = distutils.spawn.find_executable('python')
- path = write_executable(python.replace(os.sep, '/'))
+ python = distutils.spawn.find_executable('python').replace(os.sep, '/')
+ path = write_executable(python)
with bin_on_path():
ret = parse_shebang.normalize_cmd(('run',))
assert ret == (python, os.path.abspath(path))
| documentation regarding adding new python-based hooks needs improvement
Apparently we need some kind of how-to or mini tutorial on how to add a new hook to pre-commit as the basic documentation does not help someone without previous pre-commit knowledge.
I wanted to add support for `bashate`, a shell script linter written in Python and available on PyPI, which installs a command of the same name that can be used just like other linters.
Initially I went to https://pre-commit.com/#new-hooks, which didn't give me enough info. So I looked for other linters based on Python and found yamllint, which pointed me to https://github.com/adrienverge/yamllint/blob/master/.pre-commit-hooks.yaml
So the idea was to add the hook definition directly to the linter package. In this case I had to fork bashate in order to test the new hook, so I ended up creating https://github.com/ssbarnea/bashate/blob/master/.pre-commit-hooks.yaml -- following the same model used in yamllint.
Then I wanted to add and test the hook on one of the repos I maintain, so I created https://github.com/pycontribs/jira/blob/feature/pre-commit/.pre-commit-config.yaml#L25
When I tried to run it using `pre-commit run bashate --all`, it failed with this error:
```
Bashate..................................................................An unexpected error has occurred: OSError: [Errno 2] No such file or directory
Check the log at /Users/ssbarnea/.cache/pre-commit/pre-commit.log
An unexpected error has occurred: OSError: [Errno 2] No such file or directory
Traceback (most recent call last):
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/site-packages/pre_commit/error_handler.py", line 47, in error_handler
yield
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/site-packages/pre_commit/main.py", line 258, in main
return run(runner, args)
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/site-packages/pre_commit/commands/run.py", line 270, in run
return _run_hooks(runner.config, repo_hooks, args, environ)
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/site-packages/pre_commit/commands/run.py", line 199, in _run_hooks
retval |= _run_single_hook(filenames, hook, repo, args, skips, cols)
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/site-packages/pre_commit/commands/run.py", line 110, in _run_single_hook
hook, tuple(filenames) if hook['pass_filenames'] else (),
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/site-packages/pre_commit/repository.py", line 207, in run_hook
return languages[language_name].run_hook(prefix, hook, file_args)
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/site-packages/pre_commit/languages/script.py", line 16, in run_hook
return xargs(cmd, file_args)
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/site-packages/pre_commit/xargs.py", line 63, in xargs
*run_cmd, encoding=None, retcode=None
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/site-packages/pre_commit/util.py", line 167, in cmd_output
proc = subprocess.Popen(cmd, **popen_kwargs)
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/subprocess.py", line 390, in __init__
errread, errwrite)
File "/Users/ssbarnea/.pyenv/versions/2.7.14/lib/python2.7/subprocess.py", line 1025, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
```
At the moment I have no idea what I did wrong, probably something simple. Still, we need to make it easier to integrate new linters into pre-commit.
| that configuration looks completely correct to me -- I suspect there's some sort of regression as well, since `OSError` isn't supposed to escape.
If you could provide more platform / environment information that would be extremely helpful :)
(an aside: `types: [file, shell]` is redundant as `shell` will only be applied to file object -- not broken but just extra)
The other bit being, your configuration works for me:
```
$ pre-commit try-repo https://github.com/ssbarnea/bashate --all-files
[INFO] Initializing environment for https://github.com/ssbarnea/bashate.
===============================================================================
Using config:
===============================================================================
repos:
- repo: https://github.com/ssbarnea/bashate
rev: 3b9aa602d8bab522e803d1f05f223b908f610731
hooks:
- id: bashate
===============================================================================
[INFO] Installing environment for https://github.com/ssbarnea/bashate.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
bashate..................................................................Passed
```
I think I was able to find some extra info: apparently pre-commit does some kind of weird caching which is not updated when you update master. In short, it was downloading a previous commit which didn't have the .pre-commit-config.yaml file.
Even manually removing the cached repo from ~/.cache/pre-commit/repoqiSIRh/ did not make pre-commit check out the latest master; instead it checked out the same older commit.
After removing the entire `~/.cache/pre-commit` folder, it finally started to check out the latest commit.
ah, I suspect you're hitting https://pre-commit.com/#using-the-latest-version-for-a-repository ?
either way, this shouldn't stacktrace and should produce a readable error message!
I also suspect this sub-heading should be promoted more in the "creating new hooks" section: https://pre-commit.com/#developing-hooks-interactively
For instance, this is what I expect it to produce:
```
root@3b2c4f333b6a:/t# pre-commit try-repo /bashate
[INFO] Initializing environment for /bashate.
===============================================================================
Using config:
===============================================================================
repos:
- repo: /bashate
rev: 0f94116426fe545ea7168ed6696e6e2d59605972
hooks:
- id: bashate
===============================================================================
[INFO] Installing environment for /bashate.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
bashate..................................................................Failed
hookid: bashate
Executable `broken` not found
``` | 2018-07-04T21:16:30 |
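The readable error shown just above comes out of `normexe` (see this record's patch). A hedged re-sketch of its checks using only the standard library -- the branch order is simplified, `os.altsep` handling is omitted, and `shutil.which` stands in for the PATH lookup:
```python
import os
import shutil


def normexe(orig):
    def _error(msg):
        raise OSError('Executable `{}` {}'.format(orig, msg))

    if os.sep not in orig:            # bare name: resolve via PATH
        exe = shutil.which(orig)
        if exe is None:
            _error('not found')
        return exe
    elif os.path.isdir(orig):         # e.g. `./exe` pointing at a directory
        _error('is a directory')
    elif not os.access(orig, os.X_OK):
        _error('not found')
    else:
        return orig
```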
pre-commit/pre-commit | 797 | pre-commit__pre-commit-797 | [
"794"
] | df1b720054d711a6b89b3faf5df4de0e8b1be7e2 | diff --git a/pre_commit/languages/python_venv.py b/pre_commit/languages/python_venv.py
--- a/pre_commit/languages/python_venv.py
+++ b/pre_commit/languages/python_venv.py
@@ -1,6 +1,7 @@
from __future__ import unicode_literals
import os.path
+import sys
from pre_commit.languages import python
from pre_commit.util import CalledProcessError
@@ -10,6 +11,13 @@
ENVIRONMENT_DIR = 'py_venv'
+def get_default_version(): # pragma: no cover (version specific)
+ if sys.version_info < (3,):
+ return 'python3'
+ else:
+ return python.get_default_version()
+
+
def orig_py_exe(exe): # pragma: no cover (platform specific)
"""A -mvenv virtualenv made from a -mvirtualenv virtualenv installs
packages to the incorrect location. Attempt to find the _original_ exe
@@ -43,6 +51,5 @@ def make_venv(envdir, python):
cmd_output(orig_py_exe(python), '-mvenv', envdir, cwd='/')
-get_default_version = python.get_default_version
_interface = python.py_interface(ENVIRONMENT_DIR, make_venv)
in_env, healthy, run_hook, install_environment = _interface
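A hedged, simplified gloss on the `get_default_version` hunk above (relevant to the report that follows): when pre-commit itself runs under Python 2, which has no `venv` module, `language: python_venv` should fall back to whatever `python3` is on PATH rather than the running interpreter. The second branch below is a stand-in for `python.get_default_version`, not its real body:
```python
import sys


def get_default_version():
    if sys.version_info < (3,):
        # Python 2 cannot run `-mvenv`; ask for a python3 from PATH instead.
        return 'python3'
    # Simplified stand-in: reuse the running interpreter's major.minor.
    return 'python{}.{}'.format(*sys.version_info[:2])
```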
| python_venv language fails to use python3 interpreter and is using python2.7 instead
Apparently pre-commit failed to use python3 interpreter when I tried to add a hook and thus failed because venv module was not installed on default python2.7!
```
$ pre-commit try-repo ../python-license-check [19:55:27]
[INFO] Initializing environment for ../python-license-check.
===============================================================================
Using config:
===============================================================================
repos:
- repo: ../python-license-check
rev: 4048cf3844dbbf45690c153a7da7f532585ec87c
hooks:
- id: liccheck
===============================================================================
[INFO] Installing environment for ../python-license-check.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: Command: ('/Users/ssbarnea/.pyenv/versions/2.7.14/bin/python2.7', '-mvenv', '/var/folders/br/99tfdvcs3vvfwdk69z7f0xmc0000gn/T/tmpayl0P5/repoHa7_qe/py_venv-python2.7')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
/Users/ssbarnea/.pyenv/versions/2.7.14/bin/python2.7: No module named venv
Check the log at /Users/ssbarnea/.cache/pre-commit/pre-commit.log
FAIL: 1
ssbarnea@smac: ~/os/jira master ⚡ $ cat ../python-license-check/.pre-commit-hooks.yaml [19:55:34]
- id: liccheck
name: Validates dependency licenses for Python packages
description: This validator validates a pre-commit hooks manifest file
entry: liccheck -s setup.cfg -r requirements.txt
language: python_venv
```
Based on the documentation I was expecting to see pre-commit using the `python3` executable for calling the venv module.
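For concreteness, an editorial illustration of that expectation (paths are illustrative; the second call assumes a `python3` interpreter is on PATH):
```
$ python2.7 -mvenv /tmp/env    # what pre-commit effectively ran
/usr/bin/python2.7: No module named venv
$ python3 -mvenv /tmp/env      # what was expected for language: python_venv
$ /tmp/env/bin/python -c 'import venv; print("ok")'
ok
```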
| pre-commit doesn't really make any attempt at fishing out installed executables. By default it tries the executable you're using.
For `language: python_venv` when running pre-commit under python2, you'll need to tell pre-commit where to find a suitable executable via `language_version`:
```python
- id: liccheck
language: python_venv
language_version: python3
...
```
Note that for hooks which are being distributed, it is suggested to use `language: python` over `language: python_venv` as it is more portable.
defaulting `language: python3` would potentially lead to undesirable results if someone is expecting a newer / older `python3.x` version, so I don't think (?) it's an appropriate default here | 2018-07-17T23:49:09 |
|
pre-commit/pre-commit | 803 | pre-commit__pre-commit-803 | [
"772"
] | f2da2c435c1123c4edc4ca9701c245cc25b0a50d | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -256,7 +256,7 @@ def run(runner, store, args, environ=os.environ):
for _, hook in repo.hooks:
if (
(not args.hook or hook['id'] == args.hook) and
- not hook['stages'] or args.hook_stage in hook['stages']
+ (not hook['stages'] or args.hook_stage in hook['stages'])
):
repo_hooks.append((repo, hook))
| diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -762,3 +762,34 @@ def test_include_exclude_does_search_instead_of_match(some_filenames):
def test_include_exclude_exclude_removes_files(some_filenames):
ret = _filter_by_include_exclude(some_filenames, '', r'\.py$')
assert ret == ['.pre-commit-hooks.yaml']
+
+
+def test_args_hook_only(cap_out, store, repo_with_passing_hook):
+ config = OrderedDict((
+ ('repo', 'local'),
+ (
+ 'hooks', (
+ OrderedDict((
+ ('id', 'flake8'),
+ ('name', 'flake8'),
+ ('entry', "'{}' -m flake8".format(sys.executable)),
+ ('language', 'system'),
+ ('stages', ['commit']),
+ )), OrderedDict((
+ ('id', 'do_not_commit'),
+ ('name', 'Block if "DO NOT COMMIT" is found'),
+ ('entry', 'DO NOT COMMIT'),
+ ('language', 'pygrep'),
+ )),
+ ),
+ ),
+ ))
+ add_config_to_repo(repo_with_passing_hook, config)
+ stage_a_file()
+ ret, printed = _do_run(
+ cap_out,
+ store,
+ repo_with_passing_hook,
+ run_opts(hook='do_not_commit'),
+ )
+ assert b'flake8' not in printed
| `stages: [commit]` hooks will run with `pre-commit run otherhookid`
minor logic bug, good new-contributor ticket
Easy to reproduce on pre-commit itself:
```diff
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index a146bd2..7bb382d 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -3,6 +3,7 @@ repos:
rev: v1.2.3
hooks:
- id: trailing-whitespace
+ stages: [commit]
- id: end-of-file-fixer
- id: autopep8-wrapper
- id: check-docstring-first
```
```console
$ pre-commit run end-of-file-fixer --all-files
Trim Trailing Whitespace.................................................Passed
Fix End of Files.........................................................Passed
```
(it should have only run `end-of-file-fixer`, but it also ran `trailing-whitespace` due to a logic error).
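The root cause is Python operator precedence: `and` binds more tightly than `or`, so without the extra parentheses the stage filter escapes the hook-id filter. An editorial sketch with simplified stand-in names (`args_hook`, `hook_stage`):
```python
args_hook = 'end-of-file-fixer'                # `pre-commit run end-of-file-fixer`
hook = {'id': 'trailing-whitespace', 'stages': ['commit']}
hook_stage = 'commit'

# buggy form: parses as (A and B) or C, so any hook whose `stages` contains
# the current stage is selected even though a specific hook id was requested
buggy = (
    (not args_hook or hook['id'] == args_hook) and
    not hook['stages'] or hook_stage in hook['stages']
)

# fixed form: both the hook-id filter and the stage filter must pass
fixed = (
    (not args_hook or hook['id'] == args_hook) and
    (not hook['stages'] or hook_stage in hook['stages'])
)

print(buggy)  # True  -> trailing-whitespace runs even though it wasn't asked for
print(fixed)  # False -> correctly filtered out
```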
| 2018-07-20T01:49:48 |
|
pre-commit/pre-commit | 812 | pre-commit__pre-commit-812 | [
"807"
] | cf691e85c89dbe16dce7e0a729649b2e19d4d9ad | diff --git a/pre_commit/languages/all.py b/pre_commit/languages/all.py
--- a/pre_commit/languages/all.py
+++ b/pre_commit/languages/all.py
@@ -2,6 +2,7 @@
from pre_commit.languages import docker
from pre_commit.languages import docker_image
+from pre_commit.languages import fail
from pre_commit.languages import golang
from pre_commit.languages import node
from pre_commit.languages import pcre
@@ -54,6 +55,7 @@
languages = {
'docker': docker,
'docker_image': docker_image,
+ 'fail': fail,
'golang': golang,
'node': node,
'pcre': pcre,
diff --git a/pre_commit/languages/fail.py b/pre_commit/languages/fail.py
new file mode 100644
--- /dev/null
+++ b/pre_commit/languages/fail.py
@@ -0,0 +1,15 @@
+from __future__ import unicode_literals
+
+from pre_commit.languages import helpers
+
+
+ENVIRONMENT_DIR = None
+get_default_version = helpers.basic_get_default_version
+healthy = helpers.basic_healthy
+install_environment = helpers.no_install
+
+
+def run_hook(prefix, hook, file_args):
+ out = hook['entry'].encode('UTF-8') + b'\n\n'
+ out += b'\n'.join(f.encode('UTF-8') for f in file_args) + b'\n'
+ return 1, out, b''
| diff --git a/tests/repository_test.py b/tests/repository_test.py
--- a/tests/repository_test.py
+++ b/tests/repository_test.py
@@ -589,6 +589,29 @@ def test_local_rust_additional_dependencies(store):
assert _norm_out(ret[1]) == b"Hello World!\n"
+def test_fail_hooks(store):
+ config = {
+ 'repo': 'local',
+ 'hooks': [{
+ 'id': 'fail',
+ 'name': 'fail',
+ 'language': 'fail',
+ 'entry': 'make sure to name changelogs as .rst!',
+ 'files': r'changelog/.*(?<!\.rst)$',
+ }],
+ }
+ repo = Repository.create(config, store)
+ (_, hook), = repo.hooks
+ ret = repo.run_hook(hook, ('changelog/1234.bugfix', 'changelog/wat'))
+ assert ret[0] == 1
+ assert ret[1] == (
+ b'make sure to name changelogs as .rst!\n'
+ b'\n'
+ b'changelog/1234.bugfix\n'
+ b'changelog/wat\n'
+ )
+
+
def test_reinstall(tempdir_factory, store, log_info_mock):
path = make_repo(tempdir_factory, 'python_hooks_repo')
config = make_config_from_repo(path)
| Create `language: fail` for filename-only checks
For example, it would be useful here:
https://github.com/pytest-dev/pytest/pull/3765/files
v1 could just print the filenames and exit nonzero, potentially with a `--message` option.
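For reference, usage could look roughly like the configuration exercised by the tests in this PR; the hook id and name below are made up, while the `entry` message and `files` pattern are taken from the test:
```
# .pre-commit-config.yaml (illustrative)
repos:
- repo: local
  hooks:
  - id: changelog-filenames
    name: Block misnamed changelog entries
    language: fail
    entry: make sure to name changelogs as .rst!
    files: changelog/.*(?<!\.rst)$
```
Any staged file matching `files` makes the hook fail, and the `entry` text is printed along with the offending filenames.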
| +1, here's another case where this could be useful:
https://github.com/ocf/ocfweb/blob/f166a3159da560eb871ebd3a375606ba5a240912/.pre-commit-config.yaml#L67-L71 | 2018-08-11T01:12:15 |
pre-commit/pre-commit | 832 | pre-commit__pre-commit-832 | [
"831"
] | 4b0a22a8ba297a42aa64cc292e7f235a326793d1 | diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -31,16 +31,13 @@ def get_root():
def get_git_dir(git_root):
- def _git_dir(opt):
- return os.path.normpath(os.path.join(
- git_root,
- cmd_output('git', 'rev-parse', opt, cwd=git_root)[1].strip(),
- ))
-
- try:
- return _git_dir('--git-common-dir')
- except CalledProcessError: # pragma: no cover (git < 2.5)
- return _git_dir('--git-dir')
+ opts = ('--git-common-dir', '--git-dir')
+ _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)
+ for line, opt in zip(out.splitlines(), opts):
+ if line != opt: # pragma: no branch (git < 2.5)
+ return os.path.normpath(os.path.join(git_root, line))
+ else:
+ raise AssertionError('unreachable: no git dir')
def get_remote_url(git_root):
| git.get_git_dir() appears to break with older versions of git
When you install the current pre-commit (v1.11.0) on a system using an older git, e.g. 2.1.4, the `git rev-parse --git-common-dir` ultimately issued by `git.get_git_dir()` yields the result `--git-common-dir` instead of `.git` like you'd expect. This causes pre-commit to get installed in a directory literally named "--git-common-dir", and thus not work.
I happened to encounter this using a docker image for an older version of python, which used an older git. Examples to demonstrate the issue:
```
$ docker run python:2.7.9 sh -c "git init foo && cd foo && git rev-parse --git-common-dir && git --version"
Initialized empty Git repository in /foo/.git/
--git-common-dir
git version 2.1.4
$ docker run python:2.7.15 sh -c "git init foo && cd foo && git rev-parse --git-common-dir && git --version"
Initialized empty Git repository in /foo/.git/
.git
git version 2.11.0
```
In git.get_git_dir() as it stands, it appears that the older-git behavior is anticipated: older gits are expected to throw a `CalledProcessError` when `git rev-parse --git-common-dir` is called, at which point pre-commit falls back to `git rev-parse --git-dir`. But evidently that's not working as intended?
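An editorial illustration of the combined invocation used by the patch above; the new-git output follows from the docker example, while the old-git output is inferred from the single-option behaviour shown there:
```
$ git rev-parse --git-common-dir --git-dir    # git 2.11.0
.git
.git
$ git rev-parse --git-common-dir --git-dir    # git 2.1.4 (inferred)
--git-common-dir
.git
```
The fixed `get_git_dir()` then takes the first output line that is not literally equal to its own option name.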
| yikes! Thanks for the report; I'll get a fix for this
Looks like this was depending on the PARSEOPT mode of `rev-parse` which isn't the default
This is a regression introduced in [v1.10.5](https://github.com/pre-commit/pre-commit/releases/tag/v1.10.5)
actually hmm, parseopt seems insufficient as well | 2018-09-22T15:57:48 |