hash (stringlengths: 40) | diff (stringlengths: 131-26.7k) | message (stringlengths: 7-694) | project (stringlengths: 5-67) | split (stringclasses: 1, value: train) | diff_languages (stringlengths: 2-24)
---|---|---|---|---|---
a7412d56ea02e917ec6cb92effc4ccb28c3f46bd
|
diff --git a/spec/attack_spec.rb b/spec/attack_spec.rb
index <HASH>..<HASH> 100644
--- a/spec/attack_spec.rb
+++ b/spec/attack_spec.rb
@@ -34,7 +34,7 @@ describe 'server attacks' do
NATS.start(:uri => TEST_SERVER, :autostart => false, :reconnect => false) do
NATS.publish('foo', BIG_MSG) { NATS.stop }
end
- end.to raise_error NATS::ServerError
+ end.to raise_error
NATS.connected?.should be_false
end
|
Just care about error, on slow machine could be FP before server
|
nats-io_ruby-nats
|
train
|
rb
|
5815d6d2267530afde87427ea22dc5701ccbc629
|
diff --git a/dwave/cloud/solver.py b/dwave/cloud/solver.py
index <HASH>..<HASH> 100644
--- a/dwave/cloud/solver.py
+++ b/dwave/cloud/solver.py
@@ -156,9 +156,9 @@ class Solver(object):
... solver = client.get_solver()
... u, v = next(iter(solver.edges))
... computation = solver.sample_ising({u: -1, v: 1},{}, num_reads=5) # doctest: +SKIP
+ ... for i in range(5):
+ ... print(computation.samples[i][u], computation.samples[i][v])
...
- >>> for i in range(5): # doctest: +SKIP
- ... print(computation.samples[i][u], computation.samples[i][v])
...
(1, -1)
(1, -1)
@@ -194,9 +194,9 @@ class Solver(object):
... u, v = next(iter(solver.edges))
... Q = {(u, u): -1, (u, v): 0, (v, u): 2, (v, v): -1}
... computation = solver.sample_qubo(Q, num_reads=5)
+ ... for i in range(5):
+ ... print(computation.samples[i][u], computation.samples[i][v])
...
- >>> for i in range(5): # doctest: +SKIP
- ... print(computation.samples[i][u], computation.samples[i][v])
...
(0, 1)
(1, 0)
|
Fix for Radomir's feedback
|
dwavesystems_dwave-cloud-client
|
train
|
py
|
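The change above folds the sample-printing loop into the same doctest session as the `sample_ising` call, so a single `# doctest: +SKIP` covers the whole example. A minimal sketch of how doctest treats `...` continuation lines as part of one example:

```python
import doctest

# One interactive session: the for-loop body is attached to its '>>>' line
# via '...' continuations, so the loop counts as a single doctest example.
sample = """
>>> total = 0
>>> for i in range(3):
...     total += i
...
>>> total
3
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(sample, {}, "sample", None, 0)
results = doctest.DocTestRunner(verbose=False).run(test)
```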
bb884b7cc6deb3a3caf14bd5d1d1f29a7e5c7c6f
|
diff --git a/app/controllers/stats_controller.rb b/app/controllers/stats_controller.rb
index <HASH>..<HASH> 100644
--- a/app/controllers/stats_controller.rb
+++ b/app/controllers/stats_controller.rb
@@ -4,6 +4,6 @@ class StatsController < ApplicationController
@number_of_users = User.count
@number_of_downloads = GemDownload.total_count
@most_downloaded = Rubygem.by_downloads.limit(10).includes(:gem_download).to_a
- @most_downloaded_count = @most_downloaded.first.gem_download.count
+ @most_downloaded_count = @most_downloaded.first && @most_downloaded.first.gem_download.count
end
end
diff --git a/test/functional/stats_controller_test.rb b/test/functional/stats_controller_test.rb
index <HASH>..<HASH> 100644
--- a/test/functional/stats_controller_test.rb
+++ b/test/functional/stats_controller_test.rb
@@ -42,6 +42,14 @@ class StatsControllerTest < ActionController::TestCase
end
end
+ context "on GET to index with no downloads" do
+ setup do
+ get :index
+ end
+
+ should respond_with :success
+ end
+
context "on GET to index with multiple gems" do
setup do
rg1 = create(:rubygem, downloads: 10, number: "1")
|
Prevent stats page error when no downloads
If the stats page was visited while there were no downloads in the
database, a NoMethodError would be thrown due to
`@most_downloaded.first` being nil. This commit prevents this by
checking it is available before accessing it.
While this issue wouldn't occur in production it could in development
after following the standard instructions for setting up a development
environment in CONTRIBUTING.md.
|
rubygems_rubygems.org
|
train
|
rb,rb
|
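The fix above guards `@most_downloaded.first` before calling a method on it. A Python sketch of the same short-circuit guard (the dict shape is illustrative, not the actual Rails model):

```python
def top_download_count(most_downloaded):
    """Download count of the top gem, or None when the list is empty.

    Mirrors Ruby's `a && a.b` idiom: short-circuit before dereferencing,
    instead of raising the equivalent of a NoMethodError on nil.
    """
    first = most_downloaded[0] if most_downloaded else None
    return first["downloads"] if first is not None else None

empty_case = top_download_count([])
normal_case = top_download_count([{"downloads": 10}, {"downloads": 3}])
```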
bc58442664db3a77e50c07db7a348728fda23e1a
|
diff --git a/lib/phusion_passenger/railz/application_spawner.rb b/lib/phusion_passenger/railz/application_spawner.rb
index <HASH>..<HASH> 100644
--- a/lib/phusion_passenger/railz/application_spawner.rb
+++ b/lib/phusion_passenger/railz/application_spawner.rb
@@ -300,7 +300,7 @@ private
if File.exist?('config/preinitializer.rb')
require 'config/preinitializer'
end
- require 'config/environment'
+ require File.expand_path('config/environment')
if ActionController::Base.page_cache_directory.blank?
ActionController::Base.page_cache_directory = "#{RAILS_ROOT}/public"
end
@@ -387,3 +387,4 @@ end
end # module Railz
end # module PhusionPassenger
+
|
Fix loading of config/environment.rb on Ruby <I>.
|
phusion_passenger
|
train
|
rb
|
807919d3307c7dcffd2b39b3a81ea035bd5d28cf
|
diff --git a/raiden/raiden_service.py b/raiden/raiden_service.py
index <HASH>..<HASH> 100644
--- a/raiden/raiden_service.py
+++ b/raiden/raiden_service.py
@@ -503,7 +503,7 @@ class RaidenService(Runnable):
def set_node_network_state(self, node_address, network_state):
state_change = ActionChangeNodeNetworkState(node_address, network_state)
- self.wal.log_and_dispatch(state_change)
+ self.handle_state_change(state_change)
def start_health_check_for(self, node_address):
# This function is a noop during initialization. It can be called
@@ -698,7 +698,7 @@ class RaidenService(Runnable):
def leave_all_token_networks(self):
state_change = ActionLeaveAllNetworks()
- self.wal.log_and_dispatch(state_change)
+ self.handle_state_change(state_change)
def close_and_settle(self):
log.info('raiden will close and settle all channels now')
|
Dispatch all state changes to handle_state_change
To ensure all state changes are logged, we should use the
handle_state_change method
|
raiden-network_raiden
|
train
|
py
|
63169b139a38f01342f07e7ff56b285a4c986e1d
|
diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py
index <HASH>..<HASH> 100644
--- a/aiohttp/__init__.py
+++ b/aiohttp/__init__.py
@@ -1,4 +1,4 @@
-__version__ = '3.0.0b1'
+__version__ = '3.0.0b2'
# This relies on each of the submodules having an __all__ variable.
|
Bump to <I>b2
|
aio-libs_aiohttp
|
train
|
py
|
e4079605993907138aff2a47f61a176a0d3a530d
|
diff --git a/aikif/cls_log.py b/aikif/cls_log.py
index <HASH>..<HASH> 100644
--- a/aikif/cls_log.py
+++ b/aikif/cls_log.py
@@ -28,7 +28,9 @@ class Log:
self.logFileSource = self.log_folder + os.sep + 'source.log'
self.logFileCommand = self.log_folder + os.sep + 'command.log'
self.logFileResult = self.log_folder + os.sep + 'result.log'
+ ensure_dir(self.logFileCommand) # need to pass file not the folder for this to work
+
def __str__(self):
return self.log_folder
@@ -46,7 +48,7 @@ class Log:
usr = GetUserName()
hst = GetHostName()
#print('_log : os.path.dirname(fname) = ', os.path.dirname(fname))
- ensure_dir(os.path.dirname(fname))
+ #ensure_dir(os.path.dirname(fname))
if prg == '':
prg = 'cls_log.log' # GetModuleName()
@@ -177,6 +179,7 @@ class LogSummary:
def ensure_dir(f):
""" NOTE - not sure if this works exactly - needs a separate test """
d = os.path.dirname(f)
+
if not os.path.exists(d):
os.makedirs(d)
|
fix to make ensure_dir actually work
|
acutesoftware_AIKIF
|
train
|
py
|
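The fix passes the log *file* path so that `os.path.dirname` yields a folder to create. A sketch of the corrected helper; modern Python can also lean on `os.makedirs(..., exist_ok=True)` to make it idempotent:

```python
import os
import tempfile

def ensure_dir(file_path):
    """Create the parent directory of file_path (pass a file, not a folder)."""
    d = os.path.dirname(file_path)
    if d:
        os.makedirs(d, exist_ok=True)  # no error if the directory already exists
    return d

base = tempfile.mkdtemp()
log_file = os.path.join(base, "logs", "command.log")
created = ensure_dir(log_file)
```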
a4327b9fb07f1ea678e92c1f06aeb22c129827e2
|
diff --git a/configure.py b/configure.py
index <HASH>..<HASH> 100755
--- a/configure.py
+++ b/configure.py
@@ -297,6 +297,7 @@ if platform.is_msvc():
# We never have strings or arrays larger than 2**31.
'/wd4267',
'/DNOMINMAX', '/D_CRT_SECURE_NO_WARNINGS',
+ '/D_HAS_EXCEPTIONS=0',
'/DNINJA_PYTHON="%s"' % options.with_python]
if options.bootstrap:
# In bootstrap mode, we have no ninja process to catch /showIncludes
|
Set _HAS_EXCEPTIONS=0 on MSVC
|
ninja-build_ninja
|
train
|
py
|
b9d558700965bc95be0736560bdba4245d446a2d
|
diff --git a/lib/nucleon/action/node/provision.rb b/lib/nucleon/action/node/provision.rb
index <HASH>..<HASH> 100644
--- a/lib/nucleon/action/node/provision.rb
+++ b/lib/nucleon/action/node/provision.rb
@@ -68,7 +68,7 @@ class Provision < Nucleon.plugin_class(:nucleon, :cloud_action)
collection.each do |name, provisioner|
if supported_profiles = provisioner.supported_profiles(node, profiles)
- supported_profiles.each do |profile|
+ provisioner.profile_dependencies(node, supported_profiles).each do |profile|
info('profile', { :provider => yellow(provider), :profile => green(profile.to_s) })
end
unless settings[:check_profiles]
|
Showing the dependent profiles when running provision node action provider.
|
coralnexus_corl
|
train
|
rb
|
3a023faec8474943d2e47402875d668d506eca55
|
diff --git a/GlobalFunctions.php b/GlobalFunctions.php
index <HASH>..<HASH> 100644
--- a/GlobalFunctions.php
+++ b/GlobalFunctions.php
@@ -640,9 +640,8 @@ function kfApiExport( $data = array( 'krApiExport' => 'Example' ), $format = '',
break;
default:
- if ( function_exists( 'http_response_code' ) ) {
- http_response_code( 400 /* Bad Request */ );
- }
+ // HTTP 400 Bad Request
+ http_response_code( 400 );
header( 'Content-Type: text/plain; charset=utf-8', /*replace=*/true );
echo 'Invalid format.';
exit;
|
Make use of http_response_code unconditional
BaseTool now comes with a polyfill for that as of e<I>.
|
Krinkle_toollabs-base
|
train
|
php
|
5809b118c20b0d2463be14904558d47dfbffd4da
|
diff --git a/src/Locators/Locators.php b/src/Locators/Locators.php
index <HASH>..<HASH> 100644
--- a/src/Locators/Locators.php
+++ b/src/Locators/Locators.php
@@ -47,7 +47,9 @@ class Locators
* @var array
* @since 3.7.4.2
*/
- public $frontEndLoginSuccess = ['xpath' => "//form[@class='mod-login-logout form-vertical']/div[@class='mod-login-logout__button logout-button']"];
+ public $frontEndLoginSuccess = [
+ 'xpath' => "//form[@class='mod-login-logout form-vertical']/div[@class='mod-login-logout__button logout-button']"
+ ];
/**
* Locator for the Logout Button
|
Fix line length code style issue (#<I>)
|
joomla-projects_joomla-browser
|
train
|
php
|
059c661c2f7957a3a561c439c4529363b7eb88f8
|
diff --git a/lib/travis/enqueue/services/enqueue_jobs.rb b/lib/travis/enqueue/services/enqueue_jobs.rb
index <HASH>..<HASH> 100644
--- a/lib/travis/enqueue/services/enqueue_jobs.rb
+++ b/lib/travis/enqueue/services/enqueue_jobs.rb
@@ -34,20 +34,21 @@ module Travis
private
def enqueue_all
- jobs.group_by(&:owner).each do |owner, jobs|
- next unless owner
- limit = Limit.new(owner, jobs)
- enqueue(limit.queueable)
- reports[owner.login] = limit.report
+ grouped_jobs = jobs.group_by(&:owner)
+
+ Metriks.timer('enqueue.total').time do
+ grouped_jobs.each do |owner, jobs|
+ next unless owner
+ limit = Limit.new(owner, jobs)
+ enqueue(limit.queueable)
+ reports[owner.login] = limit.report
+ end
end
end
def enqueue(jobs)
- _jobs = jobs
-
-
- Metriks.timer('enqueue.publish_and_enqueue_total').time do
- _jobs.each do |job|
+ Metriks.timer('enqueue.enqueue_per_owner').time do
+ jobs.each do |job|
publish(job)
Metriks.timer('enqueue.enqueue_job').time do
|
Fix measuring totals on enqueuing
Previous commit introduced total enqueue time in a place which is per
owner enqueue time.
|
travis-ci_travis-core
|
train
|
rb
|
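The restructure wraps the whole pass in a total timer and keeps a narrower per-owner timer inside it. A sketch of that nesting with a context manager (timer names copied from the diff; the job loop body is a stand-in):

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timer(name):
    # Accumulate elapsed wall time under the given metric name.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + (time.perf_counter() - start)

jobs_by_owner = {"alice": [1, 2], "bob": [3]}

with timer("enqueue.total"):                      # the whole enqueue pass
    for owner, jobs in jobs_by_owner.items():
        with timer("enqueue.enqueue_per_owner"):  # one owner's slice of it
            for job in jobs:
                pass  # publish(job) would happen here
```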
5e181ed0aeb829510f1c739a7dcc8328d2675258
|
diff --git a/actionpack/lib/action_dispatch/routing/route_set.rb b/actionpack/lib/action_dispatch/routing/route_set.rb
index <HASH>..<HASH> 100644
--- a/actionpack/lib/action_dispatch/routing/route_set.rb
+++ b/actionpack/lib/action_dispatch/routing/route_set.rb
@@ -673,15 +673,18 @@ module ActionDispatch
RESERVED_OPTIONS.each { |ro| path_options.delete ro }
path, params = generate(path_options, recall)
- params.merge!(options[:params] || {})
-
- ActionDispatch::Http::URL.url_for(options.merge!({
- :path => path,
- :script_name => script_name,
- :params => params,
- :user => user,
- :password => password
- }))
+
+ if options.key? :params
+ params.merge! options[:params]
+ end
+
+ options[:path] = path
+ options[:script_name] = script_name
+ options[:params] = params
+ options[:user] = user
+ options[:password] = password
+
+ ActionDispatch::Http::URL.url_for(options)
end
def call(env)
|
fewer hash allocations when calling url_for
|
rails_rails
|
train
|
rb
|
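The rewrite mutates the existing options hash in place rather than allocating merged copies. The same trade-off in Python, with dict mutation standing in for Ruby hash mutation (names are illustrative):

```python
def finalize_url_options(options, path, params, script_name=None):
    """Mutate `options` in place instead of building merged copies.

    Same intent as the diff: one dict is reused per call, so no
    temporary hashes are allocated by merge operations.
    """
    if "params" in options:
        params.update(options["params"])
    options["path"] = path
    options["script_name"] = script_name
    options["params"] = params
    return options

opts = {"host": "example.com", "params": {"page": 2}}
result = finalize_url_options(opts, "/gems", {"q": "rails"})
```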
d6f47017ad891c2cc3b17cf6360cc46550ac504e
|
diff --git a/guides/rails_guides/generator.rb b/guides/rails_guides/generator.rb
index <HASH>..<HASH> 100644
--- a/guides/rails_guides/generator.rb
+++ b/guides/rails_guides/generator.rb
@@ -189,7 +189,7 @@ module RailsGuides
anchors = Set.new
html.scan(/<h\d\s+id="([^"]+)/).flatten.each do |anchor|
if anchors.member?(anchor)
- puts "*** DUPLICATE ID: #{anchor}, please make sure that there're no headings with the same name at the same level."
+ puts "*** DUPLICATE ID: #{anchor}, please make sure that there are no headings with the same name at the same level."
else
anchors << anchor
end
|
Fix spelling
Change `there're` to `there are`
`there're` is not in the dictionary and we use `there are` in many places
<URL>
|
rails_rails
|
train
|
rb
|
4611646b5093cce8b19c4132cd9bace81eddbaf4
|
diff --git a/core/src/main/java/org/owasp/dependencycheck/analyzer/JarAnalyzer.java b/core/src/main/java/org/owasp/dependencycheck/analyzer/JarAnalyzer.java
index <HASH>..<HASH> 100644
--- a/core/src/main/java/org/owasp/dependencycheck/analyzer/JarAnalyzer.java
+++ b/core/src/main/java/org/owasp/dependencycheck/analyzer/JarAnalyzer.java
@@ -412,7 +412,7 @@ public class JarAnalyzer extends AbstractFileTypeAnalyzer {
newDependency.setName(String.format("%s:%s", groupId, pom.getArtifactId()));
newDependency.setPackagePath(String.format("%s:%s:%s", groupId, pom.getArtifactId(), version));
}
- newDependency.setDisplayFileName(String.format("%s (%s)", dependency.getDisplayFileName(), newDependency.getPackagePath()));
+ newDependency.setDisplayFileName(String.format("%s (shaded: %s)", dependency.getDisplayFileName(), newDependency.getPackagePath()));
newDependency.setVersion(version);
setPomEvidence(newDependency, pom, null);
if (dependency.getProjectReferences().size() > 0) {
|
minor update to display name for shaded jars
|
jeremylong_DependencyCheck
|
train
|
java
|
6728b9ec5e5b7830d8aa8d08f6f05f59a30d120b
|
diff --git a/endpoints_management/__init__.py b/endpoints_management/__init__.py
index <HASH>..<HASH> 100644
--- a/endpoints_management/__init__.py
+++ b/endpoints_management/__init__.py
@@ -16,7 +16,7 @@ from __future__ import absolute_import
from . import auth, config, control, gen
-__version__ = '1.2.0'
+__version__ = '1.2.1'
USER_AGENT = u'ESP'
SERVICE_AGENT = u'EF_PYTHON/' + __version__
|
Bump subminor version (<I> -> <I>)
|
cloudendpoints_endpoints-management-python
|
train
|
py
|
e892829155e6c1f2b777eca55845fd8d1014bf36
|
diff --git a/pkg/minikube/node/start.go b/pkg/minikube/node/start.go
index <HASH>..<HASH> 100644
--- a/pkg/minikube/node/start.go
+++ b/pkg/minikube/node/start.go
@@ -136,7 +136,7 @@ func Start(starter Starter, apiServer bool) (*kubeconfig.Settings, error) {
wg.Add(1)
go func() {
if err := CacheAndLoadImagesInConfig(); err != nil {
- out.FailureT("Unable to push cached images: {{error}}", out.V{"error": err})
+ out.FailureT("Unable to push cached images: {{.error}}", out.V{"error": err})
}
wg.Done()
}()
|
Missing period character in the error template
|
kubernetes_minikube
|
train
|
go
|
7347ff09930fb3005bdf049c9d9f3598bd2b0d5d
|
diff --git a/app/models/message.rb b/app/models/message.rb
index <HASH>..<HASH> 100644
--- a/app/models/message.rb
+++ b/app/models/message.rb
@@ -60,7 +60,7 @@ class Message
end
scope :page, lambda {|number| skip(self.get_offset(number))}
- scope :default_scope, order_by({"_id" => "-1"}).not_deleted.limit(LIMIT)
+ scope :default_scope, order_by({"created_at" => "-1"}).not_deleted.limit(LIMIT)
scope :time_range, lambda {|from, to| where(:created_at => {"$gte" => from}).where(:created_at => {"$lte" => to})}
def self.find_by_id(_id)
|
Sort messages by created_at by default.
Keeps messages in correct order when they are
coming from several servers in the same second.
|
Graylog2_graylog2-server
|
train
|
rb
|
514366008b265c65114308b1e794356967d3a0a2
|
diff --git a/packages/openneuro-server/datalad/snapshots.js b/packages/openneuro-server/datalad/snapshots.js
index <HASH>..<HASH> 100644
--- a/packages/openneuro-server/datalad/snapshots.js
+++ b/packages/openneuro-server/datalad/snapshots.js
@@ -11,6 +11,13 @@ const snapshotKey = (datasetId, tag) => {
return `openneuro:snapshot:${datasetId}:${tag}`
}
+/**
+ * Get a list of all snapshot tags available for a dataset
+ *
+ * This is equivalent to `git tag` on the repository
+ *
+ * @param {string} datasetId Dataset accession number
+ */
export const getSnapshots = datasetId => {
const url = `${uri}/datasets/${datasetId}/snapshots`
return request
@@ -21,6 +28,11 @@ export const getSnapshots = datasetId => {
})
}
+/**
+ * Get the contents of a snapshot (files, git metadata) from datalad-service
+ * @param {string} datasetId Dataset accession number
+ * @param {string} tag Tag name to retrieve
+ */
export const getSnapshot = (datasetId, tag) => {
const url = `${uri}/datasets/${datasetId}/snapshots/${tag}`
const key = snapshotKey(datasetId, tag)
|
Server: Add some documentation comments to snapshot functions.
|
OpenNeuroOrg_openneuro
|
train
|
js
|
cffcd1120bbc02eb4a6e9b702df9297158171ff2
|
diff --git a/src/scene/scene_worker.js b/src/scene/scene_worker.js
index <HASH>..<HASH> 100644
--- a/src/scene/scene_worker.js
+++ b/src/scene/scene_worker.js
@@ -302,6 +302,7 @@ const SceneWorker = Object.assign(self, {
// Info to return with each feature
let subset = {
type: feature.type,
+ id: feature.id,
properties: Object.assign({}, feature.properties, {
$source: context.source,
$layer: context.layer,
|
include feature.id with data returned from scene.queryFeatures()
|
tangrams_tangram
|
train
|
js
|
80658e8824cb2ca9daebc9ac4121aeb2688f1ce6
|
diff --git a/lib/varDiff.js b/lib/varDiff.js
index <HASH>..<HASH> 100644
--- a/lib/varDiff.js
+++ b/lib/varDiff.js
@@ -74,7 +74,7 @@ var varDiff = module.exports = function varDiff(ports){
return;
var calcInfo = portsCalcInfo[stratumPort];
- var options = ports[stratumPort];
+ var options = ports[stratumPort].varDiff;
var lastTs;
var lastRtc;
|
Update varDiff.js
options were not defined; turned out they're under a key in the ports object
|
zone117x_node-stratum-pool
|
train
|
js
|
1633fc0742f5d1c97f9eebd93f00be946c31a8e9
|
diff --git a/lib/active_model/associations/version.rb b/lib/active_model/associations/version.rb
index <HASH>..<HASH> 100644
--- a/lib/active_model/associations/version.rb
+++ b/lib/active_model/associations/version.rb
@@ -1,5 +1,5 @@
module ActiveModel
module Associations
- VERSION = "0.0.1"
+ VERSION = "0.0.2"
end
end
|
bump up version to <I>
|
joker1007_activemodel-associations
|
train
|
rb
|
9f14363a17f49cd48a93bfd5520244aca89a0a4e
|
diff --git a/lib/queue_classic/database.rb b/lib/queue_classic/database.rb
index <HASH>..<HASH> 100644
--- a/lib/queue_classic/database.rb
+++ b/lib/queue_classic/database.rb
@@ -48,10 +48,6 @@ module QC
execute("SELECT * FROM pg_stat_activity WHERE datname = '#{@name}' AND application_name = 'queue_classic'")
end
- def silence_warnings
- execute("SET client_min_messages TO 'warning'")
- end
-
def execute(sql)
connection.exec(sql)
end
@@ -77,7 +73,6 @@ module QC
@name = @db_params.path.gsub("/","")
@@connection = connect
@@connection.exec("SET application_name = 'queue_classic'")
- silence_warnings unless VERBOSE
end
@@connection
end
|
this can be done via psql console
|
QueueClassic_queue_classic
|
train
|
rb
|
3a683dd14520a15b4feb786de56cef3a95c63ba9
|
diff --git a/statefulj-framework/statefulj-framework-core/src/main/java/org/statefulj/framework/core/StatefulFactory.java b/statefulj-framework/statefulj-framework-core/src/main/java/org/statefulj/framework/core/StatefulFactory.java
index <HASH>..<HASH> 100644
--- a/statefulj-framework/statefulj-framework-core/src/main/java/org/statefulj/framework/core/StatefulFactory.java
+++ b/statefulj-framework/statefulj-framework-core/src/main/java/org/statefulj/framework/core/StatefulFactory.java
@@ -162,7 +162,7 @@ public class StatefulFactory implements BeanDefinitionRegistryPostProcessor, App
//
if (isStatefulFSM) {
- // Determine the controllerId - either explicit or derviced
+ // Determine the controllerId - either explicit or derived
//
controllerId = getControllerId(fsmAnnotation);
if (StringUtils.isEmpty(controllerId)) {
|
Fix misspelling - I do that a lot
|
statefulj_statefulj
|
train
|
java
|
724ef00d52d22a0db30711e41e3e763d5d5c8a19
|
diff --git a/client/modules/crm/src/views/lead/convert.js b/client/modules/crm/src/views/lead/convert.js
index <HASH>..<HASH> 100644
--- a/client/modules/crm/src/views/lead/convert.js
+++ b/client/modules/crm/src/views/lead/convert.js
@@ -100,7 +100,8 @@ Espo.define('crm:views/lead/convert', 'view', function (Dep) {
model.set(data[scope] || {}, {silent: true});
- this.createView(scope, 'views/record/edit', {
+ var convertEntityViewName = this.getMetadata().get(['clientDefs', scope, 'recordViews', 'edit']) || 'views/record/edit';
+ this.createView(scope, convertEntityViewName, {
model: model,
el: '#main .edit-container-' + Espo.Utils.toDom(scope),
buttonsPosition: false,
|
clientDefs view of entities for convert (#<I>)
|
espocrm_espocrm
|
train
|
js
|
eb777906dc82425c43372067157897eb82d0ac0b
|
diff --git a/engineer/engine.py b/engineer/engine.py
index <HASH>..<HASH> 100644
--- a/engineer/engine.py
+++ b/engineer/engine.py
@@ -76,6 +76,15 @@ def build(args=None):
mirror_folder(s, t)
logger.debug("Copied static files to %s." % relpath(t))
+ # Copy 'raw' content to output cache - first pass
+ # This first pass ensures that any static content - JS/LESS/CSS - that
+ # is needed by site-specific pages (like template pages) is available
+ # during the build
+ if settings.CONTENT_DIR.exists():
+ mirror_folder(settings.CONTENT_DIR,
+ settings.OUTPUT_CACHE_DIR,
+ delete_orphans=False)
+
# Copy theme static content to output dir
logger.debug("Copying theme static files to output cache.")
try:
@@ -194,7 +203,7 @@ def build(args=None):
file.write(feed_content)
logger.debug("Output %s." % relpath(file.name))
- # Copy 'raw' content to output cache
+ # Copy 'raw' content to output cache - second/final pass
if settings.CONTENT_DIR.exists():
mirror_folder(settings.CONTENT_DIR,
settings.OUTPUT_CACHE_DIR,
|
Copy raw content twice instead of only once.
Without this change, static content in a site's raw content folder
can't benefit from the Engineer build pipeline. With this change in
place, site-specific raw content can be compressed, preprocessed,
etc. just like theme content and built-in Engineer resources.
|
tylerbutler_engineer
|
train
|
py
|
a7a96d70de246482ca423d877bf2787c1bc61b89
|
diff --git a/bindings/vue/vue-onsenui/src/mixins/derive.js b/bindings/vue/vue-onsenui/src/mixins/derive.js
index <HASH>..<HASH> 100644
--- a/bindings/vue/vue-onsenui/src/mixins/derive.js
+++ b/bindings/vue/vue-onsenui/src/mixins/derive.js
@@ -3,8 +3,8 @@ import ons from 'onsenui';
import { camelize, eventToHandler, handlerToProp, capitalize } from '../internal/util';
/* Private */
+const dbb = 'onDeviceBackButton';
const _setupDBB = component => {
- const dbb = 'onDeviceBackButton';
// Call original handler or parent handler by default
const handler = component[dbb] || (component.$el[dbb] && component.$el[dbb]._callback) || (e => e.callParentHandler());
@@ -25,6 +25,7 @@ const _setupDBB = component => {
/* Public */
// Device Back Button Handler
const deriveDBB = {
+ emits: [handlerToProp(dbb)],
mounted() {
_setupDBB(this);
},
|
fix(vue): Add deviceBackButton event to emits key
|
OnsenUI_OnsenUI
|
train
|
js
|
4ca9b27828d62015cc4bbe1ff3e03cdf56774772
|
diff --git a/dirsync/syncer.py b/dirsync/syncer.py
index <HASH>..<HASH> 100644
--- a/dirsync/syncer.py
+++ b/dirsync/syncer.py
@@ -160,7 +160,7 @@ class Syncer(object):
left.add(path)
anc_dirs = re_path[:-1].split('/')
anc_dirs_path = ''
- for ad in anc_dirs:
+ for ad in anc_dirs[1:]:
anc_dirs_path = os.path.join(anc_dirs_path, ad)
left.add(anc_dirs_path)
|
fix(syncer): fix range issue introduced in <I>efc<I>
|
tkhyn_dirsync
|
train
|
py
|
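The fix iterates from `anc_dirs[1:]`, skipping the first split component. A sketch of why, assuming `re_path` carries a leading separator so `split('/')` yields an empty first element (the diff does not show how `re_path` is built, so this is an inference):

```python
import os

def ancestor_dirs(re_path):
    """Accumulate ancestor directory paths, skipping the first component.

    For '/b/c/' the trailing char is trimmed, split('/') gives
    ['', 'b', 'c'], and [1:] drops the empty leading element that the
    pre-fix code joined into every accumulated path.
    """
    anc_dirs = re_path[:-1].split("/")
    anc_dirs_path, out = "", []
    for ad in anc_dirs[1:]:
        anc_dirs_path = os.path.join(anc_dirs_path, ad)
        out.append(anc_dirs_path)
    return out
```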
3c19b3a1eb58a2d966205492deb2776952d19159
|
diff --git a/Swat/SwatUIParent.php b/Swat/SwatUIParent.php
index <HASH>..<HASH> 100644
--- a/Swat/SwatUIParent.php
+++ b/Swat/SwatUIParent.php
@@ -1,6 +1,4 @@
<?php
-require_once('Swat/SwatWidget.php');
-
/**
* Interface for widgets that are parents for other widgets.
*
|
Small change.
svn commit r<I>
|
silverorange_swat
|
train
|
php
|
9338202167e7b3fa6c2ead9079aba1058066e768
|
diff --git a/isort/isort.py b/isort/isort.py
index <HASH>..<HASH> 100644
--- a/isort/isort.py
+++ b/isort/isort.py
@@ -111,12 +111,7 @@ class SortImports(object):
if self.config['line_ending']:
self.line_separator = self.config['line_ending']
else:
- if '\r\n' in file_contents:
- self.line_separator = '\r\n'
- elif '\r' in file_contents:
- self.line_separator = '\r'
- else:
- self.line_separator = '\n'
+ self.line_separator = utils.infer_line_separator(file_contents)
self.in_lines = file_contents.split(self.line_separator)
self.original_length = len(self.in_lines)
diff --git a/isort/utils.py b/isort/utils.py
index <HASH>..<HASH> 100644
--- a/isort/utils.py
+++ b/isort/utils.py
@@ -52,3 +52,12 @@ def difference(a: Iterable[Any], b: Container[Any]) -> List[Any]:
if item not in b:
d.append(item)
return d
+
+
+def infer_line_separator(file_contents: str) -> str:
+ if '\r\n' in file_contents:
+ return '\r\n'
+ elif '\r' in file_contents:
+ return '\r'
+ else:
+ return '\n'
|
extract infer_line_separator method
|
timothycrosley_isort
|
train
|
py,py
|
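The extracted helper, reproduced with a quick check of its precedence: `'\r\n'` is tested before `'\r'`, so a file with mixed endings reports CRLF, and LF is the default when neither is found.

```python
def infer_line_separator(file_contents: str) -> str:
    """Return the detected line ending: CRLF beats bare CR; LF is the default."""
    if '\r\n' in file_contents:
        return '\r\n'
    elif '\r' in file_contents:
        return '\r'
    else:
        return '\n'
```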
05ec99eed9813a497426432fb9ecc38379bc4b5e
|
diff --git a/railties/lib/rails/commands/application.rb b/railties/lib/rails/commands/application.rb
index <HASH>..<HASH> 100644
--- a/railties/lib/rails/commands/application.rb
+++ b/railties/lib/rails/commands/application.rb
@@ -8,6 +8,6 @@ ARGV << "--help" if ARGV.empty?
require 'rubygems' if ARGV.include?("--dev")
require 'rails/generators'
-require 'generators/rails/app/app_generator'
+require 'rails/generators/rails/app/app_generator'
Rails::Generators::AppGenerator.start
\ No newline at end of file
|
bin/rails should use the new app generator path.
|
rails_rails
|
train
|
rb
|
aff35ad6698a78d641f45e81c4d9d1e2daa85cc8
|
diff --git a/spec/savon/operation_spec.rb b/spec/savon/operation_spec.rb
index <HASH>..<HASH> 100644
--- a/spec/savon/operation_spec.rb
+++ b/spec/savon/operation_spec.rb
@@ -41,17 +41,9 @@ describe Savon::Operation do
describe "#call" do
it "returns a response object" do
- expect(new_operation.call).to be_a(Savon::Response)
- end
-
- it "does not set the local :message_tag option if it is already specified" do
- response = new_operation.call(:message_tag => "doAuthenticate")
- expect(response.locals[:message_tag]).to eq("doAuthenticate")
+ operation = Savon::Operation.create(:authenticate, wsdl, globals)
+ expect(operation.call).to be_a(Savon::Response)
end
end
- def new_operation
- Savon::Operation.create(:authenticate, wsdl, globals)
- end
-
end
|
removed a spec left from the Operation refactoring
|
savonrb_savon
|
train
|
rb
|
a4f2ce7fa7e96e112b9253a37b76e369a2655438
|
diff --git a/src/services/spy-api.js b/src/services/spy-api.js
index <HASH>..<HASH> 100644
--- a/src/services/spy-api.js
+++ b/src/services/spy-api.js
@@ -133,7 +133,8 @@ angular.module('duScroll.spyAPI', ['duScroll.scrollContainerAPI'])
var addSpy = function(spy) {
var context = getContextForSpy(spy);
- getContextForSpy(spy).spies.push(spy);
+ if (!context) return;
+ context.spies.push(spy);
if (!context.container || !isElementInDocument(context.container)) {
if(context.container) {
context.container.off('scroll', context.handler);
|
Prevent error when scope disappears too quickly.
|
oblador_angular-scroll
|
train
|
js
|
0f6e84bca374bd229d4ffcae5f9edc468a48ed56
|
diff --git a/lib/utils/template_generator.js b/lib/utils/template_generator.js
index <HASH>..<HASH> 100644
--- a/lib/utils/template_generator.js
+++ b/lib/utils/template_generator.js
@@ -64,7 +64,7 @@ class TemplateGenerator {
}
}
} catch (e) {
- this.downloadFailed();
+ return this.downloadFailed();
}
utils.extractZip(tmpFilePath, fspath, {
map: file => {
|
return in the catch branch so the control flow is more clear
|
embark-framework_embark
|
train
|
js
|
7a3700e4bc507339924b3768489a7edfdfbd5082
|
diff --git a/install.js b/install.js
index <HASH>..<HASH> 100644
--- a/install.js
+++ b/install.js
@@ -129,7 +129,7 @@ var dependencies = Q.allSettled([
var flags = ['-DTHREADSAFE=ON', '-DBUILD_CLAR=OFF'];
// Windows flags.
- if (process.platform.indexOf('win') > -1) {
+ if (process.platform === 'win32') {
flags.push.apply(flags, [
'-DSTDCALL=OFF',
'-DBUILD_SHARED_LIBS=OFF',
|
Fix erroneous OS detection for installation in OS X.
|
nodegit_nodegit
|
train
|
js
|
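The bug: `'darwin'` contains the substring `'win'`, so the old check treated OS X as Windows. Exact comparison avoids the trap; the same pitfall exists with Python's `sys.platform`:

```python
def is_windows(platform: str) -> bool:
    """Exact match: 'win32' identifies Windows; a substring check does not."""
    return platform == "win32"

# The old substring test misfires on macOS:
buggy_macos = "win" in "darwin"     # True, i.e. wrong
fixed_macos = is_windows("darwin")  # False, correct
fixed_windows = is_windows("win32")
```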
f93d64e1638c8109fabea4ed1a5757b4047f1462
|
diff --git a/Eloquent/Model.php b/Eloquent/Model.php
index <HASH>..<HASH> 100644
--- a/Eloquent/Model.php
+++ b/Eloquent/Model.php
@@ -2396,10 +2396,18 @@ abstract class Model implements ArrayAccess, Arrayable, Jsonable, JsonSerializab
*
* @param int $options
* @return string
+ *
+ * @throws \InvalidArgumentException
*/
public function toJson($options = 0)
{
- return json_encode($this->jsonSerialize(), $options);
+ $json = json_encode($this->jsonSerialize(), $options);
+
+ if (JSON_ERROR_NONE !== json_last_error()) {
+ throw new InvalidArgumentException(json_last_error_msg());
+ }
+
+ return $json;
}
/**
|
Model::toJson() silently fails on error
I had non-UTF-8 chars in a legacy database and this method was silently failing.
I think it's a good improvement to throw an InvalidArgumentException
|
illuminate_database
|
train
|
php
|
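The PHP fix turns a silent `false` from `json_encode` into an exception. Python's `json.dumps` already raises on its own, but the wrap-and-raise pattern looks like this (the function name is illustrative):

```python
import json

def to_json(data, **opts):
    """Encode to JSON, raising ValueError rather than failing quietly."""
    try:
        return json.dumps(data, **opts)
    except (TypeError, ValueError) as exc:
        raise ValueError(f"JSON encoding failed: {exc}") from exc

ok = to_json({"a": 1})
try:
    to_json({"bad": {1, 2}})  # sets are not JSON-serializable
    failed_loudly = False
except ValueError:
    failed_loudly = True
```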
ee184267483c5c1b819f6e7ee893782a8167f2d1
|
diff --git a/rubygem/lib/zeus/rails.rb b/rubygem/lib/zeus/rails.rb
index <HASH>..<HASH> 100644
--- a/rubygem/lib/zeus/rails.rb
+++ b/rubygem/lib/zeus/rails.rb
@@ -24,14 +24,6 @@ require 'zeus/m'
module Zeus
class Rails < Plan
- def deprecated
- puts "Zeus 0.11.0 changed zeus.json. You'll have to rm zeus.json && zeus init."
- end
- alias_method :spec_helper, :deprecated
- alias_method :testrb, :deprecated
- alias_method :rspec, :deprecated
-
-
def after_fork
reconnect_activerecord
restart_girl_friday
|
Remove deprecation warning from <I>
|
burke_zeus
|
train
|
rb
|
8738bf5fb28ec906f47a46008455d31af2a6e6c7
|
diff --git a/sandbox/namespace_linux.go b/sandbox/namespace_linux.go
index <HASH>..<HASH> 100644
--- a/sandbox/namespace_linux.go
+++ b/sandbox/namespace_linux.go
@@ -5,7 +5,6 @@ import (
"net"
"os"
"os/exec"
- "path/filepath"
"runtime"
"sync"
"syscall"
@@ -48,9 +47,6 @@ func createBasePath() {
panic("Could not create net namespace path directory")
}
- // cleanup any stale namespace files if any
- cleanupNamespaceFiles()
-
// Start the garbage collection go routine
go removeUnusedPaths()
}
@@ -155,24 +151,6 @@ func createNetworkNamespace(path string, osCreate bool) (*Info, error) {
return info, nil
}
-func cleanupNamespaceFiles() {
- filepath.Walk(prefix, func(path string, info os.FileInfo, err error) error {
- stat, err := os.Stat(path)
- if err != nil {
- return err
- }
-
- if stat.IsDir() {
- return filepath.SkipDir
- }
-
- syscall.Unmount(path, syscall.MNT_DETACH)
- os.Remove(path)
-
- return nil
- })
-}
-
func unmountNamespaceFile(path string) {
if _, err := os.Stat(path); err == nil {
syscall.Unmount(path, syscall.MNT_DETACH)
|
Remove the init-time cleanup of namespace files
Removing this as it may cause problems when
multiple instances are running.
|
docker_libnetwork
|
train
|
go
|
0d483e750ea6b60537484fd059364e9a01f76ef6
|
diff --git a/src/Parser.php b/src/Parser.php
index <HASH>..<HASH> 100644
--- a/src/Parser.php
+++ b/src/Parser.php
@@ -675,6 +675,9 @@ class Parser
private function decodeHeader($input)
{
//return (is_array($input)) ? iconv_mime_decode($input[0], 2, 'UTF-8') : iconv_mime_decode($input, 2, 'UTF-8');
+ if(is_array($input))
+ $input = $input[0];
+
$resp = imap_utf8(trim($input));
if(preg_match("/=\?/", $resp))
|
Debug HHVM Issue #<I>
|
php-mime-mail-parser_php-mime-mail-parser
|
train
|
php
|
96d58ce02e66237a9aaef5f3a4ec226478aa2eec
|
diff --git a/src/notebook/components/notebook.js b/src/notebook/components/notebook.js
index <HASH>..<HASH> 100644
--- a/src/notebook/components/notebook.js
+++ b/src/notebook/components/notebook.js
@@ -73,9 +73,18 @@ class Notebook extends React.Component {
const belowFold = (cellTop + cellHeight) > (viewportOffset + viewportHeight);
const aboveFold = cellTop < viewportOffset;
- if (aboveFold || belowFold) {
+ if (aboveFold) {
document.body.scrollTop = cellTop;
}
+
+ if (belowFold) {
+ if (cellHeight > viewportHeight) {
+ document.body.scrollTop = cellTop;
+ } else {
+ const offset = viewportHeight - cellHeight;
+ document.body.scrollTop = cellTop - offset;
+ }
+ }
}
}
|
Move cells only as far as they need to go when below fold. Anchor cells that are larger than the viewport to the top.
|
nteract_nteract
|
train
|
js
|
e473b9743bc12d8a900de04148140b8dcebab704
|
diff --git a/gwpy/timeseries/io/cache.py b/gwpy/timeseries/io/cache.py
index <HASH>..<HASH> 100644
--- a/gwpy/timeseries/io/cache.py
+++ b/gwpy/timeseries/io/cache.py
@@ -34,6 +34,9 @@ from ...io.cache import cache_segments
from .. import (TimeSeries, TimeSeriesList, TimeSeriesDict,
StateVector, StateVectorDict)
+# set maximum number of channels with which to still use lalframe
+MAX_LALFRAME_CHANNELS = 4
+
def read_cache(cache, channel, start=None, end=None, resample=None,
gap='raise', nproc=1, **kwargs):
@@ -113,9 +116,9 @@ def read_cache(cache, channel, start=None, end=None, resample=None,
else:
raise ValueError(msg)
- # if reading one channel, try to use lalframe, its faster
- if (isinstance(channel, str) or
- (isinstance(channel, (list, tuple)) and len(channel) == 1)):
+ # if reading a small number of channels, try to use lalframe, its faster
+ if (isinstance(channel, str) or (isinstance(channel, (list, tuple)) and
+ len(channel) <= MAX_LALFRAME_CHANNELS)):
try:
from lalframe import frread
except ImportError:
|
TimeSeries.read(cache): added channel count check
- if <= 4 channels, try and use lalframe anyway
- serial programming with lalframe is still faster than frameCPP.py
for a small number
- might need to revisit the choice of '4'
|
gwpy_gwpy
|
train
|
py
|
e24b2163be76b72ace9ceefb95e839cd99f762f3
|
diff --git a/playback/templates/my_cnf_xenial.py b/playback/templates/my_cnf_xenial.py
index <HASH>..<HASH> 100644
--- a/playback/templates/my_cnf_xenial.py
+++ b/playback/templates/my_cnf_xenial.py
@@ -12,4 +12,5 @@ wsrep_cluster_address="{{ wsrep_cluster_address }}"
wsrep_node_name="{{ wsrep_node_name }}"
wsrep_node_address="{{ wsrep_node_address }}"
wsrep_provider=/usr/lib/libgalera_smm.so
+binlog_format=ROW
"""
|
Only binlog_format = 'ROW' is currently supported.
|
jiasir_playback
|
train
|
py
|
eb8513e00900bd0e3b343756eec23c2484260e60
|
diff --git a/pyrogram/client/methods/messages/send_video.py b/pyrogram/client/methods/messages/send_video.py
index <HASH>..<HASH> 100644
--- a/pyrogram/client/methods/messages/send_video.py
+++ b/pyrogram/client/methods/messages/send_video.py
@@ -75,9 +75,10 @@ class SendVideo(BaseClient):
Video height.
thumb (``str``, *optional*):
- Video thumbnail.
- Pass a file path as string to send an image that exists on your local machine.
- Thumbnail should have 90 or less pixels of width and 90 or less pixels of height.
+ Thumbnail of the video sent.
+ The thumbnail should be in JPEG format and less than 200 KB in size.
+ A thumbnail's width and height should not exceed 90 pixels.
+ Thumbnails can't be reused and can be only uploaded as a new file.
supports_streaming (``bool``, *optional*):
Pass True, if the uploaded video is suitable for streaming.
|
Update send_video docstrings
Add a more detailed "thumb" description
|
pyrogram_pyrogram
|
train
|
py
|
52f96fd2993f90841d41b0ccfb01583a74fed731
|
diff --git a/lib/redfish/tasks/jdbc_resource.rb b/lib/redfish/tasks/jdbc_resource.rb
index <HASH>..<HASH> 100644
--- a/lib/redfish/tasks/jdbc_resource.rb
+++ b/lib/redfish/tasks/jdbc_resource.rb
@@ -19,7 +19,7 @@ module Redfish
private
- attribute :connectionpoolid, :kind_of => String, :required => true, :identity_field => true
+ attribute :connectionpoolid, :kind_of => String, :identity_field => true
attribute :name, :kind_of => String, :required => true, :identity_field => true
attribute :enabled, :type => :boolean, :default => true
attribute :description, :kind_of => String, :default => ''
@@ -27,6 +27,7 @@ module Redfish
attribute :deployment_order, :kind_of => Fixnum, :default => 100
action :create do
+ raise 'connectionpoolid property not set' unless self.connectionpoolid
create(resource_property_prefix)
end
|
connectionpoolid only required during create
|
realityforge_redfish
|
train
|
rb
|
1fe53efb79f3d29254c7e5df629d0c18ba3037a4
|
diff --git a/wallace/models.py b/wallace/models.py
index <HASH>..<HASH> 100644
--- a/wallace/models.py
+++ b/wallace/models.py
@@ -1038,6 +1038,7 @@ class Transmission(Base):
def mark_received(self):
self.receive_time = timenow()
+ self.status = "received"
def __repr__(self):
"""The string representation of a transmission."""
|
Transmission: fix mark_received
|
berkeley-cocosci_Wallace
|
train
|
py
|
d70db61b9887dc3889af9f4b14948b9238f02371
|
diff --git a/lib/rabl/builder.rb b/lib/rabl/builder.rb
index <HASH>..<HASH> 100644
--- a/lib/rabl/builder.rb
+++ b/lib/rabl/builder.rb
@@ -199,7 +199,7 @@ module Rabl
# Evaluate conditions given a symbol/proc/lambda/variable to evaluate
def call_condition_proc(condition, object)
# This will evaluate lambda, proc & symbol and call it with 1 argument
- return condition.to_proc.call(object) if condition.respond_to?(:to_proc)
+ return condition.to_proc.call(object) if condition.is_a?(Proc) || condition.is_a?(Symbol)
# Else we send directly the object
condition
end
diff --git a/test/builder_test.rb b/test/builder_test.rb
index <HASH>..<HASH> 100644
--- a/test/builder_test.rb
+++ b/test/builder_test.rb
@@ -353,5 +353,10 @@ context "Rabl::Builder" do
scope = Rabl::Builder.new(ArbObj.new)
scope.send(:resolve_condition, { :if => nil })
end.equals(nil)
+
+ asserts "that it can use a hash variable on if condition and return true" do
+ scope = Rabl::Builder.new(ArbObj.new)
+ scope.send(:resolve_condition, { :if => { some: 'data' } })
+ end.equals(nil)
end
end
|
[#<I>] Compatibility with ruby <I>
|
nesquena_rabl
|
train
|
rb,rb
|
07fc2ef5ac07e75a11a85a1f5db1c6f551451f95
|
diff --git a/java/messaging/messaging-common/src/test/java/io/joynr/messaging/routing/AddressManagerTest.java b/java/messaging/messaging-common/src/test/java/io/joynr/messaging/routing/AddressManagerTest.java
index <HASH>..<HASH> 100644
--- a/java/messaging/messaging-common/src/test/java/io/joynr/messaging/routing/AddressManagerTest.java
+++ b/java/messaging/messaging-common/src/test/java/io/joynr/messaging/routing/AddressManagerTest.java
@@ -151,7 +151,7 @@ public class AddressManagerTest {
@Test
public void testGetLocalMulticastParticipantAddresses() {
- createAddressManager(NO_PRIMARY_GLOBAL_TRANSPORT, multicastAddressCalculator);
+ createAddressManager(PRIMARY_GLOBAL_TRANSPORT_MQTT, multicastAddressCalculator);
when(joynrMessage.isReceivedFromGlobal()).thenReturn(true);
when(joynrMessage.getSender()).thenReturn("from");
|
[Java] Make test more consistent
There should be a global transport when the message is received from
global.
Change-Id: I<I>c<I>c<I>e<I>c<I>c7a
|
bmwcarit_joynr
|
train
|
java
|
13e73b626f513c5e5eb51f551d97c825d82d5ba5
|
diff --git a/lib/URLHelper.php b/lib/URLHelper.php
index <HASH>..<HASH> 100644
--- a/lib/URLHelper.php
+++ b/lib/URLHelper.php
@@ -351,7 +351,7 @@ class URLHelper {
* @param string $needle ex: http://example.org/wp-content
* @return string
*/
- public static function remove_url_component($haystack, $needle) {
+ public static function remove_url_component( $haystack, $needle ) {
$haystack = str_replace($needle, '', $haystack);
$needle = self::swap_protocol($needle);
return str_replace($needle, '', $haystack);
|
Scrutinizer Auto-Fixes
This commit consists of patches automatically generated for this project on <URL>
|
timber_timber
|
train
|
php
|
49b7e324120395e906e32e181392606f3dcf110f
|
diff --git a/scripts/install-dependencies.rb b/scripts/install-dependencies.rb
index <HASH>..<HASH> 100755
--- a/scripts/install-dependencies.rb
+++ b/scripts/install-dependencies.rb
@@ -8,7 +8,7 @@
# Assumptions:
# 1. You've got one or more Android SDKs and Google APIs installed locally.
# 2. Your ANDROID_HOME environment variable points to the Android SDK install directory.
-# 3. You have installed the Android Repository and Google Repository libraries from the SDK installer.
+# 3. You have installed the Android Support Repository and Google Repository libraries from the SDK installer.
#
require 'tmpdir'
|
Changed "Android Repository" to "Android Support Repository."
|
robolectric_robolectric
|
train
|
rb
|
858d086995cd41cec11af8301e3037639cce6128
|
diff --git a/tests/eventlisten.py b/tests/eventlisten.py
index <HASH>..<HASH> 100644
--- a/tests/eventlisten.py
+++ b/tests/eventlisten.py
@@ -51,8 +51,8 @@ def listen(sock_dir, node):
Attach to the pub socket and grab messages
'''
event = salt.utils.event.SaltEvent(
+ node,
sock_dir,
- node
)
while True:
ret = event.get_event(full=True)
|
Args got flipped in eventlisten
|
saltstack_salt
|
train
|
py
|
acbee3a72d3ad7f6b808384e41e76728d1c0b8aa
|
diff --git a/Extensions/BitBucket/Src/Tasks/DownloadArtifactsBitbucket/downloadBitbucket.js b/Extensions/BitBucket/Src/Tasks/DownloadArtifactsBitbucket/downloadBitbucket.js
index <HASH>..<HASH> 100644
--- a/Extensions/BitBucket/Src/Tasks/DownloadArtifactsBitbucket/downloadBitbucket.js
+++ b/Extensions/BitBucket/Src/Tasks/DownloadArtifactsBitbucket/downloadBitbucket.js
@@ -32,11 +32,13 @@ var options = {
https.request(options, function (rs) {
var result;
+ var response = '';
rs.on('data', function (data) {
tl.debug("repository details:" + data)
- result = JSON.parse(data);
+ response += data
});
rs.on('end', function () {
+ result = JSON.parse(response);
tl.debug("result:" + JSON.stringify(result));
var sch = new scw.SourceControlWrapper(result.scm);
@@ -131,4 +133,4 @@ function getCaseInsensitiveKeyMatch(data, paramName) {
});
return keyName;
-}
\ No newline at end of file
+}
|
Update downloadBitbucket.js (#<I>)
|
Microsoft_azure-pipelines-extensions
|
train
|
js
|
bb0ea3c6f814e9bd53e2963d6c9706e8eda75872
|
diff --git a/Gruntfile.js b/Gruntfile.js
index <HASH>..<HASH> 100644
--- a/Gruntfile.js
+++ b/Gruntfile.js
@@ -140,8 +140,7 @@ module.exports = function(grunt) {
},
"babel": {
options: {
- sourceMap: false,
- presets: ['es2015', 'react']
+ sourceMap: false
},
build: {
files: [{
|
Remove babel 6 presets.
|
GriddleGriddle_Griddle
|
train
|
js
|
b5541fc596166a77784bb3f39f168e85ea5aaff4
|
diff --git a/src/index.js b/src/index.js
index <HASH>..<HASH> 100644
--- a/src/index.js
+++ b/src/index.js
@@ -6,8 +6,8 @@ export function sideEffect(state: any, ...effects) {
export function sideEffectTimeout(state: any, timeout: number, ...effects) {
if (!state.meta) {
state.meta =
- { sideEffects: effects.map(eff => (dispatch, getState) =>
- setTimeout(eff(dispatch, getState), timeout))
+ { sideEffects: effects.map(eff => (dispatch, getState) => {
+ setTimeout(eff(dispatch, getState), timeout) })
}
}
return state
@@ -18,7 +18,7 @@ export function sideEffectMiddleware({ dispatch, getState }) {
var sideEffects = action && action.meta && action.meta.sideEffects
var result = next(action);
if (sideEffects) {
- sideEffects.forEach((effect) => effect(dispatch, getState))
+ sideEffects.forEach((effect) => { effect(dispatch, getState) })
}
return result
}
|
don't return a value from side effects
|
gregwebs_redux-side-effect
|
train
|
js
|
dbbbf06f4663d0ddd338296918e25db03f76f934
|
diff --git a/lib/serverspec/setup.rb b/lib/serverspec/setup.rb
index <HASH>..<HASH> 100644
--- a/lib/serverspec/setup.rb
+++ b/lib/serverspec/setup.rb
@@ -288,6 +288,9 @@ set :ssh_options, options
<%- end -%>
<%- end -%>
+# Set environment variables
+# set :env, 'LANG' => 'C', 'LC_MESSAGES' => 'C'
+
<% if @backend_type == 'WinRM'-%>
user = <username>
pass = <password>
|
Add a comment showing how to set environment variables
|
mizzy_serverspec
|
train
|
rb
|
51b5612dc77ee9305da2ed833bb35f2a680ee423
|
diff --git a/src/OrderBy/Common/OrderBySimpleField.php b/src/OrderBy/Common/OrderBySimpleField.php
index <HASH>..<HASH> 100644
--- a/src/OrderBy/Common/OrderBySimpleField.php
+++ b/src/OrderBy/Common/OrderBySimpleField.php
@@ -27,8 +27,11 @@ class OrderBySimpleField implements OrderByInterface
// use alias
$orderByFields = FieldOptionsHelper::normalize($context->getParentContext()->getDefinition()->getMeta('pagination')['order_by'] ?? ['*']);
- if (isset($orderByFields[$column])){
+ if (isset($orderByFields[$column])) {
$column = $orderByFields[$column];
+ } else {
+ $field = $context->getNode()->getField($column);
+ $column = $field->getOriginName();
}
$qb->addOrderBy("{$alias}.$column", $orderBy->getDirection());
|
fix orderBy using a field with a different name
|
ynloultratech_graphql-bundle
|
train
|
php
|
e0ee06e0c118985b55989a767cbb8401534918e2
|
diff --git a/web/concrete/core/controllers/single_pages/dashboard/composer/write.php b/web/concrete/core/controllers/single_pages/dashboard/composer/write.php
index <HASH>..<HASH> 100644
--- a/web/concrete/core/controllers/single_pages/dashboard/composer/write.php
+++ b/web/concrete/core/controllers/single_pages/dashboard/composer/write.php
@@ -81,11 +81,8 @@ class Concrete5_Controller_Dashboard_Composer_Write extends Controller {
Cache::disableLocalCache();
if ($entry->isComposerDraft()) {
- $pkr = new MovePagePageWorkflowRequest();
- $pkr->setRequestedPage($entry);
- $pkr->setRequestedTargetPage($parent);
- $pkr->setRequesterUserID($u->getUserID());
- $pkr->trigger();
+
+ $entry->move($parent);
$v = CollectionVersion::get($entry, 'RECENT');
$pkr = new ApprovePagePageWorkflowRequest();
|
making composer honor workflow
Former-commit-id: aa4ca<I>e6a<I>fe<I>e6ed<I>a5e<I>ca3a<I>
|
concrete5_concrete5
|
train
|
php
|
dc9357f61dc38cb172d208ba0a1bbbb475c12864
|
diff --git a/discord_test.go b/discord_test.go
index <HASH>..<HASH> 100644
--- a/discord_test.go
+++ b/discord_test.go
@@ -1,12 +1,10 @@
-package discordgo_test
+package discordgo
import (
"os"
"runtime"
"testing"
"time"
-
- . "github.com/bwmarrin/discordgo"
)
//////////////////////////////////////////////////////////////////////////////
|
Changes to make these tests work in the same folder.
|
bwmarrin_discordgo
|
train
|
go
|
f44a10aa8ac30869a25929e5c1ce524c285240f9
|
diff --git a/topdown/resolver.go b/topdown/resolver.go
index <HASH>..<HASH> 100644
--- a/topdown/resolver.go
+++ b/topdown/resolver.go
@@ -50,6 +50,9 @@ func (t *resolverTrie) Resolve(ctx context.Context, ref ast.Ref, input *ast.Term
if err != nil {
return nil, err
}
+ if result.Value == nil {
+ return nil, nil
+ }
return result.Value.Find(ref[i+1:])
}
}
@@ -62,6 +65,9 @@ func (t *resolverTrie) mktree(ctx context.Context, in resolver.Input) (ast.Value
if err != nil {
return nil, err
}
+ if result.Value == nil {
+ return nil, nil
+ }
return result.Value, nil
}
obj := ast.NewObject()
@@ -70,7 +76,9 @@ func (t *resolverTrie) mktree(ctx context.Context, in resolver.Input) (ast.Value
if err != nil {
return nil, err
}
- obj.Insert(ast.NewTerm(k), ast.NewTerm(v))
+ if v != nil {
+ obj.Insert(ast.NewTerm(k), ast.NewTerm(v))
+ }
}
return obj, nil
}
|
topdown: Fix resolvers to handle undefined results
The resolvers were not handling undefined results, which was causing a
null pointer dereference.
|
open-policy-agent_opa
|
train
|
go
|
9a83765db55b3b2dd9e967eae3aaf1760857c580
|
diff --git a/lib/generate/index.js b/lib/generate/index.js
index <HASH>..<HASH> 100644
--- a/lib/generate/index.js
+++ b/lib/generate/index.js
@@ -90,7 +90,10 @@ function handleFileSwagger(profile, profileKey) {
*/
function handleUrlSwagger(profile, profileKey) {
console.log(chalk`{green [${profileKey}]} Input swagger URL : {cyan ${profile.url}}`);
- needle.get(profile.url, (err, resp, body) => {
+ const options = {
+ rejectUnauthorized: false
+ };
+ needle.get(profile.url, options, (err, resp, body) => {
if (err) {
console.log(chalk`{red Cannot read swagger URL '{bold ${profile.url}}'.}`);
console.log(chalk.red(err));
|
Temporarily allowing certificate errors.
|
swagen_swagen
|
train
|
js
|
d6d6969f1f3feb543a810e0f0762b528d25c0049
|
diff --git a/lib/nugrant/version.rb b/lib/nugrant/version.rb
index <HASH>..<HASH> 100644
--- a/lib/nugrant/version.rb
+++ b/lib/nugrant/version.rb
@@ -1,3 +1,3 @@
module Nugrant
- VERSION = "0.0.10"
+ VERSION = "0.0.10.dev"
end
|
Bump to <I>.dev
|
maoueh_nugrant
|
train
|
rb
|
f922175820822190fb36d05f2bf2c239d1c3f722
|
diff --git a/src/main/java/edu/jhu/autodiff/erma/ErmaBp.java b/src/main/java/edu/jhu/autodiff/erma/ErmaBp.java
index <HASH>..<HASH> 100644
--- a/src/main/java/edu/jhu/autodiff/erma/ErmaBp.java
+++ b/src/main/java/edu/jhu/autodiff/erma/ErmaBp.java
@@ -272,6 +272,7 @@ public class ErmaBp extends AbstractFgInferencer implements Module<Beliefs>, FgI
}
private void forwardGlobalFacToVar(GlobalFactor globalFac) {
+ if (globalFac.getVars().size() == 0) { return; }
log.trace("Creating messages for global factor.");
// Since this is a global factor, we pass the incoming messages to it,
// and efficiently marginalize over the variables.
@@ -501,6 +502,7 @@ public class ErmaBp extends AbstractFgInferencer implements Module<Beliefs>, FgI
}
private void backwardGlobalFactorToVar(GlobalFactor globalFac) {
+ if (globalFac.getVars().size() == 0) { return; }
FgNode node = fg.getNode(globalFac);
VarTensor[] inMsgs = getMsgs(node, msgs, CUR_MSG, IN_MSG);
VarTensor[] inMsgsAdj = getMsgs(node, msgsAdj, CUR_MSG, IN_MSG);
|
Bug fix: incorrectly putting GlobalFactors on the schedule in the numerator (clamped) case
|
mgormley_pacaya
|
train
|
java
|
bc76fa2ec467d08a88249cf7b81d30252a454ee2
|
diff --git a/src/Geo/Form/GeoText.php b/src/Geo/Form/GeoText.php
index <HASH>..<HASH> 100644
--- a/src/Geo/Form/GeoText.php
+++ b/src/Geo/Form/GeoText.php
@@ -148,7 +148,17 @@ class GeoText extends Text implements ViewPartialProviderInterface, ElementPrepa
list($lon,$lat) = explode(',', $v, 2);
$latLon[]=$lat.','.$lon;
}
- $value['data'] = ['coordinates'=>[ (float) $lat, (float) $lon] ,'type'=>'Point', 'region' => '' ,'postalcode' =>'', 'country' => 'DE'];
+
+ $value['data'] = [
+ 'coordinates'=>[
+ (float) $lat,
+ (float) $lon
+ ],
+ 'type'=>'Point',
+ 'city' => substr($value['name'],0,strrpos($value['name'], ',' )),
+ 'region' => substr($value['name'],strrpos($value['name'], ',' )+2),
+ 'postalcode' =>'',
+ 'country' => 'DE'];
}
if (!is_array($value)) {
$value = explode('|', $value, 2);
|
[Geo] stores the name of a city and the region (e.g. Hessen) if the geo plugin is used
|
yawik_geo
|
train
|
php
|
04058304e68fe88cb52402e6134ab15fa9563ad1
|
diff --git a/AlphaTwirl/ProgressBar/ProgressMonitor.py b/AlphaTwirl/ProgressBar/ProgressMonitor.py
index <HASH>..<HASH> 100755
--- a/AlphaTwirl/ProgressBar/ProgressMonitor.py
+++ b/AlphaTwirl/ProgressBar/ProgressMonitor.py
@@ -15,8 +15,12 @@ class ProgressMonitor(object):
def __init__(self, presentation):
self.queue = Queue(presentation = presentation)
+ def begin(self): pass
+
def monitor(self): pass
+ def end(self): pass
+
def createReporter(self):
reporter = ProgressReporter(self.queue)
return reporter
|
add new methods in ProgressMonitor
|
alphatwirl_alphatwirl
|
train
|
py
|
5c3b5143dc3d2cd2f22d8d6abecd5fb3b9eab020
|
diff --git a/grade/edit/tree/lib.php b/grade/edit/tree/lib.php
index <HASH>..<HASH> 100644
--- a/grade/edit/tree/lib.php
+++ b/grade/edit/tree/lib.php
@@ -136,8 +136,11 @@ class grade_edit_tree {
if (!$is_category_item && ($icon = $this->gtree->get_edit_icon($element, $this->gpr, true))) {
$actionsmenu->add($icon);
}
-
- if ($this->show_calculations && ($icon = $this->gtree->get_calculation_icon($element, $this->gpr, true))) {
+ // MDL-49281 if grade_item already has calculation, it should be editable even if global setting is off.
+ $type = $element['type'];
+ $iscalculated = ($type == 'item' or $type == 'courseitem' or $type == 'categoryitem') && $object->is_calculated();
+ $icon = $this->gtree->get_calculation_icon($element, $this->gpr, true);
+ if ($iscalculated || ($this->show_calculations && $icon)) {
$actionsmenu->add($icon);
}
|
MDL-<I> grades: Calculation settings upgraded
|
moodle_moodle
|
train
|
php
|
59008b5fc363bd7287e2e1526c94c74650085c78
|
diff --git a/index.js b/index.js
index <HASH>..<HASH> 100644
--- a/index.js
+++ b/index.js
@@ -3,7 +3,7 @@ const workshopper = require('workshopper-adventure')
, learnsass = workshopper({
title : 'Learn SASS'
, appDir : __dirname
- , languages : ['en', 'it', 'pt-br', 'kr']
+ , languages : ['en', 'it', 'pt-br', 'ko']
, footer : {
file: path.join(__dirname, 'footer.{lang}.md')
}
|
fix typo in index.js for korean
|
workshopper_learn-sass
|
train
|
js
|
3f55e5a3acfe0e77b03f2ccaacd900da5464aab2
|
diff --git a/layers.go b/layers.go
index <HASH>..<HASH> 100644
--- a/layers.go
+++ b/layers.go
@@ -4,6 +4,7 @@ import (
"bytes"
"compress/gzip"
"encoding/json"
+ "fmt"
"io"
"io/ioutil"
"os"
@@ -651,6 +652,12 @@ func (r *layerStore) Mount(id, mountLabel string, uidMaps, gidMaps []idtools.IDM
if mountLabel == "" {
mountLabel = layer.MountLabel
}
+
+ if (uidMaps != nil || gidMaps != nil) && !r.driver.SupportsShifting() {
+ if !reflect.DeepEqual(uidMaps, layer.UIDMap) || !reflect.DeepEqual(gidMaps, layer.GIDMap) {
+ return "", fmt.Errorf("cannot mount layer %v: shifting not enabled", layer.ID)
+ }
+ }
mountpoint, err := r.driver.Get(id, mountLabel, uidMaps, gidMaps)
if mountpoint != "" && err == nil {
if layer.MountPoint != "" {
|
shifting: raise an error if the container needs shifting
|
containers_storage
|
train
|
go
|
d2044099bab3ee769dbf999a7fa199599e9f3b92
|
diff --git a/djangocms_text_ckeditor/static/djangocms_text_ckeditor/js/cms.ckeditor.js b/djangocms_text_ckeditor/static/djangocms_text_ckeditor/js/cms.ckeditor.js
index <HASH>..<HASH> 100644
--- a/djangocms_text_ckeditor/static/djangocms_text_ckeditor/js/cms.ckeditor.js
+++ b/djangocms_text_ckeditor/static/djangocms_text_ckeditor/js/cms.ckeditor.js
@@ -55,7 +55,10 @@ $(document).ready(function () {
this.options = $.extend(false, {
'settings': settings
}, this.options, options);
-
+ // applying ckeditor to textareas
+ if (CKEDITOR.env.mobile) {
+ CKEDITOR.env.isCompatible = false;
+ }
// add additional plugins (autoloads plugins.js)
CKEDITOR.plugins.addExternal('cmsplugins', settings.static_url + '/ckeditor_plugins/cmsplugins/');
|
Disable ckeditor on mobile
|
divio_djangocms-text-ckeditor
|
train
|
js
|
a92f2ff573e9a812235b11431928ee0d20f243d5
|
diff --git a/library/blur/network/connection.rb b/library/blur/network/connection.rb
index <HASH>..<HASH> 100644
--- a/library/blur/network/connection.rb
+++ b/library/blur/network/connection.rb
@@ -27,7 +27,7 @@ module Blur
sslsocket = OpenSSL::SSL::SSLSocket.new @socket, context
sslsocket.sync = true
sslsocket.connect #_nonblock
- socket = sslsocket
+ @socket = sslsocket
end
def terminate
|
Set socket to sslsocket for reals this time
|
mkroman_blur
|
train
|
rb
|
cc5a29a3672d3649188f222e78033b0af56ed34d
|
diff --git a/pkg/service/service.go b/pkg/service/service.go
index <HASH>..<HASH> 100644
--- a/pkg/service/service.go
+++ b/pkg/service/service.go
@@ -262,6 +262,20 @@ func (s *Service) UpsertService(params *lb.SVC) (bool, lb.ID, error) {
option.EnableSVCSourceRangeCheck)
}
+ // In case we do DSR + IPIP, then it's required that the backends use
+ // the same destination port as the frontend service.
+ if option.Config.NodePortMode == option.NodePortModeDSR &&
+ option.Config.LoadBalancerDSRDispatch == option.DSRDispatchIPIP &&
+ params.Type != lb.SVCTypeClusterIP {
+ for _, b := range params.Backends {
+ if b.Port != params.Frontend.L3n4Addr.Port {
+ err := fmt.Errorf("Unable to upsert service due to frontend/backend port mismatch under DSR with IPIP: %d vs %d",
+ params.Frontend.L3n4Addr.Port, b.Port)
+ return false, lb.ID(0), err
+ }
+ }
+ }
+
// If needed, create svcInfo and allocate service ID
svc, new, prevSessionAffinity, prevLoadBalancerSourceRanges, err :=
s.createSVCInfoIfNotExist(params)
|
cilium: error out in svc upsert on frontend/backend ports mismatch on IPIP
Follow-up to address the case where frontend port != backend port in
DSR + IPIP mode, since IPIP assumes that both must match. Error out
with a user error so these cases don't accidentally get added to the
service table. Fwiw, I left ClusterIP aside since this is only to be
xlated on the local node directly but never for N-S traffic.
|
cilium_cilium
|
train
|
go
|
e3d85b7e40073b05e2588583e9d8db11366c2f7b
|
diff --git a/python/pyspark/mllib/classification.py b/python/pyspark/mllib/classification.py
index <HASH>..<HASH> 100644
--- a/python/pyspark/mllib/classification.py
+++ b/python/pyspark/mllib/classification.py
@@ -66,7 +66,8 @@ class LogisticRegressionModel(LinearModel):
if margin > 0:
prob = 1 / (1 + exp(-margin))
else:
- prob = 1 - 1 / (1 + exp(margin))
+ exp_margin = exp(margin)
+ prob = exp_margin / (1 + exp_margin)
return 1 if prob > 0.5 else 0
|
Avoid numerical instability
This avoids basically doing 1 - 1, for example:
```python
>>> from math import exp
>>> margin = -<I>
>>> 1 - 1 / (1 + exp(margin))
<I>
>>> exp(margin) / (1 + exp(margin))
<I>e-<I>
>>>
```
|
apache_spark
|
train
|
py
|
25c3abfa0fbaa72a28c3e5a539ed8ac7b147c151
|
diff --git a/tests/executors/test_debug_executor.py b/tests/executors/test_debug_executor.py
index <HASH>..<HASH> 100644
--- a/tests/executors/test_debug_executor.py
+++ b/tests/executors/test_debug_executor.py
@@ -45,7 +45,7 @@ class TestDebugExecutor:
task_instance_mock.job_id = job_id
executor = DebugExecutor()
- executor.running = set([ti_key])
+ executor.running = {ti_key}
succeeded = executor._run_task(task_instance_mock)
assert succeeded
@@ -86,7 +86,7 @@ class TestDebugExecutor:
executor = DebugExecutor()
executor.tasks_to_run = [ti]
- executor.running = set([ti.key])
+ executor.running = {ti.key}
executor.end()
ti.set_state.assert_called_once_with(State.UPSTREAM_FAILED)
|
[AIRFLOW-XXXX] Replace conversion from list to set (#<I>)
|
apache_airflow
|
train
|
py
|
06458c59beec54a2465f6e5418a233bd06171357
|
diff --git a/staging/src/k8s.io/csi-translation-lib/plugins/in_tree_volume_test.go b/staging/src/k8s.io/csi-translation-lib/plugins/in_tree_volume_test.go
index <HASH>..<HASH> 100644
--- a/staging/src/k8s.io/csi-translation-lib/plugins/in_tree_volume_test.go
+++ b/staging/src/k8s.io/csi-translation-lib/plugins/in_tree_volume_test.go
@@ -680,7 +680,7 @@ func TestTranslateAllowedTopologies(t *testing.T) {
t.Logf("Running test: %v", tc.name)
gotTop, err := translateAllowedTopologies(tc.topology, GCEPDTopologyKey)
if err != nil {
- t.Errorf("Unexpected error: %w", err)
+ t.Errorf("Unexpected error: %v", err)
}
if !reflect.DeepEqual(gotTop, tc.expectedToplogy) {
|
fix the problem of using %w incorrectly
|
kubernetes_kubernetes
|
train
|
go
|
7d357bb65b72aa8b8e77bf79c7f70e760822071d
|
diff --git a/lib/active_scaffold/data_structures/association.rb b/lib/active_scaffold/data_structures/association.rb
index <HASH>..<HASH> 100644
--- a/lib/active_scaffold/data_structures/association.rb
+++ b/lib/active_scaffold/data_structures/association.rb
@@ -50,7 +50,7 @@ module ActiveScaffold::DataStructures
end
def through?
- @association.options[:through] if @type == :active_record
+ @association.options[:through].present? if @type == :active_record
end
# polymorphic belongs_to
|
through? must return a boolean
|
activescaffold_active_scaffold
|
train
|
rb
|
975bbbc4fb0022ad6a5ac90b76611a51929476d5
|
diff --git a/cli.js b/cli.js
index <HASH>..<HASH> 100644
--- a/cli.js
+++ b/cli.js
@@ -21,6 +21,14 @@ function printUsage() {
process.exit(1);
}
+function printInitWarning() {
+ console.log([
+ 'Looks like React Native project already exists in the current',
+ 'folder. Run this command from a different folder or remove node_modules/react-native'
+ ].join('\n'));
+ process.exit(1);
+}
+
function run() {
var args = process.argv.slice(2);
if (args.length === 0) {
@@ -41,6 +49,9 @@ function run() {
case 'bundle':
bundle.init(args);
break;
+ case 'init':
+ printInitWarning();
+ break;
default:
console.error('Command `%s` unrecognized', args[0]);
printUsage();
|
Warn if `init` is called from existing project
|
react-native-community_cli
|
train
|
js
|
b4f21ad90725966e045a7bf61588a272a22ff49d
|
diff --git a/jaraco/mongodb/monitor-index-creation.py b/jaraco/mongodb/monitor-index-creation.py
index <HASH>..<HASH> 100644
--- a/jaraco/mongodb/monitor-index-creation.py
+++ b/jaraco/mongodb/monitor-index-creation.py
@@ -6,7 +6,8 @@ from jaraco.mongodb import helper
def is_index_op(op):
- return op.get('query', {}).get('createIndexes')
+ cmd = op.get('query') or op.get('command') or {}
+ return cmd.get('createIndexes')
def get_args():
|
Update technique to find createIndexes in 'query' or 'command'. Fixes #<I>.
|
jaraco_jaraco.mongodb
|
train
|
py
|
cfec1fa6e34631403a3fcabd556ec954e14f7d29
|
diff --git a/dwave/system/package_info.py b/dwave/system/package_info.py
index <HASH>..<HASH> 100644
--- a/dwave/system/package_info.py
+++ b/dwave/system/package_info.py
@@ -15,7 +15,7 @@
# =============================================================================
__all__ = ['__version__', '__author__', '__authoremail__', '__description__']
-__version__ = '0.9.6'
+__version__ = '0.9.7'
__author__ = 'D-Wave Systems Inc.'
__authoremail__ = '[email protected]'
__description__ = 'All things D-Wave System.'
|
Update version <I> -> <I>
New Features
------------
* Embedding composites no longer block
Fixes
-----
* `MockDWaveSampler` now has the correct topology
* Fix rounding in anneal schedule validation
|
dwavesystems_dwave-system
|
train
|
py
|
a616bbe4ecafa262a71113c4db13652a1a9497b7
|
diff --git a/src/basis.js b/src/basis.js
index <HASH>..<HASH> 100644
--- a/src/basis.js
+++ b/src/basis.js
@@ -3045,11 +3045,11 @@
}
function checkParents(){
- if (getParent('head') && getParent('body'))
- clearInterval(timer);
+ if (timer && getParent('head') && getParent('body'))
+ timer = clearInterval(timer);
}
- if (document)
+ if (document && (!getParent('head') || !getParent('body')))
{
timer = setInterval(checkParents, 5);
ready(checkParents);
|
set checkers for head & body, only if head or body is not available
|
basisjs_basisjs
|
train
|
js
|
1f8d371f341de67df8afaaa67e0d2ebf33478133
|
diff --git a/simuvex/plugins/symbolic_memory.py b/simuvex/plugins/symbolic_memory.py
index <HASH>..<HASH> 100644
--- a/simuvex/plugins/symbolic_memory.py
+++ b/simuvex/plugins/symbolic_memory.py
@@ -53,6 +53,18 @@ class SimSymbolicMemory(SimMemory): #pylint:disable=abstract-method
SimMemory.set_state(self, s)
self.mem.state = s
+ def _ana_getstate(self):
+ d = self.__dict__.copy()
+ d['concrete'] = {}
+ for addr in self.mem:
+ b = self.mem[addr]
+ if isinstance(b, str):
+ d['concrete'][addr] = ord(b)
+ elif isinstance(b, SimMemoryObject):
+ b = b.bytes_at(addr, 1)
+ d['concrete'][addr] = self.state.se.any_int(b)
+ return d
+
#
# Symbolicizing!
#
|
serialized SimSymbolicMemory now has concrete repr
|
angr_angr
|
train
|
py
|
1b731b9ab8c875613f0ea847080d9bf3a481ed03
|
diff --git a/tests/ProxyManagerTestAsset/ClassWithMagicMethods.php b/tests/ProxyManagerTestAsset/ClassWithMagicMethods.php
index <HASH>..<HASH> 100644
--- a/tests/ProxyManagerTestAsset/ClassWithMagicMethods.php
+++ b/tests/ProxyManagerTestAsset/ClassWithMagicMethods.php
@@ -65,6 +65,7 @@ class ClassWithMagicMethods
*/
public function __sleep()
{
+ return [];
}
/**
|
Correcting `__sleep` behavior (should always return an array)
|
Ocramius_ProxyManager
|
train
|
php
|
944b07e14b5aa67e28e87f009e3f5968b73c86d7
|
diff --git a/includes/session.php b/includes/session.php
index <HASH>..<HASH> 100644
--- a/includes/session.php
+++ b/includes/session.php
@@ -34,7 +34,7 @@ if (!defined('WT_SCRIPT_NAME')) {
// Identify ourself
define('WT_WEBTREES', 'webtrees');
define('WT_VERSION', '1.2.1');
-define('WT_VERSION_RELEASE', 'svn'); // 'svn', 'beta', 'rc1', '', etc.
+define('WT_VERSION_RELEASE', ''); // 'svn', 'beta', 'rc1', '', etc.
define('WT_VERSION_TEXT', trim(WT_VERSION.' '.WT_VERSION_RELEASE));
define('WT_WEBTREES_URL', 'http://webtrees.net');
define('WT_WEBTREES_WIKI', 'http://wiki.webtrees.net');
|
Update version for <I> release
|
fisharebest_webtrees
|
train
|
php
|
1fd177039d066e7af3f6afb190ebad52a96308f1
|
diff --git a/extensions-core/kafka-indexing-service/src/main/java/io/druid/indexing/kafka/KafkaIndexTask.java b/extensions-core/kafka-indexing-service/src/main/java/io/druid/indexing/kafka/KafkaIndexTask.java
index <HASH>..<HASH> 100644
--- a/extensions-core/kafka-indexing-service/src/main/java/io/druid/indexing/kafka/KafkaIndexTask.java
+++ b/extensions-core/kafka-indexing-service/src/main/java/io/druid/indexing/kafka/KafkaIndexTask.java
@@ -1063,10 +1063,10 @@ public class KafkaIndexTask extends AbstractTask implements ChatHandler
.emit();
// wait for being killed by supervisor
try {
- Thread.sleep(Long.MAX_VALUE);
+ pause(-1);
}
catch (InterruptedException e) {
- throw new RuntimeException("Got interrupted while waiting to be killed");
+ throw new RuntimeException("Got interrupted while pausing task");
}
} else {
log.makeAlert("Failed to send reset request for partitions [%s]", partitionOffsetMap.keySet()).emit();
|
fix auto reset - pause task instead of putting thread to sleep (#<I>)
|
apache_incubator-druid
|
train
|
java
|
dda1998d943219dbe533a83f82a417606d874dd9
|
diff --git a/setup.py b/setup.py
index <HASH>..<HASH> 100644
--- a/setup.py
+++ b/setup.py
@@ -49,7 +49,7 @@ setup(
long_description=readme + '\n\n' + changes + '\n\n',
author='David LaBissoniere',
author_email='[email protected]',
- url="https://kazoo.readthedocs.org",
+ url="https://blockade.readthedocs.org",
packages=find_packages(),
include_package_data=True,
install_requires=requires,
|
Fix url in setup.py
|
worstcase_blockade
|
train
|
py
|
9b3515604a656f685775750e6fa09e0adbf64400
|
diff --git a/qa/large-data-tests/src/test/java/org/camunda/bpm/qa/largedata/optimize/OptimizeApiPageSizeTest.java b/qa/large-data-tests/src/test/java/org/camunda/bpm/qa/largedata/optimize/OptimizeApiPageSizeTest.java
index <HASH>..<HASH> 100644
--- a/qa/large-data-tests/src/test/java/org/camunda/bpm/qa/largedata/optimize/OptimizeApiPageSizeTest.java
+++ b/qa/large-data-tests/src/test/java/org/camunda/bpm/qa/largedata/optimize/OptimizeApiPageSizeTest.java
@@ -53,11 +53,6 @@ public class OptimizeApiPageSizeTest {
generator.generateData();
}
- @AfterClass
- public static void tearDown() {
- TestHelper.assertAndEnsureCleanDbAndCache(processEngineRule.getProcessEngine(), false);
- }
-
@Test
@Parameters(method = "optimizeServiceFunctions")
public void databaseCanCopeWithPageSize(TestScenario scenario) {
|
chore(qa): revert cleanup in large data test
- the cleanup takes too long to finish due to the amount of data
Related to CAM-<I>
|
camunda_camunda-bpm-platform
|
train
|
java
|
84838428887ef3f00aa07edf271a94eecb32f1f6
|
diff --git a/src/Symfony/Component/HttpClient/CurlHttpClient.php b/src/Symfony/Component/HttpClient/CurlHttpClient.php
index <HASH>..<HASH> 100644
--- a/src/Symfony/Component/HttpClient/CurlHttpClient.php
+++ b/src/Symfony/Component/HttpClient/CurlHttpClient.php
@@ -55,7 +55,7 @@ final class CurlHttpClient implements HttpClientInterface, LoggerAwareInterface
*
* @see HttpClientInterface::OPTIONS_DEFAULTS for available options
*/
- public function __construct(array $defaultOptions = [], int $maxHostConnections = 6, int $maxPendingPushes = 0)
+ public function __construct(array $defaultOptions = [], int $maxHostConnections = 6, int $maxPendingPushes = 50)
{
if (!\extension_loaded('curl')) {
throw new \LogicException('You cannot use the "Symfony\Component\HttpClient\CurlHttpClient" as the "curl" extension is not installed.');
|
Re-enable push support for HttpClient
|
symfony_symfony
|
train
|
php
|
ff9c1321dc2d7014d7a516a7d939e8e0e66bf1ed
|
diff --git a/src/Symfony/Component/Config/Resource/DirectoryResource.php b/src/Symfony/Component/Config/Resource/DirectoryResource.php
index <HASH>..<HASH> 100644
--- a/src/Symfony/Component/Config/Resource/DirectoryResource.php
+++ b/src/Symfony/Component/Config/Resource/DirectoryResource.php
@@ -210,7 +210,7 @@ class DirectoryResource implements ResourceInterface
*/
public function getId()
{
- return md5($this->resource.$this->pattern);
+ return md5('d'.$this->resource.$this->pattern);
}
public function serialize()
diff --git a/src/Symfony/Component/Config/Resource/FileResource.php b/src/Symfony/Component/Config/Resource/FileResource.php
index <HASH>..<HASH> 100644
--- a/src/Symfony/Component/Config/Resource/FileResource.php
+++ b/src/Symfony/Component/Config/Resource/FileResource.php
@@ -97,7 +97,7 @@ class FileResource implements ResourceInterface
*/
public function getId()
{
- return md5($this->resource);
+ return md5('f'.$this->resource);
}
public function serialize()
|
[Config] added type prefixes to resource ids
Makes sure that directory and the file resources
with the same name will have different ids
|
symfony_symfony
|
train
|
php,php
|
4583695e284676742c3857cc5953c2909da343a6
|
diff --git a/generators/app/templates/client/app/components/remote-unique/remote-unique.directive.js b/generators/app/templates/client/app/components/remote-unique/remote-unique.directive.js
index <HASH>..<HASH> 100644
--- a/generators/app/templates/client/app/components/remote-unique/remote-unique.directive.js
+++ b/generators/app/templates/client/app/components/remote-unique/remote-unique.directive.js
@@ -30,7 +30,7 @@
var ignore;
attrs.$observe('remoteUniqueIgnore', function (newValue) {
- ignore = newValue;
+ if (typeof ignore === 'undefined') ignore = newValue;
});
ctrl.$parsers.unshift(validateRemoteUnique);
|
fix(remoteUnique): fix ignore, ignore 1st val
|
michaelkrone_generator-material-app
|
train
|
js
|
f74032d5720eba5e82c99ea541d1c216b8887453
|
diff --git a/grimoire/ocean/elastic.py b/grimoire/ocean/elastic.py
index <HASH>..<HASH> 100644
--- a/grimoire/ocean/elastic.py
+++ b/grimoire/ocean/elastic.py
@@ -151,7 +151,10 @@ class ElasticOcean(object):
"value":self.perceval_backend.origin}
offset = self.elastic.get_last_offset("offset", filter_)
- logging.info("Incremental from: %i offset", offset)
+ if offset:
+ logging.info("Incremental from: %i offset", offset)
+ else:
+ logging.info("Not incremental")
task_init = datetime.now()
|
[ocean] Fix log message when working with offset
|
chaoss_grimoirelab-elk
|
train
|
py
|
0a7f2ab8f441d75a74e527cf5fa90c7790bd92cb
|
diff --git a/core/src/main/java/cucumber/runtime/model/CucumberScenario.java b/core/src/main/java/cucumber/runtime/model/CucumberScenario.java
index <HASH>..<HASH> 100644
--- a/core/src/main/java/cucumber/runtime/model/CucumberScenario.java
+++ b/core/src/main/java/cucumber/runtime/model/CucumberScenario.java
@@ -30,7 +30,7 @@ public class CucumberScenario extends CucumberTagStatement {
}
/**
- * This method is called when Cucumber is run from the CLI, but not when run from JUnit
+ * This method is called when Cucumber is run from the CLI or JUnit
*/
@Override
public void run(Formatter formatter, Reporter reporter, Runtime runtime) {
|
Fixed comment
The run method is called from cucumber.runtime.junit.ExecutionUnitRunner
|
cucumber_cucumber-jvm
|
train
|
java
|
70de56bcbcba9d098813aaae0055c34ebc963aae
|
diff --git a/pyghmi/redfish/command.py b/pyghmi/redfish/command.py
index <HASH>..<HASH> 100644
--- a/pyghmi/redfish/command.py
+++ b/pyghmi/redfish/command.py
@@ -550,6 +550,9 @@ class Command(object):
}
yield ('System', sysinfo)
self._hwnamemap = {}
+ memurl = self.sysinfo.get('Memory', {}).get('@odata.id', None)
+ cpurl = self.sysinfo.get('Processors', {}).get('@odata.id', None)
+ list(self._do_bulk_requests([memurl, cpurl]))
adpurls = self._get_adp_urls()
cpurls = self._get_cpu_urls()
memurls = self._get_mem_urls()
|
Remove a round trip delay for inventory
Enable potential concurrency for CPU and Memory
parent urls.
Change-Id: I<I>f<I>efb<I>db3c<I>cd6f<I>b<I>aa
|
openstack_pyghmi
|
train
|
py
|
2935bebb2ce3902843b912222c3d9e6f860ef1a1
|
diff --git a/azure-storage-common/azure/storage/common/storageclient.py b/azure-storage-common/azure/storage/common/storageclient.py
index <HASH>..<HASH> 100644
--- a/azure-storage-common/azure/storage/common/storageclient.py
+++ b/azure-storage-common/azure/storage/common/storageclient.py
@@ -321,6 +321,9 @@ class StorageClient(object):
except AzureException as ex:
# only parse the strings used for logging if logging is at least enabled for CRITICAL
+ exception_str_in_one_line = ''
+ status_code = ''
+ timestamp_and_request_id = ''
if logger.isEnabledFor(logging.CRITICAL):
exception_str_in_one_line = str(ex).replace('\n', '')
status_code = retry_context.response.status if retry_context.response is not None else 'Unknown'
|
fix local variabled referenced before assigned error
|
Azure_azure-storage-python
|
train
|
py
|
94a78c51616b0e1450a886d11caf033c18f84114
|
diff --git a/src/python/pants/option/migrate_config.py b/src/python/pants/option/migrate_config.py
index <HASH>..<HASH> 100644
--- a/src/python/pants/option/migrate_config.py
+++ b/src/python/pants/option/migrate_config.py
@@ -34,6 +34,7 @@ migrations = {
('scala-compile', 'no_warning_args'): ('compile.scala', 'no_warning_args'),
('scala-compile', 'runtime-deps'): ('compile.scala', 'runtime-deps'),
('scala-compile', 'use_nailgun'): ('compile.scala', 'use_nailgun'),
+ ('scala-compile', 'args'): ('compile.scala', 'args'),
('javadoc-gen', 'include_codegen'): ('gen.javadoc', 'include_codegen'),
('scaladoc-gen', 'include_codegen'): ('gen.scaladoc', 'include_codegen'),
|
Add missing stanza in the migration script.
Reviewed at <URL>
|
pantsbuild_pants
|
train
|
py
|
2a6e9d32277f7083109f2ecceba7840de93760df
|
diff --git a/src/Illuminate/Queue/CallQueuedHandler.php b/src/Illuminate/Queue/CallQueuedHandler.php
index <HASH>..<HASH> 100644
--- a/src/Illuminate/Queue/CallQueuedHandler.php
+++ b/src/Illuminate/Queue/CallQueuedHandler.php
@@ -30,6 +30,13 @@ class CallQueuedHandler
protected $container;
/**
+ * Indicates if the unique job lock has been released.
+ *
+ * @var bool
+ */
+ protected $uniqueLockReleased = false;
+
+ /**
* Create a new handler instance.
*
* @param \Illuminate\Contracts\Bus\Dispatcher $dispatcher
@@ -171,7 +178,7 @@ class CallQueuedHandler
*/
protected function ensureUniqueJobLockIsReleased($command)
{
- if (! $command instanceof ShouldBeUnique) {
+ if (! ($command instanceof ShouldBeUnique) || $this->uniqueLockReleased) {
return;
}
@@ -186,6 +193,8 @@ class CallQueuedHandler
$cache->lock(
'laravel_unique_job:'.get_class($command).$uniqueId
)->forceRelease();
+
+ $this->uniqueLockReleased = true;
}
/**
|
Ensure unique job lock is released only once in lifecycle
|
laravel_framework
|
train
|
php
|
56412f4441f0954d3181eafbfb7b60b361e05d11
|
diff --git a/activerecord/lib/active_record/associations/has_many_association.rb b/activerecord/lib/active_record/associations/has_many_association.rb
index <HASH>..<HASH> 100644
--- a/activerecord/lib/active_record/associations/has_many_association.rb
+++ b/activerecord/lib/active_record/associations/has_many_association.rb
@@ -92,13 +92,17 @@ module ActiveRecord
end
def count_records
- if has_cached_counter?
+ count = if has_cached_counter?
@owner.send(:read_attribute, cached_counter_attribute_name)
elsif @options[:counter_sql]
@association_class.count_by_sql(@counter_sql)
else
@association_class.count(@counter_sql)
end
+
+ @target = [] and loaded if count == 0
+
+ return count
end
def has_cached_counter?
|
Optimize counting of has_many associations by setting the association to empty if the count is 0
git-svn-id: <URL>
|
rails_rails
|
train
|
rb
|
4a2ad130a57612b4c7f03b4a63df64ab22c6ecc7
|
diff --git a/resources/lang/it-IT/forms.php b/resources/lang/it-IT/forms.php
index <HASH>..<HASH> 100644
--- a/resources/lang/it-IT/forms.php
+++ b/resources/lang/it-IT/forms.php
@@ -155,7 +155,7 @@ return [
'days-of-incidents' => 'Quanti giorni di segnalazioni mostrare?',
'time_before_refresh' => 'Frequenza di aggiornamento della pagina di stato (in secondi).',
'banner' => 'Immagine del banner',
- 'banner-help' => 'È consigliabile caricare file larghi non più di 930px.',
+ 'banner-help' => "È consigliabile caricare file larghi non più di 930px.",
'subscribers' => 'Permettere alle persone di iscriversi alle notifiche via email?',
'suppress_notifications_in_maintenance' => 'Suppress notifications when incident occurs during maintenance period?',
'skip_subscriber_verification' => 'Skip verifica degli utenti? (Attenzione, potreste ricevere spam)',
|
New translations forms.php (Italian)
|
CachetHQ_Cachet
|
train
|
php
|
ba9f439a34447aa0eb1bfda941dfbfe1940cc768
|
diff --git a/core/src/main/java/com/capitalone/dashboard/event/BuildEventListener.java b/core/src/main/java/com/capitalone/dashboard/event/BuildEventListener.java
index <HASH>..<HASH> 100644
--- a/core/src/main/java/com/capitalone/dashboard/event/BuildEventListener.java
+++ b/core/src/main/java/com/capitalone/dashboard/event/BuildEventListener.java
@@ -68,6 +68,7 @@ public class BuildEventListener extends HygieiaMongoEventListener<Build> {
failedBuildCommit.addNewPipelineProcessedTimestamp(PipelineStageType.Build.name(), successfulBuild.getTimestamp());
pipeline.addCommit(PipelineStageType.Build.name(), failedBuildCommit);
}
+ pipeline.getFailedBuilds().remove(b);
}
}
}
|
Need to empty the failed builds bucket once a passed build comes through and we move the commits otherwise the bucket never clears up.
|
Hygieia_Hygieia
|
train
|
java
|
c965d231b1d32aa9ee380e915a1bf8cf5ebf48ab
|
diff --git a/lib/Doctrine/ORM/Tools/SchemaTool.php b/lib/Doctrine/ORM/Tools/SchemaTool.php
index <HASH>..<HASH> 100644
--- a/lib/Doctrine/ORM/Tools/SchemaTool.php
+++ b/lib/Doctrine/ORM/Tools/SchemaTool.php
@@ -145,7 +145,7 @@ class SchemaTool
$this->_gatherRelationsSql($class, $table, $schema);
// Add the discriminator column
- $discrColumnDef = $this->_getDiscriminatorColumnDefinition($class, $table);
+ $this->addDiscriminatorColumnDefinition($class, $table);
// Aggregate all the information from all classes in the hierarchy
foreach ($class->parentClasses as $parentClassName) {
@@ -177,7 +177,7 @@ class SchemaTool
// Add the discriminator column only to the root table
if ($class->name == $class->rootEntityName) {
- $discrColumnDef = $this->_getDiscriminatorColumnDefinition($class, $table);
+ $this->addDiscriminatorColumnDefinition($class, $table);
} else {
// Add an ID FK column to child tables
/* @var Doctrine\ORM\Mapping\ClassMetadata $class */
@@ -267,7 +267,7 @@ class SchemaTool
* @return array The portable column definition of the discriminator column as required by
* the DBAL.
*/
- private function _getDiscriminatorColumnDefinition($class, $table)
+ private function addDiscriminatorColumnDefinition($class, $table)
{
$discrColumn = $class->discriminatorColumn;
|
Rename method and refactor code a bit
|
doctrine_orm
|
train
|
php
|
56c0ffbdd090ff452d977a77c7b8d975402e6b17
|
diff --git a/lib/simple_record.rb b/lib/simple_record.rb
index <HASH>..<HASH> 100644
--- a/lib/simple_record.rb
+++ b/lib/simple_record.rb
@@ -28,7 +28,7 @@ require 'sdb/active_sdb'
module SimpleRecord
- VERSION = '1.0.9'
+ VERSION = '1.0.16'
class Base < RightAws::ActiveSdb::Base
@@ -189,6 +189,10 @@ module SimpleRecord
send(:define_method, arg) do
options2 = @@belongs_to_map[arg]
class_name = options2[:class_name] || arg.to_s[0...1].capitalize + arg.to_s[1...arg.to_s.length]
+
+ # Camelize classnames with underscores (ie my_model.rb --> MyModel)
+ class_name = class_name.camelize
+
# puts "attr=" + @attributes[arg_id].inspect
# puts 'val=' + @attributes[arg_id][0].inspect unless @attributes[arg_id].nil?
ret = nil
|
- Added camelize to class so that belongs_to associations work with underscored domains (ie my_user_domain --> MyUserDomain)
|
appoxy_simple_record
|
train
|
rb
|
f2cfd9a1a8b5113b26ad04cc3c02033e7b8df75b
|
diff --git a/admin_modules.php b/admin_modules.php
index <HASH>..<HASH> 100644
--- a/admin_modules.php
+++ b/admin_modules.php
@@ -71,8 +71,6 @@ case 'delete_module':
WT_DB::prepare("DELETE FROM `##module_setting` WHERE module_name=?")->execute(array($module_name));
WT_DB::prepare("DELETE FROM `##module_privacy` WHERE module_name=?")->execute(array($module_name));
WT_DB::prepare("DELETE FROM `##module` WHERE module_name=?")->execute(array($module_name));
- unset($modules[$module_name]);
- unset($module_status[$module_name]);
header('Location: ' . WT_SERVER_NAME . WT_SCRIPT_PATH . 'admin_modules.php');
exit;
diff --git a/library/WT/Module.php b/library/WT/Module.php
index <HASH>..<HASH> 100644
--- a/library/WT/Module.php
+++ b/library/WT/Module.php
@@ -459,7 +459,6 @@ abstract class WT_Module {
}
}
}
- uasort($modules, create_function('$x,$y', 'return WT_I18N::strcasecmp((string)$x, (string)$y);'));
return $modules;
}
|
Do not need to sort modules during internal processing
|
fisharebest_webtrees
|
train
|
php,php
|
e0ed95da1e86f5fc26de5ca2360c0be426a8ad9b
|
diff --git a/crypt-data/src/test/java/de/alpharogroup/crypto/factories/CertFactoryTest.java b/crypt-data/src/test/java/de/alpharogroup/crypto/factories/CertFactoryTest.java
index <HASH>..<HASH> 100644
--- a/crypt-data/src/test/java/de/alpharogroup/crypto/factories/CertFactoryTest.java
+++ b/crypt-data/src/test/java/de/alpharogroup/crypto/factories/CertFactoryTest.java
@@ -99,8 +99,8 @@ public class CertFactoryTest
Security.addProvider(new BouncyCastleProvider());
- final PrivateKey privateKey = PrivateKeyReader.readPemPrivateKey(privatekeyPemFile,
- SecurityProvider.BC);
+ final PrivateKey privateKey = PrivateKeyReader.readPemPrivateKey(privatekeyPemFile);
+
final File publickeyPemDir = new File(PathFinder.getSrcTestResourcesDir(), "pem");
final File publickeyPemFile = new File(publickeyPemDir, "public.pem");
|
Update CertFactoryTest.java
|
astrapi69_mystic-crypt
|
train
|
java
|
33e77c02bdb5b3b9c02f5f53d40175bbd52993b1
|
diff --git a/jobs/create-group-version-branch.js b/jobs/create-group-version-branch.js
index <HASH>..<HASH> 100644
--- a/jobs/create-group-version-branch.js
+++ b/jobs/create-group-version-branch.js
@@ -206,7 +206,7 @@ module.exports = async function (
if (!satisfies && openPR) {
const dependencyNameAndVersion = `${depName}-${latestDependencyVersion}`
- if (!openPR.comments.includes(dependencyNameAndVersion)) {
+ if (!openPR.comments || !openPR.comments.includes(dependencyNameAndVersion)) {
hasNewUpdates = true
await upsert(repositories, openPR._id, {
comments: [...(openPR.comments || []), dependencyNameAndVersion]
|
fix(create-group-version-branch): comments on PR docs might not exist yet
|
greenkeeperio_greenkeeper
|
train
|
js
|