Dataset schema (columns, in the order they appear in each record below):

  instance_id            string, length 17-39
  repo                   string, 8 distinct values
  issue_id               string, length 14-34
  pr_id                  string, length 14-34
  linking_methods        sequence, length 1-3
  base_commit            string, length 40
  merge_commit           string, length 0-40
  hints_text             sequence, length 0-106
  resolved_comments      sequence, length 0-119
  created_at             unknown
  labeled_as             sequence, length 0-7
  problem_title          string, length 7-174
  problem_statement      string, length 0-55.4k
  gold_files             sequence, length 0-10
  gold_files_postpatch   sequence, length 1-10
  test_files             sequence, length 0-60
  gold_patch             string, length 220-5.83M
  test_patch             string, length 386-194k
  split_random           string, 3 distinct values
  split_time             string, 3 distinct values
  issue_start_time       timestamp[ns]
  issue_created_at       unknown
  issue_by_user          string, length 3-21
  split_repo             string, 3 distinct values
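
For reference, a minimal Java sketch of one record under this schema. The field names come from the listing above; the Java types (String vs. List<String>, timestamps kept as strings) and the per-field comments are inferred from the example records below and are assumptions, not part of the published schema.

```java
import java.util.List;

/** One row of the issue-to-PR linking dataset (types inferred, not authoritative). */
final class LinkedIssueRecord {
    String instanceId;             // e.g. "netty/netty/5175_5192"
    String repo;                   // e.g. "netty/netty"
    String issueId;                // e.g. "netty/netty/5175"
    String prId;                   // e.g. "netty/netty/5192"
    List<String> linkingMethods;   // how the issue and PR were linked
    String baseCommit;             // 40-character commit SHA
    String mergeCommit;            // 40-character commit SHA, may be empty
    List<String> hintsText;        // comments from the issue thread
    List<String> resolvedComments; // comments from the PR review
    String createdAt;              // ISO-8601 timestamp, kept as a string here
    List<String> labeledAs;        // issue labels, e.g. ["feature"]
    String problemTitle;
    String problemStatement;
    List<String> goldFiles;        // files touched by the fix
    List<String> goldFilesPostpatch;
    List<String> testFiles;
    String goldPatch;              // unified diff of the fix
    String testPatch;              // unified diff of the accompanying tests
    String splitRandom;            // train / val / test
    String splitTime;
    String issueStartTime;         // timestamp[ns] in the source schema
    String issueCreatedAt;
    String issueByUser;            // GitHub login of the issue author
    String splitRepo;
}
```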
netty/netty/5175_5192
netty/netty
netty/netty/5175
netty/netty/5192
[ "timestamp(timedelta=22.0, similarity=0.876440567581754)" ]
4d2e91a10dd6b2b3a1ec25a26e2f64d9c4212fe9
e3cd7aacd7b1ad23b806310b40f6b3b4420297db
[ "At the moment not through this interface. What type of socket channel are you dealing with as there is likely a way to do it using an implementation of this interface.\n", "NioSocketChannel, EpollSocketChannel or OioSocketChannel\n", "I see you want it all ... stand by for a PR.\n", "Fixed by https://github.com/netty/netty/pull/5269\n" ]
[ "@normanmaurer - making `isInputShutdown()` abstract and removing this method is an API change ... i can be more strict with the API but this will change will only be for 4.1, and its OIO... let me know if you think I should spend more effort here to preserve the API\n", "this is also an API change, but IMO was \"broken\" before (see the commit message)\n", "@trustin WDYT ?\n", "@trustin WDYT ?\n", "@Scottmitch does this somehow change the behaviour of our current impl and so may break stuff ?\n", "javadocs for protected methods please\n", "same question as before \n", "setSuccess() ?\n", "should we log t if cause is not null ?\n", "should we log t if cause is not null ?\n", "setSuccess()\n", "I think this is not correct as voidPromise() will fail when you try to add listeners to it. So better just return a `newFailedFuture()` ?\n", "same as below\n", "done\n", "done\n", "done\n", "done\n", "done\n", "yes I debated this but no-one is currently attaching to the future ... it only returns a ChannelFuture so it doesn't conflict with the existing `ChannelFuture shutdownInput()` from the base interface.\n\nHowever I can change this in case future code changes i guess\n", "@normanmaurer - previously `shutdownInput` set state internally and did not attempt to set the channel state or change the state of the underlying transport. Now that we do interact with the underlying transport we either close the underlying channel or shutdown channel (if the user enabled this behavior). NIO already [checks the input is open](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/sun/nio/ch/SocketChannelImpl.java#300) before reading anything. This is also now more consistent with EPOLL.\n", "done\n", "This should be `UnsupportedOperationException` \n", "This should be `UnsupportedOperationException` \n", "debug ?\n", "debug ?\n", "factory out the above code into a new method `shutdownOutput0(...)` and delegate to it.\n", "See above and delegate to `shutdownOutput0(...)`\n", "same as above\n", "same as above\n", "same as above\n", "+1\n", "+1\n", "sure\n", "sure\n", "I'm not sure it is necessary here. It is makes sense where this is currently done in `NioSocketChannel` because it can be called in a few spots ... but here this method just calls itself (so no duplication). The `shutdown` method is different because it must treat the exceptions differently so we can't really share must there.\n", "see https://github.com/netty/netty/pull/5192#discussion_r62754131\n", "see https://github.com/netty/netty/pull/5192#discussion_r62754131\n", "see https://github.com/netty/netty/pull/5192#discussion_r62754131\n", "This code is different than above. We treat exceptions differently here.\n", "see https://github.com/netty/netty/pull/5192#discussion_r62543513\n", "I think it's fine because it was broken before and it affects the transport implementations only.\n", "notify -> shutdown the input and notify\n", "Isn't `inEventLoop()` relatively an expensive operation? I'd prefer not letting it called twice when we can avoid.\n", "Sgtm I will factor it out\n" ]
"2016-04-30T02:37:08Z"
[ "feature" ]
SocketChannel.shutdownInput()?
Is there a way to shutdownInput? I see that there is SocketChannel.shutdownOutput(), but no shutdownInput().
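
The linked PR (see the gold patch below) addresses this by adding shutdownInput() and shutdown() alongside the existing shutdownOutput() on DuplexChannel, implemented for the NIO, EPOLL and OIO socket channels. A minimal usage sketch, assuming Netty 4.1 with those additions and an already-connected socket channel; the listener logic is illustrative:

```java
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.socket.DuplexChannel;

final class HalfClosureExample {
    /** Stop reading from the peer but keep the write side of the socket open. */
    static void shutdownReadSide(DuplexChannel ch) {
        ChannelFuture f = ch.shutdownInput();          // added by the linked PR
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (!future.isSuccess()) {
                    future.channel().close();          // fall back to a full close
                }
            }
        });
    }

    private HalfClosureExample() { }
}
```

Per the DuplexChannel changes in the patch, isInputShutdown(), isShutdown() and the ChannelPromise-accepting overloads are available as well.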
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java", "transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java", "transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java", "transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java", "transport/src/main/java/io/netty/channel/socket/DuplexChannel.java", "transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java", "transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java" ]
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java", "transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java", "transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java", "transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java", "transport/src/main/java/io/netty/channel/socket/DuplexChannel.java", "transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java", "transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java" ]
[ "transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputByPeerTest.java", "transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputBySelfTest.java" ]
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java index bc22b37370e..8cdbba542ef 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java @@ -530,7 +530,7 @@ protected Object filterOutboundMessage(Object msg) { "unsupported message type: " + StringUtil.simpleClassName(msg) + EXPECTED_TYPES); } - protected void shutdownOutput0(final ChannelPromise promise) { + private void shutdownOutput0(final ChannelPromise promise) { try { fd().shutdown(false, true); promise.setSuccess(); @@ -539,14 +539,37 @@ protected void shutdownOutput0(final ChannelPromise promise) { } } + private void shutdownInput0(final ChannelPromise promise) { + try { + fd().shutdown(true, false); + promise.setSuccess(); + } catch (Throwable cause) { + promise.setFailure(cause); + } + } + + private void shutdown0(final ChannelPromise promise) { + try { + fd().shutdown(true, true); + promise.setSuccess(); + } catch (Throwable cause) { + promise.setFailure(cause); + } + } + + @Override + public boolean isOutputShutdown() { + return fd().isOutputShutdown(); + } + @Override public boolean isInputShutdown() { return fd().isInputShutdown(); } @Override - public boolean isOutputShutdown() { - return fd().isOutputShutdown(); + public boolean isShutdown() { + return fd().isShutdown(); } @Override @@ -580,6 +603,68 @@ public void run() { return promise; } + @Override + public ChannelFuture shutdownInput() { + return shutdownInput(newPromise()); + } + + @Override + public ChannelFuture shutdownInput(final ChannelPromise promise) { + Executor closeExecutor = ((EpollStreamUnsafe) unsafe()).prepareToClose(); + if (closeExecutor != null) { + closeExecutor.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput0(promise); + } + }); + } else { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdownInput0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput0(promise); + } + }); + } + } + return promise; + } + + @Override + public ChannelFuture shutdown() { + return shutdown(newPromise()); + } + + @Override + public ChannelFuture shutdown(final ChannelPromise promise) { + Executor closeExecutor = ((EpollStreamUnsafe) unsafe()).prepareToClose(); + if (closeExecutor != null) { + closeExecutor.execute(new OneTimeTask() { + @Override + public void run() { + shutdown0(promise); + } + }); + } else { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdown0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdown0(promise); + } + }); + } + } + return promise; + } + @Override protected void doClose() throws Exception { try { diff --git a/transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java b/transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java index 07628a944ae..7ff98e06ef9 100644 --- a/transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java +++ b/transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java @@ -18,6 +18,7 @@ import gnu.io.CommPort; import gnu.io.CommPortIdentifier; import gnu.io.SerialPort; +import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelPromise; import io.netty.channel.oio.OioByteStreamChannel; import 
io.netty.util.internal.OneTimeTask; @@ -25,7 +26,14 @@ import java.net.SocketAddress; import java.util.concurrent.TimeUnit; -import static io.netty.channel.rxtx.RxtxChannelOption.*; +import static io.netty.channel.rxtx.RxtxChannelOption.BAUD_RATE; +import static io.netty.channel.rxtx.RxtxChannelOption.DATA_BITS; +import static io.netty.channel.rxtx.RxtxChannelOption.DTR; +import static io.netty.channel.rxtx.RxtxChannelOption.PARITY_BIT; +import static io.netty.channel.rxtx.RxtxChannelOption.READ_TIMEOUT; +import static io.netty.channel.rxtx.RxtxChannelOption.RTS; +import static io.netty.channel.rxtx.RxtxChannelOption.STOP_BITS; +import static io.netty.channel.rxtx.RxtxChannelOption.WAIT_TIME; /** * A channel to a serial device using the RXTX library. @@ -129,6 +137,16 @@ protected void doClose() throws Exception { } } + @Override + protected boolean isInputShutdown() { + return !open; + } + + @Override + protected ChannelFuture shutdownInput() { + return newFailedFuture(new UnsupportedOperationException("shutdownInput")); + } + private final class RxtxUnsafe extends AbstractUnsafe { @Override public void connect( diff --git a/transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java b/transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java index 20adae249ba..62dbb9ffd7e 100644 --- a/transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java +++ b/transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java @@ -17,10 +17,10 @@ import com.barchart.udt.TypeUDT; import com.barchart.udt.nio.SocketChannelUDT; - import io.netty.buffer.ByteBuf; import io.netty.channel.Channel; import io.netty.channel.ChannelException; +import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelMetadata; import io.netty.channel.FileRegion; import io.netty.channel.RecvByteBufAllocator; @@ -34,7 +34,7 @@ import java.net.InetSocketAddress; import java.net.SocketAddress; -import static java.nio.channels.SelectionKey.*; +import static java.nio.channels.SelectionKey.OP_CONNECT; /** * Byte Channel Connector for UDT Streams. @@ -149,6 +149,11 @@ protected int doWriteBytes(final ByteBuf byteBuf) throws Exception { return byteBuf.readBytes(javaChannel(), expectedWrittenBytes); } + @Override + protected ChannelFuture shutdownInput() { + return newFailedFuture(new UnsupportedOperationException("shutdownInput")); + } + @Override protected long doWriteFileRegion(FileRegion region) throws Exception { throw new UnsupportedOperationException(); diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java index 69f99b15758..cc59b84d8a7 100644 --- a/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java +++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java @@ -19,6 +19,7 @@ import io.netty.buffer.ByteBufAllocator; import io.netty.channel.Channel; import io.netty.channel.ChannelConfig; +import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelOption; import io.netty.channel.ChannelOutboundBuffer; import io.netty.channel.ChannelPipeline; @@ -52,6 +53,11 @@ protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) { super(parent, ch, SelectionKey.OP_READ); } + /** + * Shutdown the input side of the channel. 
+ */ + protected abstract ChannelFuture shutdownInput(); + @Override protected AbstractNioUnsafe newUnsafe() { return new NioByteUnsafe(); @@ -60,10 +66,10 @@ protected AbstractNioUnsafe newUnsafe() { protected class NioByteUnsafe extends AbstractNioUnsafe { private void closeOnRead(ChannelPipeline pipeline) { - SelectionKey key = selectionKey(); - setInputShutdown(); if (isOpen()) { if (Boolean.TRUE.equals(config().getOption(ChannelOption.ALLOW_HALF_CLOSURE))) { + shutdownInput(); + SelectionKey key = selectionKey(); key.interestOps(key.interestOps() & ~readInterestOp); pipeline.fireUserEventTriggered(ChannelInputShutdownEvent.INSTANCE); } else { diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java index 582e6ae6faf..8de66aba187 100644 --- a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java +++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java @@ -60,7 +60,6 @@ public abstract class AbstractNioChannel extends AbstractChannel { private final SelectableChannel ch; protected final int readInterestOp; volatile SelectionKey selectionKey; - private volatile boolean inputShutdown; boolean readPending; private final Runnable clearReadPendingRunnable = new Runnable() { @Override @@ -197,20 +196,6 @@ private void clearReadPending0() { ((AbstractNioUnsafe) unsafe()).removeReadOp(); } - /** - * Return {@code true} if the input of this {@link Channel} is shutdown - */ - protected boolean isInputShutdown() { - return inputShutdown; - } - - /** - * Shutdown the input of this {@link Channel}. - */ - void setInputShutdown() { - inputShutdown = true; - } - /** * Special {@link Unsafe} sub-type which allows to access the underlying {@link SelectableChannel} */ @@ -422,10 +407,6 @@ protected void doDeregister() throws Exception { @Override protected void doBeginRead() throws Exception { // Channel.read() or ChannelHandlerContext.read() was called - if (inputShutdown) { - return; - } - final SelectionKey selectionKey = this.selectionKey; if (!selectionKey.isValid()) { return; diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java index 1d606e075b1..1c7f92aa2b1 100644 --- a/transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java +++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java @@ -33,6 +33,7 @@ * {@link AbstractNioChannel} base class for {@link Channel}s that operate on messages. 
*/ public abstract class AbstractNioMessageChannel extends AbstractNioChannel { + boolean inputShutdown; /** * @see {@link AbstractNioChannel#AbstractNioChannel(Channel, SelectableChannel, int)} @@ -46,6 +47,14 @@ protected AbstractNioUnsafe newUnsafe() { return new NioMessageUnsafe(); } + @Override + protected void doBeginRead() throws Exception { + if (inputShutdown) { + return; + } + super.doBeginRead(); + } + private final class NioMessageUnsafe extends AbstractNioUnsafe { private final List<Object> readBuf = new ArrayList<Object>(); @@ -98,7 +107,7 @@ public void read() { } if (closed) { - setInputShutdown(); + inputShutdown = true; if (isOpen()) { close(voidPromise()); } diff --git a/transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java b/transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java index 8c10cf2b206..15da4099b6b 100644 --- a/transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java +++ b/transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java @@ -19,6 +19,7 @@ import io.netty.buffer.ByteBufAllocator; import io.netty.channel.Channel; import io.netty.channel.ChannelConfig; +import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelMetadata; import io.netty.channel.ChannelOption; import io.netty.channel.ChannelOutboundBuffer; @@ -40,8 +41,6 @@ public abstract class AbstractOioByteChannel extends AbstractOioChannel { " (expected: " + StringUtil.simpleClassName(ByteBuf.class) + ", " + StringUtil.simpleClassName(FileRegion.class) + ')'; - private volatile boolean inputShutdown; - /** * @see AbstractOioByteChannel#AbstractOioByteChannel(Channel) */ @@ -49,39 +48,27 @@ protected AbstractOioByteChannel(Channel parent) { super(parent); } - protected boolean isInputShutdown() { - return inputShutdown; - } - @Override public ChannelMetadata metadata() { return METADATA; } /** - * Check if the input was shutdown and if so return {@code true}. The default implementation sleeps also for - * {@link #SO_TIMEOUT} milliseconds to simulate some blocking. + * Determine if the input side of this channel is shutdown. + * @return {@code true} if the input side of this channel is shutdown. */ - protected boolean checkInputShutdown() { - if (inputShutdown) { - try { - Thread.sleep(SO_TIMEOUT); - } catch (InterruptedException e) { - // ignore - } - return true; - } - return false; - } + protected abstract boolean isInputShutdown(); - void setInputShutdown() { - inputShutdown = true; - } + /** + * Shutdown the input side of this channel. + * @return A channel future that will complete when the shutdown is complete. 
+ */ + protected abstract ChannelFuture shutdownInput(); private void closeOnRead(ChannelPipeline pipeline) { - setInputShutdown(); if (isOpen()) { if (Boolean.TRUE.equals(config().getOption(ChannelOption.ALLOW_HALF_CLOSURE))) { + shutdownInput(); pipeline.fireUserEventTriggered(ChannelInputShutdownEvent.INSTANCE); } else { unsafe().close(unsafe().voidPromise()); diff --git a/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java b/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java index d34ec36bff1..fc35b0d0158 100644 --- a/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java +++ b/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java @@ -32,6 +32,17 @@ public interface DuplexChannel extends Channel { */ boolean isInputShutdown(); + /** + * @see Socket#shutdownInput() + */ + ChannelFuture shutdownInput(); + + /** + * Will notify the given {@link ChannelPromise} + * @see Socket#shutdownInput() + */ + ChannelFuture shutdownInput(ChannelPromise promise); + /** * @see Socket#isOutputShutdown() */ @@ -48,4 +59,22 @@ public interface DuplexChannel extends Channel { * Will notify the given {@link ChannelPromise} */ ChannelFuture shutdownOutput(ChannelPromise promise); + + /** + * Determine if both the input and output of this channel have been shutdown. + */ + boolean isShutdown(); + + /** + * Will shutdown the input and output sides of this channel. + * @return will be completed when both shutdown operations complete. + */ + ChannelFuture shutdown(); + + /** + * Will shutdown the input and output sides of this channel. + * @param promise will be completed when both shutdown operations complete. + * @return will be completed when both shutdown operations complete. + */ + ChannelFuture shutdown(ChannelPromise promise); } diff --git a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java index 86253ab8510..dc1cbe1cae2 100644 --- a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java +++ b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java @@ -31,6 +31,8 @@ import io.netty.channel.socket.SocketChannelConfig; import io.netty.util.concurrent.GlobalEventExecutor; import io.netty.util.internal.OneTimeTask; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; import java.io.IOException; import java.net.InetSocketAddress; @@ -46,7 +48,7 @@ * {@link io.netty.channel.socket.SocketChannel} which uses NIO selector based implementation. 
*/ public class NioSocketChannel extends AbstractNioByteChannel implements io.netty.channel.socket.SocketChannel { - + private static final InternalLogger logger = InternalLoggerFactory.getInstance(NioSocketChannel.class); private static final ChannelMetadata METADATA = new ChannelMetadata(false, 16); private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider(); @@ -124,9 +126,20 @@ public boolean isActive() { return ch.isOpen() && ch.isConnected(); } + @Override + public boolean isOutputShutdown() { + return javaChannel().socket().isOutputShutdown() || !isActive(); + } + @Override public boolean isInputShutdown() { - return super.isInputShutdown(); + return javaChannel().socket().isInputShutdown() || !isActive(); + } + + @Override + public boolean isShutdown() { + Socket socket = javaChannel().socket(); + return socket.isInputShutdown() && socket.isOutputShutdown() || !isActive(); } @Override @@ -139,11 +152,6 @@ public InetSocketAddress remoteAddress() { return (InetSocketAddress) super.remoteAddress(); } - @Override - public boolean isOutputShutdown() { - return javaChannel().socket().isOutputShutdown() || !isActive(); - } - @Override public ChannelFuture shutdownOutput() { return shutdownOutput(newPromise()); @@ -175,6 +183,68 @@ public void run() { return promise; } + @Override + public ChannelFuture shutdownInput() { + return shutdownInput(newPromise()); + } + + @Override + public ChannelFuture shutdownInput(final ChannelPromise promise) { + Executor closeExecutor = ((NioSocketChannelUnsafe) unsafe()).prepareToClose(); + if (closeExecutor != null) { + closeExecutor.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput0(promise); + } + }); + } else { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdownInput0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput0(promise); + } + }); + } + } + return promise; + } + + @Override + public ChannelFuture shutdown() { + return shutdown(newPromise()); + } + + @Override + public ChannelFuture shutdown(final ChannelPromise promise) { + Executor closeExecutor = ((NioSocketChannelUnsafe) unsafe()).prepareToClose(); + if (closeExecutor != null) { + closeExecutor.execute(new OneTimeTask() { + @Override + public void run() { + shutdown0(promise); + } + }); + } else { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdown0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdown0(promise); + } + }); + } + } + return promise; + } + private void shutdownOutput0(final ChannelPromise promise) { try { javaChannel().socket().shutdownOutput(); @@ -184,6 +254,41 @@ private void shutdownOutput0(final ChannelPromise promise) { } } + private void shutdownInput0(final ChannelPromise promise) { + try { + javaChannel().socket().shutdownInput(); + promise.setSuccess(); + } catch (Throwable t) { + promise.setFailure(t); + } + } + + private void shutdown0(final ChannelPromise promise) { + Socket socket = javaChannel().socket(); + Throwable cause = null; + try { + socket.shutdownOutput(); + } catch (Throwable t) { + cause = t; + } + try { + socket.shutdownInput(); + } catch (Throwable t) { + if (cause == null) { + promise.setFailure(t); + } else { + logger.debug("Exception suppressed because a previous exception occurred.", t); + promise.setFailure(cause); + } + return; + } + if (cause == null) { + promise.setSuccess(); + } else { + promise.setFailure(cause); + } + } + @Override 
protected SocketAddress localAddress0() { return javaChannel().socket().getLocalSocketAddress(); diff --git a/transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java b/transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java index 3b34e213ecc..231d9fcd74a 100644 --- a/transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java +++ b/transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java @@ -38,11 +38,9 @@ /** * A {@link SocketChannel} which is using Old-Blocking-IO */ -public class OioSocketChannel extends OioByteStreamChannel - implements SocketChannel { +public class OioSocketChannel extends OioByteStreamChannel implements SocketChannel { - private static final InternalLogger logger = - InternalLoggerFactory.getInstance(OioSocketChannel.class); + private static final InternalLogger logger = InternalLoggerFactory.getInstance(OioSocketChannel.class); private final Socket socket; private final OioSocketChannelConfig config; @@ -115,14 +113,19 @@ public boolean isActive() { return !socket.isClosed() && socket.isConnected(); } + @Override + public boolean isOutputShutdown() { + return socket.isOutputShutdown() || !isActive(); + } + @Override public boolean isInputShutdown() { - return super.isInputShutdown(); + return socket.isInputShutdown() || !isActive(); } @Override - public boolean isOutputShutdown() { - return socket.isOutputShutdown() || !isActive(); + public boolean isShutdown() { + return socket.isInputShutdown() && socket.isOutputShutdown() || !isActive(); } @Override @@ -130,6 +133,16 @@ public ChannelFuture shutdownOutput() { return shutdownOutput(newPromise()); } + @Override + public ChannelFuture shutdownInput() { + return shutdownInput(newPromise()); + } + + @Override + public ChannelFuture shutdown() { + return shutdown(newPromise()); + } + @Override protected int doReadBytes(ByteBuf buf) throws Exception { if (socket.isClosed()) { @@ -143,24 +156,82 @@ protected int doReadBytes(ByteBuf buf) throws Exception { } @Override - public ChannelFuture shutdownOutput(final ChannelPromise future) { + public ChannelFuture shutdownOutput(final ChannelPromise promise) { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + try { + socket.shutdownOutput(); + promise.setSuccess(); + } catch (Throwable t) { + promise.setFailure(t); + } + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdownOutput(promise); + } + }); + } + return promise; + } + + @Override + public ChannelFuture shutdownInput(final ChannelPromise promise) { EventLoop loop = eventLoop(); if (loop.inEventLoop()) { + try { + socket.shutdownInput(); + promise.setSuccess(); + } catch (Throwable t) { + promise.setFailure(t); + } + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput(promise); + } + }); + } + return promise; + } + + @Override + public ChannelFuture shutdown(final ChannelPromise promise) { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + Throwable cause = null; try { socket.shutdownOutput(); - future.setSuccess(); } catch (Throwable t) { - future.setFailure(t); + cause = t; + } + try { + socket.shutdownInput(); + } catch (Throwable t) { + if (cause == null) { + promise.setFailure(t); + } else { + logger.debug("Exception suppressed because a previous exception occurred.", t); + promise.setFailure(cause); + } + return promise; + } + if (cause == null) { + promise.setSuccess(); + } else { + promise.setFailure(cause); } } else { loop.execute(new OneTimeTask() 
{ @Override public void run() { - shutdownOutput(future); + shutdown(promise); } }); } - return future; + return promise; } @Override @@ -221,7 +292,6 @@ protected void doClose() throws Exception { socket.close(); } - @Override protected boolean checkInputShutdown() { if (isInputShutdown()) { try {
diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputByPeerTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputByPeerTest.java new file mode 100644 index 00000000000..5a7bdc1b93f --- /dev/null +++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputByPeerTest.java @@ -0,0 +1,29 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel.epoll; + +import io.netty.bootstrap.ServerBootstrap; +import io.netty.testsuite.transport.TestsuitePermutation; +import io.netty.testsuite.transport.socket.SocketShutdownOutputByPeerTest; + +import java.util.List; + +public class EpollSocketShutdownOutputByPeerTest extends SocketShutdownOutputByPeerTest { + @Override + protected List<TestsuitePermutation.BootstrapFactory<ServerBootstrap>> newFactories() { + return EpollSocketTestPermutation.INSTANCE.serverSocket(); + } +} diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputBySelfTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputBySelfTest.java new file mode 100644 index 00000000000..3ad80e472bd --- /dev/null +++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputBySelfTest.java @@ -0,0 +1,29 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel.epoll; + +import io.netty.bootstrap.Bootstrap; +import io.netty.testsuite.transport.TestsuitePermutation; +import io.netty.testsuite.transport.socket.SocketShutdownOutputBySelfTest; + +import java.util.List; + +public class EpollSocketShutdownOutputBySelfTest extends SocketShutdownOutputBySelfTest { + @Override + protected List<TestsuitePermutation.BootstrapFactory<Bootstrap>> newFactories() { + return EpollSocketTestPermutation.INSTANCE.clientSocket(); + } +}
val
train
2016-05-10T09:04:52
"2016-04-25T23:24:06Z"
vrozov
val
netty/netty/5173_5195
netty/netty
netty/netty/5173
netty/netty/5195
[ "timestamp(timedelta=18.0, similarity=0.8524211396772581)" ]
cf07f984b16d95719a9ece8c39ed6c11d8c57829
7e748e06e26e95320cc16b8092f4df01c8e79ab3
[ "It only checks the fingerprint. It does not traverse the certificate chain at all. Theoretically, there's a chance where an attacker can forge a certificate with identical fingerprint, although it's not going to be very easy.\n\nI agree that it needs better documentation though. Would you be interested in sending us a PR?\n", "Ah ok, it makes sense to point that out. Will make a PR to improve docs.\n", "Thanks a lot, @CodingFabian !\n", "@CodingFabian any progress here ?\n", "woops, forgot. coming\n", "@CodingFabian 😍 \n", "Fixed by https://github.com/netty/netty/pull/5195\n" ]
[ "@CodingFabian maybe add a `<a href=\"\"...>` for `Man-in-the-middle` \n", "be consistent ... either use Man or man .\n", "`This {@link TrustManagerFactory} will` -> `This {@link TrustManagerFactory} will <strong>only</strong>`\n" ]
"2016-05-02T13:27:58Z"
[ "cleanup", "documentation" ]
FingerprintTrustManagerFactory javadoc note cryptic
the javadoc says: "Never use this {@link TrustManagerFactory} in production unless you are sure exactly what you are doing with it." I was actually sure that in order to prevent MITM attacks, I want to pin the serverside certificates to a list of well known fingerprints. I do not want to accept other certificates. So I think I know what I am doing, but this note made me feel uneasy. Why is it formulated so strongly? Why is it not explaining what "this TrustmanagerFactory" actually does? I am currently trying to figure out if there is an unintentional side effect, but can't find it :-)
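
The resulting documentation change (gold patch below) clarifies that the factory only pins against a whitelist of SHA1 fingerprints and does not verify the certificate chain. A sketch of the pinning use case the reporter describes, assuming the String-varargs constructor and SslContextBuilder available in Netty 4.x; the fingerprint is the placeholder value from the class javadoc:

```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.util.FingerprintTrustManagerFactory;

final class PinnedClientContext {
    /** Trust only servers whose certificate matches one of the pinned SHA1 fingerprints. */
    static SslContext build() throws Exception {
        return SslContextBuilder.forClient()
                .trustManager(new FingerprintTrustManagerFactory(
                        "4E:85:10:55:BC:7B:12:08:D1:EA:0A:12:C9:72:EE:F3:AA:B2:C7:CB"))
                .build();
    }

    private PinnedClientContext() { }
}
```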
[ "handler/src/main/java/io/netty/handler/ssl/util/FingerprintTrustManagerFactory.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/util/FingerprintTrustManagerFactory.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/util/FingerprintTrustManagerFactory.java b/handler/src/main/java/io/netty/handler/ssl/util/FingerprintTrustManagerFactory.java index c4563a99dde..79e434655a9 100644 --- a/handler/src/main/java/io/netty/handler/ssl/util/FingerprintTrustManagerFactory.java +++ b/handler/src/main/java/io/netty/handler/ssl/util/FingerprintTrustManagerFactory.java @@ -39,11 +39,19 @@ /** * An {@link TrustManagerFactory} that trusts an X.509 certificate whose SHA1 checksum matches. * <p> - * <strong>NOTE:</strong> - * Never use this {@link TrustManagerFactory} in production unless you are sure exactly what you are doing with it. - * </p><p> + * <strong>NOTE:</strong> It is recommended to verify certificates and their chain to prevent + * <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">Man-in-the-middle attacks</a>. + * This {@link TrustManagerFactory} will <strong>only</strong> verify that the fingerprint of certificates match one + * of the given fingerprints. This procedure is called + * <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security#Certificate_pinning">certificate pinning</a> and + * is an effective protection. For maximum security one should verify that the whole certificate chain is as expected. + * It is worth mentioning that certain firewalls, proxies or other appliances found in corporate environments, + * actually perform Man-in-the-middle attacks and thus present a different certificate fingerprint. + * </p> + * <p> * The SHA1 checksum of an X.509 certificate is calculated from its DER encoded format. You can get the fingerprint of * an X.509 certificate using the {@code openssl} command. For example: + * * <pre> * $ openssl x509 -fingerprint -sha1 -in my_certificate.crt * SHA1 Fingerprint=4E:85:10:55:BC:7B:12:08:D1:EA:0A:12:C9:72:EE:F3:AA:B2:C7:CB
null
test
train
2016-05-01T20:30:13
"2016-04-25T07:17:18Z"
CodingFabian
val
netty/netty/5157_5203
netty/netty
netty/netty/5157
netty/netty/5203
[ "timestamp(timedelta=4.0, similarity=0.8883271627404382)" ]
a974fff07d9c3c5ff08d3f120f629abe618e2ab6
5f27ef240ad8007068d719ec1c4f1426280197e8
[ "@Zpetkov the 5.x branch has been deprecated and is no longer supported. However, the issue also exists in 4.1. I ll take care of this.\n", "Fixed by https://github.com/netty/netty/pull/5203\n" ]
[]
"2016-05-04T02:32:53Z"
[ "defect" ]
JsonObjectDecoder does not handle escaped backslash
I've noticed the JsonObjectDecoder checks end of value string by looking at the previous char and if its the escape char \ then it's not the end of string if the current char is double quote. This is not correct. Let's send the json object with key "key" and value "something\" , it is serialized as this: {"key":"something\\"} The decoder will not decode this since the last double quote is after a backslash. However this backslash is escaped but this is not handled in the code: https://github.com/netty/netty/blob/83c349ffa94d3992c4ee511d3625afc0c97c12bb/codec/src/main/java/io/netty/handler/codec/json/JsonObjectDecoder.java#L194 This is with 5.0.0 Alpha2 Cheers
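
The fix in the gold patch below replaces the single look-behind with a count of the consecutive backslashes preceding the quote: the quote terminates the string only when that count is even. A standalone sketch of that check (hypothetical helper, mirroring the patched logic):

```java
import io.netty.buffer.ByteBuf;

final class JsonQuoteEscapeCheck {
    /**
     * Returns true if the '"' at quoteIdx is escaped, i.e. preceded by an odd number
     * of consecutive backslashes. For the issue's example {"key":"something\\"} the
     * closing quote is preceded by two backslashes, so it does end the string.
     */
    static boolean isQuoteEscaped(ByteBuf in, int quoteIdx) {
        int backslashCount = 0;
        for (int i = quoteIdx - 1; i >= 0 && in.getByte(i) == '\\'; i--) {
            backslashCount++;
        }
        return (backslashCount & 1) == 1;
    }

    private JsonQuoteEscapeCheck() { }
}
```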
[ "codec/src/main/java/io/netty/handler/codec/json/JsonObjectDecoder.java" ]
[ "codec/src/main/java/io/netty/handler/codec/json/JsonObjectDecoder.java" ]
[ "codec/src/test/java/io/netty/handler/codec/json/JsonObjectDecoderTest.java" ]
diff --git a/codec/src/main/java/io/netty/handler/codec/json/JsonObjectDecoder.java b/codec/src/main/java/io/netty/handler/codec/json/JsonObjectDecoder.java index b791e410370..a34334b6c0e 100644 --- a/codec/src/main/java/io/netty/handler/codec/json/JsonObjectDecoder.java +++ b/codec/src/main/java/io/netty/handler/codec/json/JsonObjectDecoder.java @@ -190,9 +190,22 @@ private void decodeByte(byte c, ByteBuf in, int idx) { // also contain braces/brackets and that could lead to incorrect results. if (!insideString) { insideString = true; - // If the double quote wasn't escaped then this is the end of a string. - } else if (in.getByte(idx - 1) != '\\') { - insideString = false; + } else { + int backslashCount = 0; + idx--; + while (idx >= 0) { + if (in.getByte(idx) == '\\') { + backslashCount++; + idx--; + } else { + break; + } + } + // The double quote isn't escaped only if there are even "\"s. + if (backslashCount % 2 == 0) { + // Since the double quote isn't escaped then this is the end of a string. + insideString = false; + } } } }
diff --git a/codec/src/test/java/io/netty/handler/codec/json/JsonObjectDecoderTest.java b/codec/src/test/java/io/netty/handler/codec/json/JsonObjectDecoderTest.java index 08ece60917b..083b4eb1d43 100644 --- a/codec/src/test/java/io/netty/handler/codec/json/JsonObjectDecoderTest.java +++ b/codec/src/test/java/io/netty/handler/codec/json/JsonObjectDecoderTest.java @@ -87,6 +87,51 @@ public void testSingleByteStream() { assertFalse(ch.finish()); } + @Test + public void testBackslashInString1() { + EmbeddedChannel ch = new EmbeddedChannel(new JsonObjectDecoder()); + // {"foo" : "bar\""} + String json = "{\"foo\" : \"bar\\\"\"}"; + System.out.println(json); + ch.writeInbound(Unpooled.copiedBuffer(json, CharsetUtil.UTF_8)); + + ByteBuf res = ch.readInbound(); + assertEquals(json, res.toString(CharsetUtil.UTF_8)); + res.release(); + + assertFalse(ch.finish()); + } + + @Test + public void testBackslashInString2() { + EmbeddedChannel ch = new EmbeddedChannel(new JsonObjectDecoder()); + // {"foo" : "bar\\"} + String json = "{\"foo\" : \"bar\\\\\"}"; + System.out.println(json); + ch.writeInbound(Unpooled.copiedBuffer(json, CharsetUtil.UTF_8)); + + ByteBuf res = ch.readInbound(); + assertEquals(json, res.toString(CharsetUtil.UTF_8)); + res.release(); + + assertFalse(ch.finish()); + } + + @Test + public void testBackslashInString3() { + EmbeddedChannel ch = new EmbeddedChannel(new JsonObjectDecoder()); + // {"foo" : "bar\\\""} + String json = "{\"foo\" : \"bar\\\\\\\"\"}"; + System.out.println(json); + ch.writeInbound(Unpooled.copiedBuffer(json, CharsetUtil.UTF_8)); + + ByteBuf res = ch.readInbound(); + assertEquals(json, res.toString(CharsetUtil.UTF_8)); + res.release(); + + assertFalse(ch.finish()); + } + @Test public void testMultipleJsonObjectsInOneWrite() { EmbeddedChannel ch = new EmbeddedChannel(new JsonObjectDecoder());
train
train
2016-05-03T08:41:30
"2016-04-17T13:22:45Z"
Zpetkov
val
netty/netty/5199_5204
netty/netty
netty/netty/5199
netty/netty/5204
[ "timestamp(timedelta=3.0, similarity=0.9025616792679517)" ]
f2ed3e6ce8039d142e4c047fcc9cf09409105243
4c13c85c398a92ee5080667243fc24c663956096
[ "I've added this to the checklist in #3667. What you've proposed sounds good.\n\nGOAWAY and RST_STREAM can simply be ignored since the channel will closed automatically (they are propagated for additional information to those that want it; HTTP/1 doesn't want it). Stream ID should always be null on a child channel (at least for now), so there should be nothing to handle there either.\n", "Not sure if this is covered somewhere or is understood but just to clarify on the following....\n\n> HeadersFrame is endStream => LastHttpContent with Trailers\n\nA HEADERS frame with EOS doesn't necessarily mean the headers are trailers. This frame could just be the first HEADERS frame and there are no DATA frames to follow.\n", "OK, that's good to know. So that would be converted to a FullHttpRequest with an empty content body?\n" ]
[ "+1\n", "fix formating \n", "fix formating\n", "I think we should retain as late as possible to mimize the risk of leak data in case of exceptions.\n", "fix formating. \n", "see above\n", "Add `.` on EOL.\n", "also assert the return value of all write calls (also in other tests).\n", "also call channel.finish() and assert the return value\n", "`FullHttpResponse` can also have `trailingHeaders()`\n", "looks like this is covered in `encodeLastContent` ... just started looking but do you see an issue with this?\n", "LastHttpContent is not mutually exclusive with HttpResponse (as is seen with FullHttpResponse). Instead of if-elses, I'd suggest having separate ifs for each part of the message, and don't treat FullHttpResponse specially.\n\n```\nif (obj instanceof HttpMessage) {\n out.add(headers);\n}\nif (obj instanceof HttpContent) {\n if (hasContent) {\n out.add(content /* no eos */);\n }\n}\nif (obj instanceof LastHttpContent) {\n if (lastContent.trailingHeaders().isEmpty()) {\n out.add(emptyDataFrameWithEOS);\n } else {\n out.add(trailers);\n }\n}\n```\n\nIt would be possible to optimize it to avoid `emptyDataFrameWithEOS`, but I question how much that matters.\n", "I'm not sure if we should attempt to insert \"filler\". Seems more expected to handle the objects as they are...\n\n``` java\nLastHttpContent last = ...;\nif (last.trailingHeaders().isEmpty()) {\n out.add(new dataframe(last.content(), true));\n} else {\n out.add(new dataframe(last.content(), false));\n out.add(new headersframe(convert(last.trailingHeaders(), ...), true));\n}\n```\n", "Ah, you're right. It's fine.\n", "consider renaming `toHttpRequest` -> `toFullHttpRequest` and renaming this `toHttpChunkedRequest` -> `toHttpRequest`. The \"chunked\" terminology may be confused with \"chunked\" encoding as defined in https://tools.ietf.org/html/rfc7230#section-4.1\n", "the suggestion in https://github.com/netty/netty/pull/5204#discussion_r62067945 by @ejona86 would also help mitigate this issue.\n", "this will create problems if an exception occurs (e.g. during headers conversion). sending a stream error with stream id 0 is not allowed by the spec https://tools.ietf.org/html/rfc7540#section-6.4. We should also get the \"real\" stream ID to ease understanding.\n\n@ejona86 - what is the best way to do this?\n", "`0` -> `id` (assuming id eventually represents the \"real\" stream id)\n", "@Scottmitch, I've got to go to a meeting, so I can't dig too deeply. But there shouldn't be much reason to need the id with the childchan API. If we really have to, we could grab it from an attribute (which currently doesn't exist) on the channel.\n", "Is the intention of this conditional to distinguish between the \"headers\" and the \"trailers\"? Perhaps a better way to do this is to check the stream state. @ejona86 - is there a way to get the stream state in the new API?\n\nAlso as a side note @nmittler I don't think we are currently enforcing this (are we?):\n\nhttps://tools.ietf.org/html/rfc7540#section-8.1.2.3\n\n> All HTTP/2 requests MUST include exactly one valid value for the\n> \":method\", \":scheme\", and \":path\" pseudo-header fields, unless it is\n> a CONNECT request (Section 8.3). 
An HTTP request that omits\n> mandatory pseudo-header fields is malformed (Section 8.1.2.6).\n", "https://tools.ietf.org/html/rfc7540#section-8.1.2.6\n\n> Malformed requests or responses that are\n> detected MUST be treated as a stream error (Section 5.4.2) of type\n> PROTOCOL_ERROR.\n> \n> For malformed requests, a server MAY send an HTTP response prior to\n> closing or resetting the stream. Clients MUST NOT accept a malformed\n> response. Note that these requirements are intended to protect\n> against several types of common attacks against HTTP; they are\n> deliberately strict because being permissive can expose\n> implementations to these vulnerabilities.\n\nSo maybe its good we don't enforce this on the server ... either it seems best to avoid using the presence/absence of these headers fields to indicate state of the stream.\n", "typically the `validateHeaders` is exposed and configurable by the user. validating headers can add non-trivial overhead that some users decide to forgo.\n", "also expose the `validateHeaders` so it can be configured by users (same as https://github.com/netty/netty/pull/5204#discussion_r62077393)\n", "expose the `validateHeaders` so it can be configured by users (same as https://github.com/netty/netty/pull/5204#discussion_r62077393)\n", "@ejona86 - is the idea that the exception should propagate through the child channel's pipeline and translated to the appropriate type of HTTP/2 frame by the http/2 codec under the hood? In the long term this seems like a good thing (assuming we can distinguish between connection/stream errors if necessary)... in the near term our Http2Exception requires the StreamId ... and if the connection stream id is used a stream error will be translated into a connection error (see [Http2Exception](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Exception.java#L124)) ... we should have some story to handle this in the near term ... wdyt?\n", "> Perhaps a better way to do this is to check the stream state. @ejona86 - is there a way to get the stream state in the new API?\n\nNo, there isn't, and currently we don't track this sort of state anywhere (even in the old API). It could be something we add, but I'd hope to not need it, especially given how hard it will be to resolve races. If the handler wasn't shareable it'd be trivial to deal with, but shareable is a laudable goal.\n\nI don't have too much concern about relying on :method for requests and :status for responses, since they are so core to HTTP and _must_ be present in HTTP/1 during parsing. That would even behave correctly in the face of 1xx status codes where you end up having multiple sets of \"headers\". (For those unaware of this particular dark corner of HTTP, multiple header sets is a real thing in both [HTTP/1.1](https://tools.ietf.org/html/rfc7231#section-6.2) and [HTTP/2](https://tools.ietf.org/html/rfc7540#section-8.1). Yay!)\n\nAnother option is to add a bit to Http2HeadersFrame indicating whether it is a \"header\" or \"trailer,\" but that sounds a bit icky because it has to agree with endStream.\n", "So I looked into this some. The value has two usages: stream exceptions and converting to non-childchan HTTP/2. Converting to non-childchan HTTP/2 doesn't seem important.\n\nExceptions on a stream's channel have to be handled like normal exceptions; in this case with exceptionCaught. If uncaught it just logs. 
Applications will commonly write exceptionCaught to close the channel, which then becomes [Http2ResetFrame(CANCEL)](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java#L413)). An application could write its own Http2ResetFrame with special error code if it wanted.\n\nSo the Http2Exceptions aren't special with h2childchan. That's true even if you're on the parent channel. And in fact it's true in the non-childchan API as well when you are later in the pipeline than the Http2ConnectionHandler; it's just that that almost never happens.\n", "> No, there isn't, and currently we don't track this sort of state anywhere (even in the old API)\n\nIn the existing API we generally pass around the stream ID or the Http2Stream object. If you have the stream ID then you can use `Http2Connection.stream(id)` to get the Http2Stream. The Http2Stream allows access to the [State](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java#L68).\n\n> I don't have too much concern about relying on :method for requests and :status for responses, since they are so core to HTTP and must be present in HTTP/1 during parsing\n\nConsider the following sequence of messages:\n\nHeaders (:method=get) StreamId=3 EOS=0\nData StreamId=3 EOS=0\nHeaders (:method=get) StreamId=3 EOS=1\n\nThis is likely a separate issue but where do we enforce that these \"special\" headers can only exist \"exactly once\" before the \"data\" (unless its a connect request)?\n", "Yes this seems reasonable ... only concern is if a childchan uses an API which intends to throw a \"Connection Error\" (which would typically be converted into a GO_AWAY) ... although it seems like most of these scenarios should be on the parent channel decoding stuff before it gets to the childchan\n", "nit: `this(true)`\n", "done, thanks!\n", "> The Http2Stream allows access to the State.\n\nBut that state is 1) not directly usable because it has already been modified in response to the frames, 2) this logic using the state may be delayed, even on the same thread and 3) this logic using the state may be run from a different thread from where the state is maintained/changed.\n\nFor point #1, consider these new streams:\n\nIDLE: (isn't actually tracked yet; it will [never](https://github.com/netty/netty/blob/bd6040a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java#L311) show up as IDLE)\nOPEN: Headers (:method=get) StreamId=3 EOS=0 # These are headers\nOPEN: Headers () StreamId=3 EOS=1 # These are trailers\nHALF_CLOSED_REMOTE: (yes, it happens [afterward](https://github.com/netty/netty/blob/83c349f/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java#L324))\n\nIDLE: (isn't actually tracked yet; it won't show up as IDLE)\nHALF_CLOSED_REMOTE: Headers (:method=get) StreamId=5 EOS=1 # These are headers\n(yes, this state happens [immediately](https://github.com/netty/netty/blob/bd6040a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java#L315))\n\nSo the State is _never_ useful for this distinction today. That could possibly be fixed, but the child channel has its own queue. The application could simply not read from the child channel immediately and the state will be HALF_CLOSED_REMOTE in both cases. 
When the child channel is using a different event loop, we'd expect HALF_CLOSED_REMOTE in both cases.\n\n> This is likely a separate issue but where do we enforce that these \"special\" headers can only exist \"exactly once\" before the \"data\" (unless its a connect request)?\n\nIt looks like we should be prohibiting that (still taking into account 1xx):\n\n> Pseudo-header fields are only valid in the context in which they are defined. Pseudo-header fields defined for requests MUST NOT appear in responses; pseudo-header fields defined for responses MUST NOT appear in requests. Pseudo-header fields MUST NOT appear in trailers. Endpoints MUST treat a request or response that contains undefined or invalid pseudo-header fields as malformed (Section 8.1.2.6).\n", "I would hope the Http2Exceptions would go away (or be an implementation detail) in the child chan API (for both child channel and parent channel). They stop being helpful for control flow when you have multiple handlers, since the exceptions are propagated toward the tail of the pipeline, not the head. They could still be used for notifications of failures, but it seems they couldn't trigger RST_STREAM and GOAWAY like they do today.\n", "Agreed the state as it exists today won't be very useful for this purpose\n", "`full.content().readableBytes() == 0` -> `!full.content().isReadable()`\n", "why not make non-static and just use `validateHeaders` as a member variable instead of as a parameter?\n", "its not clear why we should try to simulate the user sending a data frame to make the encoding work. Seems more intuitive to just encode what the user provides and if their logic is flawed they should fix it. Can we just mimic the logic in [HttpToHttp2ConnectionHandler.write](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java) or improve the logic their if it is not good enough? It would be nice to share code or at least use consistent logic if possible.\n", "We need to handle the case where a user sends a `LastHttpContent.EMPTY_LAST_CONTENT` in order to close their streamed message, but there's no need to send a message otherwise. I don't think this is incorrect behavior.\n\nIt looks like in HttpToHttp2ConnectionHandler, it sends a data frame regardless of whether there's content or not, which seems unnecessary. I can clean it up, do you mind if we push to a subsequent review though?\n", "> I don't think this is incorrect behavior.\n\nAgreed. Thanks for clarifying.\n\n> I can clean it up, do you mind if we push to a subsequent review though?\n\nFollowup PR sgtm thanks!\n", "@mosesn - I just merged this an noticed a leak in the build log. can you add another commit (or pr) where these `request` objects are released in a finally block for all these tests?\n", "yep, how can I check if I'm leak-free?\n", "The best thing we have ATM is [-Dio.netty.leakDetectionLevel=paranoid](http://netty.io/wiki/reference-counted-objects.html#leak-detection-levels).\n", "thanks :)\n", "@mosesn run a few times with `-Pleak` and see if you see any `LEAK:` logs in the console\n", "note that you can also use [releaseLater](http://netty.io/wiki/reference-counted-objects.html#fixing-leaks-in-unit-tests) instead of using a `try`/`finally` block.\n", "I've run it a few times, but can't repro locally. Here's what I'm doing:\n\n``` bash\nmvn test -pl codec-http2 -Dtest=Http2ServerDowngraderTest -Pleak\n```\n\nIs there anything I'm missing? 
Is there a way I can see the failed build, so I can fix based on that?\n", "@mosesn - the build won't fail locally, but you should see log statements that look like this https://garage.netty.io/teamcity/viewLog.html?tab=buildLog&logTab=tree&filter=debug&expand=all&buildId=8253#_focus=18593\n", "Weird, I don't see that failure locally. Do you see it consistently? Do I need to change a log level somewhere?\n", "Not sure why, but it just started working locally. Working on a fix now–looks like it's just for one test.\n", "I made a PR, but I'm not sure if this is the right fix: #5230.\n" ]
"2016-05-04T07:04:45Z"
[]
http/2 message => http/1.1 message adapter for the new http/2 api
## problem http/2 and http/1.1 have similar protocols, and it's useful to be able to implement a single server against a single interface. There's an injection from http/1.1 messages to http/2 ones, so it makes sense to make folks program against http/1.1 and upgrade them under the hood. ## proposed solution a `MessageToMessageDecoder<Http2StreamFrame>` which turns every kind of `Http2StreamFrame` domain object into an `HttpContent` domain object. ## what's next I'm working on building this for myself, and I would love to contribute it back to netty. Please let me know what you think! Right now I'm able to avoid all aggregation, and the adapters that I've written so far are: ``` HeadersFrame is endStream => LastHttpContent with Trailers is not endStream => HttpConversionUtil.toHttpRequest or toHttpResponse Http2DataFrame is endStream => LastHttpContent is not endStream => HttpContent ``` Right now I'm not handling goaway, reset, or stream id, and I figure we can add those as we go along. The opposite is mostly symmetric, except that there are some http/1.1 messages which will decompose to more than one http/2 message, but that's easy to encode.
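
The contributed handler (Http2ServerDowngrader in the gold patch below) implements this mapping as a MessageToMessageCodec<Http2StreamFrame, HttpObject>. A sketch of how it might sit in a per-stream child-channel pipeline so an existing HTTP/1.1 handler can be reused; how the child channel is created (e.g. by the multiplexed HTTP/2 codec) is assumed and not shown, and the aggregator size and handler body are illustrative:

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http2.Http2ServerDowngrader;

/** Installed on each HTTP/2 stream child channel (creation of that channel is out of scope here). */
final class Http2StreamInitializer extends ChannelInitializer<Channel> {
    @Override
    protected void initChannel(Channel ch) {
        ch.pipeline().addLast(new Http2ServerDowngrader());         // Http2StreamFrame <-> HttpObject
        ch.pipeline().addLast(new HttpObjectAggregator(64 * 1024)); // re-aggregate chunked requests
        ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpRequest>() {
            @Override
            protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
                // existing HTTP/1.1 request handling goes here
            }
        });
    }
}
```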
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapter.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerDowngrader.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapter.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ServerDowngraderTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerDowngrader.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerDowngrader.java new file mode 100644 index 00000000000..189e2cd52c2 --- /dev/null +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerDowngrader.java @@ -0,0 +1,126 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ + +package io.netty.handler.codec.http2; + +import io.netty.buffer.Unpooled; +import io.netty.channel.ChannelHandlerContext; +import io.netty.handler.codec.MessageToMessageCodec; +import io.netty.handler.codec.http.DefaultHttpContent; +import io.netty.handler.codec.http.DefaultLastHttpContent; +import io.netty.handler.codec.http.FullHttpRequest; +import io.netty.handler.codec.http.FullHttpResponse; +import io.netty.handler.codec.http.HttpContent; +import io.netty.handler.codec.http.HttpObject; +import io.netty.handler.codec.http.HttpResponse; +import io.netty.handler.codec.http.HttpVersion; +import io.netty.handler.codec.http.LastHttpContent; +import io.netty.util.ReferenceCountUtil; + +import java.util.List; + +/** + * This is a server-side adapter so that an http2 codec can be downgraded to + * appear as if it's speaking http/1.1. + * + * In particular, this handler converts from {@link Http2StreamFrame} to {@link + * HttpObject}, and back. For simplicity, it converts to chunked encoding + * unless the entire stream is a single header. 
+ */ +public class Http2ServerDowngrader extends MessageToMessageCodec<Http2StreamFrame, HttpObject> { + + private final boolean validateHeaders; + + public Http2ServerDowngrader(boolean validateHeaders) { + this.validateHeaders = validateHeaders; + } + + public Http2ServerDowngrader() { + this(true); + } + + @Override + public boolean acceptInboundMessage(Object msg) throws Exception { + return (msg instanceof Http2HeadersFrame) || (msg instanceof Http2DataFrame); + } + + @Override + protected void decode(ChannelHandlerContext ctx, Http2StreamFrame frame, List<Object> out) throws Exception { + if (frame instanceof Http2HeadersFrame) { + int id = 0; // not really the id + Http2HeadersFrame headersFrame = (Http2HeadersFrame) frame; + Http2Headers headers = headersFrame.headers(); + + if (headersFrame.isEndStream()) { + if (headers.method() == null) { + LastHttpContent last = new DefaultLastHttpContent(Unpooled.EMPTY_BUFFER, validateHeaders); + HttpConversionUtil.addHttp2ToHttpHeaders(id, headers, last.trailingHeaders(), + HttpVersion.HTTP_1_1, true, true); + out.add(last); + } else { + FullHttpRequest full = HttpConversionUtil.toFullHttpRequest(id, headers, ctx.alloc(), + validateHeaders); + out.add(full); + } + } else { + out.add(HttpConversionUtil.toHttpRequest(id, headersFrame.headers(), validateHeaders)); + } + + } else if (frame instanceof Http2DataFrame) { + Http2DataFrame dataFrame = (Http2DataFrame) frame; + if (dataFrame.isEndStream()) { + out.add(new DefaultLastHttpContent(dataFrame.content(), validateHeaders)); + } else { + out.add(new DefaultHttpContent(dataFrame.content())); + } + } + ReferenceCountUtil.retain(frame); + } + + private void encodeLastContent(LastHttpContent last, List<Object> out) { + boolean needFiller = !(last instanceof FullHttpResponse) && last.trailingHeaders().isEmpty(); + if (last.content().isReadable() || needFiller) { + out.add(new DefaultHttp2DataFrame(last.content(), last.trailingHeaders().isEmpty())); + } + if (!last.trailingHeaders().isEmpty()) { + Http2Headers headers = HttpConversionUtil.toHttp2Headers(last.trailingHeaders(), validateHeaders); + out.add(new DefaultHttp2HeadersFrame(headers, true)); + } + } + + @Override + protected void encode(ChannelHandlerContext ctx, HttpObject obj, List<Object> out) throws Exception { + if (obj instanceof HttpResponse) { + Http2Headers headers = HttpConversionUtil.toHttp2Headers((HttpResponse) obj, validateHeaders); + boolean noMoreFrames = false; + if (obj instanceof FullHttpResponse) { + FullHttpResponse full = (FullHttpResponse) obj; + noMoreFrames = !full.content().isReadable() && full.trailingHeaders().isEmpty(); + } + + out.add(new DefaultHttp2HeadersFrame(headers, noMoreFrames)); + } + + if (obj instanceof LastHttpContent) { + LastHttpContent last = (LastHttpContent) obj; + encodeLastContent(last, out); + } else if (obj instanceof HttpContent) { + HttpContent cont = (HttpContent) obj; + out.add(new DefaultHttp2DataFrame(cont.content(), false)); + } + ReferenceCountUtil.retain(obj); + } +} diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java index 80bac5eec75..bc81b6f1729 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java @@ -17,6 +17,7 @@ import io.netty.buffer.ByteBufAllocator; import io.netty.handler.codec.http.DefaultFullHttpRequest; import 
io.netty.handler.codec.http.DefaultFullHttpResponse; +import io.netty.handler.codec.http.DefaultHttpRequest; import io.netty.handler.codec.http.FullHttpMessage; import io.netty.handler.codec.http.FullHttpRequest; import io.netty.handler.codec.http.FullHttpResponse; @@ -228,7 +229,7 @@ public static FullHttpResponse toHttpResponse(int streamId, Http2Headers http2He * @return A new request object which represents headers/data * @throws Http2Exception see {@link #addHttp2ToHttpHeaders(int, Http2Headers, FullHttpMessage, boolean)} */ - public static FullHttpRequest toHttpRequest(int streamId, Http2Headers http2Headers, ByteBufAllocator alloc, + public static FullHttpRequest toFullHttpRequest(int streamId, Http2Headers http2Headers, ByteBufAllocator alloc, boolean validateHttpHeaders) throws Http2Exception { // HTTP/2 does not define a way to carry the version identifier that is included in the HTTP/1.1 request line. @@ -250,6 +251,37 @@ public static FullHttpRequest toHttpRequest(int streamId, Http2Headers http2Head return msg; } + /** + * Create a new object to contain the request data. + * + * @param streamId The stream associated with the request + * @param http2Headers The initial set of HTTP/2 headers to create the request with + * @param validateHttpHeaders <ul> + * <li>{@code true} to validate HTTP headers in the http-codec</li> + * <li>{@code false} not to validate HTTP headers in the http-codec</li> + * </ul> + * @return A new request object which represents headers for a chunked request + * @throws Http2Exception see {@link #addHttp2ToHttpHeaders(int, Http2Headers, FullHttpMessage, boolean)} + */ + public static HttpRequest toHttpRequest(int streamId, Http2Headers http2Headers, boolean validateHttpHeaders) + throws Http2Exception { + // HTTP/2 does not define a way to carry the version identifier that is included in the HTTP/1.1 request line. + final CharSequence method = checkNotNull(http2Headers.method(), + "method header cannot be null in conversion to HTTP/1.x"); + final CharSequence path = checkNotNull(http2Headers.path(), + "path header cannot be null in conversion to HTTP/1.x"); + HttpRequest msg = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.valueOf(method.toString()), + path.toString(), validateHttpHeaders); + try { + addHttp2ToHttpHeaders(streamId, http2Headers, msg.headers(), msg.protocolVersion(), false, true); + } catch (Http2Exception e) { + throw e; + } catch (Throwable t) { + throw streamError(streamId, PROTOCOL_ERROR, t, "HTTP/2 to HTTP/1.x headers conversion error"); + } + return msg; + } + /** * Translate and add HTTP/2 headers to HTTP/1.x headers. * diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapter.java index 68c912aa7c1..37d2f1cd3c5 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapter.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapter.java @@ -150,7 +150,7 @@ protected void fireChannelRead(ChannelHandlerContext ctx, FullHttpMessage msg, b protected FullHttpMessage newMessage(Http2Stream stream, Http2Headers headers, boolean validateHttpHeaders, ByteBufAllocator alloc) throws Http2Exception { - return connection.isServer() ? HttpConversionUtil.toHttpRequest(stream.id(), headers, alloc, + return connection.isServer() ? 
HttpConversionUtil.toFullHttpRequest(stream.id(), headers, alloc, validateHttpHeaders) : HttpConversionUtil.toHttpResponse(stream.id(), headers, alloc, validateHttpHeaders); }
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ServerDowngraderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ServerDowngraderTest.java new file mode 100644 index 00000000000..e7ca6702e73 --- /dev/null +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ServerDowngraderTest.java @@ -0,0 +1,317 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ + +package io.netty.handler.codec.http2; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; +import io.netty.channel.embedded.EmbeddedChannel; +import io.netty.handler.codec.http.DefaultFullHttpResponse; +import io.netty.handler.codec.http.DefaultHttpContent; +import io.netty.handler.codec.http.DefaultHttpResponse; +import io.netty.handler.codec.http.DefaultLastHttpContent; +import io.netty.handler.codec.http.FullHttpRequest; +import io.netty.handler.codec.http.FullHttpResponse; +import io.netty.handler.codec.http.HttpContent; +import io.netty.handler.codec.http.HttpHeaders; +import io.netty.handler.codec.http.HttpMethod; +import io.netty.handler.codec.http.HttpRequest; +import io.netty.handler.codec.http.HttpResponse; +import io.netty.handler.codec.http.HttpResponseStatus; +import io.netty.handler.codec.http.HttpVersion; +import io.netty.handler.codec.http.LastHttpContent; +import io.netty.util.CharsetUtil; + +import org.junit.Test; + +import static org.hamcrest.CoreMatchers.is; +import static org.hamcrest.CoreMatchers.nullValue; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertThat; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.assertFalse; + +public class Http2ServerDowngraderTest { + + @Test + public void testUpgradeEmptyFullResponse() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + assertTrue(ch.writeOutbound(new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK))); + + Http2HeadersFrame headersFrame = ch.readOutbound(); + assertThat(headersFrame.headers().status().toString(), is("200")); + assertTrue(headersFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testUpgradeNonEmptyFullResponse() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + ByteBuf hello = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8); + assertTrue(ch.writeOutbound(new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK, hello))); + + Http2HeadersFrame headersFrame = ch.readOutbound(); + assertThat(headersFrame.headers().status().toString(), is("200")); + assertFalse(headersFrame.isEndStream()); + + Http2DataFrame dataFrame = ch.readOutbound(); + assertThat(dataFrame.content().toString(CharsetUtil.UTF_8), is("hello world")); + assertTrue(dataFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + 
assertFalse(ch.finish()); + } + + @Test + public void testUpgradeEmptyFullResponseWithTrailers() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + FullHttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK); + HttpHeaders trailers = response.trailingHeaders(); + trailers.set("key", "value"); + assertTrue(ch.writeOutbound(response)); + + Http2HeadersFrame headersFrame = ch.readOutbound(); + assertThat(headersFrame.headers().status().toString(), is("200")); + assertFalse(headersFrame.isEndStream()); + + Http2HeadersFrame trailersFrame = ch.readOutbound(); + assertThat(trailersFrame.headers().get("key").toString(), is("value")); + assertTrue(trailersFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testUpgradeNonEmptyFullResponseWithTrailers() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + ByteBuf hello = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8); + FullHttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK, hello); + HttpHeaders trailers = response.trailingHeaders(); + trailers.set("key", "value"); + assertTrue(ch.writeOutbound(response)); + + Http2HeadersFrame headersFrame = ch.readOutbound(); + assertThat(headersFrame.headers().status().toString(), is("200")); + assertFalse(headersFrame.isEndStream()); + + Http2DataFrame dataFrame = ch.readOutbound(); + assertThat(dataFrame.content().toString(CharsetUtil.UTF_8), is("hello world")); + assertFalse(dataFrame.isEndStream()); + + Http2HeadersFrame trailersFrame = ch.readOutbound(); + assertThat(trailersFrame.headers().get("key").toString(), is("value")); + assertTrue(trailersFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testUpgradeHeaders() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK); + assertTrue(ch.writeOutbound(response)); + + Http2HeadersFrame headersFrame = ch.readOutbound(); + assertThat(headersFrame.headers().status().toString(), is("200")); + assertFalse(headersFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testUpgradeChunk() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + ByteBuf hello = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8); + HttpContent content = new DefaultHttpContent(hello); + assertTrue(ch.writeOutbound(content)); + + Http2DataFrame dataFrame = ch.readOutbound(); + assertThat(dataFrame.content().toString(CharsetUtil.UTF_8), is("hello world")); + assertFalse(dataFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testUpgradeEmptyEnd() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + LastHttpContent end = LastHttpContent.EMPTY_LAST_CONTENT; + assertTrue(ch.writeOutbound(end)); + + Http2DataFrame emptyFrame = ch.readOutbound(); + assertThat(emptyFrame.content().readableBytes(), is(0)); + assertTrue(emptyFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testUpgradeDataEnd() throws Exception { + 
EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + ByteBuf hello = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8); + LastHttpContent end = new DefaultLastHttpContent(hello, true); + assertTrue(ch.writeOutbound(end)); + + Http2DataFrame dataFrame = ch.readOutbound(); + assertThat(dataFrame.content().toString(CharsetUtil.UTF_8), is("hello world")); + assertTrue(dataFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testUpgradeTrailers() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + LastHttpContent trailers = new DefaultLastHttpContent(Unpooled.EMPTY_BUFFER, true); + HttpHeaders headers = trailers.trailingHeaders(); + headers.set("key", "value"); + assertTrue(ch.writeOutbound(trailers)); + + Http2HeadersFrame headerFrame = ch.readOutbound(); + assertThat(headerFrame.headers().get("key").toString(), is("value")); + assertTrue(headerFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testUpgradeDataEndWithTrailers() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + ByteBuf hello = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8); + LastHttpContent trailers = new DefaultLastHttpContent(hello, true); + HttpHeaders headers = trailers.trailingHeaders(); + headers.set("key", "value"); + assertTrue(ch.writeOutbound(trailers)); + + Http2DataFrame dataFrame = ch.readOutbound(); + assertThat(dataFrame.content().toString(CharsetUtil.UTF_8), is("hello world")); + assertFalse(dataFrame.isEndStream()); + + Http2HeadersFrame headerFrame = ch.readOutbound(); + assertThat(headerFrame.headers().get("key").toString(), is("value")); + assertTrue(headerFrame.isEndStream()); + + assertThat(ch.readOutbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testDowngradeHeaders() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + Http2Headers headers = new DefaultHttp2Headers(); + headers.path("/"); + headers.method("GET"); + + assertTrue(ch.writeInbound(new DefaultHttp2HeadersFrame(headers))); + + HttpRequest request = ch.readInbound(); + assertThat(request.uri(), is("/")); + assertThat(request.method(), is(HttpMethod.GET)); + assertThat(request.protocolVersion(), is(HttpVersion.HTTP_1_1)); + assertFalse(request instanceof FullHttpRequest); + + assertThat(ch.readInbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testDowngradeFullHeaders() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + Http2Headers headers = new DefaultHttp2Headers(); + headers.path("/"); + headers.method("GET"); + + assertTrue(ch.writeInbound(new DefaultHttp2HeadersFrame(headers, true))); + + FullHttpRequest request = ch.readInbound(); + assertThat(request.uri(), is("/")); + assertThat(request.method(), is(HttpMethod.GET)); + assertThat(request.protocolVersion(), is(HttpVersion.HTTP_1_1)); + assertThat(request.content().readableBytes(), is(0)); + assertTrue(request.trailingHeaders().isEmpty()); + + assertThat(ch.readInbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testDowngradeTrailers() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + Http2Headers headers = new DefaultHttp2Headers(); + headers.set("key", "value"); + 
assertTrue(ch.writeInbound(new DefaultHttp2HeadersFrame(headers, true))); + + LastHttpContent trailers = ch.readInbound(); + assertThat(trailers.content().readableBytes(), is(0)); + assertThat(trailers.trailingHeaders().get("key").toString(), is("value")); + assertFalse(trailers instanceof FullHttpRequest); + + assertThat(ch.readInbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testDowngradeData() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + ByteBuf hello = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8); + assertTrue(ch.writeInbound(new DefaultHttp2DataFrame(hello))); + + HttpContent content = ch.readInbound(); + assertThat(content.content().toString(CharsetUtil.UTF_8), is("hello world")); + assertFalse(content instanceof LastHttpContent); + + assertThat(ch.readInbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testDowngradeEndData() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + ByteBuf hello = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8); + assertTrue(ch.writeInbound(new DefaultHttp2DataFrame(hello, true))); + + LastHttpContent content = ch.readInbound(); + assertThat(content.content().toString(CharsetUtil.UTF_8), is("hello world")); + assertTrue(content.trailingHeaders().isEmpty()); + + assertThat(ch.readInbound(), is(nullValue())); + assertFalse(ch.finish()); + } + + @Test + public void testPassThroughOther() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new Http2ServerDowngrader()); + Http2ResetFrame reset = new DefaultHttp2ResetFrame(0); + Http2GoAwayFrame goaway = new DefaultHttp2GoAwayFrame(0); + assertTrue(ch.writeInbound(reset)); + assertTrue(ch.writeInbound(goaway)); + + assertEquals(ch.readInbound(), reset); + assertEquals(ch.readInbound(), goaway); + + assertThat(ch.readInbound(), is(nullValue())); + assertFalse(ch.finish()); + } +}
test
train
2016-05-09T19:33:40
"2016-05-03T00:44:33Z"
mosesn
val
netty/netty/5171_5210
netty/netty
netty/netty/5171
netty/netty/5210
[ "timestamp(timedelta=22.0, similarity=0.8452019802756294)" ]
ce1ae0eb8bfec1fc77a2c1c67b34ac1ea1eb327c
bd939a18bb01ac284e3c089ac52689968702b651
[ "@wanganran - Thanks for reporting.\n\n@trustin - Looks like this was introduced in https://github.com/netty/netty/commit/ada61d4985571f5406ea6158b02755d51392ebff . Can you take care?\n", "will take care\n", "I think NioDatagramChannel should report an error: https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannel.java#L104\n", "Fixed by https://github.com/netty/netty/pull/5210\n" ]
[]
"2016-05-04T17:32:11Z"
[ "defect" ]
NioDatagramChannel can only run under JDK 7+
Hi, my Netty version is 4.0.35.Final. I found that `io.netty.channel.socket.nio.NioDatagramChannelConfig` imports `java.nio.channels.NetworkChannel`, which is new in JDK 7. This makes `NioDatagramChannel` unusable on Android, whose Dalvik VM is based on Java 1.6. A temporary workaround is to use `OioDatagramChannel` instead. How to reproduce: import Netty into an Android app (my Android version: 4.2.2) and try to use `NioDatagramChannel`. Three errors appear at runtime: ``` Could not find class 'java.nio.channels.MembershipKey', referenced from method io.netty.channel.socket.nio.NioDatagramChannel.block Could not find class 'java.nio.channels.MembershipKey', referenced from method io.netty.channel.socket.nio.NioDatagramChannel.leaveGroup Could not find class 'java.nio.channels.NetworkChannel', referenced from method io.netty.channel.socket.nio.NioDatagramChannelConfig.<clinit> ``` This issue was also reported in #2816. It needs to be addressed to make NIO UDP communication compatible with JDK 6.
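Until the reflective lookup in the patch below lands, the workaround mentioned above looks roughly like this (a sketch; the handler and port are placeholders):

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOption;
import io.netty.channel.oio.OioEventLoopGroup;
import io.netty.channel.socket.oio.OioDatagramChannel;

public final class Jdk6UdpExample {
    public static void main(String[] args) throws Exception {
        OioEventLoopGroup group = new OioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    // Avoids the JDK 7-only classes pulled in by the NIO datagram config.
                    .channel(OioDatagramChannel.class)
                    .option(ChannelOption.SO_BROADCAST, true)
                    .handler(new ChannelInboundHandlerAdapter()); // placeholder handler
            Channel ch = b.bind(0).sync().channel();
            // ... send/receive DatagramPackets here ...
            ch.close().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```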
[ "transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannelConfig.java" ]
[ "transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannelConfig.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannelConfig.java b/transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannelConfig.java index 21b9e375c5c..5d1c9f84c3e 100644 --- a/transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannelConfig.java +++ b/transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannelConfig.java @@ -25,7 +25,6 @@ import java.net.NetworkInterface; import java.net.SocketException; import java.nio.channels.DatagramChannel; -import java.nio.channels.NetworkChannel; import java.util.Enumeration; /** @@ -78,16 +77,28 @@ class NioDatagramChannelConfig extends DefaultDatagramChannelConfig { throw new Error("cannot locate the IP_MULTICAST_LOOP field", e); } + Class<?> networkChannelClass = null; try { - getOption = NetworkChannel.class.getDeclaredMethod("getOption", socketOptionType); - } catch (Exception e) { - throw new Error("cannot locate the getOption() method", e); + networkChannelClass = Class.forName("java.nio.channels.NetworkChannel", true, classLoader); + } catch (Throwable ignore) { + // Not Java 7+ } - try { - setOption = NetworkChannel.class.getDeclaredMethod("setOption", socketOptionType, Object.class); - } catch (Exception e) { - throw new Error("cannot locate the setOption() method", e); + if (networkChannelClass == null) { + getOption = null; + setOption = null; + } else { + try { + getOption = networkChannelClass.getDeclaredMethod("getOption", socketOptionType); + } catch (Exception e) { + throw new Error("cannot locate the getOption() method", e); + } + + try { + setOption = networkChannelClass.getDeclaredMethod("setOption", socketOptionType, Object.class); + } catch (Exception e) { + throw new Error("cannot locate the setOption() method", e); + } } } IP_MULTICAST_TTL = ipMulticastTtl; @@ -173,7 +184,7 @@ protected void autoReadCleared() { } private Object getOption0(Object option) { - if (PlatformDependent.javaVersion() < 7) { + if (GET_OPTION == null) { throw new UnsupportedOperationException(); } else { try { @@ -185,7 +196,7 @@ private Object getOption0(Object option) { } private void setOption0(Object option, Object value) { - if (PlatformDependent.javaVersion() < 7) { + if (SET_OPTION == null) { throw new UnsupportedOperationException(); } else { try { @@ -195,5 +206,4 @@ private void setOption0(Object option, Object value) { } } } - }
null
train
train
2016-05-04T14:04:39
"2016-04-22T12:25:40Z"
wanganran
val
netty/netty/5216_5219
netty/netty
netty/netty/5216
netty/netty/5219
[ "timestamp(timedelta=61.0, similarity=0.9070950966957434)" ]
a96874aad4c8c22258a1a77dd86b6f5d161291bb
4ca6a0c1e106d444650dc9eded4977603cb8af2d
[ "@nayato you are right... let me fix it.\n", "Fixed https://github.com/netty/netty/pull/5219\n" ]
[]
"2016-05-06T19:42:14Z"
[ "defect" ]
wrong huge deallocation counting in PoolArena.free
Right now the code calls `deallocationsHuge.decrement();` when it should call `deallocationsHuge.increment();`, so every huge deallocation drives the counter down instead of up.
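The miscount is observable through the allocator's arena metrics. A quick check along these lines, assuming the `PoolArenaMetric` accessors available in current 4.x releases and that anything above the default 16 MiB chunk size counts as "huge":

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PoolArenaMetric;
import io.netty.buffer.PooledByteBufAllocator;

public final class HugeDeallocationCounterCheck {
    public static void main(String[] args) {
        PooledByteBufAllocator alloc = new PooledByteBufAllocator(false);
        // Larger than the default 16 MiB chunk size, so it is tracked as a "huge"
        // allocation that bypasses the pool and is destroyed immediately on release.
        ByteBuf huge = alloc.heapBuffer(32 * 1024 * 1024);
        huge.release();

        long hugeDeallocations = 0;
        for (PoolArenaMetric arena : alloc.heapArenas()) {
            hugeDeallocations += arena.numHugeDeallocations();
        }
        // With the bug this prints -1 (the counter was decremented); after the fix it prints 1.
        System.out.println("numHugeDeallocations = " + hugeDeallocations);
    }
}
```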
[ "buffer/src/main/java/io/netty/buffer/PoolArena.java" ]
[ "buffer/src/main/java/io/netty/buffer/PoolArena.java" ]
[]
diff --git a/buffer/src/main/java/io/netty/buffer/PoolArena.java b/buffer/src/main/java/io/netty/buffer/PoolArena.java index 1df94e493c3..e667d936c05 100644 --- a/buffer/src/main/java/io/netty/buffer/PoolArena.java +++ b/buffer/src/main/java/io/netty/buffer/PoolArena.java @@ -254,7 +254,7 @@ void free(PoolChunk<T> chunk, long handle, int normCapacity, PoolThreadCache cac int size = chunk.chunkSize(); destroyChunk(chunk); activeBytesHuge.add(-size); - deallocationsHuge.decrement(); + deallocationsHuge.increment(); } else { SizeClass sizeClass = sizeClass(normCapacity); if (cache != null && cache.add(this, chunk, handle, normCapacity, sizeClass)) {
null
train
train
2016-05-06T08:04:10
"2016-05-05T18:44:21Z"
nayato
val
netty/netty/2422_5231
netty/netty
netty/netty/2422
netty/netty/5231
[ "timestamp(timedelta=11.0, similarity=0.8916239172691428)" ]
d580245afc77f11e4b82a7f3320f5619b555c2ec
c9e337d7ad03b5913e5a2b6788729d0d9c7fa157
[ "That's a good point. If a user specifies a promise when registering, a user doesn't really need to specify a channel because an event loop can get the channel from the specified promise. Let me fix this in 4.1.\n", "@colalife @trustin makes sense\n", "really encourage me,btw,i see it implements from EventLoopGroup,some classes have it\n", "Are you suggesting that `EventLoop` extending `EventLoopGroup` is a problem, or..?\n", "I think the reasonable fix for this issue is to remove `register(Channel, ChannelPromise)` (with proper deprecation,) and let people always use `register(Channel)`. What do you think?\n", "@trustin +1\n", "yeah, i agree, and of course other classes et. MultithreadEventLoopGroup... who implement it\n", "@trustin I guess we can only do this for 5.0.0 as it will break the api\n", "@trustin do we want to \"fix this\" for 4.1.0.Final or not ?\n", "Fixed by https://github.com/netty/netty/pull/5231\n" ]
[]
"2016-05-10T01:52:54Z"
[ "defect" ]
not sure about SingleThreadEventLoop.register(Channel, ChannelPromise)
Hi all, I am a little confused: why not change this method to `register(ChannelPromise)`, since the channel is already a parameter of `DefaultChannelPromise`'s constructor? (Or can the promise's channel change?) Not sure, thanks. Source code: @Override public ChannelFuture register(Channel channel) { return register(channel, new DefaultChannelPromise(channel, this)); } @Override public ChannelFuture register(final Channel channel, final ChannelPromise promise) {
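The snippet trails off, but the point stands: the promise already carries its channel, so the channel argument is redundant. With the overload added in the patch below, registration reduces to something like this sketch (assuming Netty 4.1 and the NIO transport):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public final class RegisterByPromiseOnly {
    public static void main(String[] args) {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        EventLoop loop = group.next();
        Channel channel = new NioSocketChannel();

        // Old API: the channel is passed twice, directly and again inside the promise.
        // ChannelFuture f = loop.register(channel, new DefaultChannelPromise(channel, loop));

        // New API: the event loop pulls the channel out of the promise.
        ChannelFuture f = loop.register(new DefaultChannelPromise(channel, loop));
        f.syncUninterruptibly();

        channel.close().syncUninterruptibly();
        group.shutdownGracefully();
    }
}
```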
[ "transport/src/main/java/io/netty/channel/EventLoopGroup.java", "transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java", "transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java", "transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoop.java", "transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoopGroup.java", "transport/src/main/java/io/netty/channel/embedded/EmbeddedEventLoop.java", "transport/src/main/java/io/netty/channel/oio/OioEventLoopGroup.java" ]
[ "transport/src/main/java/io/netty/channel/EventLoopGroup.java", "transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java", "transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java", "transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoop.java", "transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoopGroup.java", "transport/src/main/java/io/netty/channel/embedded/EmbeddedEventLoop.java", "transport/src/main/java/io/netty/channel/oio/OioEventLoopGroup.java" ]
[ "transport/src/test/java/io/netty/bootstrap/BootstrapTest.java", "transport/src/test/java/io/netty/channel/SingleThreadEventLoopTest.java", "transport/src/test/java/io/netty/channel/ThreadPerChannelEventLoopGroupTest.java" ]
diff --git a/transport/src/main/java/io/netty/channel/EventLoopGroup.java b/transport/src/main/java/io/netty/channel/EventLoopGroup.java index e2bf7c2c0af..3e390e82230 100644 --- a/transport/src/main/java/io/netty/channel/EventLoopGroup.java +++ b/transport/src/main/java/io/netty/channel/EventLoopGroup.java @@ -35,9 +35,18 @@ public interface EventLoopGroup extends EventExecutorGroup { */ ChannelFuture register(Channel channel); + /** + * Register a {@link Channel} with this {@link EventLoop} using a {@link ChannelFuture}. The passed + * {@link ChannelFuture} will get notified once the registration was complete and also will get returned. + */ + ChannelFuture register(ChannelPromise promise); + /** * Register a {@link Channel} with this {@link EventLoop}. The passed {@link ChannelFuture} * will get notified once the registration was complete and also will get returned. + * + * @deprecated Use {@link #register(ChannelPromise)} instead. */ + @Deprecated ChannelFuture register(Channel channel, ChannelPromise promise); } diff --git a/transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java b/transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java index aae17b4505e..327bc16dd58 100644 --- a/transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java +++ b/transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java @@ -75,6 +75,12 @@ public ChannelFuture register(Channel channel) { return next().register(channel); } + @Override + public ChannelFuture register(ChannelPromise promise) { + return next().register(promise); + } + + @Deprecated @Override public ChannelFuture register(Channel channel, ChannelPromise promise) { return next().register(channel, promise); diff --git a/transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java b/transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java index 81cc3dbaefa..c76beb89cae 100644 --- a/transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java +++ b/transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java @@ -16,6 +16,7 @@ package io.netty.channel; import io.netty.util.concurrent.SingleThreadEventExecutor; +import io.netty.util.internal.ObjectUtil; import java.util.concurrent.Executor; import java.util.concurrent.ThreadFactory; @@ -53,9 +54,17 @@ public ChannelHandlerInvoker asInvoker() { @Override public ChannelFuture register(Channel channel) { - return register(channel, new DefaultChannelPromise(channel, this)); + return register(new DefaultChannelPromise(channel, this)); } + @Override + public ChannelFuture register(final ChannelPromise promise) { + ObjectUtil.checkNotNull(promise, "promise"); + promise.channel().unsafe().register(this, promise); + return promise; + } + + @Deprecated @Override public ChannelFuture register(final Channel channel, final ChannelPromise promise) { if (channel == null) { diff --git a/transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoop.java b/transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoop.java index c73f36a929c..b10d79de7eb 100644 --- a/transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoop.java +++ b/transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoop.java @@ -30,6 +30,21 @@ public ThreadPerChannelEventLoop(ThreadPerChannelEventLoopGroup parent) { this.parent = parent; } + @Override + public ChannelFuture register(ChannelPromise promise) { + return super.register(promise).addListener(new ChannelFutureListener() { + @Override + public void 
operationComplete(ChannelFuture future) throws Exception { + if (future.isSuccess()) { + ch = future.channel(); + } else { + deregister(); + } + } + }); + } + + @Deprecated @Override public ChannelFuture register(Channel channel, ChannelPromise promise) { return super.register(channel, promise).addListener(new ChannelFutureListener() { diff --git a/transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoopGroup.java b/transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoopGroup.java index 700625506cc..94826c5f732 100644 --- a/transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoopGroup.java +++ b/transport/src/main/java/io/netty/channel/ThreadPerChannelEventLoopGroup.java @@ -78,7 +78,7 @@ protected ThreadPerChannelEventLoopGroup() { * @param maxChannels the maximum number of channels to handle with this instance. Once you try to register * a new {@link Channel} and the maximum is exceed it will throw an * {@link ChannelException}. on the {@link #register(Channel)} and - * {@link #register(Channel, ChannelPromise)} method. + * {@link #register(ChannelPromise)} method. * Use {@code 0} to use no limit */ protected ThreadPerChannelEventLoopGroup(int maxChannels) { @@ -91,7 +91,7 @@ protected ThreadPerChannelEventLoopGroup(int maxChannels) { * @param maxChannels the maximum number of channels to handle with this instance. Once you try to register * a new {@link Channel} and the maximum is exceed it will throw an * {@link ChannelException} on the {@link #register(Channel)} and - * {@link #register(Channel, ChannelPromise)} method. + * {@link #register(ChannelPromise)} method. * Use {@code 0} to use no limit * @param threadFactory the {@link ThreadFactory} used to create new {@link Thread} instances that handle the * registered {@link Channel}s @@ -107,7 +107,7 @@ protected ThreadPerChannelEventLoopGroup(int maxChannels, ThreadFactory threadFa * @param maxChannels the maximum number of channels to handle with this instance. Once you try to register * a new {@link Channel} and the maximum is exceed it will throw an * {@link ChannelException} on the {@link #register(Channel)} and - * {@link #register(Channel, ChannelPromise)} method. + * {@link #register(ChannelPromise)} method. 
* Use {@code 0} to use no limit * @param executor the {@link Executor} used to create new {@link Thread} instances that handle the * registered {@link Channel}s @@ -281,12 +281,23 @@ public ChannelFuture register(Channel channel) { } try { EventLoop l = nextChild(); - return l.register(channel, new DefaultChannelPromise(channel, l)); + return l.register(new DefaultChannelPromise(channel, l)); } catch (Throwable t) { return new FailedChannelFuture(channel, GlobalEventExecutor.INSTANCE, t); } } + @Override + public ChannelFuture register(ChannelPromise promise) { + try { + return nextChild().register(promise); + } catch (Throwable t) { + promise.setFailure(t); + return promise; + } + } + + @Deprecated @Override public ChannelFuture register(Channel channel, ChannelPromise promise) { if (channel == null) { diff --git a/transport/src/main/java/io/netty/channel/embedded/EmbeddedEventLoop.java b/transport/src/main/java/io/netty/channel/embedded/EmbeddedEventLoop.java index f88668cedaf..46aeff25905 100644 --- a/transport/src/main/java/io/netty/channel/embedded/EmbeddedEventLoop.java +++ b/transport/src/main/java/io/netty/channel/embedded/EmbeddedEventLoop.java @@ -26,6 +26,7 @@ import io.netty.channel.EventLoopGroup; import io.netty.util.concurrent.AbstractScheduledEventExecutor; import io.netty.util.concurrent.Future; +import io.netty.util.internal.ObjectUtil; import java.net.SocketAddress; import java.util.ArrayDeque; @@ -126,9 +127,17 @@ public boolean awaitTermination(long timeout, TimeUnit unit) { @Override public ChannelFuture register(Channel channel) { - return register(channel, new DefaultChannelPromise(channel, this)); + return register(new DefaultChannelPromise(channel, this)); } + @Override + public ChannelFuture register(ChannelPromise promise) { + ObjectUtil.checkNotNull(promise, "promise"); + promise.channel().unsafe().register(this, promise); + return promise; + } + + @Deprecated @Override public ChannelFuture register(Channel channel, ChannelPromise promise) { channel.unsafe().register(this, promise); diff --git a/transport/src/main/java/io/netty/channel/oio/OioEventLoopGroup.java b/transport/src/main/java/io/netty/channel/oio/OioEventLoopGroup.java index e803f8b325e..684af409839 100644 --- a/transport/src/main/java/io/netty/channel/oio/OioEventLoopGroup.java +++ b/transport/src/main/java/io/netty/channel/oio/OioEventLoopGroup.java @@ -46,7 +46,7 @@ public OioEventLoopGroup() { * @param maxChannels the maximum number of channels to handle with this instance. Once you try to register * a new {@link Channel} and the maximum is exceed it will throw an * {@link ChannelException} on the {@link #register(Channel)} and - * {@link #register(Channel, ChannelPromise)} method. + * {@link #register(ChannelPromise)} method. * Use {@code 0} to use no limit */ public OioEventLoopGroup(int maxChannels) { @@ -59,7 +59,7 @@ public OioEventLoopGroup(int maxChannels) { * @param maxChannels the maximum number of channels to handle with this instance. Once you try to register * a new {@link Channel} and the maximum is exceed it will throw an * {@link ChannelException} on the {@link #register(Channel)} and - * {@link #register(Channel, ChannelPromise)} method. + * {@link #register(ChannelPromise)} method. 
* Use {@code 0} to use no limit * @param executor the {@link Executor} used to create new {@link Thread} instances that handle the * registered {@link Channel}s @@ -74,7 +74,7 @@ public OioEventLoopGroup(int maxChannels, Executor executor) { * @param maxChannels the maximum number of channels to handle with this instance. Once you try to register * a new {@link Channel} and the maximum is exceed it will throw an * {@link ChannelException} on the {@link #register(Channel)} and - * {@link #register(Channel, ChannelPromise)} method. + * {@link #register(ChannelPromise)} method. * Use {@code 0} to use no limit * @param threadFactory the {@link ThreadFactory} used to create new {@link Thread} instances that handle the * registered {@link Channel}s
diff --git a/transport/src/test/java/io/netty/bootstrap/BootstrapTest.java b/transport/src/test/java/io/netty/bootstrap/BootstrapTest.java index 53a15004ed8..cba71b3e4a1 100644 --- a/transport/src/test/java/io/netty/bootstrap/BootstrapTest.java +++ b/transport/src/test/java/io/netty/bootstrap/BootstrapTest.java @@ -275,6 +275,11 @@ public ChannelFuture register(Channel channel) { return promise; } + @Override + public ChannelFuture register(ChannelPromise promise) { + throw new UnsupportedOperationException(); + } + @Override public ChannelFuture register(Channel channel, final ChannelPromise promise) { throw new UnsupportedOperationException(); diff --git a/transport/src/test/java/io/netty/channel/SingleThreadEventLoopTest.java b/transport/src/test/java/io/netty/channel/SingleThreadEventLoopTest.java index fd862cd5675..41b53d90c0e 100644 --- a/transport/src/test/java/io/netty/channel/SingleThreadEventLoopTest.java +++ b/transport/src/test/java/io/netty/channel/SingleThreadEventLoopTest.java @@ -381,7 +381,7 @@ public void operationComplete(ChannelFuture future) throws Exception { } try { - ChannelFuture f = loopA.register(ch, promise); + ChannelFuture f = loopA.register(promise); f.awaitUninterruptibly(); assertFalse(f.isSuccess()); assertThat(f.cause(), is(instanceOf(RejectedExecutionException.class))); diff --git a/transport/src/test/java/io/netty/channel/ThreadPerChannelEventLoopGroupTest.java b/transport/src/test/java/io/netty/channel/ThreadPerChannelEventLoopGroupTest.java index 2c936bd2333..cf02b18090c 100644 --- a/transport/src/test/java/io/netty/channel/ThreadPerChannelEventLoopGroupTest.java +++ b/transport/src/test/java/io/netty/channel/ThreadPerChannelEventLoopGroupTest.java @@ -82,7 +82,7 @@ private static void runTest(ThreadPerChannelEventLoopGroup loopGroup) throws Int ChannelGroup channelGroup = new DefaultChannelGroup(testExecutor); while (taskCount-- > 0) { Channel channel = new EmbeddedChannel(NOOP_HANDLER); - loopGroup.register(channel, new DefaultChannelPromise(channel, testExecutor)); + loopGroup.register(new DefaultChannelPromise(channel, testExecutor)); channelGroup.add(channel); } channelGroup.close().sync();
val
train
2016-05-09T23:16:30
"2014-04-24T09:08:16Z"
colalife
val
netty/netty/5174_5243
netty/netty
netty/netty/5174
netty/netty/5243
[ "timestamp(timedelta=23.0, similarity=0.890638655932035)" ]
2b340df452d5ec282b7cefd15f887f31604b9425
1823abe2920bad29aae3c5ac67114a2321557287
[ "the underlying storage is currently exposed by `options` and `attr` and there is synchronization that must be applied when dealing with these collections. I'm guessing they are not exposed so the storage can be directly exposed (no copy etc...) and its assumed users of the interface \"play nice\" with synchronization and modifiability. We should consider these factors before making changes to these interfaces.\n", "Yeah I think the synchronisation is an issue. That said maybe we could just expose a copy of the attributes / options. Let me cook up something...\n", "Fixed by https://github.com/netty/netty/pull/5243\n" ]
[ "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "`toString()` please\n", "`AbstractBootstrapConfig` and make it top-level?\n", "Could we move this class top-level?\n", "Could we move this to top level?\n", "`toString()`\n", "- u -> U\n- `{@link #config()}.` -> `{@link #config()} instead.`\n", "- u -> U\n- . -> instead.\n", "Global comment: Return's'\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. 
[![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove usage of generic wildcard type. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1452)\n" ]
"2016-05-13T05:27:55Z"
[ "improvement", "feature" ]
Expose Bootstrap getter methods and add some additional ones
The `Bootstrap` class (this also applies to `AbstractBootstrap` and `ServerBootstrap`) has a few package-private getter methods, and some things such as `#attr()` and `#options()` aren't exposed at all. I'd argue there should be public getter methods for all of them. Background: Our application is written in Java but we use Clojure as a configuration language (instead of XML, JSON, YAML) to skip the object mapping indirection. This creates a slight problem for partial configurations. Let's take the following example: ``` clojure (def myClientBootstrap (-> (Bootstrap.) (.option ChannelOption/SO_REUSEADDR true) (.remoteAddress "host" 8080))) ``` ``` java Bootstrap myClientBootstrap = getClientBootstrapFromClojureLand(); ``` There is no way for me to test whether a `ChannelFactory` is configured and fill in the blanks if it isn't (for example, NIO on non-Linux and EPOLL on Linux). Likewise, I can't test whether a remote address is configured and fail fast if it isn't. We're currently using wrapper objects around `Bootstrap` and `ServerBootstrap` that provide getter methods, but that seems unnecessary and only adds indirection.
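With the `config()` view added in the patch below, the "fill in the blanks" logic becomes straightforward. A sketch, assuming the native epoll transport is on the classpath (otherwise substitute the NIO classes unconditionally):

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.BootstrapConfig;
import io.netty.channel.epoll.Epoll;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollSocketChannel;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public final class BootstrapDefaults {
    /** Fills in whatever the (Clojure-side) partial configuration left unset. */
    public static Bootstrap complete(Bootstrap b) {
        BootstrapConfig cfg = b.config();
        if (cfg.group() == null) {
            b.group(Epoll.isAvailable() ? new EpollEventLoopGroup() : new NioEventLoopGroup());
        }
        if (cfg.channelFactory() == null) {
            if (Epoll.isAvailable()) {
                b.channel(EpollSocketChannel.class);
            } else {
                b.channel(NioSocketChannel.class);
            }
        }
        if (cfg.remoteAddress() == null) {
            throw new IllegalStateException("remoteAddress must be configured");
        }
        return b;
    }
}
```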
[ "transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java", "transport/src/main/java/io/netty/bootstrap/Bootstrap.java", "transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java" ]
[ "transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java", "transport/src/main/java/io/netty/bootstrap/AbstractBootstrapConfig.java", "transport/src/main/java/io/netty/bootstrap/Bootstrap.java", "transport/src/main/java/io/netty/bootstrap/BootstrapConfig.java", "transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java", "transport/src/main/java/io/netty/bootstrap/ServerBootstrapConfig.java" ]
[]
diff --git a/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java b/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java index a10c0d0fc94..7559dd3ba59 100644 --- a/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java +++ b/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java @@ -35,6 +35,7 @@ import java.net.InetAddress; import java.net.InetSocketAddress; import java.net.SocketAddress; +import java.util.Collections; import java.util.LinkedHashMap; import java.util.Map; @@ -313,7 +314,7 @@ public void operationComplete(ChannelFuture future) throws Exception { } final ChannelFuture initAndRegister() { - final Channel channel = channelFactory().newChannel(); + final Channel channel = channelFactory.newChannel(); try { init(channel); } catch (Throwable t) { @@ -375,6 +376,41 @@ public B handler(ChannelHandler handler) { return (B) this; } + /** + * Returns the configured {@link EventLoopGroup} or {@code null} if non is configured yet. + * + * @deprecated Use {@link #config()} instead. + */ + @Deprecated + public final EventLoopGroup group() { + return group; + } + + /** + * Returns the {@link AbstractBootstrapConfig} object that can be used to obtain the current config + * of the bootstrap. + */ + public abstract AbstractBootstrapConfig<B, C> config(); + + static <K, V> Map<K, V> copiedMap(Map<K, V> map) { + final Map<K, V> copied; + synchronized (map) { + if (map.isEmpty()) { + return Collections.emptyMap(); + } + copied = new LinkedHashMap<K, V>(map); + } + return Collections.unmodifiableMap(copied); + } + + final Map<ChannelOption<?>, Object> options0() { + return options; + } + + final Map<AttributeKey<?>, Object> attrs0() { + return attrs; + } + final SocketAddress localAddress() { return localAddress; } @@ -388,66 +424,19 @@ final ChannelHandler handler() { return handler; } - /** - * Return the configured {@link EventLoopGroup} or {@code null} if non is configured yet. 
- */ - public EventLoopGroup group() { - return group; - } - final Map<ChannelOption<?>, Object> options() { - return options; + return copiedMap(options); } final Map<AttributeKey<?>, Object> attrs() { - return attrs; + return copiedMap(attrs); } @Override public String toString() { StringBuilder buf = new StringBuilder() .append(StringUtil.simpleClassName(this)) - .append('('); - if (group != null) { - buf.append("group: ") - .append(StringUtil.simpleClassName(group)) - .append(", "); - } - if (channelFactory != null) { - buf.append("channelFactory: ") - .append(channelFactory) - .append(", "); - } - if (localAddress != null) { - buf.append("localAddress: ") - .append(localAddress) - .append(", "); - } - synchronized (options) { - if (!options.isEmpty()) { - buf.append("options: ") - .append(options) - .append(", "); - } - } - synchronized (attrs) { - if (!attrs.isEmpty()) { - buf.append("attrs: ") - .append(attrs) - .append(", "); - } - } - if (handler != null) { - buf.append("handler: ") - .append(handler) - .append(", "); - } - if (buf.charAt(buf.length() - 1) == '(') { - buf.append(')'); - } else { - buf.setCharAt(buf.length() - 2, ')'); - buf.setLength(buf.length() - 1); - } + .append('(').append(config()).append(')'); return buf.toString(); } diff --git a/transport/src/main/java/io/netty/bootstrap/AbstractBootstrapConfig.java b/transport/src/main/java/io/netty/bootstrap/AbstractBootstrapConfig.java new file mode 100644 index 00000000000..976ec94640b --- /dev/null +++ b/transport/src/main/java/io/netty/bootstrap/AbstractBootstrapConfig.java @@ -0,0 +1,135 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.bootstrap; + +import io.netty.channel.Channel; +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelOption; +import io.netty.channel.EventLoopGroup; +import io.netty.util.AttributeKey; +import io.netty.util.internal.ObjectUtil; +import io.netty.util.internal.StringUtil; + +import java.net.SocketAddress; +import java.util.Map; + +/** + * Exposes the configuration of an {@link AbstractBootstrap}. + */ +public abstract class AbstractBootstrapConfig<B extends AbstractBootstrap<B, C>, C extends Channel> { + + protected final B bootstrap; + + protected AbstractBootstrapConfig(B bootstrap) { + this.bootstrap = ObjectUtil.checkNotNull(bootstrap, "bootstrap"); + } + + /** + * Returns the configured local address or {@code null} if non is configured yet. + */ + public final SocketAddress localAddress() { + return bootstrap.localAddress(); + } + + /** + * Returns the configured {@link ChannelFactory} or {@code null} if non is configured yet. + */ + @SuppressWarnings("deprecation") + public final ChannelFactory<? extends C> channelFactory() { + return bootstrap.channelFactory(); + } + + /** + * Returns the configured {@link ChannelHandler} or {@code null} if non is configured yet. 
+ */ + public final ChannelHandler handler() { + return bootstrap.handler(); + } + + /** + * Returns a copy of the configured options. + */ + public final Map<ChannelOption<?>, Object> options() { + return bootstrap.options(); + } + + /** + * Returns a copy of the configured attributes. + */ + public final Map<AttributeKey<?>, Object> attrs() { + return bootstrap.attrs(); + } + + /** + * Returns the configured {@link EventLoopGroup} or {@code null} if non is configured yet. + */ + @SuppressWarnings("deprecation") + public final EventLoopGroup group() { + return bootstrap.group(); + } + + @Override + public String toString() { + StringBuilder buf = new StringBuilder() + .append(StringUtil.simpleClassName(this)) + .append('('); + EventLoopGroup group = group(); + if (group != null) { + buf.append("group: ") + .append(StringUtil.simpleClassName(group)) + .append(", "); + } + @SuppressWarnings("deprecation") + ChannelFactory<? extends C> factory = channelFactory(); + if (factory != null) { + buf.append("channelFactory: ") + .append(factory) + .append(", "); + } + SocketAddress localAddress = localAddress(); + if (localAddress != null) { + buf.append("localAddress: ") + .append(localAddress) + .append(", "); + } + + Map<ChannelOption<?>, Object> options = options(); + if (!options.isEmpty()) { + buf.append("options: ") + .append(options) + .append(", "); + } + Map<AttributeKey<?>, Object> attrs = attrs(); + if (!attrs.isEmpty()) { + buf.append("attrs: ") + .append(attrs) + .append(", "); + } + ChannelHandler handler = handler(); + if (handler != null) { + buf.append("handler: ") + .append(handler) + .append(", "); + } + if (buf.charAt(buf.length() - 1) == '(') { + buf.append(')'); + } else { + buf.setCharAt(buf.length() - 2, ')'); + buf.setLength(buf.length() - 1); + } + return buf.toString(); + } +} diff --git a/transport/src/main/java/io/netty/bootstrap/Bootstrap.java b/transport/src/main/java/io/netty/bootstrap/Bootstrap.java index 408214393d1..7aa5d111ea5 100644 --- a/transport/src/main/java/io/netty/bootstrap/Bootstrap.java +++ b/transport/src/main/java/io/netty/bootstrap/Bootstrap.java @@ -53,6 +53,8 @@ public class Bootstrap extends AbstractBootstrap<Bootstrap, Channel> { private static final AddressResolverGroup<?> DEFAULT_RESOLVER = DefaultAddressResolverGroup.INSTANCE; + private final BootstrapConfig config = new BootstrapConfig(this); + @SuppressWarnings("unchecked") private volatile AddressResolverGroup<SocketAddress> resolver = (AddressResolverGroup<SocketAddress>) DEFAULT_RESOLVER; @@ -113,7 +115,7 @@ public ChannelFuture connect() { throw new IllegalStateException("remoteAddress not set"); } - return doResolveAndConnect(remoteAddress, localAddress()); + return doResolveAndConnect(remoteAddress, config.localAddress()); } /** @@ -139,7 +141,7 @@ public ChannelFuture connect(SocketAddress remoteAddress) { } validate(); - return doResolveAndConnect(remoteAddress, localAddress()); + return doResolveAndConnect(remoteAddress, config.localAddress()); } /** @@ -248,9 +250,9 @@ public void run() { @SuppressWarnings("unchecked") void init(Channel channel) throws Exception { ChannelPipeline p = channel.pipeline(); - p.addLast(handler()); + p.addLast(config.handler()); - final Map<ChannelOption<?>, Object> options = options(); + final Map<ChannelOption<?>, Object> options = options0(); synchronized (options) { for (Entry<ChannelOption<?>, Object> e: options.entrySet()) { try { @@ -263,7 +265,7 @@ void init(Channel channel) throws Exception { } } - final Map<AttributeKey<?>, Object> attrs = 
attrs(); + final Map<AttributeKey<?>, Object> attrs = attrs0(); synchronized (attrs) { for (Entry<AttributeKey<?>, Object> e: attrs.entrySet()) { channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue()); @@ -274,7 +276,7 @@ void init(Channel channel) throws Exception { @Override public Bootstrap validate() { super.validate(); - if (handler() == null) { + if (config.handler() == null) { throw new IllegalStateException("handler not set"); } return this; @@ -298,17 +300,15 @@ public Bootstrap clone(EventLoopGroup group) { } @Override - public String toString() { - if (remoteAddress == null) { - return super.toString(); - } + public final BootstrapConfig config() { + return config; + } - StringBuilder buf = new StringBuilder(super.toString()); - buf.setLength(buf.length() - 1); + final SocketAddress remoteAddress() { + return remoteAddress; + } - return buf.append(", remoteAddress: ") - .append(remoteAddress) - .append(')') - .toString(); + final AddressResolverGroup<?> resolver() { + return resolver; } } diff --git a/transport/src/main/java/io/netty/bootstrap/BootstrapConfig.java b/transport/src/main/java/io/netty/bootstrap/BootstrapConfig.java new file mode 100644 index 00000000000..24d9fa45e18 --- /dev/null +++ b/transport/src/main/java/io/netty/bootstrap/BootstrapConfig.java @@ -0,0 +1,58 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.bootstrap; + +import io.netty.channel.Channel; +import io.netty.resolver.AddressResolverGroup; + +import java.net.SocketAddress; + +/** + * Exposes the configuration of a {@link Bootstrap}. + */ +public final class BootstrapConfig extends AbstractBootstrapConfig<Bootstrap, Channel> { + + BootstrapConfig(Bootstrap bootstrap) { + super(bootstrap); + } + + /** + * Returns the configured remote address or {@code null} if non is configured yet. + */ + public SocketAddress remoteAddress() { + return bootstrap.remoteAddress(); + } + + /** + * Returns the configured {@link AddressResolverGroup} or the default if non is configured yet. 
+ */ + public AddressResolverGroup<?> resolver() { + return bootstrap.resolver(); + } + + @Override + public String toString() { + StringBuilder buf = new StringBuilder(super.toString()); + buf.setLength(buf.length() - 1); + buf.append(", resolver: ").append(resolver()); + SocketAddress remoteAddress = remoteAddress(); + if (remoteAddress != null) { + buf.append(", remoteAddress: ") + .append(remoteAddress); + } + return buf.append(')').toString(); + } +} diff --git a/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java b/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java index 38929b1b71d..822a1cd133d 100644 --- a/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java +++ b/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java @@ -29,7 +29,6 @@ import io.netty.channel.ServerChannel; import io.netty.util.AttributeKey; import io.netty.util.internal.OneTimeTask; -import io.netty.util.internal.StringUtil; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; @@ -48,6 +47,7 @@ public class ServerBootstrap extends AbstractBootstrap<ServerBootstrap, ServerCh private final Map<ChannelOption<?>, Object> childOptions = new LinkedHashMap<ChannelOption<?>, Object>(); private final Map<AttributeKey<?>, Object> childAttrs = new LinkedHashMap<AttributeKey<?>, Object>(); + private final ServerBootstrapConfig config = new ServerBootstrapConfig(this); private volatile EventLoopGroup childGroup; private volatile ChannelHandler childHandler; @@ -138,22 +138,14 @@ public ServerBootstrap childHandler(ChannelHandler childHandler) { return this; } - /** - * Return the configured {@link EventLoopGroup} which will be used for the child channels or {@code null} - * if non is configured yet. 
- */ - public EventLoopGroup childGroup() { - return childGroup; - } - @Override void init(Channel channel) throws Exception { - final Map<ChannelOption<?>, Object> options = options(); + final Map<ChannelOption<?>, Object> options = options0(); synchronized (options) { channel.config().setOptions(options); } - final Map<AttributeKey<?>, Object> attrs = attrs(); + final Map<AttributeKey<?>, Object> attrs = attrs0(); synchronized (attrs) { for (Entry<AttributeKey<?>, Object> e: attrs.entrySet()) { @SuppressWarnings("unchecked") @@ -179,7 +171,7 @@ void init(Channel channel) throws Exception { @Override public void initChannel(Channel ch) throws Exception { ChannelPipeline pipeline = ch.pipeline(); - ChannelHandler handler = handler(); + ChannelHandler handler = config.handler(); if (handler != null) { pipeline.addLast(handler); } @@ -294,42 +286,31 @@ public ServerBootstrap clone() { return new ServerBootstrap(this); } - @Override - public String toString() { - StringBuilder buf = new StringBuilder(super.toString()); - buf.setLength(buf.length() - 1); - buf.append(", "); - if (childGroup != null) { - buf.append("childGroup: "); - buf.append(StringUtil.simpleClassName(childGroup)); - buf.append(", "); - } - synchronized (childOptions) { - if (!childOptions.isEmpty()) { - buf.append("childOptions: "); - buf.append(childOptions); - buf.append(", "); - } - } - synchronized (childAttrs) { - if (!childAttrs.isEmpty()) { - buf.append("childAttrs: "); - buf.append(childAttrs); - buf.append(", "); - } - } - if (childHandler != null) { - buf.append("childHandler: "); - buf.append(childHandler); - buf.append(", "); - } - if (buf.charAt(buf.length() - 1) == '(') { - buf.append(')'); - } else { - buf.setCharAt(buf.length() - 2, ')'); - buf.setLength(buf.length() - 1); - } + /** + * Return the configured {@link EventLoopGroup} which will be used for the child channels or {@code null} + * if non is configured yet. + * + * @deprecated Use {@link #config()} instead. + */ + @Deprecated + public EventLoopGroup childGroup() { + return childGroup; + } + + final ChannelHandler childHandler() { + return childHandler; + } + + final Map<ChannelOption<?>, Object> childOptions() { + return copiedMap(childOptions); + } + + final Map<AttributeKey<?>, Object> childAttrs() { + return copiedMap(childAttrs); + } - return buf.toString(); + @Override + public final ServerBootstrapConfig config() { + return config; } } diff --git a/transport/src/main/java/io/netty/bootstrap/ServerBootstrapConfig.java b/transport/src/main/java/io/netty/bootstrap/ServerBootstrapConfig.java new file mode 100644 index 00000000000..1401d59eb33 --- /dev/null +++ b/transport/src/main/java/io/netty/bootstrap/ServerBootstrapConfig.java @@ -0,0 +1,105 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.bootstrap; + +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelOption; +import io.netty.channel.EventLoopGroup; +import io.netty.channel.ServerChannel; +import io.netty.util.AttributeKey; +import io.netty.util.internal.StringUtil; + +import java.util.Map; + +/** + * Exposes the configuration of a {@link ServerBootstrapConfig}. + */ +public final class ServerBootstrapConfig extends AbstractBootstrapConfig<ServerBootstrap, ServerChannel> { + + ServerBootstrapConfig(ServerBootstrap bootstrap) { + super(bootstrap); + } + + /** + * Returns the configured {@link EventLoopGroup} which will be used for the child channels or {@code null} + * if non is configured yet. + */ + @SuppressWarnings("deprecation") + public EventLoopGroup childGroup() { + return bootstrap.childGroup(); + } + + /** + * Returns the configured {@link ChannelHandler} be used for the child channels or {@code null} + * if non is configured yet. + */ + public ChannelHandler childHandler() { + return bootstrap.childHandler(); + } + + /** + * Returns a copy of the configured options which will be used for the child channels. + */ + public Map<ChannelOption<?>, Object> childOptions() { + return bootstrap.childOptions(); + } + + /** + * Returns a copy of the configured attributes which will be used for the child channels. + */ + public Map<AttributeKey<?>, Object> childAttrs() { + return bootstrap.childAttrs(); + } + + @Override + public String toString() { + StringBuilder buf = new StringBuilder(super.toString()); + buf.setLength(buf.length() - 1); + buf.append(", "); + EventLoopGroup childGroup = childGroup(); + if (childGroup != null) { + buf.append("childGroup: "); + buf.append(StringUtil.simpleClassName(childGroup)); + buf.append(", "); + } + Map<ChannelOption<?>, Object> childOptions = childOptions(); + if (!childOptions.isEmpty()) { + buf.append("childOptions: "); + buf.append(childOptions); + buf.append(", "); + } + Map<AttributeKey<?>, Object> childAttrs = childAttrs(); + if (!childAttrs.isEmpty()) { + buf.append("childAttrs: "); + buf.append(childAttrs); + buf.append(", "); + } + ChannelHandler childHandler = childHandler(); + if (childHandler != null) { + buf.append("childHandler: "); + buf.append(childHandler); + buf.append(", "); + } + if (buf.charAt(buf.length() - 1) == '(') { + buf.append(')'); + } else { + buf.setCharAt(buf.length() - 2, ')'); + buf.setLength(buf.length() - 1); + } + + return buf.toString(); + } +}
null
train
train
2016-05-18T09:11:49
"2016-04-25T15:15:23Z"
rkapsi
val
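A minimal usage sketch of the read-only configuration view introduced by the `config()` accessors in the patch above; the server setup, the `TCP_NODELAY` option and the class name are illustrative assumptions rather than anything taken from the record.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.bootstrap.ServerBootstrapConfig;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public final class BootstrapConfigExample {
    public static void main(String[] args) {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childOption(ChannelOption.TCP_NODELAY, true)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // application handlers would be added to ch.pipeline() here
                        }
                    });

            // Read the settings back through the immutable view instead of the mutable bootstrap.
            ServerBootstrapConfig cfg = b.config();
            System.out.println("group:        " + cfg.group());
            System.out.println("child group:  " + cfg.childGroup());
            System.out.println("child options (copy): " + cfg.childOptions());
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```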
netty/netty/5223_5251
netty/netty
netty/netty/5223
netty/netty/5251
[ "timestamp(timedelta=16.0, similarity=0.8965582301594414)" ]
15711d938dd7311b54b744a9ef98e0e6bacf32d3
d2ee422f765c72350c5f158ca234900c55804d43
[ "@nayato there is no good reason... let me fix it.\n", "Fixed by https://github.com/netty/netty/pull/5251\n" ]
[]
"2016-05-13T05:57:12Z"
[ "cleanup" ]
question on DefaultChannelPipeline.executorSafe - why indirection?
Is there a particular reason to access channel-assigned executor like so `channel.unsafe().invoker().executor()` vs `channel.eventLoop()`?
[ "transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java" ]
[ "transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java b/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java index 5e56a974069..85696240a55 100644 --- a/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java +++ b/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java @@ -1286,7 +1286,7 @@ private EventExecutor executorSafe(ChannelHandlerInvoker invoker) { // We check for channel().isRegistered and handlerAdded because even if isRegistered() is false we // can safely access the invoker() if handlerAdded is true. This is because in this case the Channel // was previously registered and so we can still access the old EventLoop to dispatch things. - return channel.isRegistered() || registered ? channel.unsafe().invoker().executor() : null; + return channel.isRegistered() || registered ? channel.eventLoop() : null; } return invoker.executor(); }
null
train
train
2016-05-11T09:03:53
"2016-05-07T02:02:19Z"
nayato
val
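For context on the accessor this one-line change switches to: `Channel.eventLoop()` is the public way to reach the executor that drives a channel. A small illustrative handler is sketched below; the one-second delay, the string payload and the assumption of a string encoder further down the pipeline are examples only, not part of the record.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.concurrent.TimeUnit;

public class DelayedWriteHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(final ChannelHandlerContext ctx) {
        // The channel's executor is reachable directly as its event loop.
        ctx.channel().eventLoop().schedule(new Runnable() {
            @Override
            public void run() {
                // Assumes a String encoder (e.g. StringEncoder) sits in the pipeline.
                ctx.writeAndFlush("hello\n");
            }
        }, 1, TimeUnit.SECONDS);
        ctx.fireChannelActive();
    }
}
```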
netty/netty/3127_5255
netty/netty
netty/netty/3127
netty/netty/5255
[ "timestamp(timedelta=24.0, similarity=0.8564786573284091)" ]
30bb3094c16f8c3bf0f570f34f62a4a098cdb9b6
f660a863d30452dd024d18e7ca0c7f7f4c477314
[ "@Underbalanced we love contributions ;)\n", "I will read up on the how to contribute as well as your code, I am pretty new to using proper design patterns. Started back in April with java. Though I have learned enough to write a Craftbukkit like server tool for another game, I skipped some important fundamentals.\n", "Fixed by https://github.com/netty/netty/pull/5255\n" ]
[ "that match -> that are matched by\n\n.. because it's not a channel that matches a matcher?\n" ]
"2016-05-13T07:06:06Z"
[ "feature" ]
voidPromise() - ChannelGroups
I think it would be useful to optionally voidPromises for Channel Groups.
[ "transport/src/main/java/io/netty/channel/group/ChannelGroup.java", "transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java" ]
[ "transport/src/main/java/io/netty/channel/group/ChannelGroup.java", "transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java", "transport/src/main/java/io/netty/channel/group/VoidChannelGroupFuture.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/group/ChannelGroup.java b/transport/src/main/java/io/netty/channel/group/ChannelGroup.java index c66022e1d08..f24325b1cc9 100644 --- a/transport/src/main/java/io/netty/channel/group/ChannelGroup.java +++ b/transport/src/main/java/io/netty/channel/group/ChannelGroup.java @@ -132,6 +132,22 @@ public interface ChannelGroup extends Set<Channel>, Comparable<ChannelGroup> { */ ChannelGroupFuture write(Object message, ChannelMatcher matcher); + /** + * Writes the specified {@code message} to all {@link Channel}s in this + * group that match the given {@link ChannelMatcher}. If the specified {@code message} is an instance of + * {@link ByteBuf}, it is automatically + * {@linkplain ByteBuf#duplicate() duplicated} to avoid a race + * condition. The same is true for {@link ByteBufHolder}. Please note that this operation is asynchronous as + * {@link Channel#write(Object)} is. + * + * If {@code voidPromise} is {@code true} {@link Channel#voidPromise()} is used for the writes and so the same + * restrictions to the returned {@link ChannelGroupFuture} apply as to a void promise. + * + * @return the {@link ChannelGroupFuture} instance that notifies when + * the operation is done for all channels + */ + ChannelGroupFuture write(Object message, ChannelMatcher matcher, boolean voidPromise); + /** * Flush all {@link Channel}s in this * group. If the specified {@code messages} are an instance of @@ -175,6 +191,12 @@ public interface ChannelGroup extends Set<Channel>, Comparable<ChannelGroup> { */ ChannelGroupFuture writeAndFlush(Object message, ChannelMatcher matcher); + /** + * Shortcut for calling {@link #write(Object, ChannelMatcher, boolean)} and {@link #flush()} and only act on + * {@link Channel}s that match the {@link ChannelMatcher}. + */ + ChannelGroupFuture writeAndFlush(Object message, ChannelMatcher matcher, boolean voidPromise); + /** * @deprecated Use {@link #writeAndFlush(Object, ChannelMatcher)} instead. 
*/ diff --git a/transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java b/transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java index 19d74d46edc..26ef91dfdcf 100644 --- a/transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java +++ b/transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java @@ -52,6 +52,7 @@ public void operationComplete(ChannelFuture future) throws Exception { remove(future.channel()); } }; + private final VoidChannelGroupFuture voidFuture = new VoidChannelGroupFuture(this); private final boolean stayClosed; private volatile boolean closed; @@ -254,6 +255,11 @@ private static Object safeDuplicate(Object message) { @Override public ChannelGroupFuture write(Object message, ChannelMatcher matcher) { + return write(message, matcher, false); + } + + @Override + public ChannelGroupFuture write(Object message, ChannelMatcher matcher, boolean voidPromise) { if (message == null) { throw new NullPointerException("message"); } @@ -261,15 +267,25 @@ public ChannelGroupFuture write(Object message, ChannelMatcher matcher) { throw new NullPointerException("matcher"); } - Map<Channel, ChannelFuture> futures = new LinkedHashMap<Channel, ChannelFuture>(size()); - for (Channel c: nonServerChannels.values()) { - if (matcher.matches(c)) { - futures.put(c, c.write(safeDuplicate(message))); + final ChannelGroupFuture future; + if (voidPromise) { + for (Channel c: nonServerChannels.values()) { + if (matcher.matches(c)) { + c.write(safeDuplicate(message), c.voidPromise()); + } } + future = voidFuture; + } else { + Map<Channel, ChannelFuture> futures = new LinkedHashMap<Channel, ChannelFuture>(size()); + for (Channel c: nonServerChannels.values()) { + if (matcher.matches(c)) { + futures.put(c, c.write(safeDuplicate(message))); + } + } + future = new DefaultChannelGroupFuture(this, futures, executor); } - ReferenceCountUtil.release(message); - return new DefaultChannelGroupFuture(this, futures, executor); + return future; } @Override @@ -383,21 +399,34 @@ public ChannelGroupFuture flushAndWrite(Object message, ChannelMatcher matcher) @Override public ChannelGroupFuture writeAndFlush(Object message, ChannelMatcher matcher) { + return writeAndFlush(message, matcher, false); + } + + @Override + public ChannelGroupFuture writeAndFlush(Object message, ChannelMatcher matcher, boolean voidPromise) { if (message == null) { throw new NullPointerException("message"); } - Map<Channel, ChannelFuture> futures = new LinkedHashMap<Channel, ChannelFuture>(size()); - - for (Channel c: nonServerChannels.values()) { - if (matcher.matches(c)) { - futures.put(c, c.writeAndFlush(safeDuplicate(message))); + final ChannelGroupFuture future; + if (voidPromise) { + for (Channel c: nonServerChannels.values()) { + if (matcher.matches(c)) { + c.writeAndFlush(safeDuplicate(message), c.voidPromise()); + } } + future = voidFuture; + } else { + Map<Channel, ChannelFuture> futures = new LinkedHashMap<Channel, ChannelFuture>(size()); + for (Channel c: nonServerChannels.values()) { + if (matcher.matches(c)) { + futures.put(c, c.writeAndFlush(safeDuplicate(message))); + } + } + future = new DefaultChannelGroupFuture(this, futures, executor); } - ReferenceCountUtil.release(message); - - return new DefaultChannelGroupFuture(this, futures, executor); + return future; } @Override diff --git a/transport/src/main/java/io/netty/channel/group/VoidChannelGroupFuture.java b/transport/src/main/java/io/netty/channel/group/VoidChannelGroupFuture.java new file mode 100644 index 
00000000000..e8fa84266b4 --- /dev/null +++ b/transport/src/main/java/io/netty/channel/group/VoidChannelGroupFuture.java @@ -0,0 +1,169 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel.group; + +import io.netty.channel.Channel; +import io.netty.channel.ChannelFuture; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.GenericFutureListener; + +import java.util.Collections; +import java.util.Iterator; +import java.util.concurrent.TimeUnit; + +final class VoidChannelGroupFuture implements ChannelGroupFuture { + + private static final Iterator<ChannelFuture> EMPTY = Collections.<ChannelFuture>emptyList().iterator(); + private final ChannelGroup group; + + VoidChannelGroupFuture(ChannelGroup group) { + this.group = group; + } + + @Override + public ChannelGroup group() { + return group; + } + + @Override + public ChannelFuture find(Channel channel) { + return null; + } + + @Override + public boolean isSuccess() { + return false; + } + + @Override + public ChannelGroupException cause() { + return null; + } + + @Override + public boolean isPartialSuccess() { + return false; + } + + @Override + public boolean isPartialFailure() { + return false; + } + + @Override + public ChannelGroupFuture addListener(GenericFutureListener<? extends Future<? super Void>> listener) { + throw reject(); + } + + @Override + public ChannelGroupFuture addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) { + throw reject(); + } + + @Override + public ChannelGroupFuture removeListener(GenericFutureListener<? extends Future<? super Void>> listener) { + throw reject(); + } + + @Override + public ChannelGroupFuture removeListeners(GenericFutureListener<? extends Future<? super Void>>... 
listeners) { + throw reject(); + } + + @Override + public ChannelGroupFuture await() { + throw reject(); + } + + @Override + public ChannelGroupFuture awaitUninterruptibly() { + throw reject(); + } + + @Override + public ChannelGroupFuture syncUninterruptibly() { + throw reject(); + } + + @Override + public ChannelGroupFuture sync() { + throw reject(); + } + + @Override + public Iterator<ChannelFuture> iterator() { + return EMPTY; + } + + @Override + public boolean isCancellable() { + return false; + } + + @Override + public boolean await(long timeout, TimeUnit unit) { + throw reject(); + } + + @Override + public boolean await(long timeoutMillis) { + throw reject(); + } + + @Override + public boolean awaitUninterruptibly(long timeout, TimeUnit unit) { + throw reject(); + } + + @Override + public boolean awaitUninterruptibly(long timeoutMillis) { + throw reject(); + } + + @Override + public Void getNow() { + return null; + } + + @Override + public boolean cancel(boolean mayInterruptIfRunning) { + return false; + } + + @Override + public boolean isCancelled() { + return false; + } + + @Override + public boolean isDone() { + return false; + } + + @Override + public Void get() { + throw reject(); + } + + @Override + public Void get(long timeout, TimeUnit unit) { + throw reject(); + } + + private static RuntimeException reject() { + return new IllegalStateException("void future"); + } +}
null
test
train
2016-05-13T08:51:44
"2014-11-11T13:14:48Z"
Mr00Anderson
val
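A minimal sketch of the void-promise group write added by the patch above; the group field, the UTF-8 text payload and the use of `ChannelMatchers.all()` are assumptions made for illustration, and channels are expected to be added to the group elsewhere (for example in `channelActive()`).

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.ChannelMatchers;
import io.netty.channel.group.DefaultChannelGroup;
import io.netty.util.CharsetUtil;
import io.netty.util.concurrent.GlobalEventExecutor;

public final class GroupBroadcast {

    private final ChannelGroup clients = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

    public void broadcast(String text) {
        // voidPromise=true: no per-channel futures are allocated, and the returned
        // ChannelGroupFuture must not be awaited or have listeners added to it.
        clients.writeAndFlush(
                Unpooled.copiedBuffer(text, CharsetUtil.UTF_8),
                ChannelMatchers.all(),
                true);
    }
}
```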
netty/netty/4200_5265
netty/netty
netty/netty/4200
netty/netty/5265
[ "timestamp(timedelta=18.0, similarity=0.9098434823812074)" ]
f44f3e7926f1676315ae86d0f18bdd9b95681d9f
eb6331aca950945aeb087832bcea70b6b69f835d
[ "Fixed by https://github.com/netty/netty/pull/5265\n" ]
[ "use `new LinkedHashSet<...>(domanNamePatterns.length)`\n", "Also ensure this is read-only ?\n", "Added the length hint. Is there any `Map.Entry` implementation in netty?\n", "> Also ensure this is read-only ?\n\nDone.\n" ]
"2016-05-17T05:53:53Z"
[ "improvement", "feature" ]
Allow retrieving the domain match lists in DomainNameMapping
Currently, there's no way to retrieve the list of all domain name patterns and their mapped values. It would be very useful when you have to build another `DomainNameMapping` (or an arbitrary `Mapping`) based on that list. e.g.

``` java
DomainNameMapping<SslContext> mapping = ...;
MyNameMapping<MyService> newMapping = ...;
mapping.entries().stream().forEach(e -> newMapping.add(e.getKey(), ...));
```
[ "common/src/main/java/io/netty/util/DomainMappingBuilder.java", "common/src/main/java/io/netty/util/DomainNameMapping.java" ]
[ "common/src/main/java/io/netty/util/DomainMappingBuilder.java", "common/src/main/java/io/netty/util/DomainNameMapping.java" ]
[ "common/src/test/java/io/netty/util/DomainNameMappingTest.java" ]
diff --git a/common/src/main/java/io/netty/util/DomainMappingBuilder.java b/common/src/main/java/io/netty/util/DomainMappingBuilder.java index 0cdb9712b7d..4d497456710 100644 --- a/common/src/main/java/io/netty/util/DomainMappingBuilder.java +++ b/common/src/main/java/io/netty/util/DomainMappingBuilder.java @@ -16,6 +16,8 @@ package io.netty.util; +import java.util.Collections; +import java.util.HashMap; import java.util.LinkedHashMap; import java.util.Map; import java.util.Set; @@ -137,6 +139,16 @@ public V map(String hostname) { return defaultValue; } + @Override + public Set<Map.Entry<String, V>> entries() { + int length = domainNamePatterns.length; + Map<String, V> map = new HashMap<String, V>(length); + for (int index = 0; index < length; ++index) { + map.put(domainNamePatterns[index], values[index]); + } + return Collections.unmodifiableSet(map.entrySet()); + } + @Override public String toString() { String defaultValueStr = defaultValue.toString(); diff --git a/common/src/main/java/io/netty/util/DomainNameMapping.java b/common/src/main/java/io/netty/util/DomainNameMapping.java index ff9aeb9cb0a..f31da8643c3 100644 --- a/common/src/main/java/io/netty/util/DomainNameMapping.java +++ b/common/src/main/java/io/netty/util/DomainNameMapping.java @@ -19,9 +19,11 @@ import io.netty.util.internal.StringUtil; import java.net.IDN; +import java.util.Collections; import java.util.LinkedHashMap; import java.util.Locale; import java.util.Map; +import java.util.Set; import static io.netty.util.internal.ObjectUtil.checkNotNull; import static io.netty.util.internal.StringUtil.commonSuffixOfLength; @@ -132,6 +134,13 @@ public V map(String hostname) { return defaultValue; } + /** + * Returns a read-only {@link Set} of the domain mapping patterns and their associated value objects. + */ + public Set<Map.Entry<String, V>> entries() { + return Collections.unmodifiableSet(map.entrySet()); + } + @Override public String toString() { return StringUtil.simpleClassName(this) + "(default: " + defaultValue + ", map: " + map + ')';
diff --git a/common/src/test/java/io/netty/util/DomainNameMappingTest.java b/common/src/test/java/io/netty/util/DomainNameMappingTest.java index bd369e93018..606d6fca68d 100644 --- a/common/src/test/java/io/netty/util/DomainNameMappingTest.java +++ b/common/src/test/java/io/netty/util/DomainNameMappingTest.java @@ -16,6 +16,9 @@ package io.netty.util; +import java.util.HashMap; +import java.util.Map; + import org.junit.Test; import static org.junit.Assert.assertEquals; @@ -181,4 +184,37 @@ public void testToString() { "ImmutableDomainNameMapping(default: NotFound, map: {*.netty.io=Netty, downloads.netty.io=Netty-Download})", mapping.toString()); } + + @Test + public void testEntries() { + DomainNameMapping<String> mapping = new DomainNameMapping<String>("NotFound") + .add("netty.io", "Netty") + .add("downloads.netty.io", "Netty-Downloads"); + + Map<String, String> entries = new HashMap<String, String>(); + for (Map.Entry<String, String> entry: mapping.entries()) { + entries.put(entry.getKey(), entry.getValue()); + } + + assertEquals(2, entries.size()); + assertEquals("Netty", entries.get("netty.io")); + assertEquals("Netty-Downloads", entries.get("downloads.netty.io")); + } + + @Test + public void testEntriesWithImmutableDomainNameMapping() { + DomainNameMapping<String> mapping = new DomainMappingBuilder<String>("NotFound") + .add("netty.io", "Netty") + .add("downloads.netty.io", "Netty-Downloads") + .build(); + + Map<String, String> entries = new HashMap<String, String>(); + for (Map.Entry<String, String> entry: mapping.entries()) { + entries.put(entry.getKey(), entry.getValue()); + } + + assertEquals(2, entries.size()); + assertEquals("Netty", entries.get("netty.io")); + assertEquals("Netty-Downloads", entries.get("downloads.netty.io")); + } }
train
train
2016-05-17T07:43:46
"2015-09-07T14:53:02Z"
trustin
val
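A short sketch of how the new `entries()` accessor can be consumed; the patterns and values mirror the ones used in the record's tests, while the class name and the printing loop are illustrative only.

```java
import io.netty.util.DomainNameMapping;

import java.util.Map;

public final class MappingEntriesExample {
    public static void main(String[] args) {
        DomainNameMapping<String> mapping = new DomainNameMapping<String>("NotFound")
                .add("*.netty.io", "Netty")
                .add("downloads.netty.io", "Netty-Downloads");

        // entries() returns a read-only view, so another mapping (or any other
        // lookup structure) can be derived from the configured patterns.
        for (Map.Entry<String, String> e : mapping.entries()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```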
netty/netty/5179_5267
netty/netty
netty/netty/5179
netty/netty/5267
[ "timestamp(timedelta=28.0, similarity=0.9338759135638064)" ]
3a9f47216143082bdfba62e8940160856767d672
77ef634bd7eedcfd77de3aa755f0d19430181314
[ "This is the patch I wrote for [Armeria](https://github.com/line/armeria):\n\nhttps://github.com/line/armeria/pull/155/files\n\nAs shown in the PR above, we could write a decorating `NameResolver` that works for all `NameResolver` implementations. However, I'm not sure what the name of that class would be. Any suggestions?\n", "/cc @slandelle \n", "InflightDnsNameResolver ? Not sure its the best name though\n\n> Am 27.04.2016 um 12:18 schrieb Trustin Lee [email protected]:\n> \n> This is the patch I wrote for Armeria:\n> \n> https://github.com/line/armeria/pull/155/files\n> \n> As shown in the PR above, we could write a decorating NameResolver that works for all NameResolver implementations. However, I'm not sure what the name of that class would be. Any suggestions?\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly or view it on GitHub\n", "@trustin WDYT ?\n", "FlyingQueriesAwareDnsNameResolver ? Pro is that such name will make Spring people jealous ;-) \n", "InProgressAware?\n", "@trustin also we can just make it package private for now and find a good name later if we feel we not have a good one atm.\n", "Alright. Let me cook some tomorrow.\n", "any news ?\n", "Fixed by https://github.com/netty/netty/pull/5267\n" ]
[ "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Take the required action to fix the issue indicated by this comment. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1134)\n" ]
"2016-05-17T11:01:32Z"
[ "defect" ]
DNS name resolution queries should not be sent while the same query is in progress already.
When you attempt to make a lot of connection attempts to the same target host at the same time and our DNS resolver does not have a record for it in the cache, the DNS resolver will send as many DNS queries as the number of connection attempts. As a result, DNS server will reject or drop the requests, making the name resolution attempt fail. To fix this, we could keep the list of name resolution queries and subscribe to the future of the matching query instead of sending a duplicate query. Will work on a fix.
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java", "resolver-dns/src/main/java/io/netty/resolver/dns/InflightNameResolver.java" ]
[]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java index 4fb88c44d8d..1f666076c74 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java @@ -22,13 +22,20 @@ import io.netty.channel.socket.DatagramChannel; import io.netty.resolver.AddressResolver; import io.netty.resolver.AddressResolverGroup; +import io.netty.resolver.InetSocketAddressResolver; +import io.netty.resolver.NameResolver; import io.netty.util.concurrent.EventExecutor; +import io.netty.util.concurrent.Promise; import io.netty.util.internal.StringUtil; import io.netty.util.internal.UnstableApi; +import java.net.InetAddress; import java.net.InetSocketAddress; +import java.util.List; +import java.util.concurrent.ConcurrentMap; import static io.netty.resolver.dns.DnsNameResolver.ANY_LOCAL_ADDR; +import static io.netty.util.internal.PlatformDependent.newConcurrentHashMap; /** * A {@link AddressResolverGroup} of {@link DnsNameResolver}s. @@ -40,6 +47,9 @@ public class DnsAddressResolverGroup extends AddressResolverGroup<InetSocketAddr private final InetSocketAddress localAddress; private final DnsServerAddresses nameServerAddresses; + private final ConcurrentMap<String, Promise<InetAddress>> resolvesInProgress = newConcurrentHashMap(); + private final ConcurrentMap<String, Promise<List<InetAddress>>> resolveAllsInProgress = newConcurrentHashMap(); + public DnsAddressResolverGroup( Class<? extends DatagramChannel> channelType, DnsServerAddresses nameServerAddresses) { this(channelType, ANY_LOCAL_ADDR, nameServerAddresses); @@ -83,11 +93,16 @@ protected AddressResolver<InetSocketAddress> newResolver( EventLoop eventLoop, ChannelFactory<? extends DatagramChannel> channelFactory, InetSocketAddress localAddress, DnsServerAddresses nameServerAddresses) throws Exception { - return new DnsNameResolverBuilder(eventLoop) - .channelFactory(channelFactory) - .localAddress(localAddress) - .nameServerAddresses(nameServerAddresses) - .build() - .asAddressResolver(); + final NameResolver<InetAddress> resolver = new InflightNameResolver<InetAddress>( + eventLoop, + new DnsNameResolverBuilder(eventLoop) + .channelFactory(channelFactory) + .localAddress(localAddress) + .nameServerAddresses(nameServerAddresses) + .build(), + resolvesInProgress, + resolveAllsInProgress); + + return new InetSocketAddressResolver(eventLoop, resolver); } } diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/InflightNameResolver.java b/resolver-dns/src/main/java/io/netty/resolver/dns/InflightNameResolver.java new file mode 100644 index 00000000000..cb93f9a2127 --- /dev/null +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/InflightNameResolver.java @@ -0,0 +1,131 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ + +package io.netty.resolver.dns; + +import io.netty.resolver.NameResolver; +import io.netty.util.concurrent.EventExecutor; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.FutureListener; +import io.netty.util.concurrent.Promise; +import io.netty.util.internal.StringUtil; + +import java.util.List; +import java.util.concurrent.ConcurrentMap; + +import static io.netty.util.internal.ObjectUtil.checkNotNull; + +// FIXME(trustin): Find a better name and move it to the 'resolver' module. +final class InflightNameResolver<T> implements NameResolver<T> { + + private final EventExecutor executor; + private final NameResolver<T> delegate; + private final ConcurrentMap<String, Promise<T>> resolvesInProgress; + private final ConcurrentMap<String, Promise<List<T>>> resolveAllsInProgress; + + InflightNameResolver(EventExecutor executor, NameResolver<T> delegate, + ConcurrentMap<String, Promise<T>> resolvesInProgress, + ConcurrentMap<String, Promise<List<T>>> resolveAllsInProgress) { + + this.executor = checkNotNull(executor, "executor"); + this.delegate = checkNotNull(delegate, "delegate"); + this.resolvesInProgress = checkNotNull(resolvesInProgress, "resolvesInProgress"); + this.resolveAllsInProgress = checkNotNull(resolveAllsInProgress, "resolveAllsInProgress"); + } + + @Override + public Future<T> resolve(String inetHost) { + return resolve(inetHost, executor.<T>newPromise()); + } + + @Override + public Future<List<T>> resolveAll(String inetHost) { + return resolveAll(inetHost, executor.<List<T>>newPromise()); + } + + @Override + public void close() { + delegate.close(); + } + + @Override + public Promise<T> resolve(String inetHost, Promise<T> promise) { + return resolve(resolvesInProgress, inetHost, promise, false); + } + + @Override + public Promise<List<T>> resolveAll(String inetHost, Promise<List<T>> promise) { + return resolve(resolveAllsInProgress, inetHost, promise, true); + } + + private <U> Promise<U> resolve( + final ConcurrentMap<String, Promise<U>> resolveMap, + final String inetHost, final Promise<U> promise, boolean resolveAll) { + + final Promise<U> earlyPromise = resolveMap.putIfAbsent(inetHost, promise); + if (earlyPromise != null) { + // Name resolution for the specified inetHost is in progress already. + if (earlyPromise.isDone()) { + transferResult(earlyPromise, promise); + } else { + earlyPromise.addListener(new FutureListener<U>() { + @Override + public void operationComplete(Future<U> f) throws Exception { + transferResult(f, promise); + } + }); + } + } else { + try { + if (resolveAll) { + @SuppressWarnings("unchecked") + final Promise<List<T>> castPromise = (Promise<List<T>>) promise; // U is List<T> + delegate.resolveAll(inetHost, castPromise); + } else { + @SuppressWarnings("unchecked") + final Promise<T> castPromise = (Promise<T>) promise; // U is T + delegate.resolve(inetHost, castPromise); + } + } finally { + if (promise.isDone()) { + resolveMap.remove(inetHost); + } else { + promise.addListener(new FutureListener<U>() { + @Override + public void operationComplete(Future<U> f) throws Exception { + resolveMap.remove(inetHost); + } + }); + } + } + } + + return promise; + } + + private static <T> void transferResult(Future<T> src, Promise<T> dst) { + if (src.isSuccess()) { + dst.trySuccess(src.getNow()); + } else { + dst.tryFailure(src.cause()); + } + } + + @Override + public String toString() { + return StringUtil.simpleClassName(this) + '(' + delegate + ')'; + } +}
null
test
train
2016-05-17T11:16:13
"2016-04-27T03:44:37Z"
trustin
val
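A sketch of how the de-duplicating resolver group might be wired into a client bootstrap; the host name, port, crude sleep and the choice of `DnsServerAddresses.defaultAddresses()` are assumptions made for the example and are not taken from the record.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.logging.LoggingHandler;
import io.netty.resolver.dns.DnsAddressResolverGroup;
import io.netty.resolver.dns.DnsServerAddresses;

public final class SharedLookupConnects {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .handler(new LoggingHandler())
                    .resolver(new DnsAddressResolverGroup(
                            NioDatagramChannel.class, DnsServerAddresses.defaultAddresses()));

            // Many concurrent connects to the same, not-yet-cached host name: with the
            // in-flight de-duplication only one DNS query should reach the server.
            for (int i = 0; i < 16; i++) {
                b.connect("example.com", 80);
            }

            Thread.sleep(5000); // crude wait for the asynchronous connects in this sketch
        } finally {
            group.shutdownGracefully();
        }
    }
}
```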
netty/netty/5175_5269
netty/netty
netty/netty/5175
netty/netty/5269
[ "timestamp(timedelta=45.0, similarity=0.876440567581754)" ]
1cb706ac932bab732257e02fc095d91ff7ed3788
36f8d74cf9adb83d51ee8ef6755632333b977e8e
[ "At the moment not through this interface. What type of socket channel are you dealing with as there is likely a way to do it using an implementation of this interface.\n", "NioSocketChannel, EpollSocketChannel or OioSocketChannel\n", "I see you want it all ... stand by for a PR.\n", "Fixed by https://github.com/netty/netty/pull/5269\n" ]
[]
"2016-05-17T20:57:13Z"
[ "feature" ]
SocketChannel.shutdownInput()?
Is there a way to shutdownInput? I see that there is SocketChannel.shutdownOutput(), but no shutdownInput().
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java", "transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java", "transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java", "transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java", "transport/src/main/java/io/netty/channel/socket/DuplexChannel.java", "transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java", "transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java" ]
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java", "transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java", "transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java", "transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java", "transport/src/main/java/io/netty/channel/socket/DuplexChannel.java", "transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java", "transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java" ]
[ "transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputByPeerTest.java", "transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputBySelfTest.java" ]
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java index bc22b37370e..8cdbba542ef 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java @@ -530,7 +530,7 @@ protected Object filterOutboundMessage(Object msg) { "unsupported message type: " + StringUtil.simpleClassName(msg) + EXPECTED_TYPES); } - protected void shutdownOutput0(final ChannelPromise promise) { + private void shutdownOutput0(final ChannelPromise promise) { try { fd().shutdown(false, true); promise.setSuccess(); @@ -539,14 +539,37 @@ protected void shutdownOutput0(final ChannelPromise promise) { } } + private void shutdownInput0(final ChannelPromise promise) { + try { + fd().shutdown(true, false); + promise.setSuccess(); + } catch (Throwable cause) { + promise.setFailure(cause); + } + } + + private void shutdown0(final ChannelPromise promise) { + try { + fd().shutdown(true, true); + promise.setSuccess(); + } catch (Throwable cause) { + promise.setFailure(cause); + } + } + + @Override + public boolean isOutputShutdown() { + return fd().isOutputShutdown(); + } + @Override public boolean isInputShutdown() { return fd().isInputShutdown(); } @Override - public boolean isOutputShutdown() { - return fd().isOutputShutdown(); + public boolean isShutdown() { + return fd().isShutdown(); } @Override @@ -580,6 +603,68 @@ public void run() { return promise; } + @Override + public ChannelFuture shutdownInput() { + return shutdownInput(newPromise()); + } + + @Override + public ChannelFuture shutdownInput(final ChannelPromise promise) { + Executor closeExecutor = ((EpollStreamUnsafe) unsafe()).prepareToClose(); + if (closeExecutor != null) { + closeExecutor.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput0(promise); + } + }); + } else { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdownInput0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput0(promise); + } + }); + } + } + return promise; + } + + @Override + public ChannelFuture shutdown() { + return shutdown(newPromise()); + } + + @Override + public ChannelFuture shutdown(final ChannelPromise promise) { + Executor closeExecutor = ((EpollStreamUnsafe) unsafe()).prepareToClose(); + if (closeExecutor != null) { + closeExecutor.execute(new OneTimeTask() { + @Override + public void run() { + shutdown0(promise); + } + }); + } else { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdown0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdown0(promise); + } + }); + } + } + return promise; + } + @Override protected void doClose() throws Exception { try { diff --git a/transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java b/transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java index 07628a944ae..7ff98e06ef9 100644 --- a/transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java +++ b/transport-rxtx/src/main/java/io/netty/channel/rxtx/RxtxChannel.java @@ -18,6 +18,7 @@ import gnu.io.CommPort; import gnu.io.CommPortIdentifier; import gnu.io.SerialPort; +import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelPromise; import io.netty.channel.oio.OioByteStreamChannel; import 
io.netty.util.internal.OneTimeTask; @@ -25,7 +26,14 @@ import java.net.SocketAddress; import java.util.concurrent.TimeUnit; -import static io.netty.channel.rxtx.RxtxChannelOption.*; +import static io.netty.channel.rxtx.RxtxChannelOption.BAUD_RATE; +import static io.netty.channel.rxtx.RxtxChannelOption.DATA_BITS; +import static io.netty.channel.rxtx.RxtxChannelOption.DTR; +import static io.netty.channel.rxtx.RxtxChannelOption.PARITY_BIT; +import static io.netty.channel.rxtx.RxtxChannelOption.READ_TIMEOUT; +import static io.netty.channel.rxtx.RxtxChannelOption.RTS; +import static io.netty.channel.rxtx.RxtxChannelOption.STOP_BITS; +import static io.netty.channel.rxtx.RxtxChannelOption.WAIT_TIME; /** * A channel to a serial device using the RXTX library. @@ -129,6 +137,16 @@ protected void doClose() throws Exception { } } + @Override + protected boolean isInputShutdown() { + return !open; + } + + @Override + protected ChannelFuture shutdownInput() { + return newFailedFuture(new UnsupportedOperationException("shutdownInput")); + } + private final class RxtxUnsafe extends AbstractUnsafe { @Override public void connect( diff --git a/transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java b/transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java index 20adae249ba..62dbb9ffd7e 100644 --- a/transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java +++ b/transport-udt/src/main/java/io/netty/channel/udt/nio/NioUdtByteConnectorChannel.java @@ -17,10 +17,10 @@ import com.barchart.udt.TypeUDT; import com.barchart.udt.nio.SocketChannelUDT; - import io.netty.buffer.ByteBuf; import io.netty.channel.Channel; import io.netty.channel.ChannelException; +import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelMetadata; import io.netty.channel.FileRegion; import io.netty.channel.RecvByteBufAllocator; @@ -34,7 +34,7 @@ import java.net.InetSocketAddress; import java.net.SocketAddress; -import static java.nio.channels.SelectionKey.*; +import static java.nio.channels.SelectionKey.OP_CONNECT; /** * Byte Channel Connector for UDT Streams. @@ -149,6 +149,11 @@ protected int doWriteBytes(final ByteBuf byteBuf) throws Exception { return byteBuf.readBytes(javaChannel(), expectedWrittenBytes); } + @Override + protected ChannelFuture shutdownInput() { + return newFailedFuture(new UnsupportedOperationException("shutdownInput")); + } + @Override protected long doWriteFileRegion(FileRegion region) throws Exception { throw new UnsupportedOperationException(); diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java index 69f99b15758..cc59b84d8a7 100644 --- a/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java +++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java @@ -19,6 +19,7 @@ import io.netty.buffer.ByteBufAllocator; import io.netty.channel.Channel; import io.netty.channel.ChannelConfig; +import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelOption; import io.netty.channel.ChannelOutboundBuffer; import io.netty.channel.ChannelPipeline; @@ -52,6 +53,11 @@ protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) { super(parent, ch, SelectionKey.OP_READ); } + /** + * Shutdown the input side of the channel. 
+ */ + protected abstract ChannelFuture shutdownInput(); + @Override protected AbstractNioUnsafe newUnsafe() { return new NioByteUnsafe(); @@ -60,10 +66,10 @@ protected AbstractNioUnsafe newUnsafe() { protected class NioByteUnsafe extends AbstractNioUnsafe { private void closeOnRead(ChannelPipeline pipeline) { - SelectionKey key = selectionKey(); - setInputShutdown(); if (isOpen()) { if (Boolean.TRUE.equals(config().getOption(ChannelOption.ALLOW_HALF_CLOSURE))) { + shutdownInput(); + SelectionKey key = selectionKey(); key.interestOps(key.interestOps() & ~readInterestOp); pipeline.fireUserEventTriggered(ChannelInputShutdownEvent.INSTANCE); } else { diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java index 582e6ae6faf..8de66aba187 100644 --- a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java +++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java @@ -60,7 +60,6 @@ public abstract class AbstractNioChannel extends AbstractChannel { private final SelectableChannel ch; protected final int readInterestOp; volatile SelectionKey selectionKey; - private volatile boolean inputShutdown; boolean readPending; private final Runnable clearReadPendingRunnable = new Runnable() { @Override @@ -197,20 +196,6 @@ private void clearReadPending0() { ((AbstractNioUnsafe) unsafe()).removeReadOp(); } - /** - * Return {@code true} if the input of this {@link Channel} is shutdown - */ - protected boolean isInputShutdown() { - return inputShutdown; - } - - /** - * Shutdown the input of this {@link Channel}. - */ - void setInputShutdown() { - inputShutdown = true; - } - /** * Special {@link Unsafe} sub-type which allows to access the underlying {@link SelectableChannel} */ @@ -422,10 +407,6 @@ protected void doDeregister() throws Exception { @Override protected void doBeginRead() throws Exception { // Channel.read() or ChannelHandlerContext.read() was called - if (inputShutdown) { - return; - } - final SelectionKey selectionKey = this.selectionKey; if (!selectionKey.isValid()) { return; diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java index 1d606e075b1..1c7f92aa2b1 100644 --- a/transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java +++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioMessageChannel.java @@ -33,6 +33,7 @@ * {@link AbstractNioChannel} base class for {@link Channel}s that operate on messages. 
*/ public abstract class AbstractNioMessageChannel extends AbstractNioChannel { + boolean inputShutdown; /** * @see {@link AbstractNioChannel#AbstractNioChannel(Channel, SelectableChannel, int)} @@ -46,6 +47,14 @@ protected AbstractNioUnsafe newUnsafe() { return new NioMessageUnsafe(); } + @Override + protected void doBeginRead() throws Exception { + if (inputShutdown) { + return; + } + super.doBeginRead(); + } + private final class NioMessageUnsafe extends AbstractNioUnsafe { private final List<Object> readBuf = new ArrayList<Object>(); @@ -98,7 +107,7 @@ public void read() { } if (closed) { - setInputShutdown(); + inputShutdown = true; if (isOpen()) { close(voidPromise()); } diff --git a/transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java b/transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java index 8c10cf2b206..15da4099b6b 100644 --- a/transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java +++ b/transport/src/main/java/io/netty/channel/oio/AbstractOioByteChannel.java @@ -19,6 +19,7 @@ import io.netty.buffer.ByteBufAllocator; import io.netty.channel.Channel; import io.netty.channel.ChannelConfig; +import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelMetadata; import io.netty.channel.ChannelOption; import io.netty.channel.ChannelOutboundBuffer; @@ -40,8 +41,6 @@ public abstract class AbstractOioByteChannel extends AbstractOioChannel { " (expected: " + StringUtil.simpleClassName(ByteBuf.class) + ", " + StringUtil.simpleClassName(FileRegion.class) + ')'; - private volatile boolean inputShutdown; - /** * @see AbstractOioByteChannel#AbstractOioByteChannel(Channel) */ @@ -49,39 +48,27 @@ protected AbstractOioByteChannel(Channel parent) { super(parent); } - protected boolean isInputShutdown() { - return inputShutdown; - } - @Override public ChannelMetadata metadata() { return METADATA; } /** - * Check if the input was shutdown and if so return {@code true}. The default implementation sleeps also for - * {@link #SO_TIMEOUT} milliseconds to simulate some blocking. + * Determine if the input side of this channel is shutdown. + * @return {@code true} if the input side of this channel is shutdown. */ - protected boolean checkInputShutdown() { - if (inputShutdown) { - try { - Thread.sleep(SO_TIMEOUT); - } catch (InterruptedException e) { - // ignore - } - return true; - } - return false; - } + protected abstract boolean isInputShutdown(); - void setInputShutdown() { - inputShutdown = true; - } + /** + * Shutdown the input side of this channel. + * @return A channel future that will complete when the shutdown is complete. 
+ */ + protected abstract ChannelFuture shutdownInput(); private void closeOnRead(ChannelPipeline pipeline) { - setInputShutdown(); if (isOpen()) { if (Boolean.TRUE.equals(config().getOption(ChannelOption.ALLOW_HALF_CLOSURE))) { + shutdownInput(); pipeline.fireUserEventTriggered(ChannelInputShutdownEvent.INSTANCE); } else { unsafe().close(unsafe().voidPromise()); diff --git a/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java b/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java index d34ec36bff1..4770d22511a 100644 --- a/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java +++ b/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java @@ -32,6 +32,18 @@ public interface DuplexChannel extends Channel { */ boolean isInputShutdown(); + /** + * @see Socket#shutdownInput() + */ + ChannelFuture shutdownInput(); + + /** + * Will shutdown the input and notify {@link ChannelPromise}. + * + * @see Socket#shutdownInput() + */ + ChannelFuture shutdownInput(ChannelPromise promise); + /** * @see Socket#isOutputShutdown() */ @@ -43,9 +55,27 @@ public interface DuplexChannel extends Channel { ChannelFuture shutdownOutput(); /** - * @see Socket#shutdownOutput() + * Will shutdown the output and notify {@link ChannelPromise}. * - * Will notify the given {@link ChannelPromise} + * @see Socket#shutdownOutput() */ ChannelFuture shutdownOutput(ChannelPromise promise); + + /** + * Determine if both the input and output of this channel have been shutdown. + */ + boolean isShutdown(); + + /** + * Will shutdown the input and output sides of this channel. + * @return will be completed when both shutdown operations complete. + */ + ChannelFuture shutdown(); + + /** + * Will shutdown the input and output sides of this channel. + * @param promise will be completed when both shutdown operations complete. + * @return will be completed when both shutdown operations complete. + */ + ChannelFuture shutdown(ChannelPromise promise); } diff --git a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java index 86253ab8510..dc1cbe1cae2 100644 --- a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java +++ b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java @@ -31,6 +31,8 @@ import io.netty.channel.socket.SocketChannelConfig; import io.netty.util.concurrent.GlobalEventExecutor; import io.netty.util.internal.OneTimeTask; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; import java.io.IOException; import java.net.InetSocketAddress; @@ -46,7 +48,7 @@ * {@link io.netty.channel.socket.SocketChannel} which uses NIO selector based implementation. 
*/ public class NioSocketChannel extends AbstractNioByteChannel implements io.netty.channel.socket.SocketChannel { - + private static final InternalLogger logger = InternalLoggerFactory.getInstance(NioSocketChannel.class); private static final ChannelMetadata METADATA = new ChannelMetadata(false, 16); private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider(); @@ -124,9 +126,20 @@ public boolean isActive() { return ch.isOpen() && ch.isConnected(); } + @Override + public boolean isOutputShutdown() { + return javaChannel().socket().isOutputShutdown() || !isActive(); + } + @Override public boolean isInputShutdown() { - return super.isInputShutdown(); + return javaChannel().socket().isInputShutdown() || !isActive(); + } + + @Override + public boolean isShutdown() { + Socket socket = javaChannel().socket(); + return socket.isInputShutdown() && socket.isOutputShutdown() || !isActive(); } @Override @@ -139,11 +152,6 @@ public InetSocketAddress remoteAddress() { return (InetSocketAddress) super.remoteAddress(); } - @Override - public boolean isOutputShutdown() { - return javaChannel().socket().isOutputShutdown() || !isActive(); - } - @Override public ChannelFuture shutdownOutput() { return shutdownOutput(newPromise()); @@ -175,6 +183,68 @@ public void run() { return promise; } + @Override + public ChannelFuture shutdownInput() { + return shutdownInput(newPromise()); + } + + @Override + public ChannelFuture shutdownInput(final ChannelPromise promise) { + Executor closeExecutor = ((NioSocketChannelUnsafe) unsafe()).prepareToClose(); + if (closeExecutor != null) { + closeExecutor.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput0(promise); + } + }); + } else { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdownInput0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput0(promise); + } + }); + } + } + return promise; + } + + @Override + public ChannelFuture shutdown() { + return shutdown(newPromise()); + } + + @Override + public ChannelFuture shutdown(final ChannelPromise promise) { + Executor closeExecutor = ((NioSocketChannelUnsafe) unsafe()).prepareToClose(); + if (closeExecutor != null) { + closeExecutor.execute(new OneTimeTask() { + @Override + public void run() { + shutdown0(promise); + } + }); + } else { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdown0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdown0(promise); + } + }); + } + } + return promise; + } + private void shutdownOutput0(final ChannelPromise promise) { try { javaChannel().socket().shutdownOutput(); @@ -184,6 +254,41 @@ private void shutdownOutput0(final ChannelPromise promise) { } } + private void shutdownInput0(final ChannelPromise promise) { + try { + javaChannel().socket().shutdownInput(); + promise.setSuccess(); + } catch (Throwable t) { + promise.setFailure(t); + } + } + + private void shutdown0(final ChannelPromise promise) { + Socket socket = javaChannel().socket(); + Throwable cause = null; + try { + socket.shutdownOutput(); + } catch (Throwable t) { + cause = t; + } + try { + socket.shutdownInput(); + } catch (Throwable t) { + if (cause == null) { + promise.setFailure(t); + } else { + logger.debug("Exception suppressed because a previous exception occurred.", t); + promise.setFailure(cause); + } + return; + } + if (cause == null) { + promise.setSuccess(); + } else { + promise.setFailure(cause); + } + } + @Override 
protected SocketAddress localAddress0() { return javaChannel().socket().getLocalSocketAddress(); diff --git a/transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java b/transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java index 3b34e213ecc..3e6cef632e4 100644 --- a/transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java +++ b/transport/src/main/java/io/netty/channel/socket/oio/OioSocketChannel.java @@ -38,11 +38,9 @@ /** * A {@link SocketChannel} which is using Old-Blocking-IO */ -public class OioSocketChannel extends OioByteStreamChannel - implements SocketChannel { +public class OioSocketChannel extends OioByteStreamChannel implements SocketChannel { - private static final InternalLogger logger = - InternalLoggerFactory.getInstance(OioSocketChannel.class); + private static final InternalLogger logger = InternalLoggerFactory.getInstance(OioSocketChannel.class); private final Socket socket; private final OioSocketChannelConfig config; @@ -115,14 +113,19 @@ public boolean isActive() { return !socket.isClosed() && socket.isConnected(); } + @Override + public boolean isOutputShutdown() { + return socket.isOutputShutdown() || !isActive(); + } + @Override public boolean isInputShutdown() { - return super.isInputShutdown(); + return socket.isInputShutdown() || !isActive(); } @Override - public boolean isOutputShutdown() { - return socket.isOutputShutdown() || !isActive(); + public boolean isShutdown() { + return socket.isInputShutdown() && socket.isOutputShutdown() || !isActive(); } @Override @@ -130,6 +133,16 @@ public ChannelFuture shutdownOutput() { return shutdownOutput(newPromise()); } + @Override + public ChannelFuture shutdownInput() { + return shutdownInput(newPromise()); + } + + @Override + public ChannelFuture shutdown() { + return shutdown(newPromise()); + } + @Override protected int doReadBytes(ByteBuf buf) throws Exception { if (socket.isClosed()) { @@ -143,24 +156,94 @@ protected int doReadBytes(ByteBuf buf) throws Exception { } @Override - public ChannelFuture shutdownOutput(final ChannelPromise future) { + public ChannelFuture shutdownOutput(final ChannelPromise promise) { EventLoop loop = eventLoop(); if (loop.inEventLoop()) { - try { - socket.shutdownOutput(); - future.setSuccess(); - } catch (Throwable t) { - future.setFailure(t); - } + shutdownOutput0(promise); } else { loop.execute(new OneTimeTask() { @Override public void run() { - shutdownOutput(future); + shutdownOutput0(promise); } }); } - return future; + return promise; + } + + private void shutdownOutput0(ChannelPromise promise) { + try { + socket.shutdownOutput(); + promise.setSuccess(); + } catch (Throwable t) { + promise.setFailure(t); + } + } + + @Override + public ChannelFuture shutdownInput(final ChannelPromise promise) { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdownInput0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdownInput0(promise); + } + }); + } + return promise; + } + + private void shutdownInput0(ChannelPromise promise) { + try { + socket.shutdownInput(); + promise.setSuccess(); + } catch (Throwable t) { + promise.setFailure(t); + } + } + + @Override + public ChannelFuture shutdown(final ChannelPromise promise) { + EventLoop loop = eventLoop(); + if (loop.inEventLoop()) { + shutdown0(promise); + } else { + loop.execute(new OneTimeTask() { + @Override + public void run() { + shutdown0(promise); + } + }); + } + return promise; + } + + private void shutdown0(ChannelPromise 
promise) { + Throwable cause = null; + try { + socket.shutdownOutput(); + } catch (Throwable t) { + cause = t; + } + try { + socket.shutdownInput(); + } catch (Throwable t) { + if (cause == null) { + promise.setFailure(t); + } else { + logger.debug("Exception suppressed because a previous exception occurred.", t); + promise.setFailure(cause); + } + return; + } + if (cause == null) { + promise.setSuccess(); + } else { + promise.setFailure(cause); + } } @Override @@ -221,7 +304,6 @@ protected void doClose() throws Exception { socket.close(); } - @Override protected boolean checkInputShutdown() { if (isInputShutdown()) { try {
diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputByPeerTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputByPeerTest.java new file mode 100644 index 00000000000..5a7bdc1b93f --- /dev/null +++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputByPeerTest.java @@ -0,0 +1,29 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel.epoll; + +import io.netty.bootstrap.ServerBootstrap; +import io.netty.testsuite.transport.TestsuitePermutation; +import io.netty.testsuite.transport.socket.SocketShutdownOutputByPeerTest; + +import java.util.List; + +public class EpollSocketShutdownOutputByPeerTest extends SocketShutdownOutputByPeerTest { + @Override + protected List<TestsuitePermutation.BootstrapFactory<ServerBootstrap>> newFactories() { + return EpollSocketTestPermutation.INSTANCE.serverSocket(); + } +} diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputBySelfTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputBySelfTest.java new file mode 100644 index 00000000000..3ad80e472bd --- /dev/null +++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketShutdownOutputBySelfTest.java @@ -0,0 +1,29 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel.epoll; + +import io.netty.bootstrap.Bootstrap; +import io.netty.testsuite.transport.TestsuitePermutation; +import io.netty.testsuite.transport.socket.SocketShutdownOutputBySelfTest; + +import java.util.List; + +public class EpollSocketShutdownOutputBySelfTest extends SocketShutdownOutputBySelfTest { + @Override + protected List<TestsuitePermutation.BootstrapFactory<Bootstrap>> newFactories() { + return EpollSocketTestPermutation.INSTANCE.clientSocket(); + } +}
train
train
2016-05-17T22:42:16
"2016-04-25T23:24:06Z"
vrozov
val
netty/netty/4211_5270
netty/netty
netty/netty/4211
netty/netty/5270
[ "timestamp(timedelta=28.0, similarity=0.8719758372520758)" ]
1cb706ac932bab732257e02fc095d91ff7ed3788
a44783ab3084df6f91f6008148bd1901883a1121
[ "Note that those `Submitter`s are just `int` wrappers, so I think the leak is more of an annoying warning than a severe issue.\n", "Also the Netty `ForkJoinPool` is just a copy of the OpenJDK FJP. So maybe a simple up date to the latest OpenJDK version would resolve this.\n", "@buchgr I wonder if it's actually a fork of Doug Lea's initial work for JSR166-e, before it was merged into OpenJDK and modified again.\n\nNetty's version has this [ThreadLocal<Subscriber>](https://github.com/netty/netty/blob/netty-4.0.31.Final/common/src/main/java/io/netty/util/internal/chmv8/ForkJoinPool.java#L1082) than [OpenJDK hasn't](http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/concurrent/ForkJoinPool.java#l1098).\n", "@slandelle good point. I am not exactly sure. Just saying that updating it might resolve such things.\n", "@buchgr I'm not sure you guys will want to fork OpenJDK as it's GPLv2. You might want to use [Doug Lea's JSR166](http://g.oswego.edu/dl/concurrency-interest/) work that's Creative Commons - public domain.\n", "But I agree, upgrading should fix the issue as Tomcat doesn't report it against regular CHMv8.\n", "@slandelle - Thanks for the heads up!\n\n@normanmaurer - I'll assign to you for now...pending discussion about 5.0.\n", "Fixed by https://github.com/netty/netty/pull/5270\n" ]
[]
"2016-05-18T02:06:41Z"
[ "defect" ]
[ForkJoinPool] Thread Local Leak detected
The Atmosphere Framework bundles the [ForkJoinPool](https://github.com/netty/netty/blob/master/common/src/main/java/io/netty/util/internal/chmv8/ForkJoinPool.java) implementation of Netty. It seems there is a Thread Local leak as reported [here](https://github.com/Atmosphere/atmosphere/issues/2048) ``` java SEVERE: The web application [/atmosphere-chat] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@41a80e5a]) and a value of type [org.atmosphere.util.chmv8.ForkJoinPool.Submitter] (value [org.atmosphere.util.chmv8.ForkJoinPool$Submitter@450ae3fb]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. ``` When using the JDK7/8 implementation the leak doesn't occurs (no error message) so I suspect the current implementation leaks.
[ "common/src/main/java/io/netty/util/internal/chmv8/ForkJoinPool.java" ]
[ "common/src/main/java/io/netty/util/internal/chmv8/ForkJoinPool.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/chmv8/ForkJoinPool.java b/common/src/main/java/io/netty/util/internal/chmv8/ForkJoinPool.java index bfe4e9d269d..24bded99e5b 100644 --- a/common/src/main/java/io/netty/util/internal/chmv8/ForkJoinPool.java +++ b/common/src/main/java/io/netty/util/internal/chmv8/ForkJoinPool.java @@ -1078,8 +1078,13 @@ final boolean isApparentlyUnblocked() { * to avoid contention in one pool is likely to hold for others. * Lazily initialized on first submission (but null-checked * in other contexts to avoid unnecessary initialization). + * + * Note: this was changed to fix https://github.com/netty/netty/issues/4211 + * Instead of using "ThreadLocal<Submitter>" like jsr166e, just use "ThreadLocal<int[]>" to + * avoid leaking the Submitter's class loader. Here "int[]" is just an array with exactly one + * int. */ - static final ThreadLocal<Submitter> submitters; + static final ThreadLocal<int[]> submitters; /** * Creates a new ForkJoinWorkerThread. This factory is used unless @@ -1478,10 +1483,10 @@ final void deregisterWorker(ForkJoinWorkerThread wt, Throwable ex) { * randomly modified upon collisions using xorshifts, which * requires a non-zero seed. */ - static final class Submitter { - int seed; - Submitter(int s) { seed = s; } - } + //static final class Submitter { + // int seed; + // Submitter(int s) { seed = s; } + //} /** * Unless shutting down, adds the given task to a submission queue @@ -1492,12 +1497,12 @@ static final class Submitter { * @param task the task. Caller must ensure non-null. */ final void externalPush(ForkJoinTask<?> task) { - Submitter z = submitters.get(); + int[] z = submitters.get(); WorkQueue q; int r, m, s, n, am; ForkJoinTask<?>[] a; int ps = plock; WorkQueue[] ws = workQueues; if (z != null && ps > 0 && ws != null && (m = (ws.length - 1)) >= 0 && - (q = ws[m & (r = z.seed) & SQMASK]) != null && r != 0 && + (q = ws[m & (r = z[0]) & SQMASK]) != null && r != 0 && U.compareAndSwapInt(q, QLOCK, 0, 1)) { // lock if ((a = q.array) != null && (am = a.length - 1) > (n = (s = q.top) - q.base)) { @@ -1533,18 +1538,18 @@ final void externalPush(ForkJoinTask<?> task) { */ private void fullExternalPush(ForkJoinTask<?> task) { int r = 0; // random index seed - for (Submitter z = submitters.get();;) { + for (int[] z = submitters.get();;) { WorkQueue[] ws; WorkQueue q; int ps, m, k; if (z == null) { if (U.compareAndSwapInt(this, INDEXSEED, r = indexSeed, r += SEED_INCREMENT) && r != 0) - submitters.set(z = new Submitter(r)); + submitters.set(z = new int[]{ r }); } else if (r == 0) { // move to a different index - r = z.seed; + r = z[0]; r ^= r << 13; // same xorshift as WorkQueues r ^= r >>> 17; - z.seed = r ^= (r << 5); + z[0] = r ^= (r << 5); } if ((ps = plock) < 0) throw new RejectedExecutionException(); @@ -2313,12 +2318,12 @@ private boolean tryTerminate(boolean now, boolean enable) { * least one task. */ static WorkQueue commonSubmitterQueue() { - Submitter z; ForkJoinPool p; WorkQueue[] ws; int m, r; + int[] z; ForkJoinPool p; WorkQueue[] ws; int m, r; return ((z = submitters.get()) != null && (p = common) != null && (ws = p.workQueues) != null && (m = ws.length - 1) >= 0) ? 
- ws[m & z.seed & SQMASK] : null; + ws[m & z[0] & SQMASK] : null; } /** @@ -2326,11 +2331,11 @@ static WorkQueue commonSubmitterQueue() { */ final boolean tryExternalUnpush(ForkJoinTask<?> task) { WorkQueue joiner; ForkJoinTask<?>[] a; int m, s; - Submitter z = submitters.get(); + int[] z = submitters.get(); WorkQueue[] ws = workQueues; boolean popped = false; if (z != null && ws != null && (m = ws.length - 1) >= 0 && - (joiner = ws[z.seed & m & SQMASK]) != null && + (joiner = ws[z[0] & m & SQMASK]) != null && joiner.base != (s = joiner.top) && (a = joiner.array) != null) { long j = (((a.length - 1) & (s - 1)) << ASHIFT) + ABASE; @@ -2349,11 +2354,11 @@ final boolean tryExternalUnpush(ForkJoinTask<?> task) { final int externalHelpComplete(CountedCompleter<?> task) { WorkQueue joiner; int m, j; - Submitter z = submitters.get(); + int[] z = submitters.get(); WorkQueue[] ws = workQueues; int s = 0; if (z != null && ws != null && (m = ws.length - 1) >= 0 && - (joiner = ws[(j = z.seed) & m & SQMASK]) != null && task != null) { + (joiner = ws[(j = z[0]) & m & SQMASK]) != null && task != null) { int scans = m + m + 1; long c = 0L; // for stability check j |= 1; // poll odd queues @@ -3279,7 +3284,7 @@ protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) { throw new Error(e); } - submitters = new ThreadLocal<Submitter>(); + submitters = new ThreadLocal<int[]>(); defaultForkJoinWorkerThreadFactory = new DefaultForkJoinWorkerThreadFactory(); modifyThreadPermission = new RuntimePermission("modifyThread");
null
train
train
2016-05-17T22:42:16
"2015-09-14T15:55:42Z"
jfarcand
val
netty/netty/5320_5321
netty/netty
netty/netty/5320
netty/netty/5321
[ "timestamp(timedelta=22.0, similarity=0.9284216160032346)" ]
48ed70c1aa13a1d9b0a03d617edd1772f6aba314
240dd96dde8aabca777da51178caea73faa1e871
[ "PR https://github.com/netty/netty/pull/5321\n", "Fixed by https://github.com/netty/netty/pull/5321\n" ]
[ "Please define a static byte[] and reuse everytime. Something like:\n\nprivate static final byte[] ID = new byte[] {'n', 'e', 't', 't', 'y'};\n" ]
"2016-05-28T09:02:54Z"
[ "defect" ]
OpensslEngine throws "session id context uninitialized" when used with client authentication
I use netty to implement a webserver. With JDK ssl provider all work well but with open ssl provider i see the following errors when connection is made from a firefox browser and client authentication (mutual ssl) is requested. Without client authentication the error does not occur. <pre> javax.net.ssl.SSLHandshakeException: error:140D9115:SSL routines:ssl_get_prev_session:session id context uninitialized at io.netty.handler.ssl.OpenSslEngine.shutdownWithError(OpenSslEngine.java:575) at io.netty.handler.ssl.OpenSslEngine.sslReadErrorResult(OpenSslEngine.java:778) at io.netty.handler.ssl.OpenSslEngine.unwrap(OpenSslEngine.java:733) at io.netty.handler.ssl.OpenSslEngine.unwrap(OpenSslEngine.java:810) </pre> When used with chrome i see sometimes also this error <pre> error:140A1175:SSL routines:ssl_bytes_to_cipher_list:inappropriate fallback </pre> When used with safari i see sometimes also this error <pre> error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol </pre> After a little bit of research this issue look like: http://openssl.6102.n7.nabble.com/error-140D9115-SSL-routines-SSL-GET-PREV-SESSION-session-id-context-uninitialized-td5749.html https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=729480 https://jira.mongodb.org/browse/SERVER-9307 Netty: 4.0.36 with Fork15 of netty tcnative and OpenSSL 1.0.2e 3 Dec 2015
[ "handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java index 156843eb63c..f069eaa4927 100644 --- a/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java @@ -36,6 +36,7 @@ * A server-side {@link SslContext} which uses OpenSSL's SSL/TLS implementation. */ public final class OpenSslServerContext extends OpenSslContext { + private static final byte[] ID = new byte[] {'n', 'e', 't', 't', 'y'}; private final OpenSslServerSessionContext sessionContext; /** @@ -439,6 +440,7 @@ void verify(OpenSslEngine engine, X509Certificate[] peerCerts, String auth) } } sessionContext = new OpenSslServerSessionContext(ctx); + sessionContext.setSessionIdContext(ID); success = true; } finally { if (!success) {
null
train
train
2016-05-28T08:02:07
"2016-05-28T08:19:03Z"
floragunn
val
netty/netty/5119_5326
netty/netty
netty/netty/5119
netty/netty/5326
[ "timestamp(timedelta=18.0, similarity=0.9191116549719136)" ]
db6b72da199f80c64d19d2d9bb1dfeec281dfd50
283720ca33294b3bd04783835ce7c1b68a2481e2
[ "@MilosFabian we love PRs.. so maybe you can provide one as I don't have any good OSGI knowledge :(\n", "Alright, I will work on it :) \n", "Thanks ❤️!\n\n> Am 12.04.2016 um 09:33 schrieb MilosFabian [email protected]:\n> \n> Alright, I will work on it :)\n> \n> —\n> You are receiving this because you commented.\n> Reply to this email directly or view it on GitHub\n", "I am sorry for a delay...finally I found some time take a look at it.\nI managed produce bundle (include manifest in jar) for netty-transport-native-epoll-*-linux-x86_64\nHowever, when I was trying to use the bundle in OSGi (Apache Karaf) I hit a problem when loading native library.\nThe Epoll.isAvailable() returns false with following unavailability cause.\n\njava.lang.ClassNotFoundException: io.netty.channel.epoll.NativeStaticallyReferencedJniMethods\n at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)\n at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)\n at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)\n at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:357)[:1.8.0_45]\n at java.lang.ClassLoader$NativeLibrary.load(Native Method)[:1.8.0_45]\n at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1937)[:1.8.0_45]\n at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1822)[:1.8.0_45]\n at java.lang.Runtime.load0(Runtime.java:809)[:1.8.0_45]\n at java.lang.System.load(System.java:1086)[:1.8.0_45]\n at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:214)\n at io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:261)\n at io.netty.channel.epoll.Native.<clinit>(Native.java:62)\n at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:33)\n\nMore investigation needed.\n", "Fixed by https://github.com/netty/netty/pull/5326\n" ]
[]
"2016-05-30T17:34:17Z"
[ "defect" ]
netty-transport-native-epoll-*-linux-x86_64.jar is not OSGi bundle
Hello, basically, reopening https://github.com/netty/netty/issues/3781 I am working on OpenDaylight project and we would like use netty-transport-native-epoll due to the TCP MD5 signature support. Currently we are using Netty version 4.0.33.Final and bumping to 4.0.36.Final right now. Unfortunately, netty-transport-native-epoll-*-linux-x86_64.jar is not a bundle and since the OpenDaylight is running in OSGi container (Apache Karaf) we are not able use it directly as a bundle (like other Netty artifacts). It would be great for us if the jar is packed as a bundle as well. Thanks, Milos Fabian
[ "common/pom.xml", "transport-native-epoll/pom.xml" ]
[ "common/pom.xml", "transport-native-epoll/pom.xml" ]
[]
diff --git a/common/pom.xml b/common/pom.xml index 6385c932854..d0b99399e5b 100644 --- a/common/pom.xml +++ b/common/pom.xml @@ -138,6 +138,27 @@ </execution> </executions> </plugin> + <plugin> + <groupId>org.apache.felix</groupId> + <artifactId>maven-bundle-plugin</artifactId> + <version>2.4.0</version> + <executions> + <execution> + <id>generate-manifest</id> + <phase>process-classes</phase> + <goals> + <goal>manifest</goal> + </goals> + <configuration> + <instructions> + <!-- NativeLibraryLoader can be used to manually load native libraries from other bundles that this bundle does not depend on, + hence use DynamicImport-Package instruction to ensure the loading is successful --> + <DynamicImport-Package>*</DynamicImport-Package> + </instructions> + </configuration> + </execution> + </executions> + </plugin> </plugins> </build> </project> diff --git a/transport-native-epoll/pom.xml b/transport-native-epoll/pom.xml index a705b08ebaf..b12ba73623b 100644 --- a/transport-native-epoll/pom.xml +++ b/transport-native-epoll/pom.xml @@ -105,6 +105,13 @@ <goal>jar</goal> </goals> <configuration> + <archive> + <manifest> + <addDefaultImplementationEntries>true</addDefaultImplementationEntries> + </manifest> + <index>true</index> + <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile> + </archive> <classifier>${epoll.classifier}</classifier> </configuration> </execution>
null
train
train
2016-05-30T15:10:15
"2016-04-11T09:52:20Z"
MilosFabian
val
netty/netty/5280_5332
netty/netty
netty/netty/5280
netty/netty/5332
[ "timestamp(timedelta=2632.0, similarity=0.9519604256868078)" ]
f6ad9df8acfad15e2a0986d9d2c91730f138163d
bc4b2b343e24baa5442e0410dbfadee054a40c82
[ "@maseev I think you should try latest 4.1 branch and see how it behaves as we now also pool Slices if you use the PooledByteBufAllocator here.\n", "@maseev maybe you want to send a PR which implements an \"optimised\" `LengthFieldBasedFrameDecoder` and `FixedLengthFrameDecoder` ?\n", "I don't think your handler can be Sharable with that buffer bytebuf. \n" ]
[ "Maybe the check can be moved to some constructor?\n", "No need to assign writeIndex here I guess?\n", "Maybe I am missing something, but I assume a decoder like this one may throw every frame decoded to out via buffer.retainSlice(), etc. I could not quite follow the point here, even after reading the tests. Correct me if I am wrong. :-)\n", "nit: mismatch of the if condition and the exception message?\n", "I don't think so. Either way, you have to call an appropriate method which depends on an offset value. The same approach is used in LengthFieldBasedFrameDecoder. Seems legit for me.\n", "Thanks! Will fix that very soon.\n", "Do you mean `buffer.retain().slice()`? Well, it's not how it supposed to be working. The whole point of this decoder is to batch all incoming ByteBuf messages into one preallocated ByteBuf object. When the `channelReadComplete` method gets called we simply copy data from the buffer to another chunk buffer which would contain precisely N messages (where N >= 1) and pass it to next pipeline element. By doing so we're able to reuse this buffer, again and again, no matter how many messages we get. If I chose retain-slice approach I would have to keep an eye on the buffer reference counter because slice creates only a view of buffer sub-region, so you wouldn't be able to reuse it, if for example, the channel wasn't ready for write (channel.isWritable() returns false).\n", "See my comment below.\n" ]
"2016-05-31T15:38:40Z"
[]
Feature request: Optimized versions of LengthFieldBasedFrameDecoder and FixedLengthFrameDecoder
##
[]
[ "codec/src/main/java/io/netty/handler/codec/BulkFixedLengthFrameDecoder.java", "codec/src/main/java/io/netty/handler/codec/BulkFrameDecoder.java", "codec/src/main/java/io/netty/handler/codec/BulkLengthFieldBasedFrameDecoder.java" ]
[ "codec/src/test/java/io/netty/handler/codec/BulkFixedLengthFrameDecoderTest.java", "codec/src/test/java/io/netty/handler/codec/BulkLengthFieldBasedFrameDecoderTest.java" ]
diff --git a/codec/src/main/java/io/netty/handler/codec/BulkFixedLengthFrameDecoder.java b/codec/src/main/java/io/netty/handler/codec/BulkFixedLengthFrameDecoder.java new file mode 100644 index 00000000000..4c3cfb8866d --- /dev/null +++ b/codec/src/main/java/io/netty/handler/codec/BulkFixedLengthFrameDecoder.java @@ -0,0 +1,73 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ + +package io.netty.handler.codec; + +import io.netty.buffer.ByteBuf; +import io.netty.channel.ChannelHandlerContext; + +import java.util.List; + +/** + * A decoder that mostly mimics {@link FixedLengthFrameDecoder} semantics. The only difference is this decoder batches + * incoming {@link ByteBuf} messages and then decodes them at once, passing a big {@link ByteBuf} which can possibly + * contain N messages (where N >= 1) through the pipeline at once. + */ +public class BulkFixedLengthFrameDecoder extends BulkFrameDecoder { + + final int frameLength; + + /** + * Creates a new instance. + * + * @param frameLength the length of the frame + */ + public BulkFixedLengthFrameDecoder(int frameLength) { + this(0, frameLength); + } + + /** + * Creates a new instance. + * + * @param startBufferSize the start buffer size which is used for batching incoming {@link ByteBuf} messages. Size + * of this buffer must be greater or equal to zero. If the size of this buffer is zero then the default size will be + * used (= 256 bytes). + * @param frameLength the length of the frame + */ + public BulkFixedLengthFrameDecoder(int startBufferSize, int frameLength) { + super(startBufferSize); + + if (frameLength <= 0) { + throw new IllegalArgumentException("frameLength must be a positive integer: " + frameLength); + } + + this.frameLength = frameLength; + } + + @Override + protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out) { + final int readableBytes = buffer.readableBytes(); + + if (readableBytes < frameLength) { + return; + } + + final int chunkLength = buffer.readableBytes() / frameLength * frameLength; + final ByteBuf chunk = ctx.alloc().buffer(chunkLength, chunkLength).writeBytes(buffer, chunkLength); + + out.add(chunk); + } +} diff --git a/codec/src/main/java/io/netty/handler/codec/BulkFrameDecoder.java b/codec/src/main/java/io/netty/handler/codec/BulkFrameDecoder.java new file mode 100644 index 00000000000..cfc65229d8a --- /dev/null +++ b/codec/src/main/java/io/netty/handler/codec/BulkFrameDecoder.java @@ -0,0 +1,114 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ + +package io.netty.handler.codec; + +import io.netty.buffer.ByteBuf; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; + +import java.util.List; + +public abstract class BulkFrameDecoder extends ChannelInboundHandlerAdapter { + + private final int startBufferSize; + + private ByteBuf buffer; + + /** + * @param startBufferSize the start buffer size which is used for batching incoming {@link ByteBuf} messages. Size + * of this buffer must be greater or equal to zero. If the size of this buffer is zero then the default size will be + * used (= 256 bytes). + */ + protected BulkFrameDecoder(int startBufferSize) { + if (startBufferSize < 0) { + throw new IllegalArgumentException("startBufferSize must be greater or equal to zero: " + startBufferSize); + } + + this.startBufferSize = startBufferSize; + + CodecUtil.ensureNotSharable(this); + } + + protected abstract void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out); + + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + buffer = internalBuffer(ctx, startBufferSize); + + ctx.fireChannelActive(); + } + + @Override + public void channelInactive(ChannelHandlerContext ctx) throws Exception { + releaseInternalBuffer(); + ctx.fireChannelInactive(); + } + + @Override + public void handlerRemoved(ChannelHandlerContext ctx) throws Exception { + releaseInternalBuffer(); + } + + @Override + public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception { + if (msg instanceof ByteBuf) { + final ByteBuf frame = (ByteBuf) msg; + + try { + buffer.writeBytes(frame); + } finally { + frame.release(); + } + } else { + ctx.fireChannelRead(msg); + } + } + + @Override + public void channelReadComplete(ChannelHandlerContext ctx) throws Exception { + CodecOutputList out = CodecOutputList.newInstance(); + + try { + decode(ctx, buffer, out); + } finally { + buffer.discardReadBytes(); + fireChannelRead(ctx, out); + out.recycle(); + } + } + + private static void fireChannelRead(ChannelHandlerContext ctx, List<Object> msgs) { + for (int i = 0, size = msgs.size(); i < size; ++i) { + ctx.fireChannelRead(msgs.get(i)); + } + } + + private static ByteBuf internalBuffer(ChannelHandlerContext ctx, int bufferSize) { + if (bufferSize == 0) { + return ctx.alloc().buffer(); + } + + return ctx.alloc().buffer(bufferSize); + } + + private void releaseInternalBuffer() { + if (buffer != null) { + buffer.release(); + buffer = null; + } + } +} diff --git a/codec/src/main/java/io/netty/handler/codec/BulkLengthFieldBasedFrameDecoder.java b/codec/src/main/java/io/netty/handler/codec/BulkLengthFieldBasedFrameDecoder.java new file mode 100644 index 00000000000..5ec47fa65f4 --- /dev/null +++ b/codec/src/main/java/io/netty/handler/codec/BulkLengthFieldBasedFrameDecoder.java @@ -0,0 +1,160 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.codec; + +import io.netty.buffer.ByteBuf; +import io.netty.channel.ChannelHandlerContext; + +import java.util.List; + +/** + * A decoder that mostly mimics {@link LengthFieldBasedFrameDecoder} semantics. The only difference is this decoder + * batches incoming {@link ByteBuf} messages and then decodes them at once, passing a big {@link ByteBuf} which can + * possibly contain N messages (where N >= 1) through the pipeline at once. + */ +public class BulkLengthFieldBasedFrameDecoder extends BulkFrameDecoder { + + private final int maxFrameLength; + private final int lengthFieldOffset; + private final int lengthFieldLength; + private final int lengthFieldEndOffset; + + /** + * Creates a new instance. + * + * @param maxFrameLength the maximum length of the frame. If the length of the frame is greater than this value, + * {@link TooLongFrameException} will be thrown. + * @param lengthFieldOffset the offset of the length field + * @param lengthFieldLength the length of the length field + */ + public BulkLengthFieldBasedFrameDecoder(int maxFrameLength, int lengthFieldOffset, int lengthFieldLength) { + this(0, maxFrameLength, lengthFieldOffset, lengthFieldLength); + } + + /** + * Creates a new instance. + * + * @param startBufferSize the start buffer size which is used for batching incoming {@link ByteBuf} messages. Size + * of this buffer must be greater or equal to zero. If the size of this buffer is zero then the default size will be + * used (= 256 bytes). + * @param maxFrameLength the maximum length of the frame. If the length of the frame is greater than this value, + * {@link TooLongFrameException} will be thrown. 
+ * @param lengthFieldOffset the offset of the length field + * @param lengthFieldLength the length of the length field + */ + public BulkLengthFieldBasedFrameDecoder(int startBufferSize, int maxFrameLength, int lengthFieldOffset, + int lengthFieldLength) { + super(startBufferSize); + + if (maxFrameLength <= 0) { + throw new IllegalArgumentException( + "maxFrameLength must be a positive integer: " + + maxFrameLength); + } + + if (lengthFieldOffset < 0) { + throw new IllegalArgumentException( + "lengthFieldOffset must be a non-negative integer: " + + lengthFieldOffset); + } + + if (lengthFieldOffset > maxFrameLength - lengthFieldLength) { + throw new IllegalArgumentException( + "maxFrameLength (" + maxFrameLength + ") " + + "must be equal to or greater than " + + "lengthFieldOffset (" + lengthFieldOffset + ") + " + + "lengthFieldLength (" + lengthFieldLength + ")."); + } + + this.maxFrameLength = maxFrameLength; + this.lengthFieldOffset = lengthFieldOffset; + this.lengthFieldLength = lengthFieldLength; + lengthFieldEndOffset = lengthFieldOffset + lengthFieldLength; + } + + @Override + protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out) { + int readableBytes = buffer.readableBytes(); + int rdIdx = buffer.readerIndex(); + + while (readableBytes >= lengthFieldEndOffset) { + int actualLengthFieldOffset = rdIdx + lengthFieldOffset; + long frameLength = getUnadjustedFrameLength(buffer, actualLengthFieldOffset, lengthFieldLength); + + if (frameLength < 0) { + throw new CorruptedFrameException( + "negative pre-adjustment length field: " + frameLength); + } + + frameLength += lengthFieldEndOffset; + + if (frameLength > maxFrameLength) { + throw new TooLongFrameException( + "Adjusted frame length exceeds " + maxFrameLength + + ": " + frameLength + " - discarded"); + } + + int frameLengthInt = (int) frameLength; + + if (readableBytes < frameLengthInt) { + break; + } + + readableBytes -= frameLengthInt; + rdIdx += frameLengthInt; + } + + if (rdIdx == 0) { + return; + } + + final int previousWriterIndex = buffer.writerIndex(); + + buffer.writerIndex(rdIdx); + + final ByteBuf chunk = ctx.alloc().buffer(rdIdx, rdIdx).writeBytes(buffer, rdIdx); + + buffer.writerIndex(previousWriterIndex); + buffer.discardReadBytes(); + + out.add(chunk); + } + + protected long getUnadjustedFrameLength(ByteBuf buf, int offset, int length) { + long frameLength; + switch (length) { + case 1: + frameLength = buf.getUnsignedByte(offset); + break; + case 2: + frameLength = buf.getUnsignedShort(offset); + break; + case 3: + frameLength = buf.getUnsignedMedium(offset); + break; + case 4: + frameLength = buf.getUnsignedInt(offset); + break; + case 8: + frameLength = buf.getLong(offset); + break; + default: + throw new DecoderException( + "unsupported lengthFieldLength: " + lengthFieldLength + " (expected: 1, 2, 3, 4, or 8)"); + } + return frameLength; + } +}
diff --git a/codec/src/test/java/io/netty/handler/codec/BulkFixedLengthFrameDecoderTest.java b/codec/src/test/java/io/netty/handler/codec/BulkFixedLengthFrameDecoderTest.java new file mode 100644 index 00000000000..d6ba85f2066 --- /dev/null +++ b/codec/src/test/java/io/netty/handler/codec/BulkFixedLengthFrameDecoderTest.java @@ -0,0 +1,108 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.codec; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; +import io.netty.channel.embedded.EmbeddedChannel; +import org.junit.Test; + +import static org.hamcrest.CoreMatchers.*; +import static org.junit.Assert.*; + +public class BulkFixedLengthFrameDecoderTest { + + @Test(expected = IllegalArgumentException.class) + public void decoderWithZeroFrameLengthShouldThrowException() { + new BulkFixedLengthFrameDecoder(0); + } + + @Test(expected = IllegalArgumentException.class) + public void decoderWithNegativeFrameLengthShouldThrowException() { + new BulkFixedLengthFrameDecoder(-1); + } + + @Test(expected = IllegalArgumentException.class) + public void decoderWithNegativeStartBufferSizeShouldThrowException() { + new BulkFixedLengthFrameDecoder(-1, 1); + } + + @Test + public void writingNotAByteBufShouldPassThePipeline() { + final int frameLength = 10; + final EmbeddedChannel channel = new EmbeddedChannel(new BulkFixedLengthFrameDecoder(frameLength)); + final Object marker = new Object(); + + channel.writeInbound(marker); + final Object receivedObject = channel.readInbound(); + + assertThat(receivedObject, is(equalTo(marker))); + + channel.finish(); + } + + @Test + public void writingNotEnoughBytesReadsNothing() { + final int frameLength = 10; + final EmbeddedChannel channel = new EmbeddedChannel(new BulkFixedLengthFrameDecoder(frameLength)); + final ByteBuf chunk = Unpooled.copiedBuffer(new byte[5]); + + channel.writeInbound(chunk); + final Object receivedObject = channel.readInbound(); + assertThat(receivedObject, is(equalTo(null))); + + channel.finish(); + } + + @Test + public void writingOneAndAHalfMessageShouldRetrieveOnlyOneMessage() { + final int frameLength = 10; + final int startBufferSize = 5; + final int oneAndAHalfMessageLength = frameLength + frameLength / 2; + final EmbeddedChannel channel = + new EmbeddedChannel(new BulkFixedLengthFrameDecoder(startBufferSize, frameLength)); + final ByteBuf chunk = Unpooled.copiedBuffer(new byte[oneAndAHalfMessageLength]); + + channel.writeInbound(chunk); + final ByteBuf receivedChunk = channel.readInbound(); + assertThat(receivedChunk.readableBytes(), is(equalTo(frameLength))); + receivedChunk.release(); + + channel.finish(); + } + + @Test + public void leftBytesShouldAccumulate() { + final int frameLength = 10; + final int startBufferSize = 5; + final EmbeddedChannel channel = + new EmbeddedChannel(new BulkFixedLengthFrameDecoder(startBufferSize, frameLength)); + + channel.writeInbound(Unpooled.copiedBuffer(new byte[15])); + 
ByteBuf receivedChunk = channel.readInbound(); + assertThat(receivedChunk.readableBytes(), is(equalTo(frameLength))); + receivedChunk.release(); + + channel.writeInbound(Unpooled.copiedBuffer(new byte[10])); + receivedChunk = channel.readInbound(); + assertThat(receivedChunk.readableBytes(), is(equalTo(frameLength))); + receivedChunk.release(); + + receivedChunk = channel.readInbound(); + assertThat(receivedChunk, is(equalTo(null))); + channel.finish(); + } +} diff --git a/codec/src/test/java/io/netty/handler/codec/BulkLengthFieldBasedFrameDecoderTest.java b/codec/src/test/java/io/netty/handler/codec/BulkLengthFieldBasedFrameDecoderTest.java new file mode 100644 index 00000000000..16b231e98e5 --- /dev/null +++ b/codec/src/test/java/io/netty/handler/codec/BulkLengthFieldBasedFrameDecoderTest.java @@ -0,0 +1,112 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ + +package io.netty.handler.codec; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; +import io.netty.channel.embedded.EmbeddedChannel; +import org.junit.Test; + +import static org.hamcrest.CoreMatchers.*; +import static org.junit.Assert.*; + +public class BulkLengthFieldBasedFrameDecoderTest { + + @Test(expected = IllegalArgumentException.class) + public void decoderWithZeroMaxFrameLengthShouldThrowException() { + new BulkLengthFieldBasedFrameDecoder(0, 1, 1); + } + + @Test(expected = IllegalArgumentException.class) + public void decoderWithNegativeMaxFrameLengthShouldThrowException() { + new BulkLengthFieldBasedFrameDecoder(-1, 1, 1); + } + + @Test(expected = IllegalArgumentException.class) + public void decoderWithNegativeLengthFieldOffsetShouldThrowException() { + new BulkLengthFieldBasedFrameDecoder(1, -1, 1); + } + + @Test(expected = IllegalArgumentException.class) + public void decoderWithTooBigLengthFieldOffsetShouldThrowException() { + new BulkLengthFieldBasedFrameDecoder(10, 8, 8); + } + + @Test + public void writingNotAByteBufShouldPassThePipeline() { + final EmbeddedChannel channel = new EmbeddedChannel(new BulkLengthFieldBasedFrameDecoder(2, 0, 1)); + final Object marker = new Object(); + + channel.writeInbound(marker); + final Object receivedObject = channel.readInbound(); + + assertThat(receivedObject, is(equalTo(marker))); + + channel.finish(); + } + + @Test + public void writingNotEnoughBytesReadsNothing() { + final EmbeddedChannel channel = new EmbeddedChannel(new BulkLengthFieldBasedFrameDecoder(6, 2, 1)); + final ByteBuf chunk = Unpooled.copiedBuffer(new byte[] { 1, 1, 3, 0, 0 }); + + channel.writeInbound(chunk); + final Object receivedObject = channel.readInbound(); + assertThat(receivedObject, is(equalTo(null))); + + channel.finish(); + } + + @Test + public void writingOneAndAHalfMessageShouldRetrieveOnlyOneMessage() { + final EmbeddedChannel channel = + new EmbeddedChannel(new BulkLengthFieldBasedFrameDecoder(10, 2, 1)); + final byte[] completeFrame = { 1, 1, 3, 0, 0, 0 }; + final byte[] halfFrame = { 1, 
1, 2, 0 }; + final ByteBuf chunk = Unpooled.copiedBuffer(completeFrame, halfFrame); + + channel.writeInbound(chunk); + final ByteBuf receivedChunk = channel.readInbound(); + assertThat(receivedChunk.readableBytes(), is(equalTo(completeFrame.length))); + receivedChunk.release(); + + channel.finish(); + } + + @Test + public void leftBytesShouldAccumulate() { + final EmbeddedChannel channel = + new EmbeddedChannel(new BulkLengthFieldBasedFrameDecoder(10, 2, 1)); + final byte[] completeFrame = { 1, 1, 3, 0, 0, 0 }; + final byte[] halfFrame = { 1, 1, 4, 0 }; + final byte[] anotherHalfFrame = { 0, 0, 0 }; + ByteBuf chunk = Unpooled.copiedBuffer(completeFrame, halfFrame); + + channel.writeInbound(chunk); + ByteBuf receivedChunk = channel.readInbound(); + assertThat(receivedChunk.readableBytes(), is(equalTo(completeFrame.length))); + receivedChunk.release(); + + chunk = Unpooled.copiedBuffer(anotherHalfFrame); + channel.writeInbound(chunk); + receivedChunk = channel.readInbound(); + assertThat(receivedChunk.readableBytes(), is(equalTo(halfFrame.length + anotherHalfFrame.length))); + receivedChunk.release(); + + channel.finish(); + } +}
train
train
2016-06-02T18:22:47
"2016-05-20T09:05:21Z"
maseev
val
netty/netty/5307_5349
netty/netty
netty/netty/5307
netty/netty/5349
[ "timestamp(timedelta=14.0, similarity=0.8851513607779111)" ]
b461c9d54c40b79403d21aa6980ae4c4abcb3283
b9d399d19c82946043601f5acff0bb6980e4ed2f
[ "@nmittler @ejona86 - WDYT?\n", "@Scottmitch Yeah I think that seems reasonable. I suspect we'll want to set some sort of timeout on how long we wait for all of the streams to close before sending a `hard` `GOAWAY` and just closing the connection. Having all of this \"pluggable\" in some way might be handy.\n", "@Scottmitch SGTM\n", "> I suspect we'll want to set some sort of timeout on how long we wait for all of the streams to close before sending a hard GOAWAY\n\nWhen close is called we already have a timeout mechanism [gracefulShutdownTimeoutMillis](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L73) which attempts to wait for all streams to be closed before the channel is actually closed. Can you clarify if you think we need more than this, and what you mean by `hard GOAWAY`?\n", "@Scottmitch ah right, I had forgotten about that :). SGTM\n" ]
[ "Should the comment above be moved this this `else`?\n", "Maybe add a comment here to describe what you're doing?\n", "I will clarify the comments.\n", "I will clarify the comments.\n" ]
"2016-06-03T19:39:10Z"
[ "improvement" ]
Http2ConnectionHandler always sends GO_AWAY when channel is closed
If the channel is active `Http2ConnectionHandler` will always send a GO_AWAY when the channel is closed. This may be unnecessary, and even confusing to the remote peer, if the local peer is attempting to do a [graceful shutdown](https://tools.ietf.org/html/rfc7540#section-6.8). For example if the local peer manually sends a GO_AWAY to indicate the connection is closing with some `Additional Debug Data` to indicate why, they may also want to send that same debug data in the next GO_AWAY message which carries the real `Last-Stream-ID`. Currently the `Http2ConnectionHandler` will always send a GO_AWAY with empty `Additional Debug Data` when the channel is closing. One approach to give the user more control is if a GO_AWAY has already been sent when `Http2ConnectionHandler` close's method is called, then we don't send a GO_AWAY, but instead just wait for all the streams to close.
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java" ]
[]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java index 3ba341eb456..b5c19d7078d 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java @@ -35,6 +35,7 @@ import java.util.concurrent.TimeUnit; import static io.netty.buffer.ByteBufUtil.hexDump; +import static io.netty.buffer.Unpooled.EMPTY_BUFFER; import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_STREAM_ID; import static io.netty.handler.codec.http2.Http2CodecUtil.connectionPrefaceBuf; import static io.netty.handler.codec.http2.Http2CodecUtil.getEmbeddedHttp2Exception; @@ -417,17 +418,22 @@ public void close(ChannelHandlerContext ctx, ChannelPromise promise) throws Exce return; } - ChannelFuture future = goAway(ctx, null); + // If the user has already sent a GO_AWAY frame they may be attempting to do a graceful shutdown which requires + // sending multiple GO_AWAY frames. We should only send a GO_AWAY here if one has not already been sent. If + // a GO_AWAY has been sent we send a empty buffer just so we can wait to close until all other data has been + // flushed to the OS. + // https://github.com/netty/netty/issues/5307 + final ChannelFuture future = connection().goAwaySent() ? ctx.write(EMPTY_BUFFER) : goAway(ctx, null); ctx.flush(); doGracefulShutdown(ctx, future, promise); } private void doGracefulShutdown(ChannelHandlerContext ctx, ChannelFuture future, ChannelPromise promise) { - // If there are no active streams, close immediately after the send is complete. - // Otherwise wait until all streams are inactive. if (isGracefulShutdownComplete()) { + // If there are no active streams, close immediately after the GO_AWAY write completes. future.addListener(new ClosingChannelFutureListener(ctx, promise)); } else { + // If there are active streams we should wait until they are all closed before closing the connection. closeListener = new ClosingChannelFutureListener(ctx, promise, gracefulShutdownTimeoutMillis, MILLISECONDS); }
null
test
train
2016-06-06T11:04:56
"2016-05-26T02:02:16Z"
Scottmitch
val
netty/netty/5374_5377
netty/netty
netty/netty/5374
netty/netty/5377
[ "timestamp(timedelta=13.0, similarity=0.9126192891211647)" ]
c3abb9146e47b1c8de5a26b7ae5076845fc80c28
41f4c1a3df531908fa273f7de3361f4380b1a77a
[ "@rkapsi - Would you be interested in submitting a PR?\n", "@Scottmitch On it...\n\n... but `io.netty.util.collection.*` seem to be missing on the 4.1 branch?!\n", "@rkapsi its generated ... run mvn clean compile in the common module\n", "@normanmaurer thanks!\n", "Fixed by https://github.com/netty/netty/pull/5377\n" ]
[ "Pass a `ByteBufAllocator` as a parameter and use it to allocate this buffer. We can use `UnpooledByteBufAllocator.DEFAULT` by default but at least this gives control over allocations done in this method.\n", "Pass a `ByteBufAllocator` as a parameter and use it to allocate this buffer. We can use `UnpooledByteBufAllocator.DEFAULT` by default but at least this gives control over allocations done in this method.\n", "if `pem.content().isDirect()` is true we should be able to just use `pem.content().retainedSlice()` instead of allocating a new buffer and copying.\n", "we may also want have the new classes that construct the PEM cert/key allocate direct by default.\n", "not introduced by this PR but if we adjust the ordering of events such that the encoding is done first ... we should know the exact size to allocate `pem` (`BEGIN_CERT.length + base64.readableBytes() + END_CERT.length`)\n", "discussion: is this necessary ATM? I wonder if we can do with out this for now?\n", "nit: add some descriptive text? `content is not accessible after this object has been destroyed`\n", "Please elaborate. You mean classes such as `SelfSignedCertificate` create and return PemX509Certificates?\n", "any reason why we can't implement this?\n", "`PKCS#8` \nhttps://docs.oracle.com/javase/7/docs/api/java/security/Key.html#getFormat()\n", "can/should we support this based upon https://docs.oracle.com/javase/7/docs/api/java/security/Key.html#getAlgorithm() ?\n", "`PemX509Certificate` and `PemPrivateKey` currently use `Unpooled.buffer()` to allocate the buffer they use for internal storage. However we could change this to `ByteBufAllocator.directBuffer(...)` to make it more likely that `pem.content().isDirect()` is true and thus avoid the copy operation.\n", "is it correct to return 0 if the length is 0 ? If so we need to update the javadocs.\n", "move this in another finally block as just to be safe ?\n", "to a null check via ObjectUtil.checkNonNull(...) ?\n", "maybe you could just extend `DefaultByteBufHolder` and so reuse some code ?\n", "directly call content.touch(...)\n", "+1\n", "+1\n", "Use javadoc notion and use {@link...} \n\nSame above\n", "Again why not extend `DefaultByteBufHolder` ?\n", "null checks\n", "null check\n", "@normanmaurer `DefaultByteBufHolder` doesn't have a `#destroyed()` method which it used to fill the ByteBuf with zero bytes if the value is sensitive (PrivateKey).\n", "and ? What is the problem to just add it to PemValue ? \n", "How about throwing an IllegalArgumentException if length==0?\n", "yep sounds good\n", "So, `PrivateKey` implements that `Destroyable` interface but it doesn't say when it was introduced. Netty's build agent didn't like my `@Overrides`.\n", "Something like... extend and then...?\n\n``` java\n@Override\npublic boolean release() {\n if (super.release()) {\n // zero out\n return true;\n }\n return false;\n}\n```\n", "nit: just pass `ByteBufAllocator.DEFAULT` directly ... no need for temp variable (even if we think JDK may optimize it away)\n", "Actually that will not work, `DefaultByteBufHolder#release()` returns true/false after it has released the underlying ByteBuf at which point it's no longer possible for me to wipe the bytes.\n", "can this actually happen?\n", "The alloc is passed ~2 lines further down into the `toBIO()` call. Feels more appropriate to have it as a temp variable and ideally passed in as method argument (but that would break existing API). 
Happy to change it to `ByteBufAllocator.DEFAULT` though.\n", "if the constructor of PemValue were to throw for some reason, it seems like we would leak the buffer. Consider adding comments on the constructor that it must take ownership of the buffer, or handle the release here:\n\n``` java\nPemEncoded result = new PemValue(pem, false);\nsuccess = true;\nreturn result;\n```\n", "nit: consider using a tertiary statement.\n", "only if `chain.length==0` is passed in. Happy to turn that into a precondition that throws an IllegalArgumentException or so.\n", "Ah ok... Sorry for the noise\n\n> Am 09.06.2016 um 19:28 schrieb Roger [email protected]:\n> \n> In handler/src/main/java/io/netty/handler/ssl/PemValue.java:\n> \n> > - _/\n> > +package io.netty.handler.ssl;\n> > +\n> > +import io.netty.buffer.ByteBuf;\n> > +import io.netty.buffer.ByteBufAllocator;\n> > +import io.netty.util.AbstractReferenceCounted;\n> > +import io.netty.util.IllegalReferenceCountException;\n> > +\n> > +/_*\n> > - \\* A PEM encoded value.\n> > - *\n> > - \\* @see PemEncoded\n> > - \\* @see PemPrivateKey#toPEM(ByteBufAllocator, boolean, java.security.PrivateKey)\n> > - \\* @see PemX509Certificate#toPEM(ByteBufAllocator, boolean, java.security.cert.X509Certificate[])\n> > - */\n> > +class PemValue extends AbstractReferenceCounted implements PemEncoded {\n> > Actually that will not work, DefaultByteBufHolder#release() returns true/false after it has released the underlying ByteBuf at which point it's no longer possible for me to wipe the bytes.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "can this be package private?\n", "just for content :)\n", "I missed that it was used in 2 spots originally ... thanks for clarifying ... I'm fine leaving it as is.\n", "is this necessary any more now that we don't implement `Destroyable`?\n", "nvm ... `PrivateKey` brings `Destroyable` in to the heirarchy ... add `@Override`?\n", "add `@Override`?\n", "yah lets handle that case explicitly as a precondition then\n", "I'm a bit on the fence. Java's PrivateKey interface extends Destroyable. No idea when that was introduced. I assume JDK8 because its got `default` impls. straight on the interface. Yesterday's build errors were because of that.\n", "consider simplifying this `try` block to:\n\n``` java\n boolean success = false;\n final ByteBuf pem = useDirect ? allocator.directBuffer(size) : allocator.buffer(size);\n try {\n pem.writeBytes(BEGIN_PRIVATE_KEY);\n pem.writeBytes(base64);\n pem.writeBytes(END_PRIVATE_KEY);\n\n PemValue value = new PemValue(pem, true);\n success = true;\n return value;\n } finally {\n // Make sure we never leak that PEM ByteBuf if there's an Exception.\n if (!success) {\n SslUtils.zerooutAndRelease(pem);\n }\n }\n```\n", "The build agent didn't like that yesterday which lead to that error\n\n```\n[Compiler] Compilation failure\n where T is a type-variable:\n T extends Object declared in method <T>attr(AttributeKey<T>)\n/var/lib/teamcity-agent/work/a00bdf77e621d005/handler/src/main/java/io/netty/handler/ssl/PemPrivateKey.java:[178,4] error: method does not override or implement a method from a supertype\n```\n", "@rkapsi - Good point. `PrivateKey` started extending `Destroyable` in [JDK8](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/security/PrivateKey.java/). I recall you had the `//@Override // JDK8` and I now understand why :) ... 
Maybe add something like this back `//@Override will only in work with JDK8 but Netty must be JDK6 compatible so we can't have nice things`\n", "+1 that makes sense ... we can discuss above in https://github.com/netty/netty/pull/5377#discussion_r66490755\n", "+1\n", "sgtm\n", "@rkapsi just address this comment and I will pull in. Great work!\n" ]
"2016-06-08T22:39:46Z"
[ "improvement", "feature" ]
Let OpenSslContext take pre-encoded pkcs#8 private key/cert bytes
Netty 4.1.1-SNAPSHOT. It would be great if it were possible to pass pre-encoded PKCS#8 private key and certificate bytes into OpenSslContext and skip some of the work currently done in OpenSslContext's `toBIO(...)` methods (namely the Base64 encoding and the appending of the "---- BEGIN/END ----" markers).
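To make the requested usage concrete, here is a minimal sketch (not part of this record): it leans on the `PemPrivateKey.valueOf(...)` / `PemX509Certificate.valueOf(...)` factories introduced by the patch further below, and it assumes the `SslContextBuilder.forServer(PrivateKey, X509Certificate...)` overload accepts them unchanged.

```java
import io.netty.handler.ssl.PemPrivateKey;
import io.netty.handler.ssl.PemX509Certificate;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

public final class PemPassThroughExample {
    private PemPassThroughExample() { }

    // Both arguments are assumed to already be PEM/PKCS#8 encoded; as documented on the
    // valueOf(...) factories, no validation is performed on them.
    public static SslContext forServer(byte[] pemKey, byte[] pemCertChain) throws Exception {
        PemPrivateKey key = PemPrivateKey.valueOf(pemKey);
        PemX509Certificate certChain = PemX509Certificate.valueOf(pemCertChain);
        return SslContextBuilder.forServer(key, certChain)
                                .sslProvider(SslProvider.OPENSSL)
                                .build();
    }
}
```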
[ "handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java", "handler/src/main/java/io/netty/handler/ssl/SslUtils.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java", "handler/src/main/java/io/netty/handler/ssl/PemEncoded.java", "handler/src/main/java/io/netty/handler/ssl/PemPrivateKey.java", "handler/src/main/java/io/netty/handler/ssl/PemValue.java", "handler/src/main/java/io/netty/handler/ssl/PemX509Certificate.java", "handler/src/main/java/io/netty/handler/ssl/SslUtils.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java index 446560492fd..67de79d4320 100644 --- a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java @@ -17,9 +17,6 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.ByteBufAllocator; -import io.netty.buffer.Unpooled; -import io.netty.handler.codec.base64.Base64; -import io.netty.util.CharsetUtil; import io.netty.util.internal.PlatformDependent; import io.netty.util.internal.SystemPropertyUtil; import io.netty.util.internal.logging.InternalLogger; @@ -36,10 +33,8 @@ import javax.net.ssl.TrustManager; import javax.net.ssl.X509ExtendedTrustManager; import javax.net.ssl.X509TrustManager; -import java.io.File; import java.security.PrivateKey; import java.security.cert.Certificate; -import java.security.cert.CertificateException; import java.security.cert.CertificateExpiredException; import java.security.cert.CertificateNotYetValidException; import java.security.cert.CertificateRevokedException; @@ -55,11 +50,6 @@ import static io.netty.handler.ssl.ApplicationProtocolConfig.SelectedListenerFailureBehavior; public abstract class OpenSslContext extends SslContext { - private static final byte[] BEGIN_CERT = "-----BEGIN CERTIFICATE-----\n".getBytes(CharsetUtil.US_ASCII); - private static final byte[] END_CERT = "\n-----END CERTIFICATE-----\n".getBytes(CharsetUtil.US_ASCII); - private static final byte[] BEGIN_PRIVATE_KEY = "-----BEGIN PRIVATE KEY-----\n".getBytes(CharsetUtil.US_ASCII); - private static final byte[] END_PRIVATE_KEY = "\n-----END PRIVATE KEY-----\n".getBytes(CharsetUtil.US_ASCII); - private static final InternalLogger logger = InternalLoggerFactory.getInstance(OpenSslContext.class); /** * To make it easier for users to replace JDK implemention with OpenSsl version we also use @@ -513,34 +503,16 @@ static long toBIO(PrivateKey key) throws Exception { if (key == null) { return 0; } - ByteBuf buffer = Unpooled.directBuffer(); + + ByteBufAllocator allocator = ByteBufAllocator.DEFAULT; + PemEncoded pem = PemPrivateKey.toPEM(allocator, true, key); try { - buffer.writeBytes(BEGIN_PRIVATE_KEY); - ByteBuf wrappedBuf = Unpooled.wrappedBuffer(key.getEncoded()); - final ByteBuf encodedBuf; - try { - encodedBuf = Base64.encode(wrappedBuf, true); - try { - buffer.writeBytes(encodedBuf); - } finally { - zerooutAndRelease(encodedBuf); - } - } finally { - zerooutAndRelease(wrappedBuf); - } - buffer.writeBytes(END_PRIVATE_KEY); - return newBIO(buffer); + return toBIO(allocator, pem.retain()); } finally { - // Zero out the buffer and so the private key it held. - zerooutAndRelease(buffer); + pem.release(); } } - private static void zerooutAndRelease(ByteBuf buffer) { - buffer.setZero(0, buffer.capacity()); - buffer.release(); - } - /** * Return the pointer to a <a href="https://www.openssl.org/docs/crypto/BIO_get_mem_ptr.html">in-memory BIO</a> * or {@code 0} if the {@code certChain} is {@code null}. The BIO contains the content of the {@code certChain}. 
@@ -549,37 +521,62 @@ static long toBIO(X509Certificate[] certChain) throws Exception { if (certChain == null) { return 0; } - ByteBuf buffer = Unpooled.directBuffer(); + + if (certChain.length == 0) { + throw new IllegalArgumentException("certChain can't be empty"); + } + + ByteBufAllocator allocator = ByteBufAllocator.DEFAULT; + PemEncoded pem = PemX509Certificate.toPEM(allocator, true, certChain); try { - for (X509Certificate cert: certChain) { - buffer.writeBytes(BEGIN_CERT); - ByteBuf wrappedBuf = Unpooled.wrappedBuffer(cert.getEncoded()); + return toBIO(allocator, pem.retain()); + } finally { + pem.release(); + } + } + + private static long toBIO(ByteBufAllocator allocator, PemEncoded pem) throws Exception { + try { + // We can turn direct buffers straight into BIOs. No need to + // make a yet another copy. + ByteBuf content = pem.content(); + + if (content.isDirect()) { + return newBIO(content.retainedSlice()); + } + + ByteBuf buffer = allocator.directBuffer(content.readableBytes()); + try { + buffer.writeBytes(content); + return newBIO(buffer.retainedSlice()); + } finally { try { - ByteBuf encodedBuf = Base64.encode(wrappedBuf, true); - try { - buffer.writeBytes(encodedBuf); - } finally { - encodedBuf.release(); + // If the contents of the ByteBuf is sensitive (e.g. a PrivateKey) we + // need to zero out the bytes of the copy before we're releasing it. + if (pem.isSensitive()) { + SslUtils.zeroout(buffer); } } finally { - wrappedBuf.release(); + buffer.release(); } - buffer.writeBytes(END_CERT); } - return newBIO(buffer); - } finally { - buffer.release(); + } finally { + pem.release(); } } private static long newBIO(ByteBuf buffer) throws Exception { - long bio = SSL.newMemBIO(); - int readable = buffer.readableBytes(); - if (SSL.writeToBIO(bio, OpenSsl.memoryAddress(buffer), readable) != readable) { - SSL.freeBIO(bio); - throw new IllegalStateException("Could not write data to memory BIO"); + try { + long bio = SSL.newMemBIO(); + int readable = buffer.readableBytes(); + if (SSL.writeToBIO(bio, OpenSsl.memoryAddress(buffer), readable) != readable) { + SSL.freeBIO(bio); + throw new IllegalStateException("Could not write data to memory BIO"); + } + return bio; + } finally { + buffer.release(); } - return bio; } static void checkKeyManagerFactory(KeyManagerFactory keyManagerFactory) { diff --git a/handler/src/main/java/io/netty/handler/ssl/PemEncoded.java b/handler/src/main/java/io/netty/handler/ssl/PemEncoded.java new file mode 100644 index 00000000000..fc3db93aa31 --- /dev/null +++ b/handler/src/main/java/io/netty/handler/ssl/PemEncoded.java @@ -0,0 +1,55 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.ssl; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufHolder; + +/** + * A marker interface for PEM encoded values. 
+ */ +interface PemEncoded extends ByteBufHolder { + + /** + * Returns {@code true} if the PEM encoded value is considered + * sensitive information such as a private key. + */ + boolean isSensitive(); + + @Override + PemEncoded copy(); + + @Override + PemEncoded duplicate(); + + @Override + PemEncoded retainedDuplicate(); + + @Override + PemEncoded replace(ByteBuf content); + + @Override + PemEncoded retain(); + + @Override + PemEncoded retain(int increment); + + @Override + PemEncoded touch(); + + @Override + PemEncoded touch(Object hint); +} diff --git a/handler/src/main/java/io/netty/handler/ssl/PemPrivateKey.java b/handler/src/main/java/io/netty/handler/ssl/PemPrivateKey.java new file mode 100644 index 00000000000..c1b828c027a --- /dev/null +++ b/handler/src/main/java/io/netty/handler/ssl/PemPrivateKey.java @@ -0,0 +1,218 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.ssl; + +import java.security.PrivateKey; + +import javax.security.auth.Destroyable; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.Unpooled; +import io.netty.util.AbstractReferenceCounted; +import io.netty.util.CharsetUtil; +import io.netty.util.IllegalReferenceCountException; +import io.netty.util.internal.ObjectUtil; + +/** + * This is a special purpose implementation of a {@link PrivateKey} which allows the + * user to pass PEM/PKCS#8 encoded key material straight into {@link OpenSslContext} + * without having to parse and re-encode bytes in Java land. + * + * All methods other than what's implemented in {@link PemEncoded} and {@link Destroyable} + * throw {@link UnsupportedOperationException}s. + * + * @see PemEncoded + * @see OpenSslContext + * @see #valueOf(byte[]) + * @see #valueOf(ByteBuf) + */ +public final class PemPrivateKey extends AbstractReferenceCounted implements PrivateKey, PemEncoded { + + private static final byte[] BEGIN_PRIVATE_KEY = "-----BEGIN PRIVATE KEY-----\n".getBytes(CharsetUtil.US_ASCII); + private static final byte[] END_PRIVATE_KEY = "\n-----END PRIVATE KEY-----\n".getBytes(CharsetUtil.US_ASCII); + + private static final String PKCS8_FORMAT = "PKCS#8"; + + /** + * Creates a {@link PemEncoded} value from the {@link PrivateKey}. + */ + static PemEncoded toPEM(ByteBufAllocator allocator, boolean useDirect, PrivateKey key) { + // We can take a shortcut if the private key happens to be already + // PEM/PKCS#8 encoded. This is the ideal case and reason why all + // this exists. It allows the user to pass pre-encoded bytes straight + // into OpenSSL without having to do any of the extra work. 
+ if (key instanceof PemEncoded) { + return ((PemEncoded) key).retain(); + } + + ByteBuf encoded = Unpooled.wrappedBuffer(key.getEncoded()); + try { + ByteBuf base64 = SslUtils.toBase64(allocator, encoded); + try { + int size = BEGIN_PRIVATE_KEY.length + base64.readableBytes() + END_PRIVATE_KEY.length; + + boolean success = false; + final ByteBuf pem = useDirect ? allocator.directBuffer(size) : allocator.buffer(size); + try { + pem.writeBytes(BEGIN_PRIVATE_KEY); + pem.writeBytes(base64); + pem.writeBytes(END_PRIVATE_KEY); + + PemValue value = new PemValue(pem, true); + success = true; + return value; + } finally { + // Make sure we never leak that PEM ByteBuf if there's an Exception. + if (!success) { + SslUtils.zerooutAndRelease(pem); + } + } + } finally { + SslUtils.zerooutAndRelease(base64); + } + } finally { + SslUtils.zerooutAndRelease(encoded); + } + } + + /** + * Creates a {@link PemPrivateKey} from raw {@code byte[]}. + * + * ATTENTION: It's assumed that the given argument is a PEM/PKCS#8 encoded value. + * No input validation is performed to validate it. + */ + public static PemPrivateKey valueOf(byte[] key) { + return valueOf(Unpooled.wrappedBuffer(key)); + } + + /** + * Creates a {@link PemPrivateKey} from raw {@code ByteBuf}. + * + * ATTENTION: It's assumed that the given argument is a PEM/PKCS#8 encoded value. + * No input validation is performed to validate it. + */ + public static PemPrivateKey valueOf(ByteBuf key) { + return new PemPrivateKey(key); + } + + private final ByteBuf content; + + private PemPrivateKey(ByteBuf content) { + this.content = ObjectUtil.checkNotNull(content, "content"); + } + + @Override + public boolean isSensitive() { + return true; + } + + @Override + public ByteBuf content() { + int count = refCnt(); + if (count <= 0) { + throw new IllegalReferenceCountException(count); + } + + return content; + } + + @Override + public PemPrivateKey copy() { + return replace(content.copy()); + } + + @Override + public PemPrivateKey duplicate() { + return replace(content.duplicate()); + } + + @Override + public PemPrivateKey retainedDuplicate() { + return replace(content.retainedDuplicate()); + } + + @Override + public PemPrivateKey replace(ByteBuf content) { + return new PemPrivateKey(content); + } + + @Override + public PemPrivateKey touch() { + content.touch(); + return this; + } + + @Override + public PemPrivateKey touch(Object hint) { + content.touch(hint); + return this; + } + + @Override + public PemPrivateKey retain() { + return (PemPrivateKey) super.retain(); + } + + @Override + public PemPrivateKey retain(int increment) { + return (PemPrivateKey) super.retain(increment); + } + + @Override + protected void deallocate() { + // Private Keys are sensitive. We need to zero the bytes + // before we're releasing the underlying ByteBuf + SslUtils.zerooutAndRelease(content); + } + + @Override + public byte[] getEncoded() { + throw new UnsupportedOperationException(); + } + + @Override + public String getAlgorithm() { + throw new UnsupportedOperationException(); + } + + @Override + public String getFormat() { + return PKCS8_FORMAT; + } + + /** + * NOTE: This is a JDK8 interface/method. Due to backwards compatibility + * reasons it's not possible to slap the {@code @Override} annotation onto + * this method. + * + * @see Destroyable#destroy() + */ + public void destroy() { + release(refCnt()); + } + + /** + * NOTE: This is a JDK8 interface/method. 
Due to backwards compatibility + * reasons it's not possible to slap the {@code @Override} annotation onto + * this method. + * + * @see Destroyable#isDestroyed() + */ + public boolean isDestroyed() { + return refCnt() == 0; + } +} diff --git a/handler/src/main/java/io/netty/handler/ssl/PemValue.java b/handler/src/main/java/io/netty/handler/ssl/PemValue.java new file mode 100644 index 00000000000..becb5b84921 --- /dev/null +++ b/handler/src/main/java/io/netty/handler/ssl/PemValue.java @@ -0,0 +1,105 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.ssl; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.util.AbstractReferenceCounted; +import io.netty.util.IllegalReferenceCountException; +import io.netty.util.internal.ObjectUtil; + +/** + * A PEM encoded value. + * + * @see PemEncoded + * @see PemPrivateKey#toPEM(ByteBufAllocator, boolean, java.security.PrivateKey) + * @see PemX509Certificate#toPEM(ByteBufAllocator, boolean, java.security.cert.X509Certificate[]) + */ +class PemValue extends AbstractReferenceCounted implements PemEncoded { + + private final ByteBuf content; + + private final boolean sensitive; + + public PemValue(ByteBuf content, boolean sensitive) { + this.content = ObjectUtil.checkNotNull(content, "content"); + this.sensitive = sensitive; + } + + @Override + public boolean isSensitive() { + return sensitive; + } + + @Override + public ByteBuf content() { + int count = refCnt(); + if (count <= 0) { + throw new IllegalReferenceCountException(count); + } + + return content; + } + + @Override + public PemValue copy() { + return replace(content.copy()); + } + + @Override + public PemValue duplicate() { + return replace(content.duplicate()); + } + + @Override + public PemValue retainedDuplicate() { + return replace(content.retainedDuplicate()); + } + + @Override + public PemValue replace(ByteBuf content) { + return new PemValue(content, sensitive); + } + + @Override + public PemValue touch() { + return (PemValue) super.touch(); + } + + @Override + public PemValue touch(Object hint) { + content.touch(hint); + return this; + } + + @Override + public PemValue retain() { + return (PemValue) super.retain(); + } + + @Override + public PemValue retain(int increment) { + return (PemValue) super.retain(increment); + } + + @Override + protected void deallocate() { + if (sensitive) { + SslUtils.zeroout(content); + } + content.release(); + } +} diff --git a/handler/src/main/java/io/netty/handler/ssl/PemX509Certificate.java b/handler/src/main/java/io/netty/handler/ssl/PemX509Certificate.java new file mode 100644 index 00000000000..1d60c732f52 --- /dev/null +++ b/handler/src/main/java/io/netty/handler/ssl/PemX509Certificate.java @@ -0,0 +1,415 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in 
compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.ssl; + +import java.math.BigInteger; +import java.security.InvalidKeyException; +import java.security.NoSuchAlgorithmException; +import java.security.NoSuchProviderException; +import java.security.Principal; +import java.security.PublicKey; +import java.security.SignatureException; +import java.security.cert.CertificateEncodingException; +import java.security.cert.CertificateException; +import java.security.cert.CertificateExpiredException; +import java.security.cert.CertificateNotYetValidException; +import java.security.cert.X509Certificate; +import java.util.Arrays; +import java.util.Date; +import java.util.Set; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.Unpooled; +import io.netty.util.CharsetUtil; +import io.netty.util.IllegalReferenceCountException; +import io.netty.util.internal.ObjectUtil; + +/** + * This is a special purpose implementation of a {@link X509Certificate} which allows + * the user to pass PEM/PKCS#8 encoded data straight into {@link OpenSslContext} without + * having to parse and re-encode bytes in Java land. + * + * All methods other than what's implemented in {@link PemEncoded}'s throw + * {@link UnsupportedOperationException}s. + * + * @see PemEncoded + * @see OpenSslContext + * @see #valueOf(byte[]) + * @see #valueOf(ByteBuf) + */ +public final class PemX509Certificate extends X509Certificate implements PemEncoded { + + private static final byte[] BEGIN_CERT = "-----BEGIN CERTIFICATE-----\n".getBytes(CharsetUtil.US_ASCII); + private static final byte[] END_CERT = "\n-----END CERTIFICATE-----\n".getBytes(CharsetUtil.US_ASCII); + + /** + * Creates a {@link PemEncoded} value from the {@link X509Certificate}s. + */ + static PemEncoded toPEM(ByteBufAllocator allocator, boolean useDirect, + X509Certificate... chain) throws CertificateEncodingException { + + if (chain == null || chain.length == 0) { + throw new IllegalArgumentException("X.509 certificate chain can't be null or empty"); + } + + // We can take a shortcut if there is only one certificate and + // it already happens to be a PemEncoded instance. This is the + // ideal case and reason why all this exists. It allows the user + // to pass pre-encoded bytes straight into OpenSSL without having + // to do any of the extra work. 
+ if (chain.length == 1) { + X509Certificate first = chain[0]; + if (first instanceof PemEncoded) { + return ((PemEncoded) first).retain(); + } + } + + boolean success = false; + ByteBuf pem = null; + try { + for (X509Certificate cert : chain) { + + if (cert == null) { + throw new IllegalArgumentException("Null element in chain: " + Arrays.toString(chain)); + } + + if (cert instanceof PemEncoded) { + pem = append(allocator, useDirect, (PemEncoded) cert, chain.length, pem); + } else { + pem = append(allocator, useDirect, cert, chain.length, pem); + } + } + + PemValue value = new PemValue(pem, false); + success = true; + return value; + } finally { + // Make sure we never leak the PEM's ByteBuf in the event of an Exception + if (!success && pem != null) { + pem.release(); + } + } + } + + /** + * Appends the {@link PemEncoded} value to the {@link ByteBuf} (last arg) and returns it. + * If the {@link ByteBuf} didn't exist yet it'll create it using the {@link ByteBufAllocator}. + */ + private static ByteBuf append(ByteBufAllocator allocator, boolean useDirect, + PemEncoded encoded, int count, ByteBuf pem) { + + ByteBuf content = encoded.content(); + + if (pem == null) { + // see the other append() method + pem = newBuffer(allocator, useDirect, content.readableBytes() * count); + } + + pem.writeBytes(content.slice()); + return pem; + } + + /** + * Appends the {@link X509Certificate} value to the {@link ByteBuf} (last arg) and returns it. + * If the {@link ByteBuf} didn't exist yet it'll create it using the {@link ByteBufAllocator}. + */ + private static ByteBuf append(ByteBufAllocator allocator, boolean useDirect, + X509Certificate cert, int count, ByteBuf pem) throws CertificateEncodingException { + + ByteBuf encoded = Unpooled.wrappedBuffer(cert.getEncoded()); + try { + ByteBuf base64 = SslUtils.toBase64(allocator, encoded); + try { + if (pem == null) { + // We try to approximate the buffer's initial size. The sizes of + // certificates can vary a lot so it'll be off a bit depending + // on the number of elements in the array (count argument). + pem = newBuffer(allocator, useDirect, + (BEGIN_CERT.length + base64.readableBytes() + END_CERT.length) * count); + } + + pem.writeBytes(BEGIN_CERT); + pem.writeBytes(base64); + pem.writeBytes(END_CERT); + } finally { + base64.release(); + } + } finally { + encoded.release(); + } + + return pem; + } + + private static ByteBuf newBuffer(ByteBufAllocator allocator, boolean useDirect, int initialCapacity) { + return useDirect ? allocator.directBuffer(initialCapacity) : allocator.buffer(initialCapacity); + } + + /** + * Creates a {@link PemX509Certificate} from raw {@code byte[]}. + * + * ATTENTION: It's assumed that the given argument is a PEM/PKCS#8 encoded value. + * No input validation is performed to validate it. + */ + public static PemX509Certificate valueOf(byte[] key) { + return valueOf(Unpooled.wrappedBuffer(key)); + } + + /** + * Creates a {@link PemX509Certificate} from raw {@code ByteBuf}. + * + * ATTENTION: It's assumed that the given argument is a PEM/PKCS#8 encoded value. + * No input validation is performed to validate it. 
+ */ + public static PemX509Certificate valueOf(ByteBuf key) { + return new PemX509Certificate(key); + } + + private final ByteBuf content; + + private PemX509Certificate(ByteBuf content) { + this.content = ObjectUtil.checkNotNull(content, "content"); + } + + @Override + public boolean isSensitive() { + // There is no sensitive information in a X509 Certificate + return false; + } + + @Override + public int refCnt() { + return content.refCnt(); + } + + @Override + public ByteBuf content() { + int count = refCnt(); + if (count <= 0) { + throw new IllegalReferenceCountException(count); + } + + return content; + } + + @Override + public PemX509Certificate copy() { + return replace(content.copy()); + } + + @Override + public PemX509Certificate duplicate() { + return replace(content.duplicate()); + } + + @Override + public PemX509Certificate retainedDuplicate() { + return replace(content.retainedDuplicate()); + } + + @Override + public PemX509Certificate replace(ByteBuf content) { + return new PemX509Certificate(content); + } + + @Override + public PemX509Certificate retain() { + content.retain(); + return this; + } + + @Override + public PemX509Certificate retain(int increment) { + content.retain(increment); + return this; + } + + @Override + public PemX509Certificate touch() { + content.touch(); + return this; + } + + @Override + public PemX509Certificate touch(Object hint) { + content.touch(hint); + return this; + } + + @Override + public boolean release() { + return content.release(); + } + + @Override + public boolean release(int decrement) { + return content.release(decrement); + } + + @Override + public byte[] getEncoded() throws CertificateEncodingException { + throw new UnsupportedOperationException(); + } + + @Override + public boolean hasUnsupportedCriticalExtension() { + throw new UnsupportedOperationException(); + } + + @Override + public Set<String> getCriticalExtensionOIDs() { + throw new UnsupportedOperationException(); + } + + @Override + public Set<String> getNonCriticalExtensionOIDs() { + throw new UnsupportedOperationException(); + } + + @Override + public byte[] getExtensionValue(String oid) { + throw new UnsupportedOperationException(); + } + + @Override + public void checkValidity() throws CertificateExpiredException, + CertificateNotYetValidException { + throw new UnsupportedOperationException(); + } + + @Override + public void checkValidity(Date date) throws CertificateExpiredException, + CertificateNotYetValidException { + throw new UnsupportedOperationException(); + } + + @Override + public int getVersion() { + throw new UnsupportedOperationException(); + } + + @Override + public BigInteger getSerialNumber() { + throw new UnsupportedOperationException(); + } + + @Override + public Principal getIssuerDN() { + throw new UnsupportedOperationException(); + } + + @Override + public Principal getSubjectDN() { + throw new UnsupportedOperationException(); + } + + @Override + public Date getNotBefore() { + throw new UnsupportedOperationException(); + } + + @Override + public Date getNotAfter() { + throw new UnsupportedOperationException(); + } + + @Override + public byte[] getTBSCertificate() throws CertificateEncodingException { + throw new UnsupportedOperationException(); + } + + @Override + public byte[] getSignature() { + throw new UnsupportedOperationException(); + } + + @Override + public String getSigAlgName() { + throw new UnsupportedOperationException(); + } + + @Override + public String getSigAlgOID() { + throw new UnsupportedOperationException(); + } + + @Override + 
public byte[] getSigAlgParams() { + throw new UnsupportedOperationException(); + } + + @Override + public boolean[] getIssuerUniqueID() { + throw new UnsupportedOperationException(); + } + + @Override + public boolean[] getSubjectUniqueID() { + throw new UnsupportedOperationException(); + } + + @Override + public boolean[] getKeyUsage() { + throw new UnsupportedOperationException(); + } + + @Override + public int getBasicConstraints() { + throw new UnsupportedOperationException(); + } + + @Override + public void verify(PublicKey key) + throws CertificateException, NoSuchAlgorithmException, + InvalidKeyException, NoSuchProviderException, SignatureException { + throw new UnsupportedOperationException(); + } + + @Override + public void verify(PublicKey key, String sigProvider) + throws CertificateException, NoSuchAlgorithmException, + InvalidKeyException, NoSuchProviderException, SignatureException { + throw new UnsupportedOperationException(); + } + + @Override + public PublicKey getPublicKey() { + throw new UnsupportedOperationException(); + } + + @Override + public boolean equals(Object o) { + if (o == this) { + return true; + } else if (!(o instanceof PemX509Certificate)) { + return false; + } + + PemX509Certificate other = (PemX509Certificate) o; + return content.equals(other.content); + } + + @Override + public int hashCode() { + return content.hashCode(); + } + + @Override + public String toString() { + return content.toString(CharsetUtil.UTF_8); + } +} diff --git a/handler/src/main/java/io/netty/handler/ssl/SslUtils.java b/handler/src/main/java/io/netty/handler/ssl/SslUtils.java index a70c00c2e0f..a7ad75e3e32 100644 --- a/handler/src/main/java/io/netty/handler/ssl/SslUtils.java +++ b/handler/src/main/java/io/netty/handler/ssl/SslUtils.java @@ -16,7 +16,10 @@ package io.netty.handler.ssl; import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; import io.netty.channel.ChannelHandlerContext; +import io.netty.handler.codec.base64.Base64; +import io.netty.handler.codec.base64.Base64Dialect; /** * Constants for SSL packets. @@ -125,6 +128,35 @@ static void notifyHandshakeFailure(ChannelHandlerContext ctx, Throwable cause) { ctx.close(); } + /** + * Fills the {@link ByteBuf} with zero bytes. + */ + static void zeroout(ByteBuf buffer) { + if (!buffer.isReadOnly()) { + buffer.setZero(0, buffer.capacity()); + } + } + + /** + * Fills the {@link ByteBuf} with zero bytes and releases it. + */ + static void zerooutAndRelease(ByteBuf buffer) { + zeroout(buffer); + buffer.release(); + } + + /** + * Same as {@link Base64#encode(ByteBuf, boolean)} but allows the use of a custom {@link ByteBufAllocator}. + * + * @see Base64#encode(ByteBuf, boolean) + */ + static ByteBuf toBase64(ByteBufAllocator allocator, ByteBuf src) { + ByteBuf dst = Base64.encode(src, src.readerIndex(), + src.readableBytes(), true, Base64Dialect.STANDARD, allocator); + src.readerIndex(src.writerIndex()); + return dst; + } + private SslUtils() { } }
null
train
train
2016-06-10T13:19:45
"2016-06-08T18:58:21Z"
rkapsi
val
netty/netty/5351_5379
netty/netty
netty/netty/5351
netty/netty/5379
[ "timestamp(timedelta=22.0, similarity=0.8790954761111075)" ]
783567420fd8109f172b5ab3f299ef750bac8805
2cd48502d0c9c6fd7a48f59029cfd888d413898b
[ "Splitted the proposal to new issue: https://github.com/netty/netty/issues/5352\n", "@mickare - Thanks for bringing this to our attention. The javadocs should be updated. Let me take care.\n" ]
[]
"2016-06-08T22:47:52Z"
[ "documentation" ]
[4.1] Futures removeListener: JavaDoc and implementation are inconsistent - duplicate listeners
#### Netty version: 4.1.0.CR7 #### Context: I encountered an inconsistency between the JavaDoc description and implementation of the [Future](http://netty.io/4.1/api/io/netty/util/concurrent/Future.html). I want to add multiple times (e.g. x 1000) the same close listener to a channel but I don't want this listener to be called multiple times. Also I don't want to remove the listener prior to adding it. I could do this but I don't like the wasteful [Java array copy](https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/concurrent/DefaultFutureListeners.java#L62) calls. ##### JavaDoc vs Implementation The Netty 4.1 JavaDoc on [Future](http://netty.io/4.1/api/io/netty/util/concurrent/Future.html#removeListener%28io.netty.util.concurrent.GenericFutureListener%29) says: > Future<V> removeListener(GenericFutureListener<? extends Future<? super V>> listener) > > Removes the specified listener from this future. The specified listener is no longer notified when this future is done. If the specified listener is not associated with this future, this method does nothing and returns silently. But if I add multiple times the **same listener** to the future, only the first entry is removed. ([see implementation](https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/concurrent/DefaultFutureListeners.java#L55)) With the current implementation in mind the JavaDoc should state that only the first occurrence of the listener is removed. Like: > Removes the first occurrence of the specified listener from this future. ##### ~~More on futures (_could be an independent proposal_):~~ ~~Even more, the listener is called multiple times. I think futures are part of the observer pattern. That means an observer should be notified only once an event occurs.~~ ~~That means there should be no duplicate listeners in the future. E.g.: the [listeners array](https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/concurrent/DefaultFutureListeners.java#L22) should be changed to a set, or duplicates avoided.~~ ~~Or is there a purpose that I can not see at the moment? Like performance? But if it is performance, then would not multiple calls on the same listener affect the performance? For example:~~ ``` java private final ChannelFutureListener closeListener = f -> cache.invalidate(f.channel); public void oftenCalled(Channel ch) { ch.closeFuture().addListener(closeListener); ... } ``` ~~With the current implementation `closeListener` would be called as often as the method `oftenCalled`.~~ I just had a discussion that there could be a use-case of a self-incrementing counter that uses the future to add a future increment. This counter would need multiple entries. Therefore I propose to add a new method "addListenerIfAbsent" that could resolve the proposal. I'll create a new issue for that. Best regards, Michael
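To make the documented-versus-actual behaviour concrete, a small self-contained sketch (class and variable names, and the use of `ImmediateEventExecutor`, are illustrative assumptions, not taken from the report):

```java
import io.netty.util.concurrent.DefaultPromise;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.ImmediateEventExecutor;
import io.netty.util.concurrent.Promise;

public final class RemoveListenerDemo {
    public static void main(String[] args) {
        Promise<String> promise = new DefaultPromise<String>(ImmediateEventExecutor.INSTANCE);
        FutureListener<String> listener = f -> System.out.println("notified: " + f.getNow());

        promise.addListener(listener);
        promise.addListener(listener);    // the very same instance, registered twice
        promise.removeListener(listener); // only the first occurrence is removed

        promise.setSuccess("done");       // prints "notified: done" once, via the remaining registration
    }
}
```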
[ "common/src/main/java/io/netty/util/concurrent/Future.java" ]
[ "common/src/main/java/io/netty/util/concurrent/Future.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/concurrent/Future.java b/common/src/main/java/io/netty/util/concurrent/Future.java index 8ddb860e149..16ffa722813 100644 --- a/common/src/main/java/io/netty/util/concurrent/Future.java +++ b/common/src/main/java/io/netty/util/concurrent/Future.java @@ -63,7 +63,7 @@ public interface Future<V> extends java.util.concurrent.Future<V> { Future<V> addListeners(GenericFutureListener<? extends Future<? super V>>... listeners); /** - * Removes the specified listener from this future. + * Removes the first occurrence of the specified listener from this future. * The specified listener is no longer notified when this * future is {@linkplain #isDone() done}. If the specified * listener is not associated with this future, this method @@ -72,7 +72,7 @@ public interface Future<V> extends java.util.concurrent.Future<V> { Future<V> removeListener(GenericFutureListener<? extends Future<? super V>> listener); /** - * Removes the specified listeners from this future. + * Removes the first occurrence for each of the listeners from this future. * The specified listeners are no longer notified when this * future is {@linkplain #isDone() done}. If the specified * listeners are not associated with this future, this method
null
train
train
2016-06-08T21:23:19
"2016-06-04T19:40:56Z"
mickare
val
netty/netty/5387_5388
netty/netty
netty/netty/5387
netty/netty/5388
[ "timestamp(timedelta=299.0, similarity=0.868294864826654)" ]
a7496ed83df3cf13c24fa9b5ce9494914fcd07ec
b27edba263edb6ba1cdf8945957aa1ae302a9477
[ "@slandelle you are right... let me fix.\n", "Awesome, thanks!\nI can provide a PR if you want, but I first wanted to be sure it was a bug and not intended behavior.\n", "@slandelle a pr would be awesome! Please also add a test that we catch when `ChannelFactory.newChannel(...)` throws and notify the promise.\n", "PR on its way\n", "Actually, having a look at `BootstrapTest#testLateRegistrationConnect` makes me think that `connect` is expected to possibly throw exceptions.\n\nI can't say I like this behavior, but it is what it is, and IMHO can't be changed/fixed outside a new version.\n\nWDYT?\n", "Should Exception be thrown AND listener notified?\n", "@slandelle I think you misread the test case. It only throws because we do `syncUninterruptibly()`.\n", "My bad...\n", "@slandelle no worries!\n", "Done, see #5388\n", "Fixed by https://github.com/netty/netty/pull/5388\n" ]
[ "Please create the exception before the bootstrap and just throw it. This way you can use `assertSame(...)` below to check if the same exception. This is more robust compared to check for the message.\n", "remove empty line\n", "See below.\n", "See above.\n" ]
"2016-06-13T13:11:38Z"
[ "defect" ]
AbstractBootstrap#initAndRegister crash doesn't notify promise
`initAndRegister` can possibly crash, e.g.: ``` java.net.SocketException: Too many open files at sun.nio.ch.Net.socket0(Native Method) at sun.nio.ch.Net.socket(Net.java:411) at sun.nio.ch.Net.socket(Net.java:404) at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:105) at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60) at io.netty.channel.socket.nio.NioSocketChannel.newSocket(NioSocketChannel.java:60) ... 64 common frames omitted Wrapped by: io.netty.channel.ChannelException: Failed to open a socket. at io.netty.channel.socket.nio.NioSocketChannel.newSocket(NioSocketChannel.java:62) at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:72) at org.asynchttpclient.netty.channel.NioSocketChannelFactory.newChannel(NioSocketChannelFactory.java:25) at org.asynchttpclient.netty.channel.NioSocketChannelFactory.newChannel(NioSocketChannelFactory.java:19) at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:308) ``` The problem is that the `newSocket` call isn't wrapped in a try/catch, so the promise isn't notified and the caller crashes. IMHO, such exceptions should be dealt with in the promise. Users shouldn't have to combine two different exception-handling mechanisms, i.e. both wrap the call with a try/catch AND check the future.
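A minimal sketch of the calling pattern the report objects to: because `ChannelFactory.newChannel()` can throw before any promise exists, callers currently need both a try/catch and a future listener. Names below are illustrative only.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;

public final class ConnectFailureHandling {
    private ConnectFailureHandling() { }

    public static void connect(Bootstrap bootstrap, String host, int port) {
        try {
            ChannelFuture future = bootstrap.connect(host, port);
            future.addListener((ChannelFutureListener) f -> {
                if (!f.isSuccess()) {
                    // The path the report wants: failures such as "Too many open files"
                    // surfacing through the promise.
                    f.cause().printStackTrace();
                }
            });
        } catch (Exception e) {
            // The path the report objects to: today a ChannelFactory.newChannel() failure
            // escapes here before any promise has been created.
            e.printStackTrace();
        }
    }
}
```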
[ "transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java" ]
[ "transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java" ]
[ "transport/src/test/java/io/netty/bootstrap/BootstrapTest.java" ]
diff --git a/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java b/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java index 519446c9e67..8513f20cb4c 100644 --- a/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java +++ b/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java @@ -314,11 +314,15 @@ public void operationComplete(ChannelFuture future) throws Exception { } final ChannelFuture initAndRegister() { - final Channel channel = channelFactory.newChannel(); + Channel channel = null; try { + channel = channelFactory.newChannel(); init(channel); } catch (Throwable t) { - channel.unsafe().closeForcibly(); + if (channel != null) { + // channel can be null if newChannel crashed (eg SocketException("too many open files")) + channel.unsafe().closeForcibly(); + } // as the Channel is not registered yet we need to force the usage of the GlobalEventExecutor return new DefaultChannelPromise(channel, GlobalEventExecutor.INSTANCE).setFailure(t); }
diff --git a/transport/src/test/java/io/netty/bootstrap/BootstrapTest.java b/transport/src/test/java/io/netty/bootstrap/BootstrapTest.java index 81292f7cc5d..db00b984144 100644 --- a/transport/src/test/java/io/netty/bootstrap/BootstrapTest.java +++ b/transport/src/test/java/io/netty/bootstrap/BootstrapTest.java @@ -69,7 +69,6 @@ public static void destroy() { @Test(timeout = 10000) public void testBindDeadLock() throws Exception { - final Bootstrap bootstrapA = new Bootstrap(); bootstrapA.group(groupA); bootstrapA.channel(LocalChannel.class); @@ -106,7 +105,6 @@ public void run() { @Test(timeout = 10000) public void testConnectDeadLock() throws Exception { - final Bootstrap bootstrapA = new Bootstrap(); bootstrapA.group(groupA); bootstrapA.channel(LocalChannel.class); @@ -234,7 +232,6 @@ public void testLateRegistrationConnect() throws Exception { @Test public void testAsyncResolutionSuccess() throws Exception { - final Bootstrap bootstrapA = new Bootstrap(); bootstrapA.group(groupA); bootstrapA.channel(LocalChannel.class); @@ -253,7 +250,6 @@ public void testAsyncResolutionSuccess() throws Exception { @Test public void testAsyncResolutionFailure() throws Exception { - final Bootstrap bootstrapA = new Bootstrap(); bootstrapA.group(groupA); bootstrapA.channel(LocalChannel.class); @@ -275,6 +271,28 @@ public void testAsyncResolutionFailure() throws Exception { assertThat(connectFuture.channel().isOpen(), is(false)); } + @Test + public void testChannelFactoryFailureNotifiesPromise() throws Exception { + final RuntimeException exception = new RuntimeException("newChannel crash"); + + final Bootstrap bootstrap = new Bootstrap() + .handler(dummyHandler) + .group(groupA) + .channelFactory(new ChannelFactory<Channel>() { + @Override + public Channel newChannel() { + throw exception; + } + }); + + ChannelFuture connectFuture = bootstrap.connect(LocalAddress.ANY); + + // Should fail with the RuntimeException. + assertThat(connectFuture.await(10000), is(true)); + assertThat(connectFuture.cause(), sameInstance((Throwable) exception)); + assertThat(connectFuture.channel(), is(nullValue())); + } + private static final class DelayedEventLoopGroup extends DefaultEventLoop { @Override public ChannelFuture register(final Channel channel, final ChannelPromise promise) {
train
train
2016-06-13T14:13:40
"2016-06-13T11:48:13Z"
slandelle
val
netty/netty/5382_5405
netty/netty
netty/netty/5382
netty/netty/5405
[ "timestamp(timedelta=73.0, similarity=0.882209135474392)" ]
5e86325a8cb3bbe78440662290980c597c1a2d67
468799eb925b9b019d192093cf4577d0d766eed4
[ "@mrokitka wow there are still people using HTTP/1.0 ? You are right we are currently only support it with HTTP/1.1, not sure if we will ever support HTTP/1.0. But at least we should \"skip\" HTTP/1.0 messages. Let me come up with a PR for this\n", "@normanmaurer - I've run into this since we're using Squid as a caching proxy to handle caching of content for outbound proxy requests from a Netty based proxy server I'm implementing. Squid doesn't fully support HTTP 1.1, so although it handles HTTP compression properly responses are always HTTP 1.0.\n\nKind of a drag, but if the compressor could handle the compression for HTTP 1.0 without the need for chunked encoding that would be great.\n", "@mrokitka the problem with this is that we would need buffer the full content in memory as we will need to set the \"Content-Length\" header. We could do this but not sure if this is really good :)\n", "@normanmaurer perhaps a flag that could be programmatically set on the compressor to indicate if we want to support HTTP 1.0? A true value would compress and buffer to set the content-length header for 1.0 responses, false would just skip 1.0 messages?\n", "@mrokitka maybe... let me see what I can come up with.\n", "Fixed by https://github.com/netty/netty/pull/5405 . \n\n@mrokitka I may come up with a follow up to support compression for http 1.0 as well but currently very busy. If you like you can also contribute a PR.\n", "https://github.com/netty/netty/pull/5405\n" ]
[]
"2016-06-16T19:56:41Z"
[ "defect" ]
HttpContentCompressor/HttpContentEncoder should not set chunked transfer-encoding for HTTP 1.0
Netty version: 4.1.0.Final Context: When using HttpContentCompressor and the HttpResponse is protocol version 1.0, HttpContentEncoder.encode() should not set the transfer-encoding header to chunked. Chunked transfer-encoding is not valid for HTTP 1.0 - this causes ERR_CONTENT_DECODING_FAILED errors in Chrome and similar failures in IE. This is the snippet I'm referring to: // Make the response chunked to simplify content transformation. res.headers().remove(HttpHeaderNames.CONTENT_LENGTH); res.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
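A minimal sketch of the guard being asked for, mirroring the idea of the fix rather than the actual encoder code (the helper class and method names are invented for illustration):

```java
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpVersion;

final class ChunkedRewriteGuard {
    private ChunkedRewriteGuard() { }

    static void makeChunked(HttpResponse res) {
        if (HttpVersion.HTTP_1_0.equals(res.protocolVersion())) {
            // Chunked transfer-encoding does not exist in HTTP/1.0: leave Content-Length
            // in place and pass the response through untouched.
            return;
        }
        // Make the response chunked to simplify content transformation (HTTP/1.1 only).
        res.headers().remove(HttpHeaderNames.CONTENT_LENGTH);
        res.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
    }
}
```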
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java index a20a3fd7897..ec1856673c1 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java @@ -120,9 +120,11 @@ protected void encode(ChannelHandlerContext ctx, HttpObject msg, List<Object> ou * The HEAD method is identical to GET except that the server MUST NOT return a message-body * in the response. * - * This code is now inline with HttpClientDecoder.Decoder + * Also we should pass through HTTP/1.0 as transfer-encoding: chunked is not supported. + * + * See https://github.com/netty/netty/issues/5382 */ - if (isPassthru(code, acceptEncoding)) { + if (isPassthru(res.protocolVersion(), code, acceptEncoding)) { if (isFull) { out.add(ReferenceCountUtil.retain(res)); } else { @@ -203,9 +205,10 @@ protected void encode(ChannelHandlerContext ctx, HttpObject msg, List<Object> ou } } - private static boolean isPassthru(int code, CharSequence httpMethod) { + private static boolean isPassthru(HttpVersion version, int code, CharSequence httpMethod) { return code < 200 || code == 204 || code == 304 || - (httpMethod == ZERO_LENGTH_HEAD || (httpMethod == ZERO_LENGTH_CONNECT && code == 200)); + (httpMethod == ZERO_LENGTH_HEAD || (httpMethod == ZERO_LENGTH_CONNECT && code == 200)) || + version == HttpVersion.HTTP_1_0; } private static void ensureHeaders(HttpObject msg) {
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java index dd774267564..660df3406e5 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java @@ -30,7 +30,10 @@ import static org.hamcrest.CoreMatchers.not; import static org.hamcrest.CoreMatchers.nullValue; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertSame; import static org.junit.Assert.assertThat; +import static org.junit.Assert.assertTrue; public class HttpContentEncoderTest { @@ -332,6 +335,31 @@ public void testConnectFailureResponse() throws Exception { assertThat(ch.readOutbound(), is(nullValue())); } + @Test + public void testHttp1_0() throws Exception { + EmbeddedChannel ch = new EmbeddedChannel(new TestEncoder()); + FullHttpRequest req = new DefaultFullHttpRequest(HttpVersion.HTTP_1_0, HttpMethod.GET, "/"); + assertTrue(ch.writeInbound(req)); + + HttpResponse res = new DefaultHttpResponse(HttpVersion.HTTP_1_0, HttpResponseStatus.OK); + res.headers().set(HttpHeaderNames.CONTENT_LENGTH, HttpHeaderValues.ZERO); + assertTrue(ch.writeOutbound(res)); + assertTrue(ch.writeOutbound(LastHttpContent.EMPTY_LAST_CONTENT)); + assertTrue(ch.finish()); + + FullHttpRequest request = ch.readInbound(); + assertTrue(request.release()); + assertNull(ch.readInbound()); + + HttpResponse response = ch.readOutbound(); + assertSame(res, response); + + LastHttpContent content = ch.readOutbound(); + assertSame(LastHttpContent.EMPTY_LAST_CONTENT, content); + content.release(); + assertNull(ch.readOutbound()); + } + private static void assertEmptyResponse(EmbeddedChannel ch) { Object o = ch.readOutbound(); assertThat(o, is(instanceOf(HttpResponse.class)));
val
train
2016-06-15T18:51:58
"2016-06-10T02:21:46Z"
mrokitka
val
netty/netty/5386_5410
netty/netty
netty/netty/5386
netty/netty/5410
[ "timestamp(timedelta=12.0, similarity=0.9073717107325305)" ]
5e86325a8cb3bbe78440662290980c597c1a2d67
567bc3b6288c7dca90110d63cddd693f15827181
[ "@vietj so what ? I think there is not much we can do without doing some crazy jni stuff :(\n", "now in Vert.x we simply resolve localhost to `127.0.0.1` on Windows. Perhaps doing the same would make sense in Netty too.\n", "Yeah maybe… and logging a warning. Let me do this.\n\n> On 13 Jun 2016, at 11:49, Julien Viet [email protected] wrote:\n> \n> now in Vert.x we simply resolve localhost to 127.0.0.1 on Windows. Perhaps doing the same would make sense in Netty too.\n> \n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub https://github.com/netty/netty/issues/5386#issuecomment-225536427, or mute the thread https://github.com/notifications/unsubscribe/AAa0QrENFN-8OEAFmlqfsVSJxiXLgnmVks5qLSevgaJpZM4I0JNJ.\n", "a workaround is to add this to the list of parsed hosts when it does not exists using the blocking API (which is not a problem usually as this is done in the resolver itself)\n", "i.e you can still get the localhost value once and cache it in a static.\n", "```\n try {\n if (!entries.containsKey(\"localhost\")) {\n entries.put(\"localhost\", InetAddress.getLocalHost());\n }\n } catch (UnknownHostException ignore) {\n }\n builder.hostsFileEntriesResolver(entries::get);\n```\n", "Fixed by https://github.com/netty/netty/pull/5410\n" ]
[]
"2016-06-16T20:09:00Z"
[ "defect" ]
io.netty.resolver.dns.DnsNameResolver does not resolve localhost on Windows
On Windows, localhost is not in the hosts file and the DNS server does not resolve this address either, i.e. it is handled by the Windows API. So using a `Bootstrap` (among others) with a resolver based on `DnsNameResolver` will not resolve localhost. http://serverfault.com/questions/4689/windows-7-localhost-name-resolution-is-handled-within-dns-itself-why
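A possible user-side workaround, sketched from the resolver API referenced in this record (delegating to `HostsFileEntriesResolver.DEFAULT` and falling back to the IPv4 loopback are assumptions, not merged code):

```java
import io.netty.resolver.HostsFileEntriesResolver;
import io.netty.resolver.dns.DnsNameResolverBuilder;
import io.netty.util.NetUtil;

import java.net.InetAddress;

final class LocalhostFallback {
    private LocalhostFallback() { }

    static void configure(DnsNameResolverBuilder builder) {
        final HostsFileEntriesResolver parsedHostsFile = HostsFileEntriesResolver.DEFAULT;
        builder.hostsFileEntriesResolver(new HostsFileEntriesResolver() {
            @Override
            public InetAddress address(String inetHost) {
                InetAddress address = parsedHostsFile.address(inetHost);
                if (address == null && "localhost".equalsIgnoreCase(inetHost)) {
                    // Windows resolves localhost inside the DNS client itself, so the
                    // hosts file lookup comes back empty; fall back to the IPv4 loopback.
                    return NetUtil.LOCALHOST4;
                }
                return address;
            }
        });
    }
}
```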
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java" ]
[]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java index 576a47229e6..a2d8101f812 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java @@ -40,6 +40,7 @@ import io.netty.util.concurrent.FastThreadLocal; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.Promise; +import io.netty.util.internal.PlatformDependent; import io.netty.util.internal.UnstableApi; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; @@ -61,6 +62,8 @@ public class DnsNameResolver extends InetNameResolver { private static final InternalLogger logger = InternalLoggerFactory.getInstance(DnsNameResolver.class); + private static final String LOCALHOST = "localhost"; + private static final InetAddress LOCALHOST_ADDRESS; static final InetSocketAddress ANY_LOCAL_ADDR = new InetSocketAddress(0); @@ -71,10 +74,12 @@ public class DnsNameResolver extends InetNameResolver { if (Boolean.getBoolean("java.net.preferIPv6Addresses")) { DEFAULT_RESOLVE_ADDRESS_TYPES[0] = InternetProtocolFamily.IPv6; DEFAULT_RESOLVE_ADDRESS_TYPES[1] = InternetProtocolFamily.IPv4; + LOCALHOST_ADDRESS = NetUtil.LOCALHOST6; logger.debug("-Djava.net.preferIPv6Addresses: true"); } else { DEFAULT_RESOLVE_ADDRESS_TYPES[0] = InternetProtocolFamily.IPv4; DEFAULT_RESOLVE_ADDRESS_TYPES[1] = InternetProtocolFamily.IPv6; + LOCALHOST_ADDRESS = NetUtil.LOCALHOST4; logger.debug("-Djava.net.preferIPv6Addresses: false"); } } @@ -282,7 +287,18 @@ protected EventLoop executor() { } private InetAddress resolveHostsFileEntry(String hostname) { - return hostsFileEntriesResolver != null ? hostsFileEntriesResolver.address(hostname) : null; + if (hostsFileEntriesResolver == null) { + return null; + } else { + InetAddress address = hostsFileEntriesResolver.address(hostname); + if (address == null && PlatformDependent.isWindows() && LOCALHOST.equalsIgnoreCase(hostname)) { + // If we tried to resolve localhost we need workaround that windows removed localhost from its + // hostfile in later versions. + // See https://github.com/netty/netty/issues/5386 + return LOCALHOST_ADDRESS; + } + return address; + } } @Override
null
train
train
2016-06-15T18:51:58
"2016-06-13T09:27:20Z"
vietj
val
netty/netty/5391_5413
netty/netty
netty/netty/5391
netty/netty/5413
[ "timestamp(timedelta=11.0, similarity=0.9507042869265819)" ]
5e86325a8cb3bbe78440662290980c597c1a2d67
a3a56f4335217310bb8b53d541851bd1933c2fc2
[ "will check...\n\n@vietj thanks for reporting!\n", "it made my day epic :-)\n", "DNS epic day\n", "lol\n", "Fixed by https://github.com/netty/netty/pull/5413\n" ]
[ "Why need to create this `decodeName0` method? Is it for providing a hook method to allow people to change the behavior?\n", "Yes\n\n> Am 16.06.2016 um 22:38 schrieb Xiaoyan Lin [email protected]:\n> \n> In codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java:\n> \n> > @@ -106,7 +106,19 @@ protected DnsRecord decodeRecord(\n> > \\* @param in the byte buffer containing the DNS packet\n> > \\* @return the domain name for an entry\n> > */\n> > - protected String decodeName(ByteBuf in) {\n> > - protected String decodeName0(ByteBuf in) {\n> > Why need to create this decodeName0 method? Is it for providing a hook method to allow people to change the behavior?\n> \n> —\n> You are receiving this because you were assigned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n" ]
"2016-06-16T20:13:04Z"
[ "defect" ]
DnsNameResolver does not properly resolve an A+CNAME answer
The current DnsNameResolver fails to resolve an A+CNAME answer. For example: ``` dig moose.rmq.cloudamqp.com ... ;; ANSWER SECTION: moose.rmq.cloudamqp.com. 1800 IN CNAME ec2-54-152-221-139.compute-1.amazonaws.com. ec2-54-152-221-139.compute-1.amazonaws.com. 583612 IN A 54.152.221.139 ... ``` The resolver constructs a map of CNAMEs but forgets the trailing "." in the values, which means the A record is not resolved: DnsNameResolverContext#buildAliasMap returns a singleton map `moose.rmq.cloudamqp.com.->ec2-54-152-221-139.compute-1.amazonaws.com` without the trailing ".". Then `onResponseAorAAAA` cannot resolve `moose.rmq.cloudamqp.com` to the address `54.152.221.139`, as `if (rName.equals(resolved))` evaluates to false instead of true. A possible fix is to change the last line of `decodeDomainName` to return `name.toString()` instead of `name.substring(0, name.length() - 1)`.
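A small, self-contained sketch of the mismatch described above; the normalisation helper is one hypothetical way to reason about the fix, not the code that was merged:

```java
import java.util.Collections;
import java.util.Map;

final class CnameChainDemo {
    private static String withTrailingDot(String name) {
        return name.endsWith(".") ? name : name + '.';
    }

    public static void main(String[] args) {
        // Keys carry the trailing dot, values do not - exactly the asymmetry described above.
        Map<String, String> cnames = Collections.singletonMap(
                "moose.rmq.cloudamqp.com.", "ec2-54-152-221-139.compute-1.amazonaws.com");
        String aRecordName = "ec2-54-152-221-139.compute-1.amazonaws.com.";

        String resolved = cnames.get("moose.rmq.cloudamqp.com.");
        System.out.println(aRecordName.equals(resolved));                                   // false: the mismatch
        System.out.println(withTrailingDot(aRecordName).equals(withTrailingDot(resolved))); // true once normalised
    }
}
```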
[ "codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java" ]
[ "codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java" ]
[]
diff --git a/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java b/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java index d630413c843..3cafb477f35 100644 --- a/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java +++ b/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java @@ -92,7 +92,7 @@ protected DnsRecord decodeRecord( if (type == DnsRecordType.PTR) { in.setIndex(offset, offset + length); - return new DefaultDnsPtrRecord(name, dnsClass, timeToLive, decodeName(in)); + return new DefaultDnsPtrRecord(name, dnsClass, timeToLive, decodeName0(in)); } return new DefaultDnsRawRecord( name, type, dnsClass, timeToLive, in.retainedDuplicate().setIndex(offset, offset + length)); @@ -106,7 +106,19 @@ protected DnsRecord decodeRecord( * @param in the byte buffer containing the DNS packet * @return the domain name for an entry */ - protected String decodeName(ByteBuf in) { + protected String decodeName0(ByteBuf in) { + return decodeName(in); + } + + /** + * Retrieves a domain name given a buffer containing a DNS packet. If the + * name contains a pointer, the position of the buffer will be set to + * directly after the pointer's index after the name has been read. + * + * @param in the byte buffer containing the DNS packet + * @return the domain name for an entry + */ + public static String decodeName(ByteBuf in) { int position = -1; int checked = 0; final int end = in.writerIndex(); diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java index 8a10f2e56a1..cfeb057e362 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java @@ -20,6 +20,7 @@ import io.netty.buffer.ByteBufHolder; import io.netty.channel.AddressedEnvelope; import io.netty.channel.socket.InternetProtocolFamily; +import io.netty.handler.codec.CorruptedFrameException; import io.netty.handler.codec.dns.DefaultDnsQuestion; import io.netty.handler.codec.dns.DefaultDnsRecordDecoder; import io.netty.handler.codec.dns.DnsResponseCode; @@ -29,7 +30,6 @@ import io.netty.handler.codec.dns.DnsRecord; import io.netty.handler.codec.dns.DnsRecordType; import io.netty.handler.codec.dns.DnsResponse; -import io.netty.util.CharsetUtil; import io.netty.util.ReferenceCountUtil; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; @@ -438,53 +438,15 @@ private void finishResolve() { protected abstract boolean finishResolve( Class<? extends InetAddress> addressType, List<DnsCacheEntry> resolvedEntries); - /** - * Adapted from {@link DefaultDnsRecordDecoder#decodeName(ByteBuf)}. - */ - static String decodeDomainName(ByteBuf buf) { - buf.markReaderIndex(); + static String decodeDomainName(ByteBuf in) { + in.markReaderIndex(); try { - int position = -1; - int checked = 0; - final int end = buf.writerIndex(); - final StringBuilder name = new StringBuilder(buf.readableBytes() << 1); - for (int len = buf.readUnsignedByte(); buf.isReadable() && len != 0; len = buf.readUnsignedByte()) { - boolean pointer = (len & 0xc0) == 0xc0; - if (pointer) { - if (position == -1) { - position = buf.readerIndex() + 1; - } - - final int next = (len & 0x3f) << 8 | buf.readUnsignedByte(); - if (next >= end) { - // Should not happen. 
- return null; - } - buf.readerIndex(next); - - // check for loops - checked += 2; - if (checked >= end) { - // Name contains a loop; give up. - return null; - } - } else { - name.append(buf.toString(buf.readerIndex(), len, CharsetUtil.UTF_8)).append('.'); - buf.skipBytes(len); - } - } - - if (position != -1) { - buf.readerIndex(position); - } - - if (name.length() == 0) { - return null; - } - - return name.substring(0, name.length() - 1); + return DefaultDnsRecordDecoder.decodeName(in); + } catch (CorruptedFrameException e) { + // In this case we just return null. + return null; } finally { - buf.resetReaderIndex(); + in.resetReaderIndex(); } }
null
test
train
2016-06-15T18:51:58
"2016-06-13T18:32:04Z"
vietj
val
netty/netty/5375_5418
netty/netty
netty/netty/5375
netty/netty/5418
[ "timestamp(timedelta=3.0, similarity=0.8565223779274694)" ]
9687d77b5ab1ca6b3bfd57dab8b71b796ad45b52
16bf3e5e2129e72aabdb382514e96f856512d4b7
[ "Thanks for reporting @npordash! Are you in a situation where `listener.onDataRead` is called multiple times in `DelegatingDecompressorFrameListener`, and you are trying to call `consumeBytes(..)` for data which was not returned by previous calls to `Http2FrameListener.onDataRead`?\n", "> I tested this with both 4.1.0.Final and 4.1.1.Final.\n\nDo you have a unit test which you would be willing to contribute? I have a PR but it would be nice to verify against a unit test if you already have one.\n", "Thanks for the quick response!\n\nI'm not in a situation where `onDataRead` is being called multiple times, at least I have yet to observe that happening. The exception gets thrown on the first data frame received.\n\nMy use-case is that I'm exposing inbound data frames as an `rx.Observable<ByteBuf>` and I only want to report the processing of the bytes when a `ByteBuf` has been sent to a subscriber of the `Observable`. It's possible that the `ByteBuf` is sent to a subscriber inside the call to `onDataRead` if the processing is synchronous which means that the call to `consumeBytes` might happen before `onDataRead` returns as well.\n\nUnfortunately, I do not have an existing unit test for this yet. I stumbled across this while I was prototyping which involved manual testing with `nghttp` to send data to a netty http2 server 😦 \n", "> which means that the call to consumeBytes might happen before onDataRead returns as well.\n\nAs long as you are not double releasing the bytes you should be OK. I think I found the issue and I'll ask you to verify with the PR when its ready.\n", "Awesome, thanks again!\n" ]
[ "nit: spelling on the method name ... should be `...Bytes`\n", "Is this condition ever possible? I think it only would be if padding were `< 0`, which will never happen. Suggest either getting rid of this condition or moving it somewhere more clear.\n", "Same comment as below.\n", "Don't we also have to remove the property from the stream?\n", "Don't we have to mark these bytes as consumed?\n", "+1\n", "I guess this is no longer needed now that it is being used strictly to increment ... overflow should be guarded by flow control.\n", "removed\n", "There was an ordering issue related to the stream being removed if an error occurred and attempting to return bytes to flow control. So if we removed the property we would not be able to return bytes to flow control in this condition. Ideally we would remove it, but I don't think it hurts anything to leave it in the property map.\n", "Good point ... we should consume immediately instead of returning the bytes. I'll add a comment on the return statement too.\n", "Shouldn't this be a call to `decompressor.consumeBytes()`?\n", "The `decompressor` just keeps state to be able to convert bytes the users sees (decompressed) to bytes the flow controller knows about (compressed). `ConsumedBytesConverter` wraps the flow controller so that when ever bytes are returned it uses the `decompressor` to translate from what the user knows (decompressed) to what the flow controller knows (compressed). So I think we actually want to return bytes to the flow controller here ... this is effectively what is done when bytes are returned by this method (done by `DefaultHttp2ConnectionDecoder`) however bcz there may be multiple frames decompressed it makes sense to return the bytes immediately instead of waiting for the method to return.\n", "@nmittler - WDYT?\n", "@Scottmitch Ah I had forgotten that the flow controller is actually the `ConsumedBytesConverter`. Maybe add a comment?\n", "done\n" ]
"2016-06-16T20:18:24Z"
[ "defect" ]
DelegatingDecompressorFrameListener doesn't support deferral of processed bytes
`Http2FrameListener.onDataRead` states that reporting of processed bytes to the local flow controller can be deferred by returning something less than `data + padding` and then later doing something equivalent to `connection().local().flowController().consumeBytes(Http2Stream, int)`. This works fine until you start using a `DelegatingDecompressorFrameListener` because once you call `consumeBytes` you end up with the following exception: ``` io.netty.handler.codec.http2.Http2Exception: Error while returning bytes to flow control window at io.netty.handler.codec.http2.DelegatingDecompressorFrameListener$ConsumedBytesConverter.consumeBytes(DelegatingDecompressorFrameListener.java:361) ... Caused by: java.lang.IllegalArgumentException: processed bytes cannot be negative at io.netty.handler.codec.http2.DelegatingDecompressorFrameListener$Http2Decompressor.incrementProcessedBytes(DelegatingDecompressorFrameListener.java:409) at io.netty.handler.codec.http2.DelegatingDecompressorFrameListener$Http2Decompressor.consumeProcessedBytes(DelegatingDecompressorFrameListener.java:445) at io.netty.handler.codec.http2.DelegatingDecompressorFrameListener$ConsumedBytesConverter.consumeBytes(DelegatingDecompressorFrameListener.java:349) ``` As far as I can tell, the reason for this is that `Http2Decompressor` expects `incrementProcessedBytes` to be called prior to `consumeProcessedBytes` and you can't consume more than has been incremented, which makes sense, but `DelegatingDecompressorFrameListener.onDataRead` only calls `incrementProcessedBytes` _after_ all calls to `onDataRead` return and only increments the processed bytes based on what was accumulated from the `onDataRead` calls, which effectively means you need to fully consume the buffer in `onDataRead`. I tested this with both 4.1.0.Final and 4.1.1.Final.
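For context, the deferral contract the report relies on can be sketched as follows. This is a minimal illustration written for this record, not code from the issue; the listener class name, the `process(...)` hook, and the point at which `onProcessed(...)` would be invoked are all hypothetical, while the reporter's real code hands the buffers to an `rx.Observable<ByteBuf>` instead.

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http2.Http2Connection;
import io.netty.handler.codec.http2.Http2Exception;
import io.netty.handler.codec.http2.Http2FrameAdapter;
import io.netty.handler.codec.http2.Http2Stream;

public class DeferredConsumeListener extends Http2FrameAdapter {
    private final Http2Connection connection;

    public DeferredConsumeListener(Http2Connection connection) {
        this.connection = connection;
    }

    @Override
    public int onDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data,
                          int padding, boolean endOfStream) throws Http2Exception {
        int bytes = data.readableBytes() + padding;
        // Hand the data off to the application (retain it because the caller releases it).
        process(streamId, data.retain(), bytes);
        // Defer flow-control accounting: report nothing as consumed for now.
        return 0;
    }

    // Invoked later by the application once the bytes have really been processed.
    void onProcessed(int streamId, int bytes) throws Http2Exception {
        Http2Stream stream = connection.stream(streamId);
        if (stream != null) {
            connection.local().flowController().consumeBytes(stream, bytes);
        }
    }

    // Application-specific handling; a no-op placeholder in this sketch.
    protected void process(int streamId, ByteBuf data, int bytes) {
        data.release();
    }
}
```

The listener returns `0` immediately and only reports the bytes once the application is done with them, which is exactly the sequence that trips the `DelegatingDecompressorFrameListener` described above.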
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java" ]
[]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java index 50439c9fe7a..78ef230c62f 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java @@ -44,7 +44,7 @@ public class DelegatingDecompressorFrameListener extends Http2FrameListenerDecor private final Http2Connection connection; private final boolean strict; private boolean flowControllerInitialized; - final Http2Connection.PropertyKey propertyKey; + private final Http2Connection.PropertyKey propertyKey; public DelegatingDecompressorFrameListener(Http2Connection connection, Http2FrameListener listener) { this(connection, listener, true); @@ -62,7 +62,7 @@ public DelegatingDecompressorFrameListener(Http2Connection connection, Http2Fram public void onStreamRemoved(Http2Stream stream) { final Http2Decompressor decompressor = decompressor(stream); if (decompressor != null) { - cleanup(stream, decompressor); + cleanup(decompressor); } } }); @@ -80,7 +80,6 @@ public int onDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data, int final EmbeddedChannel channel = decompressor.decompressor(); final int compressedBytes = data.readableBytes() + padding; - int processedBytes = 0; decompressor.incrementCompressedBytes(compressedBytes); try { // call retain here as it will call release after its written to the channel @@ -97,43 +96,44 @@ public int onDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data, int // not be provided with data and thus could not return how many bytes were processed. We will assume // there is more data coming which will complete the decompression block. To allow for more data we // return all bytes to the flow control window (so the peer can send more data). - decompressor.incrementDecompressedByes(compressedBytes); - processedBytes = compressedBytes; - } else { - try { - decompressor.incrementDecompressedByes(padding); - for (;;) { - ByteBuf nextBuf = nextReadableBuf(channel); - boolean decompressedEndOfStream = nextBuf == null && endOfStream; - if (decompressedEndOfStream && channel.finish()) { - nextBuf = nextReadableBuf(channel); - decompressedEndOfStream = nextBuf == null; - } - - decompressor.incrementDecompressedByes(buf.readableBytes()); - processedBytes += listener.onDataRead(ctx, streamId, buf, padding, decompressedEndOfStream); - if (nextBuf == null) { - break; - } - - padding = 0; // Padding is only communicated once on the first iteration - buf.release(); - buf = nextBuf; + decompressor.incrementDecompressedBytes(compressedBytes); + return compressedBytes; + } + try { + Http2LocalFlowController flowController = connection.local().flowController(); + decompressor.incrementDecompressedBytes(padding); + for (;;) { + ByteBuf nextBuf = nextReadableBuf(channel); + boolean decompressedEndOfStream = nextBuf == null && endOfStream; + if (decompressedEndOfStream && channel.finish()) { + nextBuf = nextReadableBuf(channel); + decompressedEndOfStream = nextBuf == null; + } + + decompressor.incrementDecompressedBytes(buf.readableBytes()); + // Immediately return the bytes back to the flow controller. ConsumedBytesConverter will convert + // from the decompressed amount which the user knows about to the compressed amount which flow + // control knows about. 
+ flowController.consumeBytes(stream, + listener.onDataRead(ctx, streamId, buf, padding, decompressedEndOfStream)); + if (nextBuf == null) { + break; } - } finally { + + padding = 0; // Padding is only communicated once on the first iteration. buf.release(); + buf = nextBuf; } + // We consume bytes each time we call the listener to ensure if multiple frames are decompressed + // that the bytes are accounted for immediately. Otherwise the user may see an inconsistent state of + // flow control. + return 0; + } finally { + buf.release(); } - decompressor.incrementProcessedBytes(processedBytes); - // The processed bytes will be translated to pre-decompressed byte amounts by DecompressorGarbageCollector - return processedBytes; } catch (Http2Exception e) { - // Consider all the bytes consumed because there was an error - decompressor.incrementProcessedBytes(compressedBytes); throw e; } catch (Throwable t) { - // Consider all the bytes consumed because there was an error - decompressor.incrementProcessedBytes(compressedBytes); throw streamError(stream.id(), INTERNAL_ERROR, t, "Decompressor error detected while delegating data read on streamId %d", stream.id()); } @@ -250,24 +250,12 @@ Http2Decompressor decompressor(Http2Stream stream) { } /** - * Release remaining content from the {@link EmbeddedChannel} and remove the decompressor - * from the {@link Http2Stream}. + * Release remaining content from the {@link EmbeddedChannel}. * - * @param stream The stream for which {@code decompressor} is the decompressor for * @param decompressor The decompressor for {@code stream} */ - private void cleanup(Http2Stream stream, Http2Decompressor decompressor) { - final EmbeddedChannel channel = decompressor.decompressor(); - if (channel.finish()) { - for (;;) { - final ByteBuf buf = channel.readInbound(); - if (buf == null) { - break; - } - buf.release(); - } - } - decompressor = stream.removeProperty(propertyKey); + private static void cleanup(Http2Decompressor decompressor) { + decompressor.decompressor().finishAndReleaseAll(); } /** @@ -340,26 +328,18 @@ public void receiveFlowControlledFrame(Http2Stream stream, ByteBuf data, int pad @Override public boolean consumeBytes(Http2Stream stream, int numBytes) throws Http2Exception { Http2Decompressor decompressor = decompressor(stream); - Http2Decompressor copy = null; + if (decompressor != null) { + // Convert the decompressed bytes to compressed (on the wire) bytes. + numBytes = decompressor.consumeBytes(stream.id(), numBytes); + } try { - if (decompressor != null) { - // Make a copy before hand in case any exceptions occur we will roll back the state - copy = new Http2Decompressor(decompressor); - // Convert the uncompressed consumed bytes to compressed (on the wire) bytes. - numBytes = decompressor.consumeProcessedBytes(numBytes); - } return flowController.consumeBytes(stream, numBytes); } catch (Http2Exception e) { - if (copy != null) { - stream.setProperty(propertyKey, copy); - } throw e; } catch (Throwable t) { - if (copy != null) { - stream.setProperty(propertyKey, copy); - } - throw new Http2Exception(INTERNAL_ERROR, - "Error while returning bytes to flow control window", t); + // The stream should be closed at this point. We have already changed our state tracking the compressed + // bytes, and there is no guarantee we can recover if the underlying flow controller throws. 
+ throw streamError(stream.id(), INTERNAL_ERROR, t, "Error while returning bytes to flow control window"); } } @@ -379,17 +359,9 @@ public int initialWindowSize(Http2Stream stream) { */ private static final class Http2Decompressor { private final EmbeddedChannel decompressor; - private int processed; private int compressed; private int decompressed; - Http2Decompressor(Http2Decompressor rhs) { - this(rhs.decompressor); - processed = rhs.processed; - compressed = rhs.compressed; - decompressed = rhs.decompressed; - } - Http2Decompressor(EmbeddedChannel decompressor) { this.decompressor = decompressor; } @@ -401,53 +373,49 @@ EmbeddedChannel decompressor() { return decompressor; } - /** - * Increment the number of decompressed bytes processed by the application. - */ - void incrementProcessedBytes(int delta) { - if (processed + delta < 0) { - throw new IllegalArgumentException("processed bytes cannot be negative"); - } - processed += delta; - } - /** * Increment the number of bytes received prior to doing any decompression. */ void incrementCompressedBytes(int delta) { - if (compressed + delta < 0) { - throw new IllegalArgumentException("compressed bytes cannot be negative"); - } + assert delta >= 0; compressed += delta; } /** - * Increment the number of bytes after the decompression process. Under normal circumstances this - * delta should not exceed {@link Http2Decompressor#processed)}. + * Increment the number of bytes after the decompression process. */ - void incrementDecompressedByes(int delta) { - if (decompressed + delta < 0) { - throw new IllegalArgumentException("decompressed bytes cannot be negative"); - } + void incrementDecompressedBytes(int delta) { + assert delta >= 0; decompressed += delta; } /** - * Decrements {@link Http2Decompressor#processed} by {@code processedBytes} and determines the ratio - * between {@code processedBytes} and {@link Http2Decompressor#decompressed}. + * Determines the ratio between {@code numBytes} and {@link Http2Decompressor#decompressed}. * This ratio is used to decrement {@link Http2Decompressor#decompressed} and * {@link Http2Decompressor#compressed}. - * @param processedBytes The number of post-decompressed bytes that have been processed. + * @param streamId the stream ID + * @param decompressedBytes The number of post-decompressed bytes to return to flow control * @return The number of pre-decompressed bytes that have been consumed. */ - int consumeProcessedBytes(int processedBytes) { - // Consume the processed bytes first to verify that is is a valid amount - incrementProcessedBytes(-processedBytes); - - double consumedRatio = processedBytes / (double) decompressed; + int consumeBytes(int streamId, int decompressedBytes) throws Http2Exception { + if (decompressedBytes < 0) { + throw new IllegalArgumentException("decompressedBytes must not be negative: " + decompressedBytes); + } + if (decompressed - decompressedBytes < 0) { + throw streamError(streamId, INTERNAL_ERROR, + "Attempting to return too many bytes for stream %d. 
decompressed: %d " + + "decompressedBytes: %d", streamId, decompressed, decompressedBytes); + } + double consumedRatio = decompressedBytes / (double) decompressed; int consumedCompressed = Math.min(compressed, (int) Math.ceil(compressed * consumedRatio)); - incrementDecompressedByes(-Math.min(decompressed, (int) Math.ceil(decompressed * consumedRatio))); - incrementCompressedBytes(-consumedCompressed); + if (compressed - consumedCompressed < 0) { + throw streamError(streamId, INTERNAL_ERROR, + "overflow when converting decompressed bytes to compressed bytes for stream %d." + + "decompressedBytes: %d decompressed: %d compressed: %d consumedCompressed: %d", + streamId, decompressedBytes, decompressed, compressed, consumedCompressed); + } + decompressed -= decompressedBytes; + compressed -= consumedCompressed; return consumedCompressed; }
null
val
train
2016-06-20T14:23:47
"2016-06-08T20:15:56Z"
npordash
val
netty/netty/5402_5419
netty/netty
netty/netty/5402
netty/netty/5419
[ "timestamp(timedelta=22.0, similarity=0.9334949369019993)" ]
ee0897a1d9d80119990964834d62f906f800afb8
8abd03214e34f657c6cf6b460294103f155c1d99
[ "@caillette yet that sounds not right... let me come up with a fix quickly.\n", "Fixed https://github.com/netty/netty/pull/5419\n" ]
[ "`HttpScheme.HTTPS.port()`\n", "consider using `HttpScheme` here\n", "nit: you could just do `return originValue + ':' + wsPort;`\n", "nit: HttpScheme also provides the `https` portion with `HttpScheme.name()`.\n" ]
"2016-06-16T20:20:40Z"
[ "defect" ]
sec-websocket-origin should mention HTTPS
I'm using Netty WebSockets with HTTPS. When performing the handshake, the header field `sec-websocket-origin` is set to a value starting with `http://`, which looks weird. There is nothing about `sec-websocket-origin` in the RFC. The string `http` is set unconditionally in `io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13`, line 165. I'm using Netty-4.1.0.CR7.
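As a rough sketch of the scheme-aware behaviour being asked for (the class and method names here are illustrative only; the actual fix in the gold_patch below centralizes this logic in `WebSocketClientHandshaker.websocketOriginValue(...)` using `HttpScheme`):

```java
import java.net.URI;

final class OriginValues {
    // Derive the origin header value from the WebSocket URI's scheme
    // instead of always prefixing "http://".
    static String originValue(URI wsUrl) {
        boolean secure = "wss".equalsIgnoreCase(wsUrl.getScheme());
        int port = wsUrl.getPort() == -1 ? (secure ? 443 : 80) : wsUrl.getPort();
        String origin = (secure ? "https" : "http") + "://" + wsUrl.getHost();
        // Non-default ports must be appended, see RFC 6454 section 6.2.
        return port != 80 && port != 443 ? origin + ':' + port : origin;
    }

    private OriginValues() { }
}
```

For `wss://example.com` this yields `https://example.com`, and a non-default port such as `wss://example.com:8443` yields `https://example.com:8443`.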
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java index 4f88bc7cbad..f91e4c7be0d 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java @@ -32,6 +32,7 @@ import io.netty.handler.codec.http.HttpRequestEncoder; import io.netty.handler.codec.http.HttpResponse; import io.netty.handler.codec.http.HttpResponseDecoder; +import io.netty.handler.codec.http.HttpScheme; import io.netty.util.ReferenceCountUtil; import io.netty.util.internal.EmptyArrays; import io.netty.util.internal.StringUtil; @@ -444,4 +445,26 @@ static String rawPath(URI wsURL) { return path == null || path.isEmpty() ? "/" : path; } + + static int websocketPort(URI wsURL) { + // Format request + int wsPort = wsURL.getPort(); + // check if the URI contained a port if not set the correct one depending on the schema. + // See https://github.com/netty/netty/pull/1558 + if (wsPort == -1) { + return "wss".equals(wsURL.getScheme()) ? HttpScheme.HTTPS.port() : HttpScheme.HTTP.port(); + } + return wsPort; + } + + static CharSequence websocketOriginValue(String host, int wsPort) { + String originValue = (wsPort == HttpScheme.HTTPS.port() ? + HttpScheme.HTTPS.name() : HttpScheme.HTTP.name()) + "://" + host; + if (wsPort != HttpScheme.HTTP.port() && wsPort != HttpScheme.HTTPS.port()) { + // if the port is not standard (80/443) its needed to add the port to the header. + // See http://tools.ietf.org/html/rfc6454#section-6.2 + return originValue + ':' + wsPort; + } + return originValue; + } } diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java index b077f9f543b..d0660690648 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java @@ -127,23 +127,16 @@ protected FullHttpRequest newHandshakeRequest() { // Get path URI wsURL = uri(); String path = rawPath(wsURL); + int wsPort = websocketPort(wsURL); + String host = wsURL.getHost(); // Format request FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, path); HttpHeaders headers = request.headers(); headers.add(HttpHeaderNames.UPGRADE, WEBSOCKET) .add(HttpHeaderNames.CONNECTION, HttpHeaderValues.UPGRADE) - .add(HttpHeaderNames.HOST, wsURL.getHost()); - - int wsPort = wsURL.getPort(); - String originValue = "http://" + wsURL.getHost(); - if (wsPort != 80 && wsPort != 443) { - // if the port is not standard (80/443) its needed to add the port to the header. 
- // See http://tools.ietf.org/html/rfc6454#section-6.2 - originValue = originValue + ':' + wsPort; - } - - headers.add(HttpHeaderNames.ORIGIN, originValue) + .add(HttpHeaderNames.HOST, host) + .add(HttpHeaderNames.ORIGIN, websocketOriginValue(host, wsPort)) .add(HttpHeaderNames.SEC_WEBSOCKET_KEY1, key1) .add(HttpHeaderNames.SEC_WEBSOCKET_KEY2, key2); diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java index c2618b2e47a..d8e670f87a2 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java @@ -141,6 +141,9 @@ protected FullHttpRequest newHandshakeRequest() { key, expectedChallengeResponseString); } + int wsPort = websocketPort(wsURL); + String host = wsURL.getHost(); + // Format request FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, path); HttpHeaders headers = request.headers(); @@ -148,16 +151,8 @@ protected FullHttpRequest newHandshakeRequest() { headers.add(HttpHeaderNames.UPGRADE, HttpHeaderValues.WEBSOCKET) .add(HttpHeaderNames.CONNECTION, HttpHeaderValues.UPGRADE) .add(HttpHeaderNames.SEC_WEBSOCKET_KEY, key) - .add(HttpHeaderNames.HOST, wsURL.getHost()); - - int wsPort = wsURL.getPort(); - String originValue = "http://" + wsURL.getHost(); - if (wsPort != 80 && wsPort != 443) { - // if the port is not standard (80/443) its needed to add the port to the header. - // See http://tools.ietf.org/html/rfc6454#section-6.2 - originValue = originValue + ':' + wsPort; - } - headers.add(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, originValue); + .add(HttpHeaderNames.HOST, host) + .add(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, websocketOriginValue(host, wsPort)); String expectedSubprotocol = expectedSubprotocol(); if (expectedSubprotocol != null && !expectedSubprotocol.isEmpty()) { diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java index d589b7729c6..e5ada055b4b 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java @@ -142,6 +142,9 @@ protected FullHttpRequest newHandshakeRequest() { key, expectedChallengeResponseString); } + int wsPort = websocketPort(wsURL); + String host = wsURL.getHost(); + // Format request FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, path); HttpHeaders headers = request.headers(); @@ -149,16 +152,8 @@ protected FullHttpRequest newHandshakeRequest() { headers.add(HttpHeaderNames.UPGRADE, HttpHeaderValues.WEBSOCKET) .add(HttpHeaderNames.CONNECTION, HttpHeaderValues.UPGRADE) .add(HttpHeaderNames.SEC_WEBSOCKET_KEY, key) - .add(HttpHeaderNames.HOST, wsURL.getHost()); - - int wsPort = wsURL.getPort(); - String originValue = "http://" + wsURL.getHost(); - if (wsPort != 80 && wsPort != 443) { - // if the port is not standard (80/443) its needed to add the port to the header. 
- // See http://tools.ietf.org/html/rfc6454#section-6.2 - originValue = originValue + ':' + wsPort; - } - headers.add(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, originValue); + .add(HttpHeaderNames.HOST, host) + .add(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, websocketOriginValue(host, wsPort)); String expectedSubprotocol = expectedSubprotocol(); if (expectedSubprotocol != null && !expectedSubprotocol.isEmpty()) { diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java index 37a77c72aa1..a311b83fbd2 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java @@ -143,32 +143,16 @@ protected FullHttpRequest newHandshakeRequest() { } // Format request - int wsPort = wsURL.getPort(); - // check if the URI contained a port if not set the correct one depending on the schema. - // See https://github.com/netty/netty/pull/1558 - if (wsPort == -1) { - if ("wss".equals(wsURL.getScheme())) { - wsPort = 443; - } else { - wsPort = 80; - } - } - + int wsPort = websocketPort(wsURL); + String host = wsURL.getHost(); FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, path); HttpHeaders headers = request.headers(); headers.add(HttpHeaderNames.UPGRADE, HttpHeaderValues.WEBSOCKET) .add(HttpHeaderNames.CONNECTION, HttpHeaderValues.UPGRADE) .add(HttpHeaderNames.SEC_WEBSOCKET_KEY, key) - .add(HttpHeaderNames.HOST, wsURL.getHost() + ':' + wsPort); - - String originValue = "http://" + wsURL.getHost(); - if (wsPort != 80 && wsPort != 443) { - // if the port is not standard (80/443) its needed to add the port to the header. - // See http://tools.ietf.org/html/rfc6454#section-6.2 - originValue = originValue + ':' + wsPort; - } - headers.add(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, originValue); + .add(HttpHeaderNames.HOST, host + ':' + wsPort) + .add(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, websocketOriginValue(host, wsPort)); String expectedSubprotocol = expectedSubprotocol(); if (expectedSubprotocol != null && !expectedSubprotocol.isEmpty()) {
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java index 168a2458d17..91963102fde 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java @@ -15,11 +15,41 @@ */ package io.netty.handler.codec.http.websocketx; +import io.netty.handler.codec.http.FullHttpRequest; +import io.netty.handler.codec.http.HttpHeaderNames; +import org.junit.Test; + import java.net.URI; +import static org.junit.Assert.assertEquals; + public class WebSocketClientHandshaker07Test extends WebSocketClientHandshakerTest { @Override protected WebSocketClientHandshaker newHandshaker(URI uri) { return new WebSocketClientHandshaker07(uri, WebSocketVersion.V07, null, false, null, 1024); } + + @Test + public void testSecOriginWss() { + URI uri = URI.create("wss://localhost/path%20with%20ws"); + WebSocketClientHandshaker handshaker = newHandshaker(uri); + FullHttpRequest request = handshaker.newHandshakeRequest(); + try { + assertEquals("https://localhost", request.headers().get(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN)); + } finally { + request.release(); + } + } + + @Test + public void testSecOriginWs() { + URI uri = URI.create("ws://localhost/path%20with%20ws"); + WebSocketClientHandshaker handshaker = newHandshaker(uri); + FullHttpRequest request = handshaker.newHandshakeRequest(); + try { + assertEquals("http://localhost", request.headers().get(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN)); + } finally { + request.release(); + } + } } diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java index 249bd958fb0..4ce8016adda 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java @@ -17,7 +17,7 @@ import java.net.URI; -public class WebSocketClientHandshaker08Test extends WebSocketClientHandshakerTest { +public class WebSocketClientHandshaker08Test extends WebSocketClientHandshaker07Test { @Override protected WebSocketClientHandshaker newHandshaker(URI uri) { return new WebSocketClientHandshaker07(uri, WebSocketVersion.V08, null, false, null, 1024); diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java index 2bc2e691b22..ad89fde6bc1 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java @@ -17,7 +17,7 @@ import java.net.URI; -public class WebSocketClientHandshaker13Test extends WebSocketClientHandshakerTest { +public class WebSocketClientHandshaker13Test extends WebSocketClientHandshaker07Test { @Override protected WebSocketClientHandshaker newHandshaker(URI uri) { return new WebSocketClientHandshaker13(uri, WebSocketVersion.V13, null, false, null, 1024);
train
train
2016-06-20T07:43:06
"2016-06-16T17:34:16Z"
caillette
val
netty/netty/5431_5437
netty/netty
netty/netty/5431
netty/netty/5437
[ "timestamp(timedelta=14.0, similarity=0.908510300263825)" ]
b4d4c0034d6f9e5e5884cc70e625d9f2655008bd
12e416bbca7dd753731403fb5db879d7f056ea3c
[ "@trustin PTAL\n\n> Am 20.06.2016 um 16:09 schrieb Franck Chevassu [email protected]:\n> \n> [ netty-4.1.1.Final ]\n> \n> There seem to be a couple of things in HttpProxyHandler that do not comply with RFC 2817 (Upgrading to TLS Within HTTP/1.1)\n> \n> The HTTP version required to support the CONNECT method is 1.1,\n> whereas Netty's HttpProxyHandler class specifies a 1.0 version (line 123)\n> \n> The \"Host\" header should apparently hold the name of the target host,\n> whereas Netty specifies the proxy address instead, (lines 127-131).\n> \n> Changing the original code in lines 122-131 :\n> \n> FullHttpRequest req = new DefaultFullHttpRequest(\n> HttpVersion.HTTP_1_0, HttpMethod.CONNECT,\n> rhost + ':' + raddr.getPort(),\n> Unpooled.EMPTY_BUFFER, false);\n> \n> SocketAddress proxyAddress = proxyAddress();\n> if (proxyAddress instanceof InetSocketAddress) {\n> InetSocketAddress hostAddr = (InetSocketAddress) proxyAddress;\n> req.headers().set(HttpHeaderNames.HOST, hostAddr.getHostString() + ':' + hostAddr.getPort());\n> }\n> To something like this would solve both issues :\n> \n> final String host = rhost + ':' + raddr.getPort();\n> \n> FullHttpRequest req = new DefaultFullHttpRequest(\n> HttpVersion.HTTP_1_1, HttpMethod.CONNECT,\n> host,\n> Unpooled.EMPTY_BUFFER, false);\n> \n> req.headers().set(HttpHeaderNames.HOST, host);\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@fchevassu Care to send a PR? I don't have enough bandwidth these days.\n", "Fixed by https://github.com/netty/netty/pull/5437\n" ]
[]
"2016-06-22T12:39:59Z"
[ "defect" ]
Incorrect protocol version & "host" header in HttpProxyHandler
[ netty-4.1.1.Final ] There seem to be a couple of things in HttpProxyHandler that do not comply with [RFC 2817](http://www.faqs.org/rfcs/rfc2817.html) (Upgrading to TLS Within HTTP/1.1) - The HTTP version required to support the CONNECT method is 1.1, whereas Netty's HttpProxyHandler class specifies a 1.0 version (line 123) - The "Host" header should apparently hold the name of the target host, whereas Netty specifies the proxy address instead (lines 127-131). Changing the original code in lines 122-131: ``` FullHttpRequest req = new DefaultFullHttpRequest( HttpVersion.HTTP_1_0, HttpMethod.CONNECT, rhost + ':' + raddr.getPort(), Unpooled.EMPTY_BUFFER, false); SocketAddress proxyAddress = proxyAddress(); if (proxyAddress instanceof InetSocketAddress) { InetSocketAddress hostAddr = (InetSocketAddress) proxyAddress; req.headers().set(HttpHeaderNames.HOST, hostAddr.getHostString() + ':' + hostAddr.getPort()); } ``` to something like this would solve both issues: ``` final String host = rhost + ':' + raddr.getPort(); FullHttpRequest req = new DefaultFullHttpRequest( HttpVersion.HTTP_1_1, HttpMethod.CONNECT, host, Unpooled.EMPTY_BUFFER, false); req.headers().set(HttpHeaderNames.HOST, host); ```
[ "handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java" ]
[ "handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java" ]
[]
diff --git a/handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java b/handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java index 376b319c02a..53d6fbb823d 100644 --- a/handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java +++ b/handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java @@ -119,16 +119,13 @@ protected Object newInitialMessage(ChannelHandlerContext ctx) throws Exception { rhost = raddr.getAddress().getHostAddress(); } + final String host = rhost + ':' + raddr.getPort(); FullHttpRequest req = new DefaultFullHttpRequest( - HttpVersion.HTTP_1_0, HttpMethod.CONNECT, - rhost + ':' + raddr.getPort(), + HttpVersion.HTTP_1_1, HttpMethod.CONNECT, + host, Unpooled.EMPTY_BUFFER, false); - SocketAddress proxyAddress = proxyAddress(); - if (proxyAddress instanceof InetSocketAddress) { - InetSocketAddress hostAddr = (InetSocketAddress) proxyAddress; - req.headers().set(HttpHeaderNames.HOST, hostAddr.getHostString() + ':' + hostAddr.getPort()); - } + req.headers().set(HttpHeaderNames.HOST, host); if (authorization != null) { req.headers().set(HttpHeaderNames.PROXY_AUTHORIZATION, authorization);
null
train
train
2016-06-22T14:26:05
"2016-06-20T14:09:27Z"
fchevassu
val
netty/netty/5436_5446
netty/netty
netty/netty/5436
netty/netty/5446
[ "timestamp(timedelta=41.0, similarity=1.0000000000000002)" ]
2562ef7cbebe5eb4345cdde6323aeb0113c3f86c
9a057aee7e63a3d0e5fdebb1b4526b1d6f63d661
[ "`HpackUtil.equals` currently implements an equality comparison which \"doesn't leak timing information\" [1]. @jpinner - can you provide some motivation for why this type of comparison is necessary in this specific case?\n\n[1] https://github.com/twitter/hpack/commit/84e3438ae60efad587c7058c87b04064d7d6a536#diff-d23f44552b6381004428efb52e1667baR26\n", "@Scottmitch I think it has something to do with security http://httpwg.org/specs/rfc7541.html#Security\n", "@buchgr that's correct\n@Scottmitch it's to help prevent timing-based side-channel attacks on header field values\n", "Understood it is to prevent timing side-channel attacks. I guess the compromise was made to leak when the length is correct, but not how many characters are matching.\n", "Thanks for looking into this Scott! I believe this is quite an urgent issue for gRPC that would be great to have in the next Netty 4.1.2. So please do let us know if you don't have enough cycles, we are happy to help out! cc: @carl-mastrangelo \n", "@buchgr - Np. PR is pending.\n", "This might be a dumb question, but does the constant time compare really need to be used for non-security sensitive headers? HPACK provides a means to not use indexing and never indexing, which seems like it implies they sensitive. Most of the time we do want the fast, time-leaking compare.\n", "@carl-mastrangelo - +1\n\n@jpinner - WDYT?\n", "I'd have to take a pass through the code and see exactly how it's used. The compare does predate the inclusion of the \"never index\" flag. That being said I can certainly think of \"sensitive\" headers that you might want to index for performance in the table (e.g. domain cookies sent on every request).\n\nI personally would be cautious here.\n\nSent from my iPhone\n\n> On Jun 22, 2016, at 2:21 PM, Scott Mitchell [email protected] wrote:\n> \n> @carl-mastrangelo - +1\n> \n> @jpinner - WDYT?\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n" ]
[ "@Scottmitch could you not just use the methods in PlatformDependent0 ?\n", "can you clarify which method you are suggesting I use? `PlatformDependent0` doesn't have a equals which accepts a `CharSequence`.\n", "I ran benchmarks comparing some equals functions, and found that it is significantly faster to use an int rather than a byte:\n\n```\nHeadersBenchmark.arrays 4096 sample 129777 1227.096 ± 4.670 ns/op\nHeadersBenchmark.digest 4096 sample 127528 2488.555 ± 6.749 ns/op\nHeadersBenchmark.theBytes 4096 sample 99710 3170.983 ± 9.587 ns/op\nHeadersBenchmark.theChars 4096 sample 105714 2992.226 ± 8.468 ns/op\nHeadersBenchmark.theInts 4096 sample 126811 2497.360 ± 7.106 ns/op\nHeadersBenchmark.theLongs 4096 sample 99672 3181.179 ± 8.380 ns/op\nHeadersBenchmark.theShorts 4096 sample 100008 3158.683 ± 8.164 ns/op\n```\n\n`arrays` is Arrays.equals (for comparison)\n`digest` is MessageDigest.isEqual (broken upto in partly java6, fixed in 2009)\nAll the others are copy paste versions of MessageDigest.isEqual using different accumulators.\n", "The [Go](https://golang.org/src/crypto/subtle/constant_time.go?s=490:531#L2) implementation uses a constant time comparison for the final check here. You might consider using it too.\n", "thanks @carl-mastrangelo !\n", "never mind ... I thought this two methods are a copy of what you have implemented...\n", "@carl-mastrangelo - I repeated your benchmark when charsequence is used and not surprisingly I see the same results ... i'll make the same change here too\n\n```\nBenchmark Mode Cnt Score Error Units\nTestBenchmark.theChars sample 152044 2105.563 ± 7.328 ns/op\nTestBenchmark.theInts sample 141490 1155.404 ± 6.013 ns/op\nTestBenchmark.theLongs sample 108386 1498.885 ± 8.700 ns/op\nTestBenchmark.theShorts sample 151317 2119.505 ± 8.116 ns/op\n```\n", "+1. Done\n", "@carl-mastrangelo - note this change ... before it was skipping out early if the name didn't match ... which may yield some timing info.\n\nI propose we do a follow up investigation if you think we can reduce the amount of \"constant time comparisons\" that we must do ... or just generally if we are using the \"constant time comparisons\" where/how we should be.\n", "I asked internally how \"constant\" a constant function needs to be, and so far the answer has been \"nothing is constant and will basically be impossible with an optimizing compiler / processor\".\n\nThis is best effort, and copying behavior of other security reviewed code feels like said effort.\n", "Are you using & to avoid short circuit behavior? If so, that's subtle and deserves a comment. If not, then && seems appropriate.\n", "Yes. I can add a comment (this preserves existing behavior).\n", "That should be placed into `equalsConstantTime` in `ConstantTimeUtils` and `PlatformDependent` as that's generally useful for have in an equals method. Shorter strings are smaller or so.\n", "I think the name is misleading, as it's not constant time equals?\n", "We should probably come up with a better name for this class ... `EqualsUtils`?\n", "Can you clarify your suggestion? `ConstantTimeUtils` for `CharSequence` does have this check. However the variant which operates on `byte[]` accepts a length parameter ... so it is implied the length which will be iterated is the same (and bounds checks are omitted are assumed to be outside the scope of the method per the docs).\n", "Previously I was using `Consistent` instead of `Constant`. 
However [Go](https://golang.org/src/crypto/subtle/constant_time.go?s=490:531#L2) and various internet searches seem to associate `constant time` with this concept so I figured I would use the prevailing nomenclature ... do you have another suggestion?\n", "This comment seems to be related to https://github.com/netty/netty/pull/5446#discussion_r68375984. lets discuss their.\n", "The name constant implies that the contents of the array does not affect the time it takes to process. There are always going to be compiler tricks or processor tricks or OS tricks that will mess with the timing. The important thing is to not have different behavior based on the contents. That's where the \"constant\" comes from.\n\nThat said, to @buchgr's comment, I think you may still need to do a bounds check. Something like bytes1.length - startPos1 == bytes2.length - start2Pos.\n", "> I think you may still need to do a bounds check. Something like bytes1.length - startPos1 == bytes2.length - start2Pos.\n\nThe bounds check is done through the length check. The \"no bounds checking behavior\" is also specified in the [javadocs](https://github.com/netty/netty/pull/5446/files#diff-ea8860513cb8af4c4392d1e295326fc6R639), PlatformDependent/ConstantTimeUtils are part of the internal API, and this behavior is consistent with [PlatformDependent.equals](https://github.com/netty/netty/blame/4.1/common/src/main/java/io/netty/util/internal/PlatformDependent.java#L625).\n", "Sorry to be nitpicky, but my intuition says that this will be slower than the forward path just because it would be easier for the cache to take advantage of this. \n\nAlso, how are you sure that byte arrays are word aligned? Doing an unaligned access is pretty slow. Arrays that are stack allocated may not have the same alignment guarantees as heap allocated. \n\nIs there benchmarking for this method? \n\nAnd fyi, I am proposing a similar change for Go: https://go-review.googlesource.com/#/c/24422/1/src/crypto/subtle/constant_time.go\n", "> Sorry to be nitpicky, but my intuition says that this will be slower than the forward path just because it would be easier for the cache to take advantage of this.\n> \n> Is there benchmarking for this method?\n\n`PlatformDependentBenchmark` exists which I will run for a before/after. This was shown to provide benefits on smaller inputs when running benchmarks on `equalsConstantTime`. The overhead of allocating `i`, `j`, and `end` can be avoided for smaller inputs that don't need these variables.\n\n> Also, how are you sure that byte arrays are word aligned?\n\n`PlatformDependent.equalsConstantTime` checks if aligned access is allowed in general, but no check is done on the individual arrays. Do you know how I would check, and an example allocation which I can use to verify this method?\n", "@carl-mastrangelo - Here are the results of `PlatformDependentBenchmark`. I switched the measurement to average time and added `1` for a size.\n\nWe gain on the smaller inputs, but start to perform worse around input size of `100`. However the gain may be negligible (or likely a loss) if there are larger headers involved ... 
let me revert this.\n\n```\nWith this PR\nBenchmark (size) Mode Cnt Score Error Units\nPlatformDependentBenchmark.unsafeBytesEqual 1 avgt 5 4.453 ± 0.081 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 7 avgt 5 4.756 ± 0.139 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 10 avgt 5 5.493 ± 0.223 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 50 avgt 5 9.630 ± 0.262 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 100 avgt 5 13.559 ± 0.306 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 1000 avgt 5 107.538 ± 1.410 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 10000 avgt 5 995.796 ± 16.488 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 100000 avgt 5 9942.613 ± 350.996 ns/op\n\nBefore This PR\nBenchmark (size) Mode Cnt Score Error Units\nPlatformDependentBenchmark.unsafeBytesEqual 1 avgt 5 4.906 ± 0.404 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 7 avgt 5 5.469 ± 0.099 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 10 avgt 5 6.051 ± 0.098 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 50 avgt 5 9.634 ± 0.235 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 100 avgt 5 12.056 ± 0.455 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 1000 avgt 5 73.085 ± 1.784 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 10000 avgt 5 649.883 ± 23.863 ns/op\nPlatformDependentBenchmark.unsafeBytesEqual 100000 avgt 5 7254.138 ± 475.926 ns/op\n```\n", "@carl-mastrangelo - The following was collapsed but still open ...\n\n> Also, how are you sure that byte arrays are word aligned?\n\n`PlatformDependent.equalsConstantTime` checks if unaligned access is allowed in general, but no check is done on the individual arrays. Do you know how I would check, and an example allocation which I can use to verify this method?\n\n> And fyi, I am proposing a similar change for Go: https://go-review.googlesource.com/#/c/24422/1/src/crypto/subtle/constant_time.go\n\nI guess this would depend upon the answer to the first question ... maybe investigate this as a followup?\n", "Not sure how you would check, but being able to do unaligned access has a noticeble performance degradation. Intel lets you do it, but just does it slower. (i.e. has to load two cache lines instead of one, and takes longer to go through the pipeline). \n\nThe benchmark numbers on the Go version saw a boost (~460ns -> 320ns comparing two 4k buffers) by having correct alignment. I have no idea how Java tells you if something is aligned, since I don't know how the memory allocator works. Since java has a compacting GC, any pointers to data are an extra level of indirection away right?\n", "I think array contents on hotspot 64-bit intel should by default be 8 byte aligned. Due to compressed oops the array object is 8 byte aligned, followed by a 12 byte object header and the 4 byte array length, followed by array contents 😄 \n\n> Since java has a compacting GC, any pointers to data are an extra level of indirection away right?\n\nafaik the references to an object are updated when moved. 
so an array variable holds a pointer to the array contents (minus some offset for the object header + size).\n\nThat being said, apparently you can get the address of an object/array [somehow](http://hg.openjdk.java.net/code-tools/jol/file/30c3fbadf049/jol-samples/src/main/java/org/openjdk/jol/samples/JOLSample_17_Allocation.java#l64) (best guess via Unsafe)\n\nHowever, since java GC can relocate an object between getting the address and doing the comparison, you might never know whether it's the right address at the time of comparison :-).\n\nI think the whole complexity might not be worth it, especially since for headers we are likely not comparing 4K buffers but a few dozen bytes at a time.\n", "@buchgr - Thanks for the details. I agree it doesn't seem practical to do this in java and may be more trouble than it is worth for this use case anyways.\n", "@buchgr are you sure it has the same alignment, even when the array is stack allocated due to escape analysis? Also, there were still about ~12% perf gain by using aligned access. Whether or not this matters is debatable.\n" ]
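A hypothetical JMH harness in the spirit of the accumulator comparison quoted in the review thread above (the `theBytes`/`theInts` figures quoted there come from the reviewers' own benchmark, not from this sketch); the loop body is identical in both methods and only the accumulator width changes:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class AccumulatorBenchmark {
    private final byte[] a = new byte[4096];
    private final byte[] b = new byte[4096];

    @Benchmark
    public int theBytes() {
        byte acc = 0; // byte-wide accumulator
        for (int i = 0; i < a.length; ++i) {
            acc |= a[i] ^ b[i];
        }
        return acc;
    }

    @Benchmark
    public int theInts() {
        int acc = 0; // int-wide accumulator
        for (int i = 0; i < a.length; ++i) {
            acc |= a[i] ^ b[i];
        }
        return acc;
    }
}
```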
"2016-06-22T23:25:49Z"
[ "improvement" ]
HpackUtil.equals performance improvement
PR https://github.com/netty/netty/pull/5355 modified interfaces to reduce GC related to the HPACK code. However, this came with an anticipated performance regression related to `HpackUtil.equals` due to `AsciiString`'s increased cost of `charAt(..)`. We should mitigate this performance regression.
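The review discussion above centres on keeping the comparison free of data-dependent timing while recovering speed. As a reference point, a boolean-returning sketch of that comparison style (essentially the shape of the pre-existing `HpackUtil.equals`) looks like the following; the actual patch below returns an `int` so checks can be cascaded without short-circuiting and adds an `AsciiString`/`PlatformDependent` fast path:

```java
static boolean equalsConstantTime(CharSequence s1, CharSequence s2) {
    if (s1.length() != s2.length()) {
        return false; // Only the length check may exit early.
    }
    int diff = 0;
    for (int i = 0; i < s1.length(); i++) {
        diff |= s1.charAt(i) ^ s2.charAt(i); // Accumulate differences instead of branching.
    }
    return diff == 0;
}
```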
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HeaderField.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HpackUtil.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/StaticTable.java", "common/src/main/java/io/netty/util/internal/PlatformDependent.java", "common/src/main/java/io/netty/util/internal/PlatformDependent0.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HeaderField.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HpackUtil.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/StaticTable.java", "common/src/main/java/io/netty/util/internal/ConstantTimeUtils.java", "common/src/main/java/io/netty/util/internal/PlatformDependent.java", "common/src/main/java/io/netty/util/internal/PlatformDependent0.java", "microbench/src/main/java/io/netty/microbench/http2/internal/hpack/HpackUtilBenchmark.java" ]
[ "common/src/test/java/io/netty/util/internal/PlatformDependentTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java index abc2948a451..ba0aa01bc28 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java @@ -40,6 +40,7 @@ import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.IndexType.INCREMENTAL; import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.IndexType.NEVER; import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.IndexType.NONE; +import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.equalsConstantTime; public final class Encoder { @@ -304,9 +305,8 @@ private HeaderEntry getEntry(CharSequence name, CharSequence value) { int h = hash(name); int i = index(h); for (HeaderEntry e = headerFields[i]; e != null; e = e.next) { - if (e.hash == h && - HpackUtil.equals(name, e.name) && - HpackUtil.equals(value, e.value)) { + // To avoid short circuit behavior a bitwise operator is used instead of a boolean operator. + if (e.hash == h && (equalsConstantTime(name, e.name) & equalsConstantTime(value, e.value)) != 0) { return e; } } @@ -325,7 +325,7 @@ private int getIndex(CharSequence name) { int i = index(h); int index = -1; for (HeaderEntry e = headerFields[i]; e != null; e = e.next) { - if (e.hash == h && HpackUtil.equals(name, e.name)) { + if (e.hash == h && equalsConstantTime(name, e.name) != 0) { index = e.index; break; } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HeaderField.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HeaderField.java index 41b82436be7..5c57037434b 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HeaderField.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HeaderField.java @@ -31,6 +31,7 @@ */ package io.netty.handler.codec.http2.internal.hpack; +import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.equalsConstantTime; import static io.netty.util.internal.ObjectUtil.checkNotNull; class HeaderField { @@ -72,9 +73,8 @@ public boolean equals(Object obj) { return false; } HeaderField other = (HeaderField) obj; - boolean nameEquals = HpackUtil.equals(name, other.name); - boolean valueEquals = HpackUtil.equals(value, other.value); - return nameEquals && valueEquals; + // To avoid short circuit behavior a bitwise operator is used instead of a boolean operator. + return (equalsConstantTime(name, other.name) & equalsConstantTime(value, other.value)) != 0; } @Override diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HpackUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HpackUtil.java index 620ea2f43a6..658b422abe3 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HpackUtil.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HpackUtil.java @@ -31,19 +31,38 @@ */ package io.netty.handler.codec.http2.internal.hpack; +import io.netty.util.AsciiString; +import io.netty.util.internal.ConstantTimeUtils; +import io.netty.util.internal.PlatformDependent; + final class HpackUtil { /** - * A string compare that doesn't leak timing information. + * Compare two {@link CharSequence} objects without leaking timing information. 
+ * <p> + * The {@code int} return type is intentional and is designed to allow cascading of constant time operations: + * <pre> + * String s1 = "foo"; + * String s2 = "foo"; + * String s3 = "foo"; + * String s4 = "goo"; + * boolean equals = (equalsConstantTime(s1, s2) & equalsConstantTime(s3, s4)) != 0; + * </pre> + * @param s1 the first value. + * @param s2 the second value. + * @return {@code 0} if not equal. {@code 1} if equal. */ - static boolean equals(CharSequence s1, CharSequence s2) { - if (s1.length() != s2.length()) { - return false; - } - char c = 0; - for (int i = 0; i < s1.length(); i++) { - c |= s1.charAt(i) ^ s2.charAt(i); + static int equalsConstantTime(CharSequence s1, CharSequence s2) { + if (s1 instanceof AsciiString && s2 instanceof AsciiString) { + if (s1.length() != s2.length()) { + return 0; + } + AsciiString s1Ascii = (AsciiString) s1; + AsciiString s2Ascii = (AsciiString) s2; + return PlatformDependent.equalsConstantTime(s1Ascii.array(), s1Ascii.arrayOffset(), + s2Ascii.array(), s2Ascii.arrayOffset(), s1.length()); } - return c == 0; + + return ConstantTimeUtils.equalsConstantTime(s1, s2); } // Section 6.2. Literal Header Field Representation diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/StaticTable.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/StaticTable.java index bf1aa70fb61..b95a2a2f0ed 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/StaticTable.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/StaticTable.java @@ -38,6 +38,8 @@ import java.util.Arrays; import java.util.List; +import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.equalsConstantTime; + final class StaticTable { // Appendix A: Static Table @@ -153,10 +155,10 @@ static int getIndex(CharSequence name, CharSequence value) { // Note this assumes all entries for a given header field are sequential. while (index <= length) { HeaderField entry = getEntry(index); - if (!HpackUtil.equals(name, entry.name)) { + if (equalsConstantTime(name, entry.name) == 0) { break; } - if (HpackUtil.equals(value, entry.value)) { + if (equalsConstantTime(value, entry.value) != 0) { return index; } index++; diff --git a/common/src/main/java/io/netty/util/internal/ConstantTimeUtils.java b/common/src/main/java/io/netty/util/internal/ConstantTimeUtils.java new file mode 100644 index 00000000000..8bddb629e49 --- /dev/null +++ b/common/src/main/java/io/netty/util/internal/ConstantTimeUtils.java @@ -0,0 +1,131 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.util.internal; + +public final class ConstantTimeUtils { + private ConstantTimeUtils() { } + + /** + * Compare two {@code int}s without leaking timing information. 
+ * <p> + * The {@code int} return type is intentional and is designed to allow cascading of constant time operations: + * <pre> + * int v1 = 1; + * int v1 = 1; + * int v1 = 1; + * int v1 = 500; + * boolean equals = (equalsConstantTime(l1, l2) & equalsConstantTime(l3, l4)) != 0; + * </pre> + * @param x the first value. + * @param y the second value. + * @return {@code 0} if not equal. {@code 1} if equal. + */ + public static int equalsConstantTime(int x, int y) { + int z = -1 ^ (x ^ y); + z &= z >> 16; + z &= z >> 8; + z &= z >> 4; + z &= z >> 2; + z &= z >> 1; + return z & 1; + } + + /** + * Compare two {@code longs}s without leaking timing information. + * <p> + * The {@code int} return type is intentional and is designed to allow cascading of constant time operations: + * <pre> + * long v1 = 1; + * long v1 = 1; + * long v1 = 1; + * long v1 = 500; + * boolean equals = (equalsConstantTime(l1, l2) & equalsConstantTime(l3, l4)) != 0; + * </pre> + * @param x the first value. + * @param y the second value. + * @return {@code 0} if not equal. {@code 1} if equal. + */ + public static int equalsConstantTime(long x, long y) { + long z = -1L ^ (x ^ y); + z &= z >> 32; + z &= z >> 16; + z &= z >> 8; + z &= z >> 4; + z &= z >> 2; + z &= z >> 1; + return (int) (z & 1); + } + + /** + * Compare two {@code byte} arrays for equality without leaking timing information. + * For performance reasons no bounds checking on the parameters is performed. + * <p> + * The {@code int} return type is intentional and is designed to allow cascading of constant time operations: + * <pre> + * byte[] s1 = new {1, 2, 3}; + * byte[] s2 = new {1, 2, 3}; + * byte[] s3 = new {1, 2, 3}; + * byte[] s4 = new {4, 5, 6}; + * boolean equals = (equalsConstantTime(s1, 0, s2, 0, s1.length) & + * equalsConstantTime(s3, 0, s4, 0, s3.length)) != 0; + * </pre> + * @param bytes1 the first byte array. + * @param startPos1 the position (inclusive) to start comparing in {@code bytes1}. + * @param bytes2 the second byte array. + * @param startPos2 the position (inclusive) to start comparing in {@code bytes2}. + * @param length the amount of bytes to compare. This is assumed to be validated as not going out of bounds + * by the caller. + * @return {@code 0} if not equal. {@code 1} if equal. + */ + public static int equalsConstantTime(byte[] bytes1, int startPos1, + byte[] bytes2, int startPos2, int length) { + // Benchmarking demonstrates that using an int to accumulate is faster than other data types. + int b = 0; + final int end = startPos1 + length; + for (int i = startPos1, j = startPos2; i < end; ++i, ++j) { + b |= bytes1[i] ^ bytes2[j]; + } + return equalsConstantTime(b, 0); + } + + /** + * Compare two {@link CharSequence} objects without leaking timing information. + * <p> + * The {@code int} return type is intentional and is designed to allow cascading of constant time operations: + * <pre> + * String s1 = "foo"; + * String s2 = "foo"; + * String s3 = "foo"; + * String s4 = "goo"; + * boolean equals = (equalsConstantTime(s1, s2) & equalsConstantTime(s3, s4)) != 0; + * </pre> + * @param s1 the first value. + * @param s2 the second value. + * @return {@code 0} if not equal. {@code 1} if equal. + */ + public static int equalsConstantTime(CharSequence s1, CharSequence s2) { + if (s1.length() != s2.length()) { + return 0; + } + + // Benchmarking demonstrates that using an int to accumulate is faster than other data types. 
+ int c = 0; + for (int i = 0; i < s1.length(); ++i) { + c |= s1.charAt(i) ^ s2.charAt(i); + } + return equalsConstantTime(c, 0); + } +} diff --git a/common/src/main/java/io/netty/util/internal/PlatformDependent.java b/common/src/main/java/io/netty/util/internal/PlatformDependent.java index 47d054c0d03..361e1ca1422 100644 --- a/common/src/main/java/io/netty/util/internal/PlatformDependent.java +++ b/common/src/main/java/io/netty/util/internal/PlatformDependent.java @@ -59,6 +59,7 @@ import static io.netty.util.internal.PlatformDependent0.hashCodeAsciiCompute; import static io.netty.util.internal.PlatformDependent0.hashCodeAsciiSanitize; import static io.netty.util.internal.PlatformDependent0.hashCodeAsciiSanitizeAsByte; +import static io.netty.util.internal.PlatformDependent0.unalignedAccess; /** * Utility that detects various properties specific to the current runtime @@ -633,10 +634,36 @@ public static boolean useDirectBufferNoCleaner() { * by the caller. */ public static boolean equals(byte[] bytes1, int startPos1, byte[] bytes2, int startPos2, int length) { - if (!hasUnsafe() || !PlatformDependent0.unalignedAccess()) { - return equalsSafe(bytes1, startPos1, bytes2, startPos2, length); - } - return PlatformDependent0.equals(bytes1, startPos1, bytes2, startPos2, length); + return !hasUnsafe() || !unalignedAccess() ? + equalsSafe(bytes1, startPos1, bytes2, startPos2, length) : + PlatformDependent0.equals(bytes1, startPos1, bytes2, startPos2, length); + } + + /** + * Compare two {@code byte} arrays for equality without leaking timing information. + * For performance reasons no bounds checking on the parameters is performed. + * <p> + * The {@code int} return type is intentional and is designed to allow cascading of constant time operations: + * <pre> + * byte[] s1 = new {1, 2, 3}; + * byte[] s2 = new {1, 2, 3}; + * byte[] s3 = new {1, 2, 3}; + * byte[] s4 = new {4, 5, 6}; + * boolean equals = (equalsConstantTime(s1, 0, s2, 0, s1.length) & + * equalsConstantTime(s3, 0, s4, 0, s3.length)) != 0; + * </pre> + * @param bytes1 the first byte array. + * @param startPos1 the position (inclusive) to start comparing in {@code bytes1}. + * @param bytes2 the second byte array. + * @param startPos2 the position (inclusive) to start comparing in {@code bytes2}. + * @param length the amount of bytes to compare. This is assumed to be validated as not going out of bounds + * by the caller. + * @return {@code 0} if not equal. {@code 1} if equal. + */ + public static int equalsConstantTime(byte[] bytes1, int startPos1, byte[] bytes2, int startPos2, int length) { + return !hasUnsafe() || !unalignedAccess() ? + ConstantTimeUtils.equalsConstantTime(bytes1, startPos1, bytes2, startPos2, length) : + PlatformDependent0.equalsConstantTime(bytes1, startPos1, bytes2, startPos2, length); } /** @@ -649,7 +676,7 @@ public static boolean equals(byte[] bytes1, int startPos1, byte[] bytes2, int st * The resulting hash code will be case insensitive. */ public static int hashCodeAscii(byte[] bytes, int startPos, int length) { - if (!hasUnsafe() || !PlatformDependent0.unalignedAccess()) { + if (!hasUnsafe() || !unalignedAccess()) { return hashCodeAsciiSafe(bytes, startPos, length); } return PlatformDependent0.hashCodeAscii(bytes, startPos, length); @@ -666,7 +693,7 @@ public static int hashCodeAscii(byte[] bytes, int startPos, int length) { * The resulting hash code will be case insensitive. 
*/ public static int hashCodeAscii(CharSequence bytes) { - if (!hasUnsafe() || !PlatformDependent0.unalignedAccess()) { + if (!hasUnsafe() || !unalignedAccess()) { return hashCodeAsciiSafe(bytes); } else if (PlatformDependent0.hasCharArray(bytes)) { return PlatformDependent0.hashCodeAscii(PlatformDependent0.charArray(bytes)); diff --git a/common/src/main/java/io/netty/util/internal/PlatformDependent0.java b/common/src/main/java/io/netty/util/internal/PlatformDependent0.java index 98801528fd1..c6266fe898a 100644 --- a/common/src/main/java/io/netty/util/internal/PlatformDependent0.java +++ b/common/src/main/java/io/netty/util/internal/PlatformDependent0.java @@ -381,9 +381,6 @@ static void setMemory(Object o, long offset, long bytes, byte value) { } static boolean equals(byte[] bytes1, int startPos1, byte[] bytes2, int startPos2, int length) { - if (length == 0) { - return true; - } final long baseOffset1 = BYTE_ARRAY_BASE_OFFSET + startPos1; final long baseOffset2 = BYTE_ARRAY_BASE_OFFSET + startPos2; final int remainingBytes = length & 7; @@ -418,6 +415,47 @@ static boolean equals(byte[] bytes1, int startPos1, byte[] bytes2, int startPos2 } } + static int equalsConstantTime(byte[] bytes1, int startPos1, byte[] bytes2, int startPos2, int length) { + long result = 0; + final long baseOffset1 = BYTE_ARRAY_BASE_OFFSET + startPos1; + final long baseOffset2 = BYTE_ARRAY_BASE_OFFSET + startPos2; + final int remainingBytes = length & 7; + final long end = baseOffset1 + remainingBytes; + for (long i = baseOffset1 - 8 + length, j = baseOffset2 - 8 + length; i >= end; i -= 8, j -= 8) { + result |= UNSAFE.getLong(bytes1, i) ^ UNSAFE.getLong(bytes2, j); + } + switch (remainingBytes) { + case 7: + return ConstantTimeUtils.equalsConstantTime(result | + (UNSAFE.getInt(bytes1, baseOffset1 + 3) ^ UNSAFE.getInt(bytes2, baseOffset2 + 3)) | + (UNSAFE.getChar(bytes1, baseOffset1 + 1) ^ UNSAFE.getChar(bytes2, baseOffset2 + 1)) | + (UNSAFE.getByte(bytes1, baseOffset1) ^ UNSAFE.getByte(bytes2, baseOffset2)), 0); + case 6: + return ConstantTimeUtils.equalsConstantTime(result | + (UNSAFE.getInt(bytes1, baseOffset1 + 2) ^ UNSAFE.getInt(bytes2, baseOffset2 + 2)) | + (UNSAFE.getChar(bytes1, baseOffset1) ^ UNSAFE.getChar(bytes2, baseOffset2)), 0); + case 5: + return ConstantTimeUtils.equalsConstantTime(result | + (UNSAFE.getInt(bytes1, baseOffset1 + 1) ^ UNSAFE.getInt(bytes2, baseOffset2 + 1)) | + (UNSAFE.getByte(bytes1, baseOffset1) ^ UNSAFE.getByte(bytes2, baseOffset2)), 0); + case 4: + return ConstantTimeUtils.equalsConstantTime(result | + (UNSAFE.getInt(bytes1, baseOffset1) ^ UNSAFE.getInt(bytes2, baseOffset2)), 0); + case 3: + return ConstantTimeUtils.equalsConstantTime(result | + (UNSAFE.getChar(bytes1, baseOffset1 + 1) ^ UNSAFE.getChar(bytes2, baseOffset2 + 1)) | + (UNSAFE.getByte(bytes1, baseOffset1) ^ UNSAFE.getByte(bytes2, baseOffset2)), 0); + case 2: + return ConstantTimeUtils.equalsConstantTime(result | + (UNSAFE.getChar(bytes1, baseOffset1) ^ UNSAFE.getChar(bytes2, baseOffset2)), 0); + case 1: + return ConstantTimeUtils.equalsConstantTime(result | + (UNSAFE.getByte(bytes1, baseOffset1) ^ UNSAFE.getByte(bytes2, baseOffset2)), 0); + default: + return ConstantTimeUtils.equalsConstantTime(result, 0); + } + } + static int hashCodeAscii(byte[] bytes) { return hashCodeAscii(bytes, 0, bytes.length); } diff --git a/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/HpackUtilBenchmark.java b/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/HpackUtilBenchmark.java new file mode 100644 
index 00000000000..2adf73566f2 --- /dev/null +++ b/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/HpackUtilBenchmark.java @@ -0,0 +1,94 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.microbench.http2.internal.hpack; + +import io.netty.microbench.util.AbstractMicrobenchmark; +import io.netty.util.AsciiString; +import io.netty.util.internal.ConstantTimeUtils; +import io.netty.util.internal.PlatformDependent; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.Level; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.Param; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.Threads; +import org.openjdk.jmh.annotations.Warmup; + +import java.util.List; + +@Threads(1) +@Warmup(iterations = 5) +@Measurement(iterations = 5) +public class HpackUtilBenchmark extends AbstractMicrobenchmark { + @Param + public HeadersSize size; + + private List<Header> headers; + + @Setup(Level.Trial) + public void setup() { + headers = Util.headers(size, false); + } + + @Benchmark + public int oldEquals() { + int count = 0; + for (int i = 0; i < headers.size(); ++i) { + Header header = headers.get(i); + if (oldEquals(header.name, header.name)) { + ++count; + } + } + return count; + } + + @Benchmark + public int newEquals() { + int count = 0; + for (int i = 0; i < headers.size(); ++i) { + Header header = headers.get(i); + if (newEquals(header.name, header.name)) { + ++count; + } + } + return count; + } + + private static boolean oldEquals(CharSequence s1, CharSequence s2) { + if (s1.length() != s2.length()) { + return false; + } + char c = 0; + for (int i = 0; i < s1.length(); i++) { + c |= s1.charAt(i) ^ s2.charAt(i); + } + return c == 0; + } + + private static boolean newEquals(CharSequence s1, CharSequence s2) { + if (s1 instanceof AsciiString && s2 instanceof AsciiString) { + if (s1.length() != s2.length()) { + return false; + } + AsciiString s1Ascii = (AsciiString) s1; + AsciiString s2Ascii = (AsciiString) s2; + return PlatformDependent.equalsConstantTime(s1Ascii.array(), s1Ascii.arrayOffset(), + s2Ascii.array(), s2Ascii.arrayOffset(), s1.length()) != 0; + } + + return ConstantTimeUtils.equalsConstantTime(s1, s2) != 0; + } +}
diff --git a/common/src/test/java/io/netty/util/internal/PlatformDependentTest.java b/common/src/test/java/io/netty/util/internal/PlatformDependentTest.java index b31e36c0086..53ea6a846f6 100644 --- a/common/src/test/java/io/netty/util/internal/PlatformDependentTest.java +++ b/common/src/test/java/io/netty/util/internal/PlatformDependentTest.java @@ -19,6 +19,8 @@ import java.util.Random; +import static io.netty.util.internal.PlatformDependent.hashCodeAscii; +import static io.netty.util.internal.PlatformDependent.hashCodeAsciiSafe; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNotSame; @@ -26,34 +28,57 @@ public class PlatformDependentTest { private static final Random r = new Random(); + @Test + public void testEqualsConsistentTime() { + testEquals(new EqualityChecker() { + @Override + public boolean equals(byte[] bytes1, int startPos1, byte[] bytes2, int startPos2, int length) { + return PlatformDependent.equalsConstantTime(bytes1, startPos1, bytes2, startPos2, length) != 0; + } + }); + } + @Test public void testEquals() { + testEquals(new EqualityChecker() { + @Override + public boolean equals(byte[] bytes1, int startPos1, byte[] bytes2, int startPos2, int length) { + return PlatformDependent.equals(bytes1, startPos1, bytes2, startPos2, length); + } + }); + } + + private interface EqualityChecker { + boolean equals(byte[] bytes1, int startPos1, byte[] bytes2, int startPos2, int length); + } + + private void testEquals(EqualityChecker equalsChecker) { byte[] bytes1 = {'H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd'}; byte[] bytes2 = {'H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd'}; assertNotSame(bytes1, bytes2); - assertTrue(PlatformDependent.equals(bytes1, 0, bytes2, 0, bytes1.length)); - assertTrue(PlatformDependent.equals(bytes1, 2, bytes2, 2, bytes1.length - 2)); + assertTrue(equalsChecker.equals(bytes1, 0, bytes2, 0, bytes1.length)); + assertTrue(equalsChecker.equals(bytes1, 2, bytes2, 2, bytes1.length - 2)); bytes1 = new byte[] {1, 2, 3, 4, 5, 6}; bytes2 = new byte[] {1, 2, 3, 4, 5, 6, 7}; assertNotSame(bytes1, bytes2); - assertFalse(PlatformDependent.equals(bytes1, 0, bytes2, 1, bytes1.length)); - assertTrue(PlatformDependent.equals(bytes2, 0, bytes1, 0, bytes1.length)); + assertFalse(equalsChecker.equals(bytes1, 0, bytes2, 1, bytes1.length)); + assertTrue(equalsChecker.equals(bytes2, 0, bytes1, 0, bytes1.length)); bytes1 = new byte[] {1, 2, 3, 4}; bytes2 = new byte[] {1, 2, 3, 5}; - assertFalse(PlatformDependent.equals(bytes1, 0, bytes2, 0, bytes1.length)); - assertTrue(PlatformDependent.equals(bytes1, 0, bytes2, 0, 3)); + assertFalse(equalsChecker.equals(bytes1, 0, bytes2, 0, bytes1.length)); + assertTrue(equalsChecker.equals(bytes1, 0, bytes2, 0, 3)); bytes1 = new byte[] {1, 2, 3, 4}; bytes2 = new byte[] {1, 3, 3, 4}; - assertFalse(PlatformDependent.equals(bytes1, 0, bytes2, 0, bytes1.length)); - assertTrue(PlatformDependent.equals(bytes1, 2, bytes2, 2, bytes1.length - 2)); + assertFalse(equalsChecker.equals(bytes1, 0, bytes2, 0, bytes1.length)); + assertTrue(equalsChecker.equals(bytes1, 2, bytes2, 2, bytes1.length - 2)); bytes1 = new byte[0]; bytes2 = new byte[0]; assertNotSame(bytes1, bytes2); - assertTrue(PlatformDependent.equals(bytes1, 0, bytes2, 0, 0)); + assertTrue(equalsChecker.equals(bytes1, 0, bytes2, 0, 0)); bytes1 = new byte[100]; bytes2 = new byte[100]; @@ -61,23 +86,23 @@ public void testEquals() { bytes1[i] = (byte) i; bytes2[i] = (byte) i; } - 
assertTrue(PlatformDependent.equals(bytes1, 0, bytes2, 0, bytes1.length)); + assertTrue(equalsChecker.equals(bytes1, 0, bytes2, 0, bytes1.length)); bytes1[50] = 0; - assertFalse(PlatformDependent.equals(bytes1, 0, bytes2, 0, bytes1.length)); - assertTrue(PlatformDependent.equals(bytes1, 51, bytes2, 51, bytes1.length - 51)); - assertTrue(PlatformDependent.equals(bytes1, 0, bytes2, 0, 50)); + assertFalse(equalsChecker.equals(bytes1, 0, bytes2, 0, bytes1.length)); + assertTrue(equalsChecker.equals(bytes1, 51, bytes2, 51, bytes1.length - 51)); + assertTrue(equalsChecker.equals(bytes1, 0, bytes2, 0, 50)); bytes1 = new byte[]{1, 2, 3, 4, 5}; bytes2 = new byte[]{3, 4, 5}; - assertFalse(PlatformDependent.equals(bytes1, 0, bytes2, 0, bytes2.length)); - assertTrue(PlatformDependent.equals(bytes1, 2, bytes2, 0, bytes2.length)); - assertTrue(PlatformDependent.equals(bytes2, 0, bytes1, 2, bytes2.length)); + assertFalse(equalsChecker.equals(bytes1, 0, bytes2, 0, bytes2.length)); + assertTrue(equalsChecker.equals(bytes1, 2, bytes2, 0, bytes2.length)); + assertTrue(equalsChecker.equals(bytes2, 0, bytes1, 2, bytes2.length)); for (int i = 0; i < 1000; ++i) { bytes1 = new byte[i]; r.nextBytes(bytes1); bytes2 = bytes1.clone(); - assertTrue(PlatformDependent.equals(bytes1, 0, bytes2, 0, bytes1.length)); + assertTrue(equalsChecker.equals(bytes1, 0, bytes2, 0, bytes1.length)); } } @@ -97,14 +122,14 @@ public void testHashCodeAscii() { } String string = new String(bytesChar); assertEquals("length=" + i, - PlatformDependent.hashCodeAsciiSafe(bytes, 0, bytes.length), - PlatformDependent.hashCodeAscii(bytes, 0, bytes.length)); + hashCodeAsciiSafe(bytes, 0, bytes.length), + hashCodeAscii(bytes, 0, bytes.length)); assertEquals("length=" + i, - PlatformDependent.hashCodeAsciiSafe(string), - PlatformDependent.hashCodeAscii(string)); + hashCodeAsciiSafe(string), + hashCodeAscii(string)); assertEquals("length=" + i, - PlatformDependent.hashCodeAscii(bytes, 0, bytes.length), - PlatformDependent.hashCodeAscii(string)); + hashCodeAscii(bytes, 0, bytes.length), + hashCodeAscii(string)); } } }
val
train
2016-06-27T13:43:06
"2016-06-22T06:26:23Z"
Scottmitch
val
netty/netty/5434_5448
netty/netty
netty/netty/5434
netty/netty/5448
[ "timestamp(timedelta=5.0, similarity=0.8870567172644754)" ]
731f52fdf73852a8fc597b1f08ed86b63caee375
863413fdb35b160b2020f27678c450555c250568
[ "We also need to audit outbound path.\n", "cc: @Scottmitch, @normanmaurer, @nmittler \n", "@ejona86 should I take this?\n", "@buchgr, please do.\n\nThis will also need to be updated:\nhttps://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2DataFrame.java#L72\n", "If we add 1 to readPadding() we will have be sure to account that the byte is already read and the [int dataLength = payload.readableBytes() - padding;](https://github.com/netty/netty/blob/netty-4.1.1.Final/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java#L403) must account for this (and also for the other usages of `readPadding()`). An alternative would be to just adding `1` when we call the FrameListener [1] to ensure it is accounted for in flow control ... then we don't have to update usages of `readPadding()`. We should also update the comments on the `FrameListener` if we are going to include the \"padding length byte\" in the `padding` value.\n\n[1] https://github.com/netty/netty/blob/netty-4.1.1.Final/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java#L410\n", "ok i ll take this. will see what turns out best in code. can discuss on the pr.\n", "thanks @buchgr !\n", "A quick look at the outbound path seems to indicate, that the pad length field is also not counted towards flow control there.\n\nInterestingly, the spec seems to allow zero length padding, that is the pad field is present with a value of zero.\n\n> Note: A frame can be increased in size by one octet by including a Pad Length field with a value of zero.\n\nI have to take a more detailed look tomorrow morning (11pm here), but this might indicate that we are doing padding wrong as when padding is 1, we really add two bytes (pad length and 1 byte padding). So probably we should just have padding to be a value in [0, 256], with a padding of 1 being special cased on the frame writer as pad length field present with value zero.\n" ]
[ "Either keep the return value int or don't cast to short?\n", "You might comment that the +1 is due to the pad length, which comes before most of the payload whereas the padding itself comes after.\n", "Probably should add a bit of javadoc to make it clear how it is different from `padding`. The implementation is \"obvious\" in what it does, but not quite _why_ it does.\n", "can we share this method? It seems to be used in a few spots. Maybe put it in `Http2CodecUtil`?\n", "explain how this padding value will be translated to bytes on the wire. it may be unexpected that 256 is allowed when the frame definition only allows for 8 byte length.\n" ]
"2016-06-23T12:38:25Z"
[ "defect" ]
http2: Off-by-one accounting with flow control and padding
Per [the spec](http://httpwg.org/specs/rfc7540.html#DATA): > The entire DATA frame payload is included in flow control, including the Pad Length and Padding fields if present. However, it seems the pad length isn't being accounted for. https://github.com/netty/netty/blob/netty-4.1.1.Final/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java#L397 https://github.com/netty/netty/blob/netty-4.1.1.Final/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java#L203 I suggest we [add 1 to readPadding(), when padding is present](https://github.com/netty/netty/blob/netty-4.1.1.Final/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java#L598). This does mean that the `padding` argument could be 256 (instead of maxing out at 255). @carl-mastrangelo, FYI
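To make the off-by-one concrete, here is a minimal sketch of the accounting described above. It is illustrative only: the class, method, and variable names are hypothetical and do not come from the Netty code base; the payload layout is taken from the RFC 7540 DATA frame definition quoted above.

```java
// Hypothetical illustration of RFC 7540 DATA frame flow-control accounting.
// With the PADDED flag set, the frame payload is laid out as:
//   1 byte              Pad Length field
//   dataLength bytes    application data
//   padLength bytes     trailing padding (padLength == value of the Pad Length field)
// The whole payload counts against flow control, so the flow-controlled size is
// dataLength + padLength + 1, not dataLength + padLength.
public final class PaddingAccountingSketch {

    static int flowControlledBytes(int dataLength, int padLength, boolean padded) {
        // The "+ 1" is the Pad Length octet itself, which the pre-fix reader dropped.
        return padded ? dataLength + padLength + 1 : dataLength;
    }

    public static void main(String[] args) {
        // 10 bytes of data with the maximum Pad Length field value of 255:
        // 10 + 255 + 1 = 266 bytes must be accounted for by the local flow controller.
        System.out.println(flowControlledBytes(10, 255, true));  // 266
        System.out.println(flowControlledBytes(10, 0, false));   // 10
    }
}
```

This also shows why, once the Pad Length octet is folded into the padding value, the padding reported to listeners can reach 256 rather than maxing out at 255.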
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2DataFrame.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersFrame.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2DataWriter.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameWriter.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2DataFrame.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersFrame.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2DataWriter.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameWriter.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2DataFrame.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2DataFrame.java index 84c99e86fd0..fe879c98e23 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2DataFrame.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2DataFrame.java @@ -20,6 +20,7 @@ import io.netty.util.IllegalReferenceCountException; import io.netty.util.internal.UnstableApi; +import static io.netty.handler.codec.http2.Http2CodecUtil.verifyPadding; import static io.netty.util.internal.ObjectUtil.checkNotNull; /** @@ -64,14 +65,13 @@ public DefaultHttp2DataFrame(ByteBuf content, boolean endStream) { * * @param content non-{@code null} payload * @param endStream whether this data should terminate the stream - * @param padding additional bytes that should be added to obscure the true content size + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). */ public DefaultHttp2DataFrame(ByteBuf content, boolean endStream, int padding) { this.content = checkNotNull(content, "content"); this.endStream = endStream; - if (padding < 0 || padding > Http2CodecUtil.MAX_UNSIGNED_BYTE) { - throw new IllegalArgumentException("padding must be non-negative and less than 256"); - } + verifyPadding(padding); this.padding = padding; } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java index 55e7f6d060f..312789ecb24 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java @@ -396,11 +396,11 @@ private void verifyContinuationFrame() throws Http2Exception { private void readDataFrame(ChannelHandlerContext ctx, ByteBuf payload, Http2FrameListener listener) throws Http2Exception { - short padding = readPadding(payload); + int padding = readPadding(payload); // Determine how much data there is to read by removing the trailing // padding. - int dataLength = payload.readableBytes() - padding; + int dataLength = lengthWithoutTrailingPadding(payload.readableBytes(), padding); if (dataLength < 0) { throw streamError(streamId, FRAME_SIZE_ERROR, "Frame payload too small for padding."); @@ -424,7 +424,7 @@ private void readHeadersFrame(final ChannelHandlerContext ctx, ByteBuf payload, final boolean exclusive = (word1 & 0x80000000L) != 0; final int streamDependency = (int) (word1 & 0x7FFFFFFFL); final short weight = (short) (payload.readUnsignedByte() + 1); - final ByteBuf fragment = payload.readSlice(payload.readableBytes() - padding); + final ByteBuf fragment = payload.readSlice(lengthWithoutTrailingPadding(payload.readableBytes(), padding)); // Create a handler that invokes the listener when the header block is complete. headersContinuation = new HeadersContinuation() { @@ -471,7 +471,7 @@ public void processFragment(boolean endOfHeaders, ByteBuf fragment, }; // Process the initial fragment, invoking the listener's callback if end of headers. 
- final ByteBuf fragment = payload.readSlice(payload.readableBytes() - padding); + final ByteBuf fragment = payload.readSlice(lengthWithoutTrailingPadding(payload.readableBytes(), padding)); headersContinuation.processFragment(flags.endOfHeaders(), fragment, listener); } @@ -542,7 +542,7 @@ public void processFragment(boolean endOfHeaders, ByteBuf fragment, }; // Process the initial fragment, invoking the listener's callback if end of headers. - final ByteBuf fragment = payload.readSlice(payload.readableBytes() - padding); + final ByteBuf fragment = payload.readSlice(lengthWithoutTrailingPadding(payload.readableBytes(), padding)); headersContinuation.processFragment(flags.endOfHeaders(), fragment, listener); } @@ -589,13 +589,24 @@ private void readUnknownFrame(ChannelHandlerContext ctx, ByteBuf payload, Http2F } /** - * If padding is present in the payload, reads the next byte as padding. Otherwise, returns zero. + * If padding is present in the payload, reads the next byte as padding. The padding also includes the one byte + * width of the pad length field. Otherwise, returns zero. */ - private short readPadding(ByteBuf payload) { + private int readPadding(ByteBuf payload) { if (!flags.paddingPresent()) { return 0; } - return payload.readUnsignedByte(); + return payload.readUnsignedByte() + 1; + } + + /** + * The padding parameter consists of the 1 byte pad length field and the trailing padding bytes. This method + * returns the number of readable bytes without the trailing padding. + */ + private static int lengthWithoutTrailingPadding(int readableBytes, int padding) { + return padding == 0 + ? readableBytes + : readableBytes - (padding - 1); } /** diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java index 67c90c7f587..ecf623f6197 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java @@ -46,6 +46,7 @@ import static io.netty.handler.codec.http2.Http2CodecUtil.isMaxFrameSizeValid; import static io.netty.handler.codec.http2.Http2CodecUtil.writeFrameHeaderInternal; import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedInt; +import static io.netty.handler.codec.http2.Http2CodecUtil.verifyPadding; import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedShort; import static io.netty.handler.codec.http2.Http2Error.FRAME_SIZE_ERROR; import static io.netty.handler.codec.http2.Http2Exception.connectionError; @@ -71,7 +72,7 @@ public class DefaultHttp2FrameWriter implements Http2FrameWriter, Http2FrameSize private static final String STREAM_ID = "Stream ID"; private static final String STREAM_DEPENDENCY = "Stream Dependency"; /** - * This buffer is allocated to the maximum padding size needed, and filled with padding. + * This buffer is allocated to the maximum size of the padding field, and filled with zeros. * When padding is needed it can be taken as a slice of this buffer. Users should call {@link ByteBuf#retain()} * before using their slice. */ @@ -163,8 +164,8 @@ public ChannelFuture writeData(ChannelHandlerContext ctx, int streamId, ByteBuf ctx.write(lastFrame ? frameData : frameData.retain(), promiseAggregator.newPromise()); // Write the frame padding. 
- if (framePaddingBytes > 0) { - ctx.write(ZERO_BUFFER.slice(0, framePaddingBytes), promiseAggregator.newPromise()); + if (paddingBytes(framePaddingBytes) > 0) { + ctx.write(ZERO_BUFFER.slice(0, paddingBytes(framePaddingBytes)), promiseAggregator.newPromise()); } } while (!lastFrame); } catch (Throwable t) { @@ -302,7 +303,7 @@ public ChannelFuture writePushPromise(ChannelHandlerContext ctx, int streamId, // Read the first fragment (possibly everything). Http2Flags flags = new Http2Flags().paddingPresent(padding > 0); // INT_FIELD_LENGTH is for the length of the promisedStreamId - int nonFragmentLength = INT_FIELD_LENGTH + padding + flags.getPaddingPresenceFieldLength(); + int nonFragmentLength = INT_FIELD_LENGTH + padding; int maxFragmentLength = maxFrameSize - nonFragmentLength; ByteBuf fragment = headerBlock.readRetainedSlice(min(headerBlock.readableBytes(), maxFragmentLength)); @@ -320,8 +321,9 @@ public ChannelFuture writePushPromise(ChannelHandlerContext ctx, int streamId, // Write the first fragment. ctx.write(fragment, promiseAggregator.newPromise()); - if (padding > 0) { // Write out the padding, if any. - ctx.write(ZERO_BUFFER.slice(0, padding), promiseAggregator.newPromise()); + // Write out the padding, if any. + if (paddingBytes(padding) > 0) { + ctx.write(ZERO_BUFFER.slice(0, paddingBytes(padding)), promiseAggregator.newPromise()); } if (!flags.endOfHeaders()) { @@ -426,7 +428,7 @@ private ChannelFuture writeHeadersInternal(ChannelHandlerContext ctx, new Http2Flags().endOfStream(endStream).priorityPresent(hasPriority).paddingPresent(padding > 0); // Read the first fragment (possibly everything). - int nonFragmentBytes = padding + flags.getNumPriorityBytes() + flags.getPaddingPresenceFieldLength(); + int nonFragmentBytes = padding + flags.getNumPriorityBytes(); int maxFragmentLength = maxFrameSize - nonFragmentBytes; ByteBuf fragment = headerBlock.readRetainedSlice(min(headerBlock.readableBytes(), maxFragmentLength)); @@ -450,8 +452,9 @@ private ChannelFuture writeHeadersInternal(ChannelHandlerContext ctx, // Write the first fragment. ctx.write(fragment, promiseAggregator.newPromise()); - if (padding > 0) { // Write out the padding, if any. - ctx.write(ZERO_BUFFER.slice(0, padding), promiseAggregator.newPromise()); + // Write out the padding, if any. + if (paddingBytes(padding) > 0) { + ctx.write(ZERO_BUFFER.slice(0, paddingBytes(padding)), promiseAggregator.newPromise()); } if (!flags.endOfHeaders()) { @@ -473,8 +476,7 @@ private ChannelFuture writeHeadersInternal(ChannelHandlerContext ctx, private ChannelFuture writeContinuationFrames(ChannelHandlerContext ctx, int streamId, ByteBuf headerBlock, int padding, SimpleChannelPromiseAggregator promiseAggregator) { Http2Flags flags = new Http2Flags().paddingPresent(padding > 0); - int nonFragmentLength = padding + flags.getPaddingPresenceFieldLength(); - int maxFragmentLength = maxFrameSize - nonFragmentLength; + int maxFragmentLength = maxFrameSize - padding; // TODO: same padding is applied to all frames, is this desired? 
if (maxFragmentLength <= 0) { return promiseAggregator.setFailure(new IllegalArgumentException( @@ -484,7 +486,7 @@ private ChannelFuture writeContinuationFrames(ChannelHandlerContext ctx, int str if (headerBlock.isReadable()) { // The frame header (and padding) only changes on the last frame, so allocate it once and re-use int fragmentReadableBytes = min(headerBlock.readableBytes(), maxFragmentLength); - int payloadLength = fragmentReadableBytes + nonFragmentLength; + int payloadLength = fragmentReadableBytes + padding; ByteBuf buf = ctx.alloc().buffer(CONTINUATION_FRAME_HEADER_LENGTH); writeFrameHeaderInternal(buf, payloadLength, CONTINUATION, flags, streamId); writePaddingLength(buf, padding); @@ -493,7 +495,7 @@ private ChannelFuture writeContinuationFrames(ChannelHandlerContext ctx, int str fragmentReadableBytes = min(headerBlock.readableBytes(), maxFragmentLength); ByteBuf fragment = headerBlock.readRetainedSlice(fragmentReadableBytes); - payloadLength = fragmentReadableBytes + nonFragmentLength; + payloadLength = fragmentReadableBytes + padding; if (headerBlock.isReadable()) { ctx.write(buf.retain(), promiseAggregator.newPromise()); } else { @@ -509,18 +511,28 @@ private ChannelFuture writeContinuationFrames(ChannelHandlerContext ctx, int str ctx.write(fragment, promiseAggregator.newPromise()); // Write out the padding, if any. - if (padding > 0) { - ctx.write(ZERO_BUFFER.slice(0, padding), promiseAggregator.newPromise()); + if (paddingBytes(padding) > 0) { + ctx.write(ZERO_BUFFER.slice(0, paddingBytes(padding)), promiseAggregator.newPromise()); } } while(headerBlock.isReadable()); } return promiseAggregator; } - private static void writePaddingLength(ByteBuf buf, int paddingLength) { - if (paddingLength > 0) { + /** + * Returns the number of padding bytes that should be appended to the end of a frame. + */ + private static int paddingBytes(int padding) { + // The padding parameter contains the 1 byte pad length field as well as the trailing padding bytes. + // Subtract 1, so to only get the number of padding bytes that need to be appended to the end of a frame. + return padding - 1; + } + + private static void writePaddingLength(ByteBuf buf, int padding) { + if (padding > 0) { // It is assumed that the padding length has been bounds checked before this - buf.writeByte(paddingLength); + // Minus 1, as the pad length field is included in the padding parameter and is 1 byte wide. 
+ buf.writeByte(padding - 1); } } @@ -536,12 +548,6 @@ private static void verifyStreamOrConnectionId(int streamId, String argumentName } } - private static void verifyPadding(int padding) { - if (padding < 0 || padding > MAX_UNSIGNED_BYTE) { - throw new IllegalArgumentException("Invalid padding value: " + padding); - } - } - private static void verifyWeight(short weight) { if (weight < MIN_WEIGHT || weight > MAX_WEIGHT) { throw new IllegalArgumentException("Invalid weight: " + weight); @@ -601,7 +607,7 @@ ByteBuf slice(int data, int padding, boolean endOfStream) { flags.endOfStream(endOfStream); frameHeader = buffer.readSlice(DATA_FRAME_HEADER_LENGTH).writerIndex(0); - int payloadLength = data + padding + flags.getPaddingPresenceFieldLength(); + int payloadLength = data + padding; writeFrameHeaderInternal(frameHeader, payloadLength, DATA, flags, streamId); writePaddingLength(frameHeader, padding); } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersFrame.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersFrame.java index d12b4365e34..6ad54e097f8 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersFrame.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersFrame.java @@ -17,6 +17,7 @@ import io.netty.util.internal.UnstableApi; +import static io.netty.handler.codec.http2.Http2CodecUtil.verifyPadding; import static io.netty.util.internal.ObjectUtil.checkNotNull; /** @@ -51,14 +52,13 @@ public DefaultHttp2HeadersFrame(Http2Headers headers, boolean endStream) { * * @param headers the non-{@code null} headers to send * @param endStream whether these headers should terminate the stream - * @param padding additional bytes that should be added to obscure the true content size + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). */ public DefaultHttp2HeadersFrame(Http2Headers headers, boolean endStream, int padding) { this.headers = checkNotNull(headers, "headers"); this.endStream = endStream; - if (padding < 0 || padding > Http2CodecUtil.MAX_UNSIGNED_BYTE) { - throw new IllegalArgumentException("padding must be non-negative and less than 256"); - } + verifyPadding(padding); this.padding = padding; } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java index b1f608482c0..8079bd00f7a 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java @@ -46,6 +46,11 @@ public final class Http2CodecUtil { public static final int PING_FRAME_PAYLOAD_LENGTH = 8; public static final short MAX_UNSIGNED_BYTE = 0xFF; + /** + * The maximum number of padding bytes. That is the 255 padding bytes appended to the end of a frame and the 1 byte + * pad length field. + */ + public static final int MAX_PADDING = 256; public static final int MAX_UNSIGNED_SHORT = 0xFFFF; public static final long MAX_UNSIGNED_INT = 0xFFFFFFFFL; public static final int FRAME_HEADER_LENGTH = 9; @@ -340,5 +345,11 @@ private boolean tryPromise() { } } + public static void verifyPadding(int padding) { + if (padding < 0 || padding > MAX_PADDING) { + throw new IllegalArgumentException(String.format("Invalid padding '%d'. 
Padding must be between 0 and " + + "%d (inclusive).", padding, MAX_PADDING)); + } + } private Http2CodecUtil() { } } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2DataWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2DataWriter.java index 911e336220d..26bfab7bd49 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2DataWriter.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2DataWriter.java @@ -32,7 +32,10 @@ public interface Http2DataWriter { * @param ctx the context to use for writing. * @param streamId the stream for which to send the frame. * @param data the payload of the frame. This will be released by this method. - * @param padding the amount of padding to be added to the end of the frame + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). A 1 byte padding is encoded as just the pad length field with value 0. + * A 256 byte padding is encoded as the pad length field with value 255 and 255 padding bytes + * appended to the end of the frame. * @param endStream indicates if this is the last frame to be sent for the stream. * @param promise the promise for the write. * @return the future for the write. diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java index 39d51f38776..f4de4f51406 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java @@ -30,7 +30,8 @@ public interface Http2FrameListener { * @param ctx the context from the handler where the frame was read. * @param streamId the subject stream for the frame. * @param data payload buffer for the frame. This buffer will be released by the codec. - * @param padding the number of padding bytes found at the end of the frame. + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). * @param endOfStream Indicates whether this is the last frame to be sent from the remote endpoint for this stream. * @return the number of bytes that have been processed by the application. The returned bytes are used by the * inbound flow controller to determine the appropriate time to expand the inbound flow control window (i.e. send @@ -60,7 +61,8 @@ int onDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data, int paddin * @param ctx the context from the handler where the frame was read. * @param streamId the subject stream for the frame. * @param headers the received headers. - * @param padding the number of padding bytes found at the end of the frame. + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). * @param endOfStream Indicates whether this is the last frame to be sent from the remote endpoint * for this stream. */ @@ -89,7 +91,8 @@ void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers headers * connection. * @param weight the new weight for the stream. * @param exclusive whether or not the stream should be the exclusive dependent of its parent. - * @param padding the number of padding bytes found at the end of the frame. + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). 
* @param endOfStream Indicates whether this is the last frame to be sent from the remote endpoint * for this stream. */ @@ -176,7 +179,8 @@ void onPriorityRead(ChannelHandlerContext ctx, int streamId, int streamDependenc * @param streamId the stream the frame was sent on. * @param promisedStreamId the ID of the promised stream. * @param headers the received headers. - * @param padding the number of padding bytes found at the end of the frame. + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). */ void onPushPromiseRead(ChannelHandlerContext ctx, int streamId, int promisedStreamId, Http2Headers headers, int padding) throws Http2Exception; diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameWriter.java index 48fd7d58df3..111e6ba2c52 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameWriter.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameWriter.java @@ -51,7 +51,8 @@ interface Configuration { * @param ctx the context to use for writing. * @param streamId the stream for which to send the frame. * @param headers the headers to be sent. - * @param padding the amount of padding to be added to the end of the frame + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). * @param endStream indicates if this is the last frame to be sent for the stream. * @param promise the promise for the write. * @return the future for the write. @@ -69,7 +70,8 @@ ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers * depend on the connection. * @param weight the weight for this stream. * @param exclusive whether this stream should be the exclusive dependant of its parent. - * @param padding the amount of padding to be added to the end of the frame + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). * @param endStream indicates if this is the last frame to be sent for the stream. * @param promise the promise for the write. * @return the future for the write. @@ -145,7 +147,8 @@ ChannelFuture writePing(ChannelHandlerContext ctx, boolean ack, ByteBuf data, * @param streamId the stream for which to send the frame. * @param promisedStreamId the ID of the promised stream. * @param headers the headers to be sent. - * @param padding the amount of padding to be added to the end of the frame + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). * @param promise the promise for the write. * @return the future for the write. */ diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java index a3b2bc6ceef..69e11f845c7 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java @@ -42,7 +42,8 @@ public interface Http2LocalFlowController extends Http2FlowController { * stream} is {@code null} or closed, flow control should only be applied to the connection window and the bytes are * immediately consumed. * @param data payload buffer for the frame. 
- * @param padding the number of padding bytes found at the end of the frame. + * @param padding additional bytes that should be added to obscure the true content size. Must be between 0 and + * 256 (inclusive). * @param endOfStream Indicates whether this is the last frame to be sent from the remote endpoint for this stream. * @throws Http2Exception if any flow control errors are encountered. */
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java index 99f285d9194..3e771b459d4 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java @@ -39,7 +39,7 @@ import java.util.List; import static io.netty.buffer.Unpooled.EMPTY_BUFFER; -import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_MAX_HEADER_SIZE; +import static io.netty.handler.codec.http2.Http2CodecUtil.*; import static io.netty.handler.codec.http2.Http2TestUtil.randomString; import static io.netty.util.CharsetUtil.UTF_8; import static java.lang.Math.min; @@ -148,17 +148,17 @@ public void emptyDataShouldMatch() throws Exception { @Test public void dataShouldMatch() throws Exception { final ByteBuf data = data(10); - writer.writeData(ctx, STREAM_ID, data.slice(), 0, false, ctx.newPromise()); + writer.writeData(ctx, STREAM_ID, data.slice(), 1, false, ctx.newPromise()); readFrames(); - verify(listener).onDataRead(eq(ctx), eq(STREAM_ID), eq(data), eq(0), eq(false)); + verify(listener).onDataRead(eq(ctx), eq(STREAM_ID), eq(data), eq(1), eq(false)); } @Test public void dataWithPaddingShouldMatch() throws Exception { final ByteBuf data = data(10); - writer.writeData(ctx, STREAM_ID, data.slice(), 0xFF, true, ctx.newPromise()); + writer.writeData(ctx, STREAM_ID, data.slice(), MAX_PADDING, true, ctx.newPromise()); readFrames(); - verify(listener).onDataRead(eq(ctx), eq(STREAM_ID), eq(data), eq(0xFF), eq(true)); + verify(listener).onDataRead(eq(ctx), eq(STREAM_ID), eq(data), eq(MAX_PADDING), eq(true)); } @Test @@ -210,9 +210,9 @@ public void emptyHeadersShouldMatch() throws Exception { @Test public void emptyHeadersWithPaddingShouldMatch() throws Exception { final Http2Headers headers = EmptyHttp2Headers.INSTANCE; - writer.writeHeaders(ctx, STREAM_ID, headers, 0xFF, true, ctx.newPromise()); + writer.writeHeaders(ctx, STREAM_ID, headers, MAX_PADDING, true, ctx.newPromise()); readFrames(); - verify(listener).onHeadersRead(eq(ctx), eq(STREAM_ID), eq(headers), eq(0xFF), eq(true)); + verify(listener).onHeadersRead(eq(ctx), eq(STREAM_ID), eq(headers), eq(MAX_PADDING), eq(true)); } @Test @@ -243,18 +243,18 @@ public void headersFrameWithPriorityShouldMatch() throws Exception { @Test public void headersWithPaddingWithoutPriorityShouldMatch() throws Exception { final Http2Headers headers = headers(); - writer.writeHeaders(ctx, STREAM_ID, headers, 0xFF, true, ctx.newPromise()); + writer.writeHeaders(ctx, STREAM_ID, headers, MAX_PADDING, true, ctx.newPromise()); readFrames(); - verify(listener).onHeadersRead(eq(ctx), eq(STREAM_ID), eq(headers), eq(0xFF), eq(true)); + verify(listener).onHeadersRead(eq(ctx), eq(STREAM_ID), eq(headers), eq(MAX_PADDING), eq(true)); } @Test public void headersWithPaddingWithPriorityShouldMatch() throws Exception { final Http2Headers headers = headers(); - writer.writeHeaders(ctx, STREAM_ID, headers, 2, (short) 3, true, 0xFF, true, ctx.newPromise()); + writer.writeHeaders(ctx, STREAM_ID, headers, 2, (short) 3, true, 1, true, ctx.newPromise()); readFrames(); verify(listener).onHeadersRead(eq(ctx), eq(STREAM_ID), eq(headers), eq(2), eq((short) 3), eq(true), - eq(0xFF), eq(true)); + eq(1), eq(true)); } @Test @@ -269,16 +269,16 @@ public void continuedHeadersShouldMatch() throws Exception { @Test public void continuedHeadersWithPaddingShouldMatch() throws 
Exception { final Http2Headers headers = largeHeaders(); - writer.writeHeaders(ctx, STREAM_ID, headers, 2, (short) 3, true, 0xFF, true, ctx.newPromise()); + writer.writeHeaders(ctx, STREAM_ID, headers, 2, (short) 3, true, MAX_PADDING, true, ctx.newPromise()); readFrames(); verify(listener).onHeadersRead(eq(ctx), eq(STREAM_ID), eq(headers), eq(2), eq((short) 3), eq(true), - eq(0xFF), eq(true)); + eq(MAX_PADDING), eq(true)); } @Test public void headersThatAreTooBigShouldFail() throws Exception { final Http2Headers headers = headersOfSize(DEFAULT_MAX_HEADER_SIZE + 1); - writer.writeHeaders(ctx, STREAM_ID, headers, 2, (short) 3, true, 0xFF, true, ctx.newPromise()); + writer.writeHeaders(ctx, STREAM_ID, headers, 2, (short) 3, true, MAX_PADDING, true, ctx.newPromise()); try { readFrames(); fail(); @@ -308,9 +308,9 @@ public void pushPromiseFrameShouldMatch() throws Exception { @Test public void pushPromiseWithPaddingShouldMatch() throws Exception { final Http2Headers headers = headers(); - writer.writePushPromise(ctx, STREAM_ID, 2, headers, 0xFF, ctx.newPromise()); + writer.writePushPromise(ctx, STREAM_ID, 2, headers, MAX_PADDING, ctx.newPromise()); readFrames(); - verify(listener).onPushPromiseRead(eq(ctx), eq(STREAM_ID), eq(2), eq(headers), eq(0xFF)); + verify(listener).onPushPromiseRead(eq(ctx), eq(STREAM_ID), eq(2), eq(headers), eq(MAX_PADDING)); } @Test
val
train
2016-06-24T17:08:30
"2016-06-21T15:51:54Z"
ejona86
val
netty/netty/5054_5451
netty/netty
netty/netty/5054
netty/netty/5451
[ "timestamp(timedelta=23.0, similarity=0.9354724155765581)" ]
c9f963dc8e75d9abb89750807ec6627b40f7aada
3ccdb6f1095056e8b59c2092e3bde6e86df76f07
[ "checking...\n", "Verified that the issue not exists in 4.x. Looking into 3.10.x now.\n", "Hi @normanmaurer I would like to know if there are any plans to fix this soon as I would like to see 3.10.6.Final released as soon as it is humanly possible.\n", "I had no time yet... If you want help submit a PR 👍\n\n> Am 11.04.2016 um 15:30 schrieb Guido Medina [email protected]:\n> \n> Hi @normanmaurer I would like to know if there are any plans to fix this soon as I would like to see 3.10.6.Final released as soon as it is humanly possible.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly or view it on GitHub\n", "Fixed by https://github.com/netty/netty/pull/5451\n" ]
[ "This is a copy & paste from `4.0` branch, I noticed it preferred direct access vs `for-each byte` which creates an iterator.\n", "This is the actual bug fix, to not return true when `mask == -1` which is also a copy & paste from `4.0` branch.\n" ]
"2016-06-25T14:05:38Z"
[ "defect" ]
IpV4Subnet.contains() always returns true for single-address subnets (3.10.4)
I may have this wrong (since I'm fairly new to network programming) but my understanding is that a /32 CIDR block represents a single IP address, so I would expect an IpV4Subnet instance with this range to only match that exact IP - but, for example new IpV4Subnet("1.1.1.1/32").contains("10.0.16.64") both return true (at least in the version I tested, netty-3.10.4) The same is true for an IpV4Subnet instance constructed with mask notation (eg 1.1.1.1/255.255.255.255).
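As a rough illustration of why a /32 subnet is special here, the sketch below derives the mask from the prefix length and shows that a /32 mask is -1 as a signed int. The derivation is an assumption about how such masks are typically computed, not code copied from the Netty source; only the SUBNET_MASK constant and the final `(address & mask) == subnet` check appear in the patch below.

```java
// Illustrative sketch only: class and method names are hypothetical.
public final class SubnetMaskSketch {

    private static final int SUBNET_MASK = 0x80000000;

    // Assumed derivation: the arithmetic right shift propagates the sign bit, so the
    // top 'prefix' bits end up set. For prefix == 32 every bit is set, i.e. the mask is -1.
    static int cidrToMask(int prefix) {
        return prefix == 0 ? 0 : SUBNET_MASK >> (prefix - 1);
    }

    // The corrected membership test from the fix: no special case for mask == -1.
    static boolean contains(int address, int subnet, int mask) {
        return (address & mask) == subnet;
    }

    public static void main(String[] args) {
        int mask = cidrToMask(32);          // 0xFFFFFFFF == -1
        int subnet = 0x01010101 & mask;     // 1.1.1.1
        System.out.println(contains(0x0A001040, subnet, mask)); // 10.0.16.64 -> false
        System.out.println(contains(0x01010101, subnet, mask)); // 1.1.1.1    -> true
    }
}
```

Because a /32 mask is exactly -1, the old `if (mask == -1) { return true; }` shortcut (removed in the patch below) turned every single-address subnet into a match-everything subnet.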
[ "src/main/java/org/jboss/netty/handler/ipfilter/IpV4Subnet.java" ]
[ "src/main/java/org/jboss/netty/handler/ipfilter/IpV4Subnet.java" ]
[]
diff --git a/src/main/java/org/jboss/netty/handler/ipfilter/IpV4Subnet.java b/src/main/java/org/jboss/netty/handler/ipfilter/IpV4Subnet.java index 09dfdf2bb25..51d30964c04 100644 --- a/src/main/java/org/jboss/netty/handler/ipfilter/IpV4Subnet.java +++ b/src/main/java/org/jboss/netty/handler/ipfilter/IpV4Subnet.java @@ -15,8 +15,6 @@ */ package org.jboss.netty.handler.ipfilter; -import org.jboss.netty.logging.InternalLogger; -import org.jboss.netty.logging.InternalLoggerFactory; import org.jboss.netty.util.internal.StringUtil; import java.net.InetAddress; @@ -44,13 +42,8 @@ */ public class IpV4Subnet implements IpSet, Comparable<IpV4Subnet> { - private static final InternalLogger logger = - InternalLoggerFactory.getInstance(IpV4Subnet.class); - private static final int SUBNET_MASK = 0x80000000; - private static final int BYTE_ADDRESS_MASK = 0xFF; - private InetAddress inetAddress; private int subnet; @@ -142,13 +135,13 @@ private void setNetId(String netId) throws UnknownHostException { * @return the integer representation */ private static int toInt(InetAddress inetAddress1) { - byte[] address = inetAddress1.getAddress(); - int net = 0; - for (byte addres : address) { - net <<= 8; - net |= addres & BYTE_ADDRESS_MASK; - } - return net; + byte[] octets = inetAddress1.getAddress(); + assert octets.length == 4; + + return (octets[0] & 0xff) << 24 | + (octets[1] & 0xff) << 16 | + (octets[2] & 0xff) << 8 | + octets[3] & 0xff; } /** Sets the BaseAdress of the Subnet. */ @@ -210,10 +203,6 @@ public boolean contains(String ipAddr) throws UnknownHostException { * set network. */ public boolean contains(InetAddress inetAddress1) { - if (mask == -1) { - // ANY - return true; - } return (toInt(inetAddress1) & mask) == subnet; }
null
val
train
2016-03-07T16:15:14
"2016-03-29T23:19:55Z"
ashleymercer
val
netty/netty/5457_5461
netty/netty
netty/netty/5457
netty/netty/5461
[ "timestamp(timedelta=13.0, similarity=0.948771872386056)" ]
5e649850898889a8d2f1e526db610a8fca19c1ff
a34c77c29e9eff8ab5bf22c6fdce0b2bdcde06f7
[ "@vietj +1 ... Want to provide a pr?\n", "will do soon @normanmaurer \n", "@vietj thanks a lot!\n", "I'm seing an issue in tests because the channel is not yet initialized on registration, i.e the code:\n\n```\n b.handler(new ChannelInitializer<DatagramChannel>() {\n @Override\n protected void initChannel(DatagramChannel ch) throws Exception {\n ch.pipeline().addLast(DECODER, ENCODER, responseHandler);\n }\n });\nb.option(ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION, true);\n\nChannelFuture channelFuture = b.register();\n```\n\ndoes not guarantee that the `initChannel` is invoked when the `channelFuture` is succeeded. As consequence sometimes during tests the `DatagramDnsQuery` are not encoded as the `ENCODER` is not yet added to the pipeline.\n\nOne solution is to use a promise that is resolved after the pipeline has been modified instead of using the channelFuture.\n\nThoughts ?\n", "@vietj sounds good.\n", "@vietj I think even better would be to have the promise be notified once channelActive is called.\n", "Fixed by https://github.com/netty/netty/pull/5461\n" ]
[ "woups\n", "@vietj make the constructor package private\n" ]
"2016-06-28T10:14:57Z"
[ "defect" ]
DnsNameResolver should not bind locally
Netty version: 4.1.1.Final (and master) Context: Dns resolution failures when using the `DnsNameResolver` and the JVM is not authorized to bind datagram channels. Steps to reproduce: Use the `DnsNameResolver` with an environment that does not allow to bind datagram channels. The current `DnsNameResolver` binds locally a `DatagramChannel` which is not necessary (and not always permitted). The bind is used to obtain a channel and it is possible to create a `DatagramChannel` directly without the need to bind. The method `newChannel` in `DnsNameResolver` can be changed from: ``` private ChannelFuture newChannel( ChannelFactory<? extends DatagramChannel> channelFactory, InetSocketAddress localAddress) { Bootstrap b = new Bootstrap(); b.group(executor()); b.channelFactory(channelFactory); final DnsResponseHandler responseHandler = new DnsResponseHandler(); b.handler(new ChannelInitializer<DatagramChannel>() { @Override protected void initChannel(DatagramChannel ch) throws Exception { ch.pipeline().addLast(DECODER, ENCODER, responseHandler); } }); ChannelFuture bindFuture = b.bind(localAddress); bindFuture.channel().closeFuture().addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { resolveCache.clear(); } }); return bindFuture; } ``` to ``` private ChannelFuture newChannel( ChannelFactory<? extends DatagramChannel> channelFactory, InetSocketAddress localAddress) { DatagramChannel channel = channelFactory.newChannel(); channel.config().setOption(ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION, true); channel.config().setRecvByteBufAllocator(new FixedRecvByteBufAllocator(maxPayloadSize)); final DnsResponseHandler responseHandler = new DnsResponseHandler(); channel.pipeline().addLast(DECODER, ENCODER, responseHandler); channel.closeFuture().addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { resolveCache.clear(); } }); return executor().register(channel); } ``` to achieve the same without binding.
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java" ]
[]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java index 26f99905e47..27f7d2ab011 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsAddressResolverGroup.java @@ -34,7 +34,6 @@ import java.util.List; import java.util.concurrent.ConcurrentMap; -import static io.netty.resolver.dns.DnsNameResolver.ANY_LOCAL_ADDR; import static io.netty.util.internal.PlatformDependent.newConcurrentHashMap; /** @@ -44,33 +43,21 @@ public class DnsAddressResolverGroup extends AddressResolverGroup<InetSocketAddress> { private final ChannelFactory<? extends DatagramChannel> channelFactory; - private final InetSocketAddress localAddress; private final DnsServerAddresses nameServerAddresses; private final ConcurrentMap<String, Promise<InetAddress>> resolvesInProgress = newConcurrentHashMap(); private final ConcurrentMap<String, Promise<List<InetAddress>>> resolveAllsInProgress = newConcurrentHashMap(); - public DnsAddressResolverGroup( - Class<? extends DatagramChannel> channelType, DnsServerAddresses nameServerAddresses) { - this(channelType, ANY_LOCAL_ADDR, nameServerAddresses); - } - public DnsAddressResolverGroup( Class<? extends DatagramChannel> channelType, - InetSocketAddress localAddress, DnsServerAddresses nameServerAddresses) { - this(new ReflectiveChannelFactory<DatagramChannel>(channelType), localAddress, nameServerAddresses); - } - - public DnsAddressResolverGroup( - ChannelFactory<? extends DatagramChannel> channelFactory, DnsServerAddresses nameServerAddresses) { - this(channelFactory, ANY_LOCAL_ADDR, nameServerAddresses); + DnsServerAddresses nameServerAddresses) { + this(new ReflectiveChannelFactory<DatagramChannel>(channelType), nameServerAddresses); } public DnsAddressResolverGroup( ChannelFactory<? extends DatagramChannel> channelFactory, - InetSocketAddress localAddress, DnsServerAddresses nameServerAddresses) { + DnsServerAddresses nameServerAddresses) { this.channelFactory = channelFactory; - this.localAddress = localAddress; this.nameServerAddresses = nameServerAddresses; } @@ -83,20 +70,20 @@ protected final AddressResolver<InetSocketAddress> newResolver(EventExecutor exe " (expected: " + StringUtil.simpleClassName(EventLoop.class)); } - return newResolver((EventLoop) executor, channelFactory, localAddress, nameServerAddresses); + return newResolver((EventLoop) executor, channelFactory, nameServerAddresses); } /** - * @deprecated Override {@link #newNameResolver(EventLoop, ChannelFactory, InetSocketAddress, DnsServerAddresses)}. + * @deprecated Override {@link #newNameResolver(EventLoop, ChannelFactory, DnsServerAddresses)}. */ @Deprecated protected AddressResolver<InetSocketAddress> newResolver( EventLoop eventLoop, ChannelFactory<? extends DatagramChannel> channelFactory, - InetSocketAddress localAddress, DnsServerAddresses nameServerAddresses) throws Exception { + DnsServerAddresses nameServerAddresses) throws Exception { final NameResolver<InetAddress> resolver = new InflightNameResolver<InetAddress>( eventLoop, - newNameResolver(eventLoop, channelFactory, localAddress, nameServerAddresses), + newNameResolver(eventLoop, channelFactory, nameServerAddresses), resolvesInProgress, resolveAllsInProgress); @@ -109,11 +96,9 @@ protected AddressResolver<InetSocketAddress> newResolver( */ protected NameResolver<InetAddress> newNameResolver(EventLoop eventLoop, ChannelFactory<? 
extends DatagramChannel> channelFactory, - InetSocketAddress localAddress, DnsServerAddresses nameServerAddresses) throws Exception { return new DnsNameResolverBuilder(eventLoop) .channelFactory(channelFactory) - .localAddress(localAddress) .nameServerAddresses(nameServerAddresses) .build(); } diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java index 9fed94cbf1c..1fc557ac078 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java @@ -17,12 +17,14 @@ import io.netty.bootstrap.Bootstrap; import io.netty.channel.AddressedEnvelope; +import io.netty.channel.Channel; import io.netty.channel.ChannelFactory; import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelFutureListener; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelInboundHandlerAdapter; import io.netty.channel.ChannelInitializer; +import io.netty.channel.ChannelOption; import io.netty.channel.EventLoop; import io.netty.channel.FixedRecvByteBufAllocator; import io.netty.channel.socket.DatagramChannel; @@ -65,8 +67,6 @@ public class DnsNameResolver extends InetNameResolver { private static final String LOCALHOST = "localhost"; private static final InetAddress LOCALHOST_ADDRESS; - static final InetSocketAddress ANY_LOCAL_ADDR = new InetSocketAddress(0); - static final InternetProtocolFamily[] DEFAULT_RESOLVE_ADDRESS_TYPES = new InternetProtocolFamily[2]; static { @@ -88,7 +88,7 @@ public class DnsNameResolver extends InetNameResolver { private static final DatagramDnsQueryEncoder ENCODER = new DatagramDnsQueryEncoder(); final DnsServerAddresses nameServerAddresses; - final ChannelFuture bindFuture; + final Future<Channel> channelFuture; final DatagramChannel ch; /** @@ -123,7 +123,6 @@ protected DnsServerAddressStream initialValue() throws Exception { * * @param eventLoop the {@link EventLoop} which will perform the communication with the DNS servers * @param channelFactory the {@link ChannelFactory} that will create a {@link DatagramChannel} - * @param localAddress the local address of the {@link DatagramChannel} * @param nameServerAddresses the addresses of the DNS server. For each DNS query, a new stream is created from * this to determine which DNS server should be contacted for the next retry in case * of failure. @@ -140,9 +139,8 @@ protected DnsServerAddressStream initialValue() throws Exception { public DnsNameResolver( EventLoop eventLoop, ChannelFactory<? 
extends DatagramChannel> channelFactory, - InetSocketAddress localAddress, DnsServerAddresses nameServerAddresses, - DnsCache resolveCache, + final DnsCache resolveCache, long queryTimeoutMillis, InternetProtocolFamily[] resolvedAddressTypes, boolean recursionDesired, @@ -154,7 +152,6 @@ public DnsNameResolver( super(eventLoop); checkNotNull(channelFactory, "channelFactory"); - checkNotNull(localAddress, "localAddress"); this.nameServerAddresses = checkNotNull(nameServerAddresses, "nameServerAddresses"); this.queryTimeoutMillis = checkPositive(queryTimeoutMillis, "queryTimeoutMillis"); this.resolvedAddressTypes = checkNonEmpty(resolvedAddressTypes, "resolvedAddressTypes"); @@ -166,18 +163,11 @@ public DnsNameResolver( this.hostsFileEntriesResolver = checkNotNull(hostsFileEntriesResolver, "hostsFileEntriesResolver"); this.resolveCache = resolveCache; - bindFuture = newChannel(channelFactory, localAddress); - ch = (DatagramChannel) bindFuture.channel(); - ch.config().setRecvByteBufAllocator(new FixedRecvByteBufAllocator(maxPayloadSize)); - } - - private ChannelFuture newChannel( - ChannelFactory<? extends DatagramChannel> channelFactory, InetSocketAddress localAddress) { - Bootstrap b = new Bootstrap(); b.group(executor()); b.channelFactory(channelFactory); - final DnsResponseHandler responseHandler = new DnsResponseHandler(); + b.option(ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION, true); + final DnsResponseHandler responseHandler = new DnsResponseHandler(executor().<Channel>newPromise()); b.handler(new ChannelInitializer<DatagramChannel>() { @Override protected void initChannel(DatagramChannel ch) throws Exception { @@ -185,15 +175,16 @@ protected void initChannel(DatagramChannel ch) throws Exception { } }); - ChannelFuture bindFuture = b.bind(localAddress); - bindFuture.channel().closeFuture().addListener(new ChannelFutureListener() { + channelFuture = responseHandler.channelActivePromise; + ch = (DatagramChannel) b.register().channel(); + ch.config().setRecvByteBufAllocator(new FixedRecvByteBufAllocator(maxPayloadSize)); + + ch.closeFuture().addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { resolveCache.clear(); } }); - - return bindFuture; } /** @@ -606,6 +597,13 @@ private static Promise<AddressedEnvelope<DnsResponse, InetSocketAddress>> cast(P } private final class DnsResponseHandler extends ChannelInboundHandlerAdapter { + + private final Promise<Channel> channelActivePromise; + + DnsResponseHandler(Promise<Channel> channelActivePromise) { + this.channelActivePromise = channelActivePromise; + } + @Override public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception { try { @@ -628,6 +626,12 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception } } + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + super.channelActive(ctx); + channelActivePromise.setSuccess(ctx.channel()); + } + @Override public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { logger.warn("{} Unexpected exception: ", ch, cause); diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java index 6e9a56f3bb7..f6887e69dfc 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java @@ -26,7 +26,6 @@ import 
io.netty.util.internal.InternalThreadLocalMap; import io.netty.util.internal.UnstableApi; -import java.net.InetSocketAddress; import java.util.List; import static io.netty.util.internal.ObjectUtil.checkNotNull; @@ -39,7 +38,6 @@ public final class DnsNameResolverBuilder { private final EventLoop eventLoop; private ChannelFactory<? extends DatagramChannel> channelFactory; - private InetSocketAddress localAddress = DnsNameResolver.ANY_LOCAL_ADDR; private DnsServerAddresses nameServerAddresses = DnsServerAddresses.defaultAddresses(); private DnsCache resolveCache; private Integer minTtl; @@ -86,17 +84,6 @@ public DnsNameResolverBuilder channelType(Class<? extends DatagramChannel> chann return channelFactory(new ReflectiveChannelFactory<DatagramChannel>(channelType)); } - /** - * Sets the local address of the {@link DatagramChannel} - * - * @param localAddress the local address - * @return {@code this} - */ - public DnsNameResolverBuilder localAddress(InetSocketAddress localAddress) { - this.localAddress = localAddress; - return this; - } - /** * Sets the addresses of the DNS server. * @@ -318,7 +305,6 @@ public DnsNameResolver build() { return new DnsNameResolver( eventLoop, channelFactory, - localAddress, nameServerAddresses, cache, queryTimeoutMillis, diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java index ae72b09545f..897d2f3b40e 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java @@ -17,6 +17,7 @@ import io.netty.buffer.Unpooled; import io.netty.channel.AddressedEnvelope; +import io.netty.channel.Channel; import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelFutureListener; import io.netty.handler.codec.dns.DatagramDnsQuery; @@ -27,6 +28,8 @@ import io.netty.handler.codec.dns.DnsRecordType; import io.netty.handler.codec.dns.DnsResponse; import io.netty.handler.codec.dns.DnsSection; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.GenericFutureListener; import io.netty.util.concurrent.Promise; import io.netty.util.concurrent.ScheduledFuture; import io.netty.util.internal.StringUtil; @@ -107,12 +110,12 @@ void query() { } private void sendQuery(final DnsQuery query) { - if (parent.bindFuture.isDone()) { + if (parent.channelFuture.isDone()) { writeQuery(query); } else { - parent.bindFuture.addListener(new ChannelFutureListener() { + parent.channelFuture.addListener(new GenericFutureListener<Future<? super Channel>>() { @Override - public void operationComplete(ChannelFuture future) throws Exception { + public void operationComplete(Future<? super Channel> future) throws Exception { if (future.isSuccess()) { writeQuery(query); } else {
null
val
train
2016-06-28T09:34:01
"2016-06-27T23:52:46Z"
vietj
val
netty/netty/5357_5466
netty/netty
netty/netty/5357
netty/netty/5466
[ "timestamp(timedelta=19.0, similarity=0.9095846852576694)" ]
804e058e27e16e346b565ce4490f6177574e267d
1fabea9975fc92e603ae9aa0973c7bd2853ade7b
[ "For `hash(CharSequence)`, it's a bit dangerous to use `hashCode()`. In `CharSequence`'s Javadoc:\n\n> This interface does not refine the general contracts of the equals and hashCode methods. The result of comparing two objects that implement CharSequence is therefore, in general, undefined.\n", "@ejona86 - Great point. `name.hashCode()` would not work. I've updated the issue description to reflect this.\n" ]
[]
"2016-06-29T01:16:12Z"
[]
HPACK Encoder.java data structure improvements
Encoder.java's data structure looks very similar to the data structure used in `DefaultHeaders`. We should investigate making some of the same improvements in Encoder.java as were made in `DefaultHeaders`: - The underlying array should be a power of 2. Then we can use a mask `&` operation to index into the array. - The `hash(CharSequence)` function ~~should just use the `name.hashCode()`~~ could re-use the [CASE_INSENSITIVE_HASHER](https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/AsciiString.java#L1353) instead of inventing a new hash algorithm. - Also the negative value checks can be removed from `hash`. - Consider exposing the size of the hash array as a parameter to the constructor. For folks that don't want a Dynamic table, or if they anticipate it will be small ... we could limit the size consumed by the underlying array (this would be similar to [DefaultHeaders](https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java#L123).
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java", "codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java", "common/src/main/java/io/netty/util/AsciiString.java", "microbench/src/main/java/io/netty/microbench/http2/internal/hpack/EncoderBenchmark.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java", "codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java", "common/src/main/java/io/netty/util/AsciiString.java", "microbench/src/main/java/io/netty/microbench/http2/internal/hpack/EncoderBenchmark.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/TestCase.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java index ba0aa01bc28..4051dc37046 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java @@ -41,21 +41,22 @@ import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.IndexType.NEVER; import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.IndexType.NONE; import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.equalsConstantTime; +import static io.netty.util.internal.MathUtil.findNextPositivePowerOfTwo; +import static java.lang.Math.max; +import static java.lang.Math.min; public final class Encoder { - - private static final int BUCKET_SIZE = 17; - // for testing private final boolean useIndexing; private final boolean forceHuffmanOn; private final boolean forceHuffmanOff; - private final HuffmanEncoder huffmanEncoder = new HuffmanEncoder(); // a linked hash map of header fields - private final HeaderEntry[] headerFields = new HeaderEntry[BUCKET_SIZE]; + private final HeaderEntry[] headerFields; private final HeaderEntry head = new HeaderEntry(-1, AsciiString.EMPTY_STRING, AsciiString.EMPTY_STRING, Integer.MAX_VALUE, null); + private final HuffmanEncoder huffmanEncoder = new HuffmanEncoder(); + private final byte hashMask; private int size; private int capacity; @@ -63,7 +64,14 @@ public final class Encoder { * Creates a new encoder. */ public Encoder(int maxHeaderTableSize) { - this(maxHeaderTableSize, true, false, false); + this(maxHeaderTableSize, 16); + } + + /** + * Creates a new encoder. + */ + public Encoder(int maxHeaderTableSize, int arraySizeHint) { + this(maxHeaderTableSize, true, false, false, arraySizeHint); } /** @@ -73,7 +81,8 @@ public Encoder(int maxHeaderTableSize) { int maxHeaderTableSize, boolean useIndexing, boolean forceHuffmanOn, - boolean forceHuffmanOff + boolean forceHuffmanOff, + int arraySizeHint ) { if (maxHeaderTableSize < 0) { throw new IllegalArgumentException("Illegal Capacity: " + maxHeaderTableSize); @@ -82,6 +91,10 @@ public Encoder(int maxHeaderTableSize) { this.forceHuffmanOn = forceHuffmanOn; this.forceHuffmanOff = forceHuffmanOff; capacity = maxHeaderTableSize; + // Enforce a bound of [2, 128] because hashMask is a byte. The max possible value of hashMask is one less + // than the length of this array, and we want the mask to be > 0. + headerFields = new HeaderEntry[findNextPositivePowerOfTwo(max(2, min(arraySizeHint, 128)))]; + hashMask = (byte) (headerFields.length - 1); head.before = head.after = head; } @@ -302,7 +315,7 @@ private HeaderEntry getEntry(CharSequence name, CharSequence value) { if (length() == 0 || name == null || value == null) { return null; } - int h = hash(name); + int h = AsciiString.hashCode(name); int i = index(h); for (HeaderEntry e = headerFields[i]; e != null; e = e.next) { // To avoid short circuit behavior a bitwise operator is used instead of a boolean operator. 
@@ -321,26 +334,21 @@ private int getIndex(CharSequence name) { if (length() == 0 || name == null) { return -1; } - int h = hash(name); + int h = AsciiString.hashCode(name); int i = index(h); - int index = -1; for (HeaderEntry e = headerFields[i]; e != null; e = e.next) { if (e.hash == h && equalsConstantTime(name, e.name) != 0) { - index = e.index; - break; + return getIndex(e.index); } } - return getIndex(index); + return -1; } /** * Compute the index into the dynamic table given the index in the header entry. */ private int getIndex(int index) { - if (index == -1) { - return -1; - } - return index - head.before.index + 1; + return index == -1 ? -1 : index - head.before.index + 1; } /** @@ -362,7 +370,7 @@ private void add(CharSequence name, CharSequence value) { remove(); } - int h = hash(name); + int h = AsciiString.hashCode(name); int i = index(h); HeaderEntry old = headerFields[i]; HeaderEntry e = new HeaderEntry(h, name, value, head.before.index - 1, old); @@ -410,28 +418,11 @@ private void clear() { size = 0; } - /** - * Returns the hash code for the given header field name. - */ - private static int hash(CharSequence name) { - int h = 0; - for (int i = 0; i < name.length(); i++) { - h = 31 * h + name.charAt(i); - } - if (h > 0) { - return h; - } else if (h == Integer.MIN_VALUE) { - return Integer.MAX_VALUE; - } else { - return -h; - } - } - /** * Returns the index into the hash table for the hash code h. */ - private static int index(int h) { - return h % BUCKET_SIZE; + private int index(int h) { + return h & hashMask; } /** diff --git a/codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java b/codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java index b54015e8e9b..1a70f2b53f5 100644 --- a/codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java +++ b/codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java @@ -49,12 +49,6 @@ * @param <T> the type to use for return values when the intention is to return {@code this} object. */ public class DefaultHeaders<K, V, T extends Headers<K, V, T>> implements Headers<K, V, T> { - /** - * Enforce an upper bound of 128 because {@link #hashMask} is a byte. - * The max possible value of {@link #hashMask} is one less than this value. - */ - private static final int ARRAY_SIZE_HINT_MAX = min(128, - max(1, SystemPropertyUtil.getInt("io.netty.DefaultHeaders.arraySizeHintMax", 16))); /** * Constant used to seed the hash code generation. Could be anything but this was borrowed from murmur3. */ @@ -120,7 +114,9 @@ public DefaultHeaders(HashingStrategy<K> nameHashingStrategy, this.valueConverter = checkNotNull(valueConverter, "valueConverter"); this.nameValidator = checkNotNull(nameValidator, "nameValidator"); this.hashingStrategy = checkNotNull(nameHashingStrategy, "nameHashingStrategy"); - entries = new DefaultHeaders.HeaderEntry[findNextPositivePowerOfTwo(min(arraySizeHint, ARRAY_SIZE_HINT_MAX))]; + // Enforce a bound of [2, 128] because hashMask is a byte. The max possible value of hashMask is one less + // than the length of this array, and we want the mask to be > 0. 
+ entries = new DefaultHeaders.HeaderEntry[findNextPositivePowerOfTwo(max(2, min(arraySizeHint, 128)))]; hashMask = (byte) (entries.length - 1); head = new HeaderEntry<K, V>(); } diff --git a/common/src/main/java/io/netty/util/AsciiString.java b/common/src/main/java/io/netty/util/AsciiString.java index 98c9b3aee88..92cb6f8d412 100644 --- a/common/src/main/java/io/netty/util/AsciiString.java +++ b/common/src/main/java/io/netty/util/AsciiString.java @@ -1394,7 +1394,7 @@ public static int hashCode(CharSequence value) { return 0; } if (value.getClass() == AsciiString.class) { - return ((AsciiString) value).hashCode(); + return value.hashCode(); } return PlatformDependent.hashCodeAscii(value); diff --git a/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/EncoderBenchmark.java b/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/EncoderBenchmark.java index 7b4cdd98508..37200e9a19b 100644 --- a/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/EncoderBenchmark.java +++ b/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/EncoderBenchmark.java @@ -36,16 +36,24 @@ import io.netty.microbench.util.AbstractMicrobenchmark; import org.openjdk.jmh.annotations.Benchmark; import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; import org.openjdk.jmh.annotations.Level; +import org.openjdk.jmh.annotations.Measurement; import org.openjdk.jmh.annotations.Mode; import org.openjdk.jmh.annotations.Param; import org.openjdk.jmh.annotations.Setup; import org.openjdk.jmh.annotations.TearDown; +import org.openjdk.jmh.annotations.Threads; +import org.openjdk.jmh.annotations.Warmup; import org.openjdk.jmh.infra.Blackhole; import java.io.IOException; import java.util.List; +@Fork(1) +@Threads(1) +@Warmup(iterations = 5) +@Measurement(iterations = 5) public class EncoderBenchmark extends AbstractMicrobenchmark { @Param
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/TestCase.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/TestCase.java index f4cb467ea98..6e33536cefe 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/TestCase.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/TestCase.java @@ -161,7 +161,7 @@ private Encoder createEncoder() { maxHeaderTableSize = Integer.MAX_VALUE; } - return new Encoder(maxHeaderTableSize, useIndexing, forceHuffmanOn, forceHuffmanOff); + return new Encoder(maxHeaderTableSize, useIndexing, forceHuffmanOn, forceHuffmanOff, 16); } private Decoder createDecoder() {
train
train
2016-06-28T15:39:42
"2016-06-06T17:47:42Z"
Scottmitch
val
netty/netty/5514_5527
netty/netty
netty/netty/5514
netty/netty/5527
[ "timestamp(timedelta=38.0, similarity=0.8489490830934392)" ]
196540ca1da50465f899cfd6887d49a69e43269d
9bcea87aa5943ca013eeb61319d293c40f33e307
[ "yes the equals implementation does seem suspect ... @veebs wdyt?\n", "Fixed by https://github.com/netty/netty/pull/5527\n" ]
[ "nit: kill line\n", "do we have any context for how this was intended to be used? It seems strange to compare the length, but not compare the actual content. If we don't do this consider just making this method `return upload1.getName().compareToIgnoreCase(upload2.getName());`\n", "@Scottmitch no context here.. Let me just remove the TODO and change to what you suggested.\n" ]
"2016-07-13T04:33:47Z"
[ "defect" ]
MemoryFileUpload & DiskFileUpload NOT equals itself
test code: ``` @Test public final void testMemoryFileUploadEquals() { final MemoryFileUpload f1 = new MemoryFileUpload("m1", "m1", "application/json", null, null, 100); assertEquals(f1, f1); } @Test public final void testDiskFileUploadEquals() { final DiskFileUpload f2 = new DiskFileUpload("d1", "d1", "application/json", null, null, 100); assertEquals(f2, f2); } ``` The two testcase above have failed. please review equals's code ([MemoryFileUpload.java](https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/multipart/MemoryFileUpload.java#L71)) ``` @Override public boolean equals(Object o) { if (!(o instanceof Attribute)) { return false; } Attribute attribute = (Attribute) o; return getName().equalsIgnoreCase(attribute.getName()); } ```
[ "codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskFileUpload.java", "codec-http/src/main/java/io/netty/handler/codec/http/multipart/MemoryFileUpload.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskFileUpload.java", "codec-http/src/main/java/io/netty/handler/codec/http/multipart/FileUploadUtil.java", "codec-http/src/main/java/io/netty/handler/codec/http/multipart/MemoryFileUpload.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/multipart/DiskFileUploadTest.java", "codec-http/src/test/java/io/netty/handler/codec/http/multipart/MemoryFileUploadTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskFileUpload.java b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskFileUpload.java index 1932004dd6b..8c253b43d23 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskFileUpload.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskFileUpload.java @@ -69,16 +69,12 @@ public void setFilename(String filename) { @Override public int hashCode() { - return getName().hashCode(); + return FileUploadUtil.hashCode(this); } @Override public boolean equals(Object o) { - if (!(o instanceof Attribute)) { - return false; - } - Attribute attribute = (Attribute) o; - return getName().equalsIgnoreCase(attribute.getName()); + return o instanceof FileUpload && FileUploadUtil.equals(this, (FileUpload) o); } @Override @@ -91,13 +87,7 @@ public int compareTo(InterfaceHttpData o) { } public int compareTo(FileUpload o) { - int v; - v = getName().compareToIgnoreCase(o.getName()); - if (v != 0) { - return v; - } - // TODO should we compare size ? - return v; + return FileUploadUtil.compareTo(this, o); } @Override diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/FileUploadUtil.java b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/FileUploadUtil.java new file mode 100644 index 00000000000..11b9b85e4a6 --- /dev/null +++ b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/FileUploadUtil.java @@ -0,0 +1,33 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.handler.codec.http.multipart; + +final class FileUploadUtil { + + private FileUploadUtil() { } + + static int hashCode(FileUpload upload) { + return upload.getName().hashCode(); + } + + static boolean equals(FileUpload upload1, FileUpload upload2) { + return upload1.getName().equalsIgnoreCase(upload2.getName()); + } + + static int compareTo(FileUpload upload1, FileUpload upload2) { + return upload1.getName().compareToIgnoreCase(upload2.getName()); + } +} diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/MemoryFileUpload.java b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/MemoryFileUpload.java index c644e84db28..141bdd55bbe 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/MemoryFileUpload.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/MemoryFileUpload.java @@ -63,16 +63,12 @@ public void setFilename(String filename) { @Override public int hashCode() { - return getName().hashCode(); + return FileUploadUtil.hashCode(this); } @Override public boolean equals(Object o) { - if (!(o instanceof Attribute)) { - return false; - } - Attribute attribute = (Attribute) o; - return getName().equalsIgnoreCase(attribute.getName()); + return o instanceof FileUpload && FileUploadUtil.equals(this, (FileUpload) o); } @Override @@ -85,13 +81,7 @@ public int compareTo(InterfaceHttpData o) { } public int compareTo(FileUpload o) { - int v; - v = getName().compareToIgnoreCase(o.getName()); - if (v != 0) { - return v; - } - // TODO should we compare size for instance ? - return v; + return FileUploadUtil.compareTo(this, o); } @Override
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/multipart/DiskFileUploadTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/DiskFileUploadTest.java new file mode 100644 index 00000000000..fa5661e379d --- /dev/null +++ b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/DiskFileUploadTest.java @@ -0,0 +1,29 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.codec.http.multipart; + +import org.junit.Assert; +import org.junit.Test; + +public class DiskFileUploadTest { + + @Test + public final void testDiskFileUploadEquals() { + DiskFileUpload f2 = + new DiskFileUpload("d1", "d1", "application/json", null, null, 100); + Assert.assertEquals(f2, f2); + } +} diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/multipart/MemoryFileUploadTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/MemoryFileUploadTest.java new file mode 100644 index 00000000000..4d53f49e55b --- /dev/null +++ b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/MemoryFileUploadTest.java @@ -0,0 +1,29 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.codec.http.multipart; + +import org.junit.Assert; +import org.junit.Test; + +public class MemoryFileUploadTest { + + @Test + public final void testMemoryFileUploadEquals() { + MemoryFileUpload f1 = + new MemoryFileUpload("m1", "m1", "application/json", null, null, 100); + Assert.assertEquals(f1, f1); + } +}
train
train
2016-07-13T21:33:16
"2016-07-08T11:09:40Z"
isdom
val
netty/netty/5530_5537
netty/netty
netty/netty/5530
netty/netty/5537
[ "timestamp(timedelta=158290.0, similarity=0.8964804195974887)" ]
047f6aed289219e7461b4c6bbcbb6c88d7082b06
211da398f01de335416017c9e21c37deaa622bf4
[ "@olix0r nope... A PR would be awesome :)\n", "@olix0r actually @Scottmitch made a good point. Can you wrap the code into a try / catch block and fail the promise on error ?\n" ]
[ "@buchgr should we remove the `exceptionCaught(...)` here ? I think if we propagate to the promise we should not also call `exceptionCaught(...)`. Also @nmittler @Scottmitch WDYT ?\n" ]
"2016-07-14T15:19:09Z"
[]
codec-http2: Http2FrameCodec never satisfies promise on Http2WindowUpdateFrame
I noticed that when Http2WindowUpdateFrame is written to Http2FrameCodec, the write [promise is never satisfied](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java#L144). Is this intentional? If not, I'm happy to submit a fix.
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java index 23769cfc9a9..b9cd604063a 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java @@ -141,8 +141,7 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) try { if (msg instanceof Http2WindowUpdateFrame) { Http2WindowUpdateFrame frame = (Http2WindowUpdateFrame) msg; - consumeBytes(frame.streamId(), frame.windowSizeIncrement()); - promise.setSuccess(); + consumeBytes(frame.streamId(), frame.windowSizeIncrement(), promise); } else if (msg instanceof Http2StreamFrame) { writeStreamFrame((Http2StreamFrame) msg, promise); } else if (msg instanceof Http2GoAwayFrame) { @@ -155,13 +154,14 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) } } - private void consumeBytes(int streamId, int bytes) { + private void consumeBytes(int streamId, int bytes, ChannelPromise promise) { try { Http2Stream stream = http2Handler.connection().stream(streamId); http2Handler.connection().local().flowController() .consumeBytes(stream, bytes); + promise.setSuccess(); } catch (Throwable t) { - exceptionCaught(ctx, t); + promise.setFailure(t); } }
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java index e404e396f08..c8fabd3004e 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java @@ -402,9 +402,24 @@ public void windowUpdateFrameDecrementsConsumedBytes() throws Exception { frameListener.onDataRead(http2HandlerCtx, 3, releaseLater(data), 0, true); int before = connection.local().flowController().unconsumedBytes(stream); - channel.writeOutbound(new DefaultHttp2WindowUpdateFrame(100).setStreamId(stream.id())); + ChannelFuture f = channel.write(new DefaultHttp2WindowUpdateFrame(100).setStreamId(stream.id())); int after = connection.local().flowController().unconsumedBytes(stream); assertEquals(100, before - after); + assertTrue(f.isSuccess()); + } + + @Test + public void windowUpdateMayFail() throws Exception { + frameListener.onHeadersRead(http2HandlerCtx, 3, request, 31, false); + Http2Connection connection = framingCodec.connectionHandler().connection(); + Http2Stream stream = connection.stream(3); + assertNotNull(stream); + + // Fails, cause trying to return too many bytes to the flow controller + ChannelFuture f = channel.write(new DefaultHttp2WindowUpdateFrame(100).setStreamId(stream.id())); + assertTrue(f.isDone()); + assertFalse(f.isSuccess()); + assertThat(f.cause(), instanceOf(Http2Exception.class)); } private static ChannelPromise anyChannelPromise() {
train
train
2016-07-15T09:09:13
"2016-07-13T17:06:15Z"
olix0r
val
netty/netty/5539_5545
netty/netty
netty/netty/5539
netty/netty/5545
[ "timestamp(timedelta=19.0, similarity=0.8874743946475105)" ]
2ce1d29d4deb30ca60dc94e0db25047ea2a639d8
6d20b61454ac24c0df4d98627c08de7a79c5caf6
[ "BTW, I have tried the 4.1.3.Final-SNAPSHOT version, it came out the same error too.\n", "@xsir let us know... I have not much knowledge in terms of osgi.\n", "I have figured it out finally. The root cause is: the `netty-tcnative` library is loaded by a ClassLoader that is fully independent of its bundle ClassLoader.\n\nSpecifically, the `netty-tcnative` is a bundle, which is loaded by OSGi framework with a separated ClassLoader. And the `netty-handler` is a bundle too, which contains the `OpenSsl` utility class. When the `OpenSsl` class is accessed, the static initialization will try to load the `netty-tcnative` library, which is embedded in `netty-tcnative` bundle. The OpenSsl loads the `tcnative` library through the `NativeLibraryLoader.loadFirstAvailable()` helper method, which will try to find the library in **PATH**, then in the JAR if failed. As we know, the library file is embedded in the bundle JAR, the `NativeLibraryLoader` will extract it to a tmp file, then call `System.load()` with the tmp file absolute path. The point is: in the SUN implementation, both `System.load()` and `System.loadLibrary()` are **CallerSensitive**, that means the native library is loaded by the caller class's ClassLoader! That's it, the `netty-tcnative` library is loaded by `NativeLibraryLoader.class`'s ClassLoader -- the netty-comon bundle's ClassLoader. But when the native method is called, it using the method definition class's ClassLoader, it can NOT find the native library in itself, then the `java.lang.UnsatisfiedLinkError: org.apache.tomcat.jni.Library.version(I)I` will be threw.\n\nYes, the root cause is the abused ClassLoader, it is discussed by @jjpp in [netty-tcnative issue 136](https://github.com/netty/netty-tcnative/issues/136). \n\nThen, the solution is clear: try to use the same ClassLoader to load the `netty-tcnative` library as the ClassLoader to initialize the native method. \n\nI hope I have made myself clear:) I'll submit a PR soon, thank you.\n", "Fixed by https://github.com/netty/netty/pull/5545\n" ]
[ "does this need to be public? can this just be `private static` and live in `NativeLibraryLoader`?\n", "can we do the reflection initialization parts bits statically when initializing `NativeLibraryLoader`?\n", "Hi, @Scottmitch ! That was a good idea. But, the helper class's duty is to delegate the calling to System.load() and it will be injected into the native library's ClassLoader, then when its method is invoked, the native library would be loaded into the native library's ClassLoader, NOT the netty-common's ClassLoader.\nSo, **the helper class should be as simple as possible, and should not be a inner class, otherwise we have to load the outer class and its dependencies**.\nBut, the helper class can be package.\n", "The key point of this method is to load the helper class into the specified ClassLoader **if it not found in the ClassLoader**. Yes, the define class process is invoked only once if every thing is fine.\n", "@xsir might be worth adding a java doc explaining this?\n", "remove `/*package*/`\n", "`return absolute ? System.load(libName) : System.loadLibrary(libname);`\n", "`{@link System#load(String)}`, `{@link System#System.loadLibrary(String)}`\n", "2016\n", "put the `close()` calls into a `try { } catch (IOException ignore)` block so we not swallow other exceptions.\n", "`{@link Class}` ... `{@link ClassLoader}` \n", "`IllegalStateException(...)` ?\n", "@jasontedor can you comment about the impact of `SecurityManager` here ? Seems like you know this stuff in and out :)\n", "is NOT\n", "Use `{@link }`\n", "We usually use `//` for comments.\n", "IllegalStateException ?\n", "Throwable ?\n", "The System.load() will throw a UnsatisfiedLinkError if the library is not found. If we catch Throwable here, the System.load() will be called twice. That is why I catch Exception here. WDYT?\n", "The System.load() is void too.\n", "@xsir - Is it not possible / not worth it to move `Method defineClass` initialization here into a static context, and if so can you comment? It doesn't seem to depend upon any local state.\n", "env -> environment?\n", "To my view, the answer is not possible. Because the `NativeLibraryLoader` is a utility. Its method such as `load(String, ClassLoader)` is invoked with a **target** `ClassLoader` and we need to define the helper class into it, that is a dynamic process. For example, the `netty-tcnative` library is loaded by `OpenSsl` in `netty-handler` with the `SSL`'s `ClassLoader`, we have to inject the helper class into `SSL`'s `ClassLoader`. But for `netty-transport-native-epoll` library is loaded by `Native` with its `ClassLoader`, then we have to inject the helper class into `Native`'s `ClassLoader`.\nThe helper class injection process is nothing to do with `NativeLibraryLoader` itself . I have no idea how to make it static.\n\nBTW, there are two native library involved in netty, both have the same OSGi loading issue. And there has another solution to this issue:\n- `netty-tcnative` case, which provided a loading helper class `org.apache.tomcat.jni.Library`. Just like 9f2a2135, we should modify `OpenSsl`, instead of calling the `NativeLibraryLoader`, trying the helper class first (Depends on a properly `Bundle-NativeCode` manifest entry). 
_The helper class `Library` in `netty-tcnative` does NOT support loading library by absolute path._\n- `netty-transport-native-epoll` case, we should modify `Native`, instead of calling the `NativeLibraryLoader`, trying the `System.load()` first (Depends on a properly `Bundle-NativeCode` manifest entry).\n\nThis solution is more simple, but there has a drawback: the library loading process is diff between OSGi and non-OSGi. In the OSGi, the library in `$PATH` has a higher priority than the one in the bundle. The non-OSGi is just the reverse. That's why I submitted a more complicated solution:)\nThanks for all of the comments!\n", "thanks for the explanation ... fine as is\n", "nit: consider restructuring to reduce the conditionals\n\n``` java\nint r;\nwhile ((r = in.read(buf)) != -1) {\n out.write(buf, 0, r);\n}\n```\n", "should we use a larger default size?\n", "what I meant was using a bigger size for `ByteArrayOutputStream` (e.g `ByteArrayOutputStream(4096)`).\n", "The security manager implications here are fine. \n", "@jasontedor so I understand you right that you think its fine to pull in in terms of `SecurityManager` stuff ? \n" ]
"2016-07-15T17:29:24Z"
[ "defect" ]
Tcnative init failed with org.apache.tomcat.jni.Library.version(I)I in OSGi
I am working a OSGi project with netty-tcnative + TLS and encounter a java.lang.UnsatisfiedLinkError in OpenSsl initialization. Here is the log: java.lang.UnsatisfiedLinkError: org.apache.tomcat.jni.Library.version(I)I at org.apache.tomcat.jni.Library.version(Native Method) at org.apache.tomcat.jni.Library.initialize(Library.java:176) at io.netty.handler.ssl.OpenSsl.initializeTcNative(OpenSsl.java:243) at io.netty.handler.ssl.OpenSsl.<clinit>(OpenSsl.java:76) We running a Apache Karaf 4.0.5, and here is the subset of dependent bundles: 30 | Active | 80 | 4.1.1.Final | Netty/Codec 31 | Active | 80 | 4.1.1.Final | Netty/Handler 32 | Active | 80 | 4.1.1.Final | Netty/Resolver 34 | Active | 80 | 4.1.1.Final | Netty/Common 35 | Active | 80 | 4.1.1.Final | Netty/Transport 37 | Active | 80 | 4.1.1.Final | Netty/Buffer 39 | Active | 80 | 1.1.33.Fork19 | Netty/TomcatNative [BoringSSL - Static] BTW, the netty-tcnative(bundle 39) has the Bundle-NativeCode manifest entry added, according to the commit [ea27386](https://github.com/netty/netty-tcnative/commit/ea27386540f857032873292183f1c35e0c0cdbc6). And the system information: $ uname -a Linux work.xsir 4.4.0-28-generic #47-Ubuntu SMP Fri Jun 24 10:09:13 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux $ java -version java version "1.8.0_91" Java(TM) SE Runtime Environment (build 1.8.0_91-b14) Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode) I'm trying to debug the initialization of OpenSsl and netty-tcnative code. If I figured it out, I'll submit a PR ASAP.
[ "common/src/main/java/io/netty/util/internal/NativeLibraryLoader.java" ]
[ "common/src/main/java/io/netty/util/internal/NativeLibraryLoader.java", "common/src/main/java/io/netty/util/internal/NativeLibraryUtil.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/NativeLibraryLoader.java b/common/src/main/java/io/netty/util/internal/NativeLibraryLoader.java index 2f5a00836bb..9a1ba33cb2e 100644 --- a/common/src/main/java/io/netty/util/internal/NativeLibraryLoader.java +++ b/common/src/main/java/io/netty/util/internal/NativeLibraryLoader.java @@ -18,12 +18,16 @@ import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; +import java.io.ByteArrayOutputStream; import java.io.File; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; +import java.lang.reflect.Method; import java.net.URL; +import java.security.AccessController; +import java.security.PrivilegedAction; import java.util.Arrays; import java.util.Locale; @@ -186,7 +190,7 @@ public static void load(String name, ClassLoader loader) { if (url == null) { // Fall back to normal loading of JNI stuff - System.loadLibrary(name); + loadLibrary(loader, name, false); return; } @@ -211,7 +215,7 @@ public static void load(String name, ClassLoader loader) { out.close(); out = null; - System.load(tmpFile.getPath()); + loadLibrary(loader, tmpFile.getPath(), true); loaded = true; } catch (Exception e) { throw (UnsatisfiedLinkError) new UnsatisfiedLinkError( @@ -243,6 +247,117 @@ public static void load(String name, ClassLoader loader) { } } + /** + * Loading the native library into the specified {@link ClassLoader}. + * @param loader - The {@link ClassLoader} where the native library will be loaded into + * @param name - The native library path or name + * @param absolute - Whether the native library will be loaded by path or by name + */ + private static void loadLibrary(final ClassLoader loader, final String name, final boolean absolute) { + try { + // Make sure the helper is belong to the target ClassLoader. + final Class<?> newHelper = tryToLoadClass(loader, NativeLibraryUtil.class); + loadLibraryByHelper(newHelper, name, absolute); + } catch (Exception e) { // Should by pass the UnsatisfiedLinkError here! + logger.debug("Unable to load the library: " + name + '.', e); + NativeLibraryUtil.loadLibrary(name, absolute); // Fallback to local helper class. + } + } + + private static void loadLibraryByHelper(final Class<?> helper, final String name, final boolean absolute) { + AccessController.doPrivileged(new PrivilegedAction<Object>() { + @Override + public Object run() { + try { + // Invoke the helper to load the native library, if succeed, then the native + // library belong to the specified ClassLoader. + Method method = helper.getMethod("loadLibrary", + new Class<?>[] { String.class, boolean.class }); + method.setAccessible(true); + return method.invoke(null, name, absolute); + } catch (Exception e) { + throw new IllegalStateException("Load library failed!", e); + } + } + }); + } + + /** + * Try to load the helper {@link Class} into specified {@link ClassLoader}. + * @param loader - The {@link ClassLoader} where to load the helper {@link Class} + * @param helper - The helper {@link Class} + * @return A new helper Class defined in the specified ClassLoader. 
+ * @throws ClassNotFoundException Helper class not found or loading failed + */ + private static Class<?> tryToLoadClass(final ClassLoader loader, final Class<?> helper) + throws ClassNotFoundException { + try { + return loader.loadClass(helper.getName()); + } catch (ClassNotFoundException e) { + // The helper class is NOT found in target ClassLoader, we have to define the helper class. + final byte[] classBinary = classToByteArray(helper); + return AccessController.doPrivileged(new PrivilegedAction<Class<?>>() { + @Override + public Class<?> run() { + try { + // Define the helper class in the target ClassLoader, + // then we can call the helper to load the native library. + Method defineClass = ClassLoader.class.getDeclaredMethod("defineClass", String.class, + byte[].class, int.class, int.class); + defineClass.setAccessible(true); + return (Class<?>) defineClass.invoke(loader, helper.getName(), classBinary, 0, + classBinary.length); + } catch (Exception e) { + throw new IllegalStateException("Define class failed!", e); + } + } + }); + } + } + + /** + * Load the helper {@link Class} as a byte array, to be redefined in specified {@link ClassLoader}. + * @param clazz - The helper {@link Class} provided by this bundle + * @return The binary content of helper {@link Class}. + * @throws ClassNotFoundException Helper class not found or loading failed + */ + private static byte[] classToByteArray(Class<?> clazz) throws ClassNotFoundException { + String fileName = clazz.getName(); + int lastDot = fileName.lastIndexOf('.'); + if (lastDot > 0) { + fileName = fileName.substring(lastDot + 1); + } + URL classUrl = clazz.getResource(fileName + ".class"); + if (classUrl == null) { + throw new ClassNotFoundException(clazz.getName()); + } + byte[] buf = new byte[1024]; + ByteArrayOutputStream out = new ByteArrayOutputStream(4096); + InputStream in = null; + try { + in = classUrl.openStream(); + for (int r; (r = in.read(buf)) != -1;) { + out.write(buf, 0, r); + } + return out.toByteArray(); + } catch (IOException ex) { + throw new ClassNotFoundException(clazz.getName(), ex); + } finally { + closeQuietly(in); + closeQuietly(out); + } + } + + private static void closeQuietly(java.io.Closeable c) { + if (c != null) { + try { + c.close(); + } catch (IOException ignore) { + // ignore + } + } + } + private NativeLibraryLoader() { // Utility } diff --git a/common/src/main/java/io/netty/util/internal/NativeLibraryUtil.java b/common/src/main/java/io/netty/util/internal/NativeLibraryUtil.java new file mode 100644 index 00000000000..1f9bf854574 --- /dev/null +++ b/common/src/main/java/io/netty/util/internal/NativeLibraryUtil.java @@ -0,0 +1,45 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.util.internal; + +/** + * A Utility to Call the {@link System#load(String)} or {@link System#loadLibrary(String)}. 
+ * Because the {@link System#load(String)} and {@link System#loadLibrary(String)} are both + * CallerSensitive, it will load the native library into its caller's {@link ClassLoader}. + * In OSGi environment, we need this helper to delegate the calling to {@link System#load(String)} + * and it should be as simple as possible. It will be injected into the native library's + * ClassLoader when it is undefined. And therefore, when the defined new helper is invoked, + * the native library would be loaded into the native library's ClassLoader, not the + * caller's ClassLoader. + */ +final class NativeLibraryUtil { + /** + * Delegate the calling to {@link System#load(String)} or {@link System#loadLibrary(String)}. + * @param libName - The native library path or name + * @param absolute - Whether the native library will be loaded by path or by name + */ + public static void loadLibrary(String libName, boolean absolute) { + if (absolute) { + System.load(libName); + } else { + System.loadLibrary(libName); + } + } + + private NativeLibraryUtil() { + // Utility + } +}
null
train
train
2016-07-22T09:14:54
"2016-07-14T17:47:54Z"
xsir
val
netty/netty/5553_5557
netty/netty
netty/netty/5553
netty/netty/5557
[ "timestamp(timedelta=11.0, similarity=0.968899664563478)" ]
e00b797936a02fea1c43b04ba23a3400e5b3f639
b9f73461ea8d1d92a91124eb27fbf6b3ab5e6ae4
[ "@rkapsi good catch! Let me come up with a fix\n", "@rkapsi PTAL https://github.com/netty/netty/pull/5557\n", "Fixed by https://github.com/netty/netty/pull/5557\n" ]
[]
"2016-07-20T03:54:47Z"
[ "defect" ]
SimpleChannelPool#notifyConnect() may leak Channels
Hey, I just stumbled onto this one. The `SimpleChannelPool#notifyConnect()` method will leak Channels if the user cancelled the Promise. ``` java private static void notifyConnect(ChannelFuture future, Promise<Channel> promise) { if (future.isSuccess()) { // This will leak the Channel if the Promise is complete promise.setSuccess(future.channel()); } else { promise.setFailure(future.cause()); } } ``` ``` java private static void notifyConnect(ChannelFuture future, Promise<Channel> promise) { if (future.isSuccess()) { // Possible fix Channel channel = future.channel(); if (!promise.trySuccess(channel)) { release(channel); } } else { promise.setFailure(future.cause()); } } ``` ``` java 2016-07-19 14:14:48:400 EDT [32647,NIO-ConnectorThread-0,] WARN DefaultPromise - An exception was thrown by io.netty.channel.pool.SimpleChannelPool$2.operationComplete() java.lang.IllegalStateException: complete already: DefaultPromise@78ba125f(failure: java.util.concurrent.CancellationException) at io.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:105) at io.netty.channel.pool.SimpleChannelPool.notifyConnect(SimpleChannelPool.java:159) at io.netty.channel.pool.SimpleChannelPool.access$000(SimpleChannelPool.java:42) at io.netty.channel.pool.SimpleChannelPool$2.operationComplete(SimpleChannelPool.java:134) at io.netty.channel.pool.SimpleChannelPool$2.operationComplete(SimpleChannelPool.java:131) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:514) at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:507) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:486) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:427) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:111) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:300) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:335) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:588) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:512) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:426) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:398) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877) at java.lang.Thread.run(Thread.java:745) ```
[ "transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java" ]
[ "transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java b/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java index 85767f32c6c..9fcf0d8d6f0 100644 --- a/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java +++ b/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java @@ -154,9 +154,13 @@ public void run() { return promise; } - private static void notifyConnect(ChannelFuture future, Promise<Channel> promise) { + private void notifyConnect(ChannelFuture future, Promise<Channel> promise) { if (future.isSuccess()) { - promise.setSuccess(future.channel()); + Channel channel = future.channel(); + if (!promise.trySuccess(channel)) { + // Promise was completed in the meantime (like cancelled), just release the channel again + release(channel); + } } else { promise.setFailure(future.cause()); }
null
test
train
2016-07-19T18:51:25
"2016-07-19T18:26:02Z"
rkapsi
val
netty/netty/5565_5567
netty/netty
netty/netty/5565
netty/netty/5567
[ "timestamp(timedelta=20.0, similarity=0.8538622961285909)" ]
9151739577b9270ee8ae50cbda705942c927e8ff
24095a0d628b3e3daa01970c1727b2fbc5e1a163
[ "I added a PR for this: #5567 \n", "Fixed by https://github.com/netty/netty/pull/5567\n" ]
[]
"2016-07-22T00:39:35Z"
[ "defect" ]
OpenSslServerContext NullPointerException when using a KeyManagerFactory
Support for using a KeyManagerFactory with OpenSSL was added by [PR 5493](https://github.com/netty/netty/pull/5439). However, there are a couple lingering `checkNotNull()` [assertions](https://github.com/netty/netty/blob/4.1/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java#L352) that cause a NullPointerException when using a KeyManagerFactory instead of a certificate-chain/private-key pair. I think it's intended that these variables can be null now since there's an `if (keyCertChain != null)` check on line 370.
[ "handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java index 9db007a7d6b..4986388d07a 100644 --- a/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java @@ -349,9 +349,6 @@ private OpenSslServerContext( // Create a new SSL_CTX and configure it. boolean success = false; try { - checkNotNull(keyCertChain, "keyCertChainFile"); - checkNotNull(key, "keyFile"); - synchronized (OpenSslContext.class) { try { SSLContext.setVerify(ctx, SSL.SSL_CVERIFY_NONE, VERIFY_DEPTH); @@ -373,9 +370,7 @@ private OpenSslServerContext( } if (keyManagerFactory != null) { - X509KeyManager keyManager = chooseX509KeyManager( - buildKeyManagerFactory(keyCertChain, key, keyPassword, keyManagerFactory) - .getKeyManagers()); + X509KeyManager keyManager = chooseX509KeyManager(keyManagerFactory.getKeyManagers()); keyMaterialManager = useExtendedKeyManager(keyManager) ? new OpenSslExtendedKeyMaterialManager( (X509ExtendedKeyManager) keyManager, keyPassword) :
null
val
train
2016-07-21T11:39:56
"2016-07-21T22:21:53Z"
JackOfMostTrades
val
netty/netty/5570_5571
netty/netty
netty/netty/5570
netty/netty/5571
[ "timestamp(timedelta=17.0, similarity=0.9533093199440464)" ]
94d7557dead5e7780769d44bbb4aa7530149ca34
afec40bf610c84fe9cd71cb81bf336b39cedfaab
[ "Fixed by https://github.com/netty/netty/pull/5571\n" ]
[ "Shouldnt this stay outside of the rose block?\n", "I don't understand what you mean.\n", "Outside of the else block.... Thanks to autocorrect\n", "no, it's the same behavior than in the foreach loop that returns immediately after calling internalResolve.\n", "got it.\n", "call resolver.close() at the end of the test ?\n" ]
"2016-07-23T07:58:40Z"
[ "defect" ]
Allow ndots=0 in DnsNameResolver and search domains
Motivation: The `ndots = 0` is a valid value for ndots, it means that when using a non dotted name, the resolution should first try using a search and if it fails then use subdomains. Currently it is not allowed. Docker compose uses this when wiring up containers as names have usually no dots inside.. An example of docker compose `resolv.conf`: ``` search local nameserver 127.0.0.11 options ndots:0 ``` Modification: Modify `DnsNameResolver` to accept `ndots = 0` and handle the case in the resolution procedure. In this case a direct search is done and then a fallback on the search path is performed. Result: The `ndots = 0` case is implemented.
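A hypothetical usage sketch of the new behaviour (the `myservice` host name, the `local` search domain and the single-threaded event loop are placeholders; the builder calls are the ones touched by the patch below): with `ndots(0)` the name is queried as-is first, and the search domains are only tried as a fallback.

```java
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.resolver.dns.DnsNameResolver;
import io.netty.resolver.dns.DnsNameResolverBuilder;

import java.net.InetAddress;
import java.util.Collections;

public class NdotsZeroExample {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup(1);
        DnsNameResolver resolver = new DnsNameResolverBuilder(group.next())
                .channelType(NioDatagramChannel.class)
                .searchDomains(Collections.singletonList("local"))
                .ndots(0) // try "myservice" as-is first, then fall back to "myservice.local"
                .build();
        try {
            // Throws UnknownHostException if neither name can be resolved.
            InetAddress address = resolver.resolve("myservice").sync().getNow();
            System.out.println(address);
        } finally {
            resolver.close();
            group.shutdownGracefully();
        }
    }
}
```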
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java" ]
[ "resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java" ]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java index 6f2e4192d76..b6148ddc89b 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java @@ -191,7 +191,7 @@ public DnsNameResolver( this.hostsFileEntriesResolver = checkNotNull(hostsFileEntriesResolver, "hostsFileEntriesResolver"); this.resolveCache = resolveCache; this.searchDomains = checkNotNull(searchDomains, "searchDomains").clone(); - this.ndots = checkPositive(ndots, "ndots"); + this.ndots = checkPositiveOrZero(ndots, "ndots"); Bootstrap b = new Bootstrap(); b.group(executor()); diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java index ffba72547c0..477ccab99e4 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverBuilder.java @@ -321,6 +321,7 @@ public DnsNameResolverBuilder searchDomains(Iterable<String> searchDomains) { /** * Set the number of dots which must appear in a name before an initial absolute query is made. + * The default value is {@code 1}. * * @param ndots the ndots value * @return {@code this} diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java index fabb9da7e8e..08ba335a6ad 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java @@ -123,14 +123,18 @@ public void operationComplete(Future<T> future) throws Exception { } } }); - int dots = 0; - for (int idx = hostname.length() - 1; idx >= 0; idx--) { - if (hostname.charAt(idx) == '.' && ++dots >= parent.ndots()) { - internalResolve(promise); - return; + if (parent.ndots() == 0) { + internalResolve(promise); + } else { + int dots = 0; + for (int idx = hostname.length() - 1; idx >= 0; idx--) { + if (hostname.charAt(idx) == '.' && ++dots >= parent.ndots()) { + internalResolve(promise); + return; + } } + promise.tryFailure(new UnknownHostException(hostname)); } - promise.tryFailure(new UnknownHostException(hostname)); } }
diff --git a/resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java b/resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java index c3e677b4c9a..da89173848c 100644 --- a/resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java +++ b/resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java @@ -48,6 +48,7 @@ private DnsNameResolverBuilder newResolver() { private TestDnsServer dnsServer; private EventLoopGroup group; + private DnsNameResolver resolver; @Before public void before() { @@ -60,6 +61,9 @@ public void destroy() { dnsServer.stop(); dnsServer = null; } + if (resolver != null) { + resolver.close(); + } group.shutdownGracefully(); } @@ -77,7 +81,7 @@ public void testResolve() throws Exception { dnsServer = new TestDnsServer(store); dnsServer.start(); - DnsNameResolver resolver = newResolver().searchDomains(Collections.singletonList("foo.com")).build(); + resolver = newResolver().searchDomains(Collections.singletonList("foo.com")).build(); String a = "host1.foo.com"; String resolved = assertResolve(resolver, a); @@ -124,7 +128,7 @@ public void testResolveAll() throws Exception { dnsServer = new TestDnsServer(store); dnsServer.start(); - DnsNameResolver resolver = newResolver().searchDomains(Collections.singletonList("foo.com")).build(); + resolver = newResolver().searchDomains(Collections.singletonList("foo.com")).build(); String a = "host1.foo.com"; List<String> resolved = assertResolveAll(resolver, a); @@ -169,7 +173,7 @@ public void testMultipleSearchDomain() throws Exception { dnsServer = new TestDnsServer(store); dnsServer.start(); - DnsNameResolver resolver = newResolver().searchDomains(Arrays.asList("foo.com", "bar.com")).build(); + resolver = newResolver().searchDomains(Arrays.asList("foo.com", "bar.com")).build(); // "host1" resolves via the "foo.com" search path String resolved = assertResolve(resolver, "host1"); @@ -198,7 +202,7 @@ public void testSearchDomainWithNdots2() throws Exception { dnsServer = new TestDnsServer(store); dnsServer.start(); - DnsNameResolver resolver = newResolver().searchDomains(Collections.singleton("foo.com")).ndots(2).build(); + resolver = newResolver().searchDomains(Collections.singleton("foo.com")).ndots(2).build(); String resolved = assertResolve(resolver, "host1.sub"); assertEquals(store.getAddress("host1.sub.foo.com"), resolved); @@ -208,6 +212,32 @@ public void testSearchDomainWithNdots2() throws Exception { assertEquals(store.getAddress("host2.sub.foo.com"), resolved); } + @Test + public void testSearchDomainWithNdots0() throws Exception { + Set<String> domains = new HashSet<String>(); + domains.add("host1"); + domains.add("host1.foo.com"); + domains.add("host2.foo.com"); + + TestDnsServer.MapRecordStoreA store = new TestDnsServer.MapRecordStoreA(domains); + dnsServer = new TestDnsServer(store); + dnsServer.start(); + + resolver = newResolver().searchDomains(Collections.singleton("foo.com")).ndots(0).build(); + + // "host1" resolves directly as ndots = 0 + String resolved = assertResolve(resolver, "host1"); + assertEquals(store.getAddress("host1"), resolved); + + // "host1.foo.com" resolves to host1.foo + resolved = assertResolve(resolver, "host1.foo.com"); + assertEquals(store.getAddress("host1.foo.com"), resolved); + + // "host2" resolves to host2.foo.com with the foo.com search domain + resolved = assertResolve(resolver, "host2"); + assertEquals(store.getAddress("host2.foo.com"), resolved); + } + private void assertNotResolve(DnsNameResolver resolver, String inetHost) throws 
InterruptedException { Future<InetAddress> fut = resolver.resolve(inetHost); assertTrue(fut.await(10, TimeUnit.SECONDS));
val
train
2016-07-22T20:42:05
"2016-07-22T18:20:41Z"
vietj
val
netty/netty/5590_5591
netty/netty
netty/netty/5590
netty/netty/5591
[ "timestamp(timedelta=63.0, similarity=0.8996140167226422)" ]
cebf255951a257068bbcab4781d4c6085e3289bd
ee1cae0e6f6acc21dfbee998737b28652238731a
[]
[]
"2016-07-27T05:28:25Z"
[]
QueryStringDecoder#path should decode the path info
Netty version: 4.1.3.Final, 4.0.39.Final For example, given "foo%20bar", it should return "foo bar", not "foo%20bar".
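A small sketch of the expected behaviour described above (the input values mirror the ones in the issue and in the test patch below):

```java
import io.netty.handler.codec.http.QueryStringDecoder;

public class PathDecodeDemo {
    public static void main(String[] args) {
        QueryStringDecoder d = new QueryStringDecoder("/foo%20bar?a=1&a=2");
        // Before the fix path() returned the raw "/foo%20bar"; with percent-decoding it returns "/foo bar".
        System.out.println(d.path());
        // Query parameters were already decoded: {a=[1, 2]}
        System.out.println(d.parameters());
    }
}
```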
[ "codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java index fdffb2e217e..b4996267788 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java @@ -178,14 +178,10 @@ public String uri() { public String path() { if (path == null) { if (!hasPath) { - return path = ""; - } - - int pathEndPos = uri.indexOf('?'); - if (pathEndPos < 0) { - path = uri; + path = ""; } else { - return path = uri.substring(0, pathEndPos); + int pathEndPos = uri.indexOf('?'); + path = decodeComponent(pathEndPos < 0 ? uri : uri.substring(0, pathEndPos), this.charset); } } return path; @@ -197,16 +193,18 @@ public String path() { public Map<String, List<String>> parameters() { if (params == null) { if (hasPath) { - int pathLength = path().length(); - if (uri.length() == pathLength) { - return Collections.emptyMap(); + int pathEndPos = uri.indexOf('?'); + if (pathEndPos >= 0 && pathEndPos < uri.length() - 1) { + decodeParams(uri.substring(pathEndPos + 1)); + } else { + params = Collections.emptyMap(); } - decodeParams(uri.substring(pathLength + 1)); } else { if (uri.isEmpty()) { - return Collections.emptyMap(); + params = Collections.emptyMap(); + } else { + decodeParams(uri); } - decodeParams(uri); } } return params;
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java index 49b069f6dba..caee98a0806 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java @@ -38,6 +38,14 @@ public void testBasicUris() throws URISyntaxException { public void testBasic() throws Exception { QueryStringDecoder d; + d = new QueryStringDecoder("/foo"); + Assert.assertEquals("/foo", d.path()); + Assert.assertEquals(0, d.parameters().size()); + + d = new QueryStringDecoder("/foo%20bar"); + Assert.assertEquals("/foo bar", d.path()); + Assert.assertEquals(0, d.parameters().size()); + d = new QueryStringDecoder("/foo?a=b=c"); Assert.assertEquals("/foo", d.path()); Assert.assertEquals(1, d.parameters().size()); @@ -51,6 +59,13 @@ public void testBasic() throws Exception { Assert.assertEquals("1", d.parameters().get("a").get(0)); Assert.assertEquals("2", d.parameters().get("a").get(1)); + d = new QueryStringDecoder("/foo%20bar?a=1&a=2"); + Assert.assertEquals("/foo bar", d.path()); + Assert.assertEquals(1, d.parameters().size()); + Assert.assertEquals(2, d.parameters().get("a").size()); + Assert.assertEquals("1", d.parameters().get("a").get(0)); + Assert.assertEquals("2", d.parameters().get("a").get(1)); + d = new QueryStringDecoder("/foo?a=&a=2"); Assert.assertEquals("/foo", d.path()); Assert.assertEquals(1, d.parameters().size()); @@ -85,6 +100,8 @@ public void testBasic() throws Exception { public void testExotic() throws Exception { assertQueryString("", ""); assertQueryString("foo", "foo"); + assertQueryString("foo", "foo?"); + assertQueryString("/foo", "/foo?"); assertQueryString("/foo", "/foo"); assertQueryString("?a=", "?a"); assertQueryString("foo?a=", "foo?a");
train
train
2016-07-27T07:11:47
"2016-07-27T05:18:31Z"
ngocdaothanh
val
netty/netty/5601_5603
netty/netty
netty/netty/5601
netty/netty/5603
[ "timestamp(timedelta=385.0, similarity=0.8761801680318148)" ]
82b617dfe9d28b6361006e6e9943fc83248aa7dd
d513dc8e17ceaefab509df22faa88e7a69663379
[]
[]
"2016-07-29T21:20:48Z"
[ "defect" ]
Consider adding findNextPositivePowerOfTwo which does bound checks
See: https://github.com/netty/netty/pull/5594#discussion_r72509427
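For context on why bound checks matter here, a self-contained sketch (the formula is the one from `MathUtil`; the demo values and class name are chosen for illustration) showing the overflow an unchecked call can hit and the clamping behaviour of the proposed safe variant:

```java
public class PowerOfTwoOverflowDemo {
    // Same formula as io.netty.util.internal.MathUtil.findNextPositivePowerOfTwo(int).
    static int findNextPositivePowerOfTwo(int value) {
        return 1 << (32 - Integer.numberOfLeadingZeros(value - 1));
    }

    // Bounds-checked variant in the spirit of the proposed safeFindNextPositivePowerOfTwo(int).
    static int safeFindNextPositivePowerOfTwo(int value) {
        return value <= 0 ? 1 : value >= 0x40000000 ? 0x40000000 : findNextPositivePowerOfTwo(value);
    }

    public static void main(String[] args) {
        System.out.println(findNextPositivePowerOfTwo(1000));           // 1024
        System.out.println(findNextPositivePowerOfTwo(0x40000001));     // -2147483648 (overflow!)
        System.out.println(safeFindNextPositivePowerOfTwo(0x40000001)); // 1073741824 (clamped to 2^30)
        System.out.println(safeFindNextPositivePowerOfTwo(-5));         // 1 (clamped)
    }
}
```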
[ "buffer/src/main/java/io/netty/buffer/PoolThreadCache.java", "common/src/main/java/io/netty/util/Recycler.java", "common/src/main/java/io/netty/util/ResourceLeakDetector.java", "common/src/main/java/io/netty/util/internal/MathUtil.java", "common/src/main/templates/io/netty/util/collection/KObjectHashMap.template" ]
[ "buffer/src/main/java/io/netty/buffer/PoolThreadCache.java", "common/src/main/java/io/netty/util/Recycler.java", "common/src/main/java/io/netty/util/ResourceLeakDetector.java", "common/src/main/java/io/netty/util/internal/MathUtil.java", "common/src/main/templates/io/netty/util/collection/KObjectHashMap.template" ]
[]
diff --git a/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java b/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java index fb044c851a2..0caa6f1efc7 100644 --- a/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java +++ b/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java @@ -369,7 +369,7 @@ private abstract static class MemoryRegionCache<T> { private int allocations; MemoryRegionCache(int size, SizeClass sizeClass) { - this.size = MathUtil.findNextPositivePowerOfTwo(size); + this.size = MathUtil.safeFindNextPositivePowerOfTwo(size); queue = PlatformDependent.newFixedMpscQueue(this.size); this.sizeClass = sizeClass; } diff --git a/common/src/main/java/io/netty/util/Recycler.java b/common/src/main/java/io/netty/util/Recycler.java index 71fd442e342..593974f5ecb 100644 --- a/common/src/main/java/io/netty/util/Recycler.java +++ b/common/src/main/java/io/netty/util/Recycler.java @@ -27,7 +27,7 @@ import java.util.WeakHashMap; import java.util.concurrent.atomic.AtomicInteger; -import static io.netty.util.internal.MathUtil.findNextPositivePowerOfTwo; +import static io.netty.util.internal.MathUtil.safeFindNextPositivePowerOfTwo; import static java.lang.Math.max; import static java.lang.Math.min; @@ -71,14 +71,13 @@ public void recycle(Object object) { SystemPropertyUtil.getInt("io.netty.recycler.maxSharedCapacityFactor", 2)); - LINK_CAPACITY = findNextPositivePowerOfTwo( + LINK_CAPACITY = safeFindNextPositivePowerOfTwo( max(SystemPropertyUtil.getInt("io.netty.recycler.linkCapacity", 16), 16)); // By default we allow one push to a Recycler for each 8th try on handles that were never recycled before. // This should help to slowly increase the capacity of the recycler while not be too sensitive to allocation // bursts. - RATIO = min(findNextPositivePowerOfTwo( - max(SystemPropertyUtil.getInt("io.netty.recycler.ratio", 8), 2)), 0x40000000); + RATIO = safeFindNextPositivePowerOfTwo(SystemPropertyUtil.getInt("io.netty.recycler.ratio", 8)); if (logger.isDebugEnabled()) { if (DEFAULT_MAX_CAPACITY == 0) { @@ -121,10 +120,7 @@ protected Recycler(int maxCapacity, int maxSharedCapacityFactor) { } protected Recycler(int maxCapacity, int maxSharedCapacityFactor, int ratio) { - if (ratio > 0x40000000) { - throw new IllegalArgumentException(ratio + ": " + ratio + " (expected: < 0x40000000)"); - } - ratioMask = findNextPositivePowerOfTwo(ratio) - 1; + ratioMask = safeFindNextPositivePowerOfTwo(ratio) - 1; if (maxCapacity <= 0) { this.maxCapacity = 0; this.maxSharedCapacityFactor = 1; diff --git a/common/src/main/java/io/netty/util/ResourceLeakDetector.java b/common/src/main/java/io/netty/util/ResourceLeakDetector.java index c0144e4b933..f85513aaf66 100644 --- a/common/src/main/java/io/netty/util/ResourceLeakDetector.java +++ b/common/src/main/java/io/netty/util/ResourceLeakDetector.java @@ -193,15 +193,12 @@ public ResourceLeakDetector(String resourceType, int samplingInterval, long maxA if (resourceType == null) { throw new NullPointerException("resourceType"); } - if (samplingInterval <= 0) { - throw new IllegalArgumentException("samplingInterval: " + samplingInterval + " (expected: 1+)"); - } if (maxActive <= 0) { throw new IllegalArgumentException("maxActive: " + maxActive + " (expected: 1+)"); } this.resourceType = resourceType; - this.samplingInterval = MathUtil.findNextPositivePowerOfTwo(samplingInterval); + this.samplingInterval = MathUtil.safeFindNextPositivePowerOfTwo(samplingInterval); // samplingInterval is a power of two so we calculate a mask that we can use to // check if we 
need to do any leak detection or not. mask = this.samplingInterval - 1; diff --git a/common/src/main/java/io/netty/util/internal/MathUtil.java b/common/src/main/java/io/netty/util/internal/MathUtil.java index 2b16e22d7ce..10e012cf3f1 100644 --- a/common/src/main/java/io/netty/util/internal/MathUtil.java +++ b/common/src/main/java/io/netty/util/internal/MathUtil.java @@ -25,7 +25,7 @@ private MathUtil() { /** * Fast method of finding the next power of 2 greater than or equal to the supplied value. * - * If the value is {@code <= 0} then 1 will be returned. + * <p>If the value is {@code <= 0} then 1 will be returned. * This method is not suitable for {@link Integer#MIN_VALUE} or numbers greater than 2^30. * * @param value from which to search for next power of 2 @@ -36,6 +36,22 @@ public static int findNextPositivePowerOfTwo(final int value) { return 1 << (32 - Integer.numberOfLeadingZeros(value - 1)); } + /** + * Fast method of finding the next power of 2 greater than or equal to the supplied value. + * <p>This method will do runtime bounds checking and call {@link #findNextPositivePowerOfTwo(int)} if within a + * valid range. + * @param value from which to search for next power of 2 + * @return The next power of 2 or the value itself if it is a power of 2. + * <p>Special cases for return values are as follows: + * <ul> + * <li>{@code <= 0} -> 1</li> + * <li>{@code >= 2^30} -> 2^30</li> + * </ul> + */ + public static int safeFindNextPositivePowerOfTwo(final int value) { + return value <= 0 ? 1 : value >= 0x40000000 ? 0x40000000 : findNextPositivePowerOfTwo(value); + } + /** * Determine if the requested {@code index} and {@code length} will fit within {@code capacity}. * @param index The starting index. diff --git a/common/src/main/templates/io/netty/util/collection/KObjectHashMap.template b/common/src/main/templates/io/netty/util/collection/KObjectHashMap.template index 6358364c1eb..422e76e7f21 100644 --- a/common/src/main/templates/io/netty/util/collection/KObjectHashMap.template +++ b/common/src/main/templates/io/netty/util/collection/KObjectHashMap.template @@ -15,7 +15,7 @@ package io.netty.util.collection; -import static io.netty.util.internal.MathUtil.findNextPositivePowerOfTwo; +import static io.netty.util.internal.MathUtil.safeFindNextPositivePowerOfTwo; import java.util.AbstractCollection; import java.util.AbstractSet; @@ -77,9 +77,6 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V> { } public @K@ObjectHashMap(int initialCapacity, float loadFactor) { - if (initialCapacity < 1) { - throw new IllegalArgumentException("initialCapacity must be >= 1"); - } if (loadFactor <= 0.0f || loadFactor > 1.0f) { // Cannot exceed 1 because we can never store more than capacity elements; // using a bigger loadFactor would trigger rehashing before the desired load is reached. @@ -89,7 +86,7 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V> { this.loadFactor = loadFactor; // Adjust the initial capacity if necessary. - int capacity = findNextPositivePowerOfTwo(initialCapacity); + int capacity = safeFindNextPositivePowerOfTwo(initialCapacity); mask = capacity - 1; // Allocate the arrays.
null
train
train
2016-07-29T20:16:44
"2016-07-29T14:06:23Z"
normanmaurer
val
netty/netty/5597_5605
netty/netty
netty/netty/5597
netty/netty/5605
[ "timestamp(timedelta=50.0, similarity=0.8688967097123617)" ]
82b617dfe9d28b6361006e6e9943fc83248aa7dd
a0c809b7c1c501b1f0c6aa2f3430995ddb798477
[ "Thanks will check\n", "thanks for reporting @tkaitchuck !\n", "Fixed by https://github.com/netty/netty/pull/5605\n" ]
[]
"2016-07-30T06:22:41Z"
[ "defect" ]
Unpooled tries to double release empty buffer
See: https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/Unpooled.java#L316 In `Unpooled.wrappedBuffer()`, when an empty buffer is at the start of the list of several buffers to be wrapped, the method first detects (and releases) the empty buffer. Then, without removing it from the list, it passes the whole list to the CompositeByteBuf constructor, which attempts to take ownership of the buffers by calling release on each one it is passed. This results in release being called twice on the empty buffer, which can cause an exception to be thrown.
[ "buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java", "buffer/src/main/java/io/netty/buffer/Unpooled.java" ]
[ "buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java", "buffer/src/main/java/io/netty/buffer/Unpooled.java" ]
[ "buffer/src/test/java/io/netty/buffer/UnpooledTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java index 05ceb4adfff..8cd285e2bc8 100644 --- a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java @@ -65,6 +65,11 @@ public CompositeByteBuf(ByteBufAllocator alloc, boolean direct, int maxNumCompon } public CompositeByteBuf(ByteBufAllocator alloc, boolean direct, int maxNumComponents, ByteBuf... buffers) { + this(alloc, direct, maxNumComponents, buffers, 0, buffers.length); + } + + CompositeByteBuf( + ByteBufAllocator alloc, boolean direct, int maxNumComponents, ByteBuf[] buffers, int offset, int len) { super(Integer.MAX_VALUE); if (alloc == null) { throw new NullPointerException("alloc"); @@ -79,7 +84,7 @@ public CompositeByteBuf(ByteBufAllocator alloc, boolean direct, int maxNumCompon this.maxNumComponents = maxNumComponents; components = newList(maxNumComponents); - addComponents0(false, 0, buffers); + addComponents0(false, 0, buffers, offset, len); consolidateIfNeeded(); setIndex(0, capacity()); } @@ -202,7 +207,7 @@ public CompositeByteBuf addComponent(boolean increaseWriterIndex, ByteBuf buffer * ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}. */ public CompositeByteBuf addComponents(boolean increaseWriterIndex, ByteBuf... buffers) { - addComponents0(increaseWriterIndex, components.size(), buffers); + addComponents0(increaseWriterIndex, components.size(), buffers, 0, buffers.length); consolidateIfNeeded(); return this; } @@ -294,19 +299,19 @@ private int addComponent0(boolean increaseWriterIndex, int cIndex, ByteBuf buffe * ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}. */ public CompositeByteBuf addComponents(int cIndex, ByteBuf... buffers) { - addComponents0(false, cIndex, buffers); + addComponents0(false, cIndex, buffers, 0, buffers.length); consolidateIfNeeded(); return this; } - private int addComponents0(boolean increaseWriterIndex, int cIndex, ByteBuf... buffers) { + private int addComponents0(boolean increaseWriterIndex, int cIndex, ByteBuf[] buffers, int offset, int len) { checkNotNull(buffers, "buffers"); - int i = 0; + int i = offset; try { checkComponentIndex(cIndex); // No need for consolidation - while (i < buffers.length) { + while (i < len) { // Increment i now to prepare for the next iteration and prevent a duplicate release (addComponent0 // will release if an exception occurs, and we also release in the finally block here). ByteBuf b = buffers[i++]; @@ -321,7 +326,7 @@ private int addComponents0(boolean increaseWriterIndex, int cIndex, ByteBuf... b } return cIndex; } finally { - for (; i < buffers.length; ++i) { + for (; i < len; ++i) { ByteBuf b = buffers[i]; if (b != null) { try { @@ -383,7 +388,7 @@ private int addComponents0(boolean increaseIndex, int cIndex, Iterable<ByteBuf> } Collection<ByteBuf> col = (Collection<ByteBuf>) buffers; - return addComponents0(increaseIndex, cIndex, col.toArray(new ByteBuf[col.size()])); + return addComponents0(increaseIndex, cIndex, col.toArray(new ByteBuf[col.size()]), 0 , col.size()); } /** diff --git a/buffer/src/main/java/io/netty/buffer/Unpooled.java b/buffer/src/main/java/io/netty/buffer/Unpooled.java index 9b6c434c944..faaaabc2cd6 100644 --- a/buffer/src/main/java/io/netty/buffer/Unpooled.java +++ b/buffer/src/main/java/io/netty/buffer/Unpooled.java @@ -309,13 +309,14 @@ public static ByteBuf wrappedBuffer(int maxNumComponents, ByteBuf... 
buffers) { } break; default: - for (ByteBuf b: buffers) { - if (b.isReadable()) { - return new CompositeByteBuf(ALLOC, false, maxNumComponents, buffers); - } else { - b.release(); + for (int i = 0; i < buffers.length; i++) { + ByteBuf buf = buffers[i]; + if (buf.isReadable()) { + return new CompositeByteBuf(ALLOC, false, maxNumComponents, buffers, i, buffers.length); } + buf.release(); } + break; } return EMPTY_BUFFER; }
diff --git a/buffer/src/test/java/io/netty/buffer/UnpooledTest.java b/buffer/src/test/java/io/netty/buffer/UnpooledTest.java index d0dfc1ed292..5abf700267f 100644 --- a/buffer/src/test/java/io/netty/buffer/UnpooledTest.java +++ b/buffer/src/test/java/io/netty/buffer/UnpooledTest.java @@ -628,4 +628,22 @@ public void skipBytesNegativeLength() { ByteBuf buf = freeLater(buffer(8)); buf.skipBytes(-1); } + + // See https://github.com/netty/netty/issues/5597 + @Test + public void testWrapByteBufArrayStartsWithNonReadable() { + ByteBuf buffer1 = buffer(8); + ByteBuf buffer2 = buffer(8).writeZero(8); // Ensure the ByteBuf is readable. + ByteBuf buffer3 = buffer(8); + ByteBuf buffer4 = buffer(8).writeZero(8); // Ensure the ByteBuf is readable. + + ByteBuf wrapped = wrappedBuffer(buffer1, buffer2, buffer3, buffer4); + assertEquals(16, wrapped.readableBytes()); + assertTrue(wrapped.release()); + assertEquals(0, buffer1.refCnt()); + assertEquals(0, buffer2.refCnt()); + assertEquals(0, buffer3.refCnt()); + assertEquals(0, buffer4.refCnt()); + assertEquals(0, wrapped.refCnt()); + } }
train
train
2016-07-29T20:16:44
"2016-07-29T00:28:35Z"
tkaitchuck
val
netty/netty/5602_5608
netty/netty
netty/netty/5602
netty/netty/5608
[ "timestamp(timedelta=134149.0, similarity=1.0000000000000002)" ]
e85d43739819e6408c3a4a4e2f9e71bcf6905a2e
8803b0d89ab929e1178a9844147154ba50d5cf3d
[ "To show the issue in vert.x the following unit test should show the error message when running it inside the current master of eclipse/vert.x\n\nhttps://gist.github.com/alexlehm/d4d4ffdf95c97c78dff7ba91de1b0d09\n\nStarting test: UnknownHostExceptionTest#testResolveExceptionHostname \njava.net.UnknownHostException: failed to resolve 'sdfsdfsdfsdsdffd.internal.example.com'. Exceeded max queries per resolve 3\n", "@alexlehm makes sense... interested in proving a PR ?\n", "https://github.com/netty/netty/pull/5608\n" ]
[]
"2016-07-31T01:27:21Z"
[ "defect" ]
UnknownHostException mentions hostname with search domain added
Netty version: 4.1.4.Final Context: While testing the dns resolution of vert.x I found the cosmetic issue that the exception for UnknownHost mentions the hostname with the last attempted search domain appended, which is kind of confusing. I would prefer to see the original hostname supplied to the method in the exception. The resolution function is based on the netty resolver, the exception text is created in DnsNameResolverContext. I tried to reproduce the issue with a unit test in netty, which I didn't get quite right, however it is shown when doing the following: 1. add an unknown hostname to the list of hostnames in DnsNameResolverTest, e.g. "unknown.hostname" 2. remove the line .nameServerAddresses(DnsServerAddresses.singleton(dnsServer.localAddress())) in newResolver() to enable dns resolution with real dns 3. add at least one search domain (I used internal.example.com) 4. run the unit test testResolveA() 5. if should fail with the error java.net.UnknownHostException: failed to resolve 'unknown.hostname.internal.example.com' $ java -version java version "1.8.0_71" Java(TM) SE Runtime Environment (build 1.8.0_71-b15) Java HotSpot(TM) Client VM (build 25.71-b15, mixed mode) Operating system: Windows 10 Home 64 bit Microsoft Windows [Version 10.0.10586] (edit: I choose a rather bad example as search domain before, I have changed the example to example.com now)
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java" ]
[ "resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java" ]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java index 08ba335a6ad..3579e0644ec 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java @@ -69,6 +69,7 @@ public void operationComplete(Future<AddressedEnvelope<DnsResponse, InetSocketAd private final DnsNameResolver parent; private final DnsServerAddressStream nameServerAddrs; private final String hostname; + protected String pristineHostname; private final DnsCache resolveCache; private final boolean traceEnabled; private final int maxAllowedQueries; @@ -116,6 +117,7 @@ public void operationComplete(Future<T> future) throws Exception { String nextHostname = DnsNameResolverContext.this.hostname + "." + searchDomain; DnsNameResolverContext<T> nextContext = newResolverContext(parent, nextHostname, resolveCache); + nextContext.pristineHostname = hostname; nextContext.internalResolve(nextPromise); nextPromise.addListener(this); } else { @@ -449,8 +451,13 @@ private void finishResolve(Promise<T> promise) { final int tries = maxAllowedQueries - allowedQueries; final StringBuilder buf = new StringBuilder(64); - buf.append("failed to resolve '") - .append(hostname).append('\''); + buf.append("failed to resolve '"); + if (pristineHostname != null) { + buf.append(pristineHostname); + } else { + buf.append(hostname); + } + buf.append('\''); if (tries > 1) { if (tries < maxAllowedQueries) { buf.append(" after ")
diff --git a/resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java b/resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java index da89173848c..77c652ff79f 100644 --- a/resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java +++ b/resolver-dns/src/test/java/io/netty/resolver/dns/SearchDomainTest.java @@ -24,6 +24,7 @@ import org.junit.Test; import java.net.InetAddress; +import java.net.UnknownHostException; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; @@ -34,7 +35,10 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; +import static org.hamcrest.Matchers.not; +import static org.hamcrest.core.StringContains.containsString; public class SearchDomainTest { @@ -265,4 +269,23 @@ private List<String> assertResolveAll(DnsNameResolver resolver, String inetHost) } return list; } + + @Test + public void testExceptionMsgNoSearchDomain() throws Exception { + Set<String> domains = new HashSet<String>(); + + TestDnsServer.MapRecordStoreA store = new TestDnsServer.MapRecordStoreA(domains); + dnsServer = new TestDnsServer(store); + dnsServer.start(); + + resolver = newResolver().searchDomains(Collections.singletonList("foo.com")).build(); + + Future<InetAddress> fut = resolver.resolve("unknown.hostname"); + assertTrue(fut.await(10, TimeUnit.SECONDS)); + assertFalse(fut.isSuccess()); + final Throwable cause = fut.cause(); + assertEquals(UnknownHostException.class, cause.getClass()); + assertThat("search domain is included in UnknownHostException", cause.getMessage(), + not(containsString("foo.com"))); + } }
test
train
2016-07-30T21:16:44
"2016-07-29T20:48:28Z"
alexlehm
val
netty/netty/5615_5625
netty/netty
netty/netty/5615
netty/netty/5625
[ "timestamp(timedelta=75.0, similarity=0.8588913995068724)" ]
0b086a9625cd2e8219711e04579466a43d4a4e99
afefcde34b269208463dab8116dc6bc9e683bda6
[ "I agree, I prefer `SLF4J-API` and `Logback`\n", "it is causing any harm? maybe we can just mark it as deprecated, comment it will be removed in future releases, and recommend using an alternative (e.g. slf4j).\n", "> it is causing any harm?\n\nNo harm. It is very outdated and not used within netty. Deprecation also fine.\n", "+1 for deprecated.\n", "Fixed\n" ]
[]
"2016-08-03T20:32:05Z"
[ "cleanup" ]
Proposal: remove commons logging classes
Remove the `CommonsLogger` and `CommonsLoggerFactory` classes. The commons-logging project has not been updated for 2 years, so I propose removing its support from Netty as well. Let me know if I can submit a PR.
[ "common/src/main/java/io/netty/util/internal/logging/CommonsLogger.java", "common/src/main/java/io/netty/util/internal/logging/CommonsLoggerFactory.java" ]
[ "common/src/main/java/io/netty/util/internal/logging/CommonsLogger.java", "common/src/main/java/io/netty/util/internal/logging/CommonsLoggerFactory.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/logging/CommonsLogger.java b/common/src/main/java/io/netty/util/internal/logging/CommonsLogger.java index 1068005e893..110f0f1cf92 100644 --- a/common/src/main/java/io/netty/util/internal/logging/CommonsLogger.java +++ b/common/src/main/java/io/netty/util/internal/logging/CommonsLogger.java @@ -44,7 +44,11 @@ /** * <a href="http://commons.apache.org/logging/">Apache Commons Logging</a> * logger. + * + * @deprecated Please use {@link Log4J2Logger} or {@link Log4JLogger} or + * {@link Slf4JLogger}. */ +@Deprecated class CommonsLogger extends AbstractInternalLogger { private static final long serialVersionUID = 8647838678388394885L; diff --git a/common/src/main/java/io/netty/util/internal/logging/CommonsLoggerFactory.java b/common/src/main/java/io/netty/util/internal/logging/CommonsLoggerFactory.java index 3e9ab9fafdb..2b4af9f4e9b 100644 --- a/common/src/main/java/io/netty/util/internal/logging/CommonsLoggerFactory.java +++ b/common/src/main/java/io/netty/util/internal/logging/CommonsLoggerFactory.java @@ -18,14 +18,15 @@ import org.apache.commons.logging.LogFactory; -import java.util.HashMap; -import java.util.Map; - /** * Logger factory which creates an * <a href="http://commons.apache.org/logging/">Apache Commons Logging</a> * logger. + * + * @deprecated Please use {@link Log4J2LoggerFactory} or {@link Log4JLoggerFactory} or + * {@link Slf4JLoggerFactory}. */ +@Deprecated public class CommonsLoggerFactory extends InternalLoggerFactory { public static final InternalLoggerFactory INSTANCE = new CommonsLoggerFactory();
null
train
train
2016-08-03T22:15:40
"2016-08-01T12:28:58Z"
doom369
val
netty/netty/4204_5638
netty/netty
netty/netty/4204
netty/netty/5638
[ "timestamp(timedelta=20.0, similarity=0.9641005524722586)" ]
df5cb48e08122c861e71a1bc6da9e554250f9633
5a3acfa319fd9d515880d2af6bf29e6178aea21a
[ "Do you want to provide a pr to fix it?\n", "I'm on company time - I'll ask (+day of turnaround).\n\nSupposing that I get an OK to fix/re test.\nDo you want to:\n- have the DefaultSctpServerChannelConfig brought to match DefaultSctpChannelConfig (where the static import is `import static io.netty.channel.sctp.SctpChannelOption.*;`)\n- just add ChannelOption to SCTP_INIT_MAXSTREAMS\n\n?\n", "Yes sounds good\n", "I've never got an answer back about it (working on this issue on company time). \nIf I where to guess that means NO.\nSorry.\n", "God damn, so that's what \"close and comment\" does.\nReopening.\n", "@normanmaurer I will do the fix. In which branch I have to open a PR?\n", "4.1 would be best\n\n> Am 22.01.2016 um 18:37 schrieb Jestan Nirojan [email protected]:\n> \n> @normanmaurer I will do the fix. In which branch I have to open a PR?\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "@jestan did you fix this ?\n", "Fixed by https://github.com/netty/netty/pull/5638\n" ]
[ "nit: newline\n", "fixed\n" ]
"2016-08-04T19:15:38Z"
[ "defect" ]
Broken support of SCTP_INIT_MAXSTREAMS in *SctpServerChannel
Netty version: 4.0.12.Final (but the suspected code is the same in the current 4.0 branch). The SCTP_INIT_MAXSTREAMS property is ignored on NioSctpServerChannel (and judging by the source on OioSctpServerChannel too). JUnit test to reproduce: ``` java import com.sun.nio.sctp.SctpServerChannel; import com.sun.nio.sctp.SctpStandardSocketOptions; import io.netty.bootstrap.Bootstrap; import io.netty.bootstrap.ServerBootstrap; import io.netty.channel.ChannelHandler; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelOption; import io.netty.channel.nio.NioEventLoopGroup; import io.netty.channel.sctp.SctpChannel; import io.netty.channel.sctp.SctpChannelOption; import io.netty.channel.sctp.nio.NioSctpChannel; import io.netty.channel.sctp.nio.NioSctpServerChannel; import org.junit.After; import org.junit.Before; import org.junit.Test; import static org.junit.Assert.*; import java.lang.reflect.Method; import java.net.InetSocketAddress; public class SctpLimitStreamsTest { @Before public void setUp() { serverBootstrap = new ServerBootstrap(); serverBootstrap.group(new NioEventLoopGroup(), new NioEventLoopGroup()) .channel(NioSctpServerChannel.class) .option(ChannelOption.SO_REUSEADDR, true) .option(SctpChannelOption.SCTP_INIT_MAXSTREAMS, SctpStandardSocketOptions.InitMaxStreams.create(1, 1)) .localAddress(new InetSocketAddress("localhost", 7766)) .childHandler(new ChannelHandler() { @Override public void handlerAdded(ChannelHandlerContext ctx) throws Exception { } @Override public void handlerRemoved(ChannelHandlerContext ctx) throws Exception { } @Override public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { } }); clientBootstrap = new Bootstrap() .group(new NioEventLoopGroup()) .channel(NioSctpChannel.class) .option(SctpChannelOption.SCTP_INIT_MAXSTREAMS, SctpStandardSocketOptions.InitMaxStreams.create(112, 112)) .handler(new ChannelHandler() { @Override public void handlerAdded(ChannelHandlerContext ctx) throws Exception { } @Override public void handlerRemoved(ChannelHandlerContext ctx) throws Exception { } @Override public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { } }); } @After public void tearDown() { serverBootstrap.childGroup().shutdownGracefully(); clientBootstrap.group().shutdownGracefully(); serverBootstrap.group().shutdownGracefully().syncUninterruptibly(); // this sync is enough for this test } @Test public void testSctpInitMaxstreamsNetty() throws Exception { NioSctpServerChannel serverChannel = (NioSctpServerChannel) serverBootstrap.bind().syncUninterruptibly().channel(); serverChannel.config().setOption(SctpChannelOption.SCTP_INIT_MAXSTREAMS, SctpStandardSocketOptions.InitMaxStreams.create(1, 1)); SctpChannel clientChannel = (SctpChannel) clientBootstrap.connect(new InetSocketAddress("localhost", 7766)).syncUninterruptibly() .channel(); System.out.println("maxOutboundStreams:" + clientChannel.association().maxOutboundStreams()); System.out.println("maxInboundStreams:" + clientChannel.association().maxInboundStreams()); assertEquals(1, clientChannel.association().maxOutboundStreams()); assertEquals(1, clientChannel.association().maxInboundStreams()); } @Test public void testSctpInitMaxstreamsNioDirect() throws Exception { NioSctpServerChannel serverChannel = (NioSctpServerChannel) serverBootstrap.bind().syncUninterruptibly().channel(); Method getJavaChannel = serverChannel.getClass().getDeclaredMethod("javaChannel"); getJavaChannel.setAccessible(true); SctpServerChannel 
javaChannel = (SctpServerChannel) getJavaChannel.invoke(serverChannel); javaChannel.setOption(SctpStandardSocketOptions.SCTP_INIT_MAXSTREAMS, SctpStandardSocketOptions.InitMaxStreams.create(1, 1)); SctpChannel clientChannel = (SctpChannel) clientBootstrap.connect(new InetSocketAddress("localhost", 7766)).syncUninterruptibly() .channel(); System.out.println("maxOutboundStreams:" + clientChannel.association().maxOutboundStreams()); System.out.println("maxInboundStreams:" + clientChannel.association().maxInboundStreams()); assertEquals(1, clientChannel.association().maxOutboundStreams()); assertEquals(1, clientChannel.association().maxInboundStreams()); } private ServerBootstrap serverBootstrap; private Bootstrap clientBootstrap; } ``` The effect is that directly setting the SCTP_INIT_MAXSTREAMS on NIO SctpServerChannel in Testcase testSctpInitMaxstreamsNioDirect **succeeds**, while doing it using NETTY SctpChannel/ServerBootstrap in Testcase testSctpInitMaxstreamsNetty **fails**. ``` [junit] Testcase: testSctpInitMaxstreamsNetty took 2.214 sec [junit] FAILED [junit] expected:<1> but was:<112> [junit] junit.framework.AssertionFailedError: expected:<1> but was:<112> [junit] at SctpLimitStreamsTest.testSctpInitMaxstreamsNetty(SctpLimitStreamsTest.java:90) ``` It seems that the offending code is located in io.netty.channel.sctp.DefaultSctpServerChannelConfig.setOption(...): ``` java import static com.sun.nio.sctp.SctpStandardSocketOptions.*; // /\== note that the static import is "com.sun.nio" ... @Override public <T> boolean setOption(ChannelOption<T> option, T value) { validate(option, value); if (option == ChannelOption.SO_RCVBUF) { setReceiveBufferSize((Integer) value); } else if (option == ChannelOption.SO_SNDBUF) { setSendBufferSize((Integer) value); } else if (option == SCTP_INIT_MAXSTREAMS) { // YOU SHALL NOT PASS... // as in the preceding conditional we are comparing // constant from io.netty.channel.ChannelOption WITH constant from // com.sun.nio.sctp.SctpStandardSocketOptions // (option == SCTP_INIT_MAXSTREAMS) is always FALSE setInitMaxStreams((InitMaxStreams) value); } else { return super.setOption(option, value); } return true; } ``` Environment: java version "1.7.0_72" Java(TM) SE Runtime Environment (build 1.7.0_72-b14) Java HotSpot(TM) 64-Bit Server VM (build 24.72-b04, mixed mode) Linux lwy-dell 3.13.0-63-generic #103-Ubuntu SMP Fri Aug 14 21:42:59 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[ "transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpChannelConfig.java", "transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpServerChannelConfig.java" ]
[ "transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpChannelConfig.java", "transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpServerChannelConfig.java" ]
[ "transport-sctp/src/main/test/io/netty/channel/sctp/SctpLimitStreamsTest.java", "transport-sctp/src/main/test/io/netty/channel/sctp/nio/NioSctpLimitStreamsTest.java", "transport-sctp/src/main/test/io/netty/channel/sctp/oio/OioSctpLimitStreamsTest.java" ]
diff --git a/transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpChannelConfig.java b/transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpChannelConfig.java index 932e09ca3a9..6b284992e81 100644 --- a/transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpChannelConfig.java +++ b/transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpChannelConfig.java @@ -74,6 +74,9 @@ public <T> T getOption(ChannelOption<T> option) { if (option == SCTP_NODELAY) { return (T) Boolean.valueOf(isSctpNoDelay()); } + if (option == SCTP_INIT_MAXSTREAMS) { + return (T) getInitMaxStreams(); + } return super.getOption(option); } diff --git a/transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpServerChannelConfig.java b/transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpServerChannelConfig.java index efad6eb722a..2c2e59afa64 100644 --- a/transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpServerChannelConfig.java +++ b/transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpServerChannelConfig.java @@ -16,6 +16,7 @@ package io.netty.channel.sctp; import com.sun.nio.sctp.SctpServerChannel; +import com.sun.nio.sctp.SctpStandardSocketOptions; import io.netty.buffer.ByteBufAllocator; import io.netty.channel.ChannelException; import io.netty.channel.ChannelOption; @@ -27,8 +28,6 @@ import java.io.IOException; import java.util.Map; -import static com.sun.nio.sctp.SctpStandardSocketOptions.*; - /** * The default {@link SctpServerChannelConfig} implementation for SCTP. */ @@ -65,6 +64,9 @@ public <T> T getOption(ChannelOption<T> option) { if (option == ChannelOption.SO_SNDBUF) { return (T) Integer.valueOf(getSendBufferSize()); } + if (option == SctpChannelOption.SCTP_INIT_MAXSTREAMS) { + return (T) getInitMaxStreams(); + } return super.getOption(option); } @@ -76,8 +78,8 @@ public <T> boolean setOption(ChannelOption<T> option, T value) { setReceiveBufferSize((Integer) value); } else if (option == ChannelOption.SO_SNDBUF) { setSendBufferSize((Integer) value); - } else if (option == SCTP_INIT_MAXSTREAMS) { - setInitMaxStreams((InitMaxStreams) value); + } else if (option == SctpChannelOption.SCTP_INIT_MAXSTREAMS) { + setInitMaxStreams((SctpStandardSocketOptions.InitMaxStreams) value); } else { return super.setOption(option, value); } @@ -88,7 +90,7 @@ public <T> boolean setOption(ChannelOption<T> option, T value) { @Override public int getSendBufferSize() { try { - return javaChannel.getOption(SO_SNDBUF); + return javaChannel.getOption(SctpStandardSocketOptions.SO_SNDBUF); } catch (IOException e) { throw new ChannelException(e); } @@ -97,7 +99,7 @@ public int getSendBufferSize() { @Override public SctpServerChannelConfig setSendBufferSize(int sendBufferSize) { try { - javaChannel.setOption(SO_SNDBUF, sendBufferSize); + javaChannel.setOption(SctpStandardSocketOptions.SO_SNDBUF, sendBufferSize); } catch (IOException e) { throw new ChannelException(e); } @@ -107,7 +109,7 @@ public SctpServerChannelConfig setSendBufferSize(int sendBufferSize) { @Override public int getReceiveBufferSize() { try { - return javaChannel.getOption(SO_RCVBUF); + return javaChannel.getOption(SctpStandardSocketOptions.SO_RCVBUF); } catch (IOException e) { throw new ChannelException(e); } @@ -116,7 +118,7 @@ public int getReceiveBufferSize() { @Override public SctpServerChannelConfig setReceiveBufferSize(int receiveBufferSize) { try { - javaChannel.setOption(SO_RCVBUF, receiveBufferSize); + javaChannel.setOption(SctpStandardSocketOptions.SO_RCVBUF, receiveBufferSize); } catch 
(IOException e) { throw new ChannelException(e); } @@ -124,18 +126,18 @@ public SctpServerChannelConfig setReceiveBufferSize(int receiveBufferSize) { } @Override - public InitMaxStreams getInitMaxStreams() { + public SctpStandardSocketOptions.InitMaxStreams getInitMaxStreams() { try { - return javaChannel.getOption(SCTP_INIT_MAXSTREAMS); + return javaChannel.getOption(SctpStandardSocketOptions.SCTP_INIT_MAXSTREAMS); } catch (IOException e) { throw new ChannelException(e); } } @Override - public SctpServerChannelConfig setInitMaxStreams(InitMaxStreams initMaxStreams) { + public SctpServerChannelConfig setInitMaxStreams(SctpStandardSocketOptions.InitMaxStreams initMaxStreams) { try { - javaChannel.setOption(SCTP_INIT_MAXSTREAMS, initMaxStreams); + javaChannel.setOption(SctpStandardSocketOptions.SCTP_INIT_MAXSTREAMS, initMaxStreams); } catch (IOException e) { throw new ChannelException(e); }
diff --git a/transport-sctp/src/main/test/io/netty/channel/sctp/SctpLimitStreamsTest.java b/transport-sctp/src/main/test/io/netty/channel/sctp/SctpLimitStreamsTest.java new file mode 100644 index 00000000000..3d5abf4c344 --- /dev/null +++ b/transport-sctp/src/main/test/io/netty/channel/sctp/SctpLimitStreamsTest.java @@ -0,0 +1,68 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel.sctp; + +import com.sun.nio.sctp.SctpStandardSocketOptions; +import io.netty.bootstrap.Bootstrap; +import io.netty.bootstrap.ServerBootstrap; +import io.netty.channel.Channel; +import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.ChannelOption; +import io.netty.channel.EventLoopGroup; +import org.junit.Test; +import java.net.InetSocketAddress; + +import static org.junit.Assert.*; + +public abstract class SctpLimitStreamsTest { + + @Test(timeout = 5000) + public void testSctpInitMaxstreams() throws Exception { + EventLoopGroup loop = newEventLoopGroup(); + try { + ServerBootstrap serverBootstrap = new ServerBootstrap(); + serverBootstrap.group(loop) + .channel(serverClass()) + .option(ChannelOption.SO_REUSEADDR, true) + .option(SctpChannelOption.SCTP_INIT_MAXSTREAMS, + SctpStandardSocketOptions.InitMaxStreams.create(1, 1)) + .localAddress(new InetSocketAddress(0)) + .childHandler(new ChannelInboundHandlerAdapter()); + + Bootstrap clientBootstrap = new Bootstrap() + .group(loop) + .channel(clientClass()) + .option(SctpChannelOption.SCTP_INIT_MAXSTREAMS, + SctpStandardSocketOptions.InitMaxStreams.create(112, 112)) + .handler(new ChannelInboundHandlerAdapter()); + + Channel serverChannel = serverBootstrap.bind() + .syncUninterruptibly().channel(); + SctpChannel clientChannel = (SctpChannel) clientBootstrap.connect(serverChannel.localAddress()) + .syncUninterruptibly().channel(); + assertEquals(1, clientChannel.association().maxOutboundStreams()); + assertEquals(1, clientChannel.association().maxInboundStreams()); + serverChannel.close().syncUninterruptibly(); + clientChannel.close().syncUninterruptibly(); + } finally { + loop.shutdownGracefully(); + } + } + + protected abstract EventLoopGroup newEventLoopGroup(); + protected abstract Class<? extends SctpChannel> clientClass(); + protected abstract Class<? extends SctpServerChannel> serverClass(); +} diff --git a/transport-sctp/src/main/test/io/netty/channel/sctp/nio/NioSctpLimitStreamsTest.java b/transport-sctp/src/main/test/io/netty/channel/sctp/nio/NioSctpLimitStreamsTest.java new file mode 100644 index 00000000000..78365ab5072 --- /dev/null +++ b/transport-sctp/src/main/test/io/netty/channel/sctp/nio/NioSctpLimitStreamsTest.java @@ -0,0 +1,39 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel.sctp.nio; + +import io.netty.channel.EventLoopGroup; +import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.channel.sctp.SctpChannel; +import io.netty.channel.sctp.SctpLimitStreamsTest; +import io.netty.channel.sctp.SctpServerChannel; + +public class NioSctpLimitStreamsTest extends SctpLimitStreamsTest { + @Override + protected EventLoopGroup newEventLoopGroup() { + return new NioEventLoopGroup(); + } + + @Override + protected Class<? extends SctpChannel> clientClass() { + return NioSctpChannel.class; + } + + @Override + protected Class<? extends SctpServerChannel> serverClass() { + return NioSctpServerChannel.class; + } +} diff --git a/transport-sctp/src/main/test/io/netty/channel/sctp/oio/OioSctpLimitStreamsTest.java b/transport-sctp/src/main/test/io/netty/channel/sctp/oio/OioSctpLimitStreamsTest.java new file mode 100644 index 00000000000..d30a97d0ba2 --- /dev/null +++ b/transport-sctp/src/main/test/io/netty/channel/sctp/oio/OioSctpLimitStreamsTest.java @@ -0,0 +1,39 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel.sctp.oio; + +import io.netty.channel.EventLoopGroup; +import io.netty.channel.oio.OioEventLoopGroup; +import io.netty.channel.sctp.SctpChannel; +import io.netty.channel.sctp.SctpLimitStreamsTest; +import io.netty.channel.sctp.SctpServerChannel; + +public class OioSctpLimitStreamsTest extends SctpLimitStreamsTest { + @Override + protected EventLoopGroup newEventLoopGroup() { + return new OioEventLoopGroup(); + } + + @Override + protected Class<? extends SctpChannel> clientClass() { + return OioSctpChannel.class; + } + + @Override + protected Class<? extends SctpServerChannel> serverClass() { + return OioSctpServerChannel.class; + } +}
train
train
2016-08-05T07:17:01
"2015-09-09T15:06:29Z"
LukasWysocki
val
netty/netty/5657_5659
netty/netty
netty/netty/5657
netty/netty/5659
[ "timestamp(timedelta=2.0, similarity=0.9480561467821279)" ]
e44c562932ed4310a6915b58b1d8dcb5e964c6f8
5ef9732de29c0ecd560da057bf2bef2d9a72416f
[ "@trustin so you will look into it ?\n", "Yes\n", "Ok cool thx\n\n> Am 09.08.2016 um 16:22 schrieb Trustin Lee [email protected]:\n> \n> Yes\n> \n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "I believe a vert.x user observed this issue when running his program in google cloud (and trying to connect to www.googleapis.com) https://groups.google.com/d/topic/vertx/7xTo_a_lOWk/discussion\n", "would be worth mentioning this issue in the group thread for the record @alexlehm \n", "and perhaps create a corresponding issue in Vert.x project ?\n", "Fixed by https://github.com/netty/netty/pull/5659\n" ]
[ "![CRITICAL](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-critical.png) Make this array \"private\". [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1873)\n", "False positive? It's referenced by `DnsNameResolverBuilder`.\n", "@trustin I think the rule only checks the local scope, the suggestion from Sonarqube is that the elements are mutable even though the field is final and so its better to make it private if it is used only in the class or provide a getter that copies, but this is not applicable to this example\n" ]
"2016-08-10T02:58:49Z"
[ "defect" ]
Disable IPv6 address lookups when -Djava.net.preferIPv4Stack=true
According to [the Oracle documentation](http://docs.oracle.com/javase/8/docs/technotes/guides/net/ipv6_guide/): > java.net.preferIPv4Stack (default: false) > > If IPv6 is available on the operating system, the underlying native socket will be an IPv6 socket. This allows Java applications to connect to, and accept connections from, both IPv4 and IPv6 hosts. > > If an application has a preference to only use IPv4 sockets, then this property can be set to true. The implication is that the application will not be able to communicate with IPv6 hosts. which means, if `DnsNameResolver` returns an IPv6 address, a user (or Netty) will not be able to connect to it. Perhaps a more sensible default when `-Djava.net.preferIPv4Stack=true` is specified is not to send an AAAA query at all.
[ "common/src/main/java/io/netty/util/NetUtil.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java" ]
[ "common/src/main/java/io/netty/util/NetUtil.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/NetUtil.java b/common/src/main/java/io/netty/util/NetUtil.java index 5ac18a353a0..b990d0e9889 100644 --- a/common/src/main/java/io/netty/util/NetUtil.java +++ b/common/src/main/java/io/netty/util/NetUtil.java @@ -118,16 +118,24 @@ public final class NetUtil { private static final int IPV4_SEPARATORS = 3; /** - * {@code true} if ipv4 should be used on a system that supports ipv4 and ipv6. + * {@code true} if IPv4 should be used even if the system supports both IPv4 and IPv6. */ private static final boolean IPV4_PREFERRED = Boolean.getBoolean("java.net.preferIPv4Stack"); + /** + * {@code true} if an IPv6 address should be preferred when a host has both an IPv4 address and an IPv6 address. + */ + private static final boolean IPV6_ADDRESSES_PREFERRED = Boolean.getBoolean("java.net.preferIPv6Addresses"); + /** * The logger being used by this class */ private static final InternalLogger logger = InternalLoggerFactory.getInstance(NetUtil.class); static { + logger.debug("-Djava.net.preferIPv4Stack: {}", IPV4_PREFERRED); + logger.debug("-Djava.net.preferIPv6Addresses: {}", IPV6_ADDRESSES_PREFERRED); + byte[] LOCALHOST4_BYTES = {127, 0, 0, 1}; byte[] LOCALHOST6_BYTES = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}; @@ -278,12 +286,27 @@ public Integer run() { } /** - * Returns {@code true} if ipv4 should be prefered on a system that supports ipv4 and ipv6. + * Returns {@code true} if IPv4 should be used even if the system supports both IPv4 and IPv6. Setting this + * property to {@code true} will disable IPv6 support. The default value of this property is {@code false}. + * + * @see <a href="https://docs.oracle.com/javase/8/docs/api/java/net/doc-files/net-properties.html">Java SE + * networking properties</a> */ public static boolean isIpV4StackPreferred() { return IPV4_PREFERRED; } + /** + * Returns {@code true} if an IPv6 address should be preferred when a host has both an IPv4 address and an IPv6 + * address. The default value of this property is {@code false}. + * + * @see <a href="https://docs.oracle.com/javase/8/docs/api/java/net/doc-files/net-properties.html">Java SE + * networking properties</a> + */ + public static boolean isIpV6AddressesPreferred() { + return IPV6_ADDRESSES_PREFERRED; + } + /** * Creates an byte[] based on an ipAddressString. No error handling is * performed here. diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java index a9d592eb8c8..cfb7a78e1ca 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java @@ -70,21 +70,24 @@ public class DnsNameResolver extends InetNameResolver { private static final String LOCALHOST = "localhost"; private static final InetAddress LOCALHOST_ADDRESS; - static final InternetProtocolFamily[] DEFAULT_RESOLVE_ADDRESS_TYPES = new InternetProtocolFamily[2]; + static final InternetProtocolFamily[] DEFAULT_RESOLVE_ADDRESS_TYPES; static final String[] DEFAULT_SEACH_DOMAINS; static { - // Note that we did not use SystemPropertyUtil.getBoolean() here to emulate the behavior of JDK. 
- if (Boolean.getBoolean("java.net.preferIPv6Addresses")) { - DEFAULT_RESOLVE_ADDRESS_TYPES[0] = InternetProtocolFamily.IPv6; - DEFAULT_RESOLVE_ADDRESS_TYPES[1] = InternetProtocolFamily.IPv4; - LOCALHOST_ADDRESS = NetUtil.LOCALHOST6; - logger.debug("-Djava.net.preferIPv6Addresses: true"); - } else { - DEFAULT_RESOLVE_ADDRESS_TYPES[0] = InternetProtocolFamily.IPv4; - DEFAULT_RESOLVE_ADDRESS_TYPES[1] = InternetProtocolFamily.IPv6; + if (NetUtil.isIpV4StackPreferred()) { + DEFAULT_RESOLVE_ADDRESS_TYPES = new InternetProtocolFamily[] { InternetProtocolFamily.IPv4 }; LOCALHOST_ADDRESS = NetUtil.LOCALHOST4; - logger.debug("-Djava.net.preferIPv6Addresses: false"); + } else { + DEFAULT_RESOLVE_ADDRESS_TYPES = new InternetProtocolFamily[2]; + if (NetUtil.isIpV6AddressesPreferred()) { + DEFAULT_RESOLVE_ADDRESS_TYPES[0] = InternetProtocolFamily.IPv6; + DEFAULT_RESOLVE_ADDRESS_TYPES[1] = InternetProtocolFamily.IPv4; + LOCALHOST_ADDRESS = NetUtil.LOCALHOST6; + } else { + DEFAULT_RESOLVE_ADDRESS_TYPES[0] = InternetProtocolFamily.IPv4; + DEFAULT_RESOLVE_ADDRESS_TYPES[1] = InternetProtocolFamily.IPv6; + LOCALHOST_ADDRESS = NetUtil.LOCALHOST4; + } } }
null
val
train
2016-08-08T19:19:09
"2016-08-09T10:29:16Z"
trustin
val
netty/netty/5631_5676
netty/netty
netty/netty/5631
netty/netty/5676
[ "timestamp(timedelta=53920.0, similarity=0.9559465783424206)" ]
cf71e5bae22b9a0d6954c2d18963c4059a23b6eb
10115e650359376e53ac1c795a34bbc64b51f140
[ "thanks for reporting @Spikhalskiy ! will look soon\n", "@Spikhalskiy - do you get any requests of type 100? also do you get any requests that exceed the maximum content length? are you able to reproduce the conditions that cause the leak?\n", "> do you get any requests of type 100?\n\nYes, we have some % of requests of type \"100 Continue\".\n\n> do you get any requests that exceed the maximum content length?\n\nNo, we don't get TooLongFrameException in mainHandler.\n\n> are you able to reproduce the conditions that cause the leak?\n\nNo, I don't know which specific request or condition causing this leak. If it's hard to get into this problem with already provided info I would think about a way of collecting more data.\n", "BTW, more info, that could be related. We had a bug that we returned\n\n```\nif (HttpHeaders.is100ContinueExpected(request)) {\n send100Continue(ctx);\n}\n```\n\nin the mainHandler AFTER HttpObjectAggregator, which already does this. And looks like our clients failed after receiving this second \"continue\". After fixing this bug (removing this code from mainHandler) I didn't see leaks anymore, so it looks like something possibly related, or just by chance and I will see leak soon (this bug fixed just yesterday).\n\nOn the same time, I don't get how it could be related because we sent this second continue in mainHandler, which have release() inside channelRead and we should see this release in advanced logs, but we don't. So, it shouldn't be that cases when we send this second continue.\n\nJust a stream of thoughts, that probably could be useful for you.\n", "The `HttpObjectAggregator` owns the responsibility to send a 100 continue response (because it aggregates and knows the limit on how much data you want to accept). Your peer may get confused if you send 2 continue responses. I will throw together a PR which tightens up the release related stuff in `HttpObjectAggregator` and ping you when ready.\n", "@Scottmitch yep, I already got it and bug with double continue fixed now in our server. Thanks for your work, waiting.\n", "Closed by https://github.com/netty/netty/pull/5676\n" ]
[ "This rework doesn't make too much sense because this invocation will always fall to code\n\n```\n private void AbstractChannelHandlerContext#invokeChannelInactive() {\n if (invokeHandler()) {\n try {\n ((ChannelInboundHandler) handler()).channelInactive(this);\n } catch (Throwable t) {\n notifyHandlerException(t);\n }\n } else {\n fireChannelInactive();\n }\n }\n```\n\nSo, looks like the current call `invokeChannelInactive` here is safe in terms of throwing exceptions.\n", "super#handlerRemoved is empty, so should be safe without `try {} catch` too.\n", "practically there should be no difference, but there is no need to make assumptions about what the super class does.\n" ]
"2016-08-10T19:33:20Z"
[]
HttpObjectAggregator could possibly leak in some conditions
Hi, I am getting messages about buffer leaks in netty server very rarely, like once in 4-6 hours. And it looks like from advanced logging info that HttpObjectAggregator could not release buffer sometimes. My assumption - it could be related to closing idle connection while HttpObjectAggregator has something cached inside and HttpObjectAggregator doesn't release it correctly. This is how my pipeline looks like: ``` p.addLast("idleState", new IdleStateHandler(0, 0, idleChannelTimeOutSec)); p.addLast("httpCodec", new HttpServerCodec()); p.addLast("unGzip", new HttpContentDecompressor()); p.addLast("httpAggregator", new HttpObjectAggregator(64 * 1024)); p.addLast("handler", mainHandler); ``` mainHandler has ``` @Override public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception { try { ... } finally { ReferenceCountUtil.release(msg); } } ``` This is how cleaned up version of log looks like: ``` [03 Aug 2016 18:44:55,952][ERROR][ResourceLeakDetector,httpServerWorker-4-1] - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information. Recent access records: 13 #13: io.netty.buffer.AdvancedLeakAwareByteBuf.release(AdvancedLeakAwareByteBuf.java:61) io.netty.handler.codec.http.DefaultHttpContent.release(DefaultHttpContent.java:72) io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:59) io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:90) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:435) io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:250) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1279) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:889) io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:572) 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:513) io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:427) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:399) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:136) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) java.lang.Thread.run(Thread.java:745) #10: io.netty.buffer.AdvancedLeakAwareByteBuf.retain(AdvancedLeakAwareByteBuf.java:731) io.netty.handler.codec.http.DefaultHttpContent.retain(DefaultHttpContent.java:60) io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:225) io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:57) io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:435) io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:250) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1279) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:889) io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:572) io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:513) io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:427) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:399) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:136) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) java.lang.Thread.run(Thread.java:745) #9: io.netty.buffer.AdvancedLeakAwareByteBuf.release(AdvancedLeakAwareByteBuf.java:61) io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:256) 
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:250) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1279) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:889) io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:572) io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:513) io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:427) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:399) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:136) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) java.lang.Thread.run(Thread.java:745) #8: io.netty.buffer.AdvancedLeakAwareByteBuf.retain(AdvancedLeakAwareByteBuf.java:731) io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:347) io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:152) io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411) io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:250) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327) io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1279) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334) io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:889) 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:572) io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:513) io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:427) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:399) io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:136) io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) java.lang.Thread.run(Thread.java:745) ``` Full advanced logs: [log1.txt](https://github.com/netty/netty/files/401905/log1.txt) [log2.txt](https://github.com/netty/netty/files/401904/log2.txt). If this info would not be enough for troubleshooting I would try to write a handler that will analyse/log this situation more deeply when I get a chance. Or, please, let me know if it looks like a mistake in user code, but for me it doesn't. Netty version 4.0.38
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java" ]
[]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java index bebe52a3995..2a530bbe8fb 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java @@ -99,7 +99,6 @@ public class HttpObjectAggregator extends MessageToMessageDecoder<HttpObject> { private final int maxContentLength; private AggregatedFullHttpMessage currentMessage; - private boolean tooLongFrameFound; private final boolean closeOnExpectationFailed; private int maxCumulationBufferComponents = DEFAULT_MAX_COMPOSITEBUFFER_COMPONENTS; @@ -168,18 +167,17 @@ public final void setMaxCumulationBufferComponents(int maxCumulationBufferCompon @Override protected void decode(final ChannelHandlerContext ctx, HttpObject msg, List<Object> out) throws Exception { - AggregatedFullHttpMessage currentMessage = this.currentMessage; - if (msg instanceof HttpMessage) { - tooLongFrameFound = false; - assert currentMessage == null; - + if (currentMessage != null) { + currentMessage.release(); + currentMessage = null; + throw new IllegalStateException("Start of new message received before existing message completed."); + } HttpMessage m = (HttpMessage) msg; // Handle the 'Expect: 100-continue' header if necessary. if (is100ContinueExpected(m)) { if (HttpHeaders.getContentLength(m, 0) > maxContentLength) { - tooLongFrameFound = true; final ChannelFuture future = ctx.writeAndFlush(EXPECTATION_FAILED.duplicate().retain()); future.addListener(new ChannelFutureListener() { @Override @@ -208,18 +206,16 @@ public void operationComplete(ChannelFuture future) throws Exception { if (!m.getDecoderResult().isSuccess()) { removeTransferEncodingChunked(m); out.add(toFullMessage(m)); - this.currentMessage = null; return; } if (msg instanceof HttpRequest) { HttpRequest header = (HttpRequest) msg; - this.currentMessage = currentMessage = new AggregatedFullHttpRequest( + currentMessage = new AggregatedFullHttpRequest( header, ctx.alloc().compositeBuffer(maxCumulationBufferComponents), null); } else if (msg instanceof HttpResponse) { HttpResponse header = (HttpResponse) msg; - this.currentMessage = currentMessage = new AggregatedFullHttpResponse( - header, - Unpooled.compositeBuffer(maxCumulationBufferComponents), null); + currentMessage = new AggregatedFullHttpResponse( + header, ctx.alloc().compositeBuffer(maxCumulationBufferComponents), null); } else { throw new Error(); } @@ -227,25 +223,20 @@ public void operationComplete(ChannelFuture future) throws Exception { // A streamed message - initialize the cumulative buffer, and wait for incoming chunks. removeTransferEncodingChunked(currentMessage); } else if (msg instanceof HttpContent) { - if (tooLongFrameFound) { - if (msg instanceof LastHttpContent) { - this.currentMessage = null; - } - // already detect the too long frame so just discard the content + if (currentMessage == null) { + // it is possible that a TooLongFrameException was already thrown but we can still discard data + // until the begging of the next request/response. return; } - assert currentMessage != null; // Merge the received chunk into the content of the current message. 
HttpContent chunk = (HttpContent) msg; CompositeByteBuf content = (CompositeByteBuf) currentMessage.content(); if (content.readableBytes() > maxContentLength - chunk.content().readableBytes()) { - tooLongFrameFound = true; - // release current message to prevent leaks currentMessage.release(); - this.currentMessage = null; + currentMessage = null; throw new TooLongFrameException( "HTTP content length exceeded " + maxContentLength + @@ -254,9 +245,7 @@ public void operationComplete(ChannelFuture future) throws Exception { // Append the content of the chunk if (chunk.content().isReadable()) { - chunk.retain(); - content.addComponent(chunk.content()); - content.writerIndex(content.writerIndex() + chunk.content().readableBytes()); + content.addComponent(true, chunk.content().retain()); } final boolean last; @@ -269,8 +258,6 @@ public void operationComplete(ChannelFuture future) throws Exception { } if (last) { - this.currentMessage = null; - // Merge trailing headers into the message. if (chunk instanceof LastHttpContent) { LastHttpContent trailer = (LastHttpContent) chunk; @@ -290,7 +277,9 @@ public void operationComplete(ChannelFuture future) throws Exception { Names.CONTENT_LENGTH, String.valueOf(content.readableBytes())); } - // All done + // Set our currentMessage member variable to null in case adding to out will cause re-entry. + AggregatedFullHttpMessage currentMessage = this.currentMessage; + this.currentMessage = null; out.add(currentMessage); } } else { @@ -300,12 +289,11 @@ public void operationComplete(ChannelFuture future) throws Exception { @Override public void channelInactive(ChannelHandlerContext ctx) throws Exception { - super.channelInactive(ctx); - - // release current message if it is not null as it may be a left-over - if (currentMessage != null) { - currentMessage.release(); - currentMessage = null; + try { + super.channelInactive(ctx); + } finally { + // release current message if it is not null as it may be a left-over + releaseCurrentMessage(); } } @@ -316,9 +304,16 @@ public void handlerAdded(ChannelHandlerContext ctx) throws Exception { @Override public void handlerRemoved(ChannelHandlerContext ctx) throws Exception { - super.handlerRemoved(ctx); - // release current message if it is not null as it may be a left-over as there is not much more we can do in - // this case + try { + super.handlerRemoved(ctx); + } finally { + // release current message if it is not null as it may be a left-over as there is not much more we can do in + // this case + releaseCurrentMessage(); + } + } + + private void releaseCurrentMessage() { if (currentMessage != null) { currentMessage.release(); currentMessage = null;
null
test
train
2016-09-14T02:16:02
"2016-08-04T15:01:51Z"
Spikhalskiy
val
netty/netty/5701_5725
netty/netty
netty/netty/5701
netty/netty/5725
[ "timestamp(timedelta=959.0, similarity=0.841766743990749)" ]
0a3e6999c186e0426d5b7c816458a72ad16a0f07
efe72d44297416038c5530bac02bbfb8c607ae50
[ "ups... let me fix it.\n", "thanks for reporting @ferrybig !\n", "Fixed by https://github.com/netty/netty/pull/5725\n" ]
[]
"2016-08-22T05:44:19Z"
[ "cleanup" ]
git conflict markers in public javadoc
The following file contains git conflict markers: https://github.com/netty/netty/blob/4.0/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L107 These conflict markers are visible in the public javadoc for the hashcode and hexdump methods: https://netty.io/4.0/api/io/netty/buffer/ByteBufUtil.html
[ "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[ "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[]
diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java index f405d496cf1..1ff9efb5faa 100644 --- a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java +++ b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java @@ -103,8 +103,6 @@ public static String hexDump(ByteBuf buffer, int fromIndex, int length) { } /** -<<<<<<< HEAD -======= * Returns a <a href="http://en.wikipedia.org/wiki/Hex_dump">hex dump</a> * of the specified byte array. */ @@ -121,7 +119,6 @@ public static String hexDump(byte[] array, int fromIndex, int length) { } /** ->>>>>>> e5386b0... Move Hex dump related util from ByteBufUtil to inner class * Calculates the hash code of the specified buffer. This method is * useful when implementing a new buffer type. */
null
test
train
2016-08-18T21:04:10
"2016-08-15T13:27:32Z"
ferrybig
val
netty/netty/5720_5730
netty/netty
netty/netty/5720
netty/netty/5730
[ "timestamp(timedelta=35.0, similarity=0.93808538678266)" ]
9bc3e56647e4794178a721f2d06128a4ca868b79
98ab322e42928a092be68ab3ab2e3a7db7dc3154
[ "@gustavonalle which version exactly ?\n", "@normanmaurer we are on 4.1.0.CR7\n", "Csn you try 4.1.4.Final as well?\n\n> Am 19.08.2016 um 12:25 schrieb Gustavo Fernandes [email protected]:\n> \n> @normanmaurer we are on 4.1.0.CR7\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@normanmaurer Looking at 4.1.4.Final, the static initializers are unchanged, so I looks like the issue will happen as well\n", "@gustavonalle sorry this was fixed in the 4.1 branch and will be part of 4.1.5.Final once released. See https://github.com/netty/netty/commit/b427a8c8bdb453b45d9f3e39f3fdb15927356e42\n", "@normanmaurer cool thanks\n", "@normanmaurer \n\nA closer look at branch 4.1 shows the cycle may still the there:\n\n`SystemPropertyUtil` requires `InternalLoggerFactory` requires `ThreadLocalRandom` requires `SystemPropertyUtil`\n", "@gustavonalle doh... you are right. Let me fix it.\n", "Fixed by https://github.com/netty/netty/pull/5730\n" ]
[]
"2016-08-22T05:51:01Z"
[ "defect" ]
Static initializers can cause deadlock
We observed the following deadlock in our test suite: ``` "TestNG-1" #14 prio=5 os_prio=0 tid=0x00007f2a1c70e000 nid=0x6f95 in Object.wait() [0x00007f29fdaf7000] java.lang.Thread.State: RUNNABLE at io.netty.util.internal.SystemPropertyUtil.<clinit>(SystemPropertyUtil.java:38) at io.netty.channel.epoll.Native.<clinit>(Native.java:56) at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:33) "TestNG-2" #12 prio=5 os_prio=0 tid=0x00007f2a1c6f2800 nid=0x6f92 in Object.wait() [0x00007f29fdcf9000] java.lang.Thread.State: RUNNABLE at io.netty.util.internal.ThreadLocalRandom.<clinit>(ThreadLocalRandom.java:67) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at io.netty.util.internal.logging.InternalLoggerFactory.<clinit>(InternalLoggerFactory.java:44) at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:67) at io.netty.buffer.UnpooledByteBufAllocator.<clinit>(UnpooledByteBufAllocator.java:30) at io.netty.buffer.Unpooled.<clinit>(Unpooled.java:73) ``` `TestNG-1` thread acquires class lock on `SystemPropertyUtil` and executes its static initialization and blocks waiting for a class lock on `InternalLoggerFactory`, but `InternalLoggerFactory` class lock is held by `TestNG-2` thread, and it also has a static initializer where it tries to initialize class `ThreadLocalRandom` that in turn needs the `SystemPropertyUtil` class lock which is held by `TestNG-1` thread
[ "common/src/main/java/io/netty/util/internal/ThreadLocalRandom.java" ]
[ "common/src/main/java/io/netty/util/internal/ThreadLocalRandom.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/ThreadLocalRandom.java b/common/src/main/java/io/netty/util/internal/ThreadLocalRandom.java index fb1594bcaec..f8c6a539d45 100644 --- a/common/src/main/java/io/netty/util/internal/ThreadLocalRandom.java +++ b/common/src/main/java/io/netty/util/internal/ThreadLocalRandom.java @@ -66,8 +66,7 @@ public final class ThreadLocalRandom extends Random { private static final AtomicLong seedUniquifier = new AtomicLong(); - private static volatile long initialSeedUniquifier = - SystemPropertyUtil.getLong("io.netty.initialSeedUniquifier", 0); + private static volatile long initialSeedUniquifier; private static final Thread seedGeneratorThread; private static final BlockingQueue<Long> seedQueue; @@ -75,11 +74,18 @@ public final class ThreadLocalRandom extends Random { private static volatile long seedGeneratorEndTime; static { + initialSeedUniquifier = AccessController.doPrivileged(new PrivilegedAction<Long>() { + @Override + public Long run() { + return Long.getLong("io.netty.initialSeedUniquifier", 0); + } + }); + if (initialSeedUniquifier == 0) { boolean secureRandom = AccessController.doPrivileged(new PrivilegedAction<Boolean>() { @Override public Boolean run() { - return SystemPropertyUtil.getBoolean("java.util.secureRandomSeed", false); + return Boolean.getBoolean("java.util.secureRandomSeed"); } });
null
val
train
2016-08-18T21:03:39
"2016-08-19T10:21:39Z"
gustavocoding
val
netty/netty/5718_5732
netty/netty
netty/netty/5718
netty/netty/5732
[ "timestamp(timedelta=7555.0, similarity=0.9075777944986547)" ]
4accd300e55b9f234ef5f460a7f4243bb19178e5
48b3c32f43fca36e7b02e8463e6ef21d9e4b6100
[ "@adityakishore so you basically say both should return 1. Right ?\n", "@adityakishore ups -1 I mean :)\n", "Let me fix this.\n", "Yes, both should return -1 if we want to retain the memcmp/strcmp behavior.\n\nFor that, both buffer should be compared as BIG_ENDIAN if we still want to retain the performance optimization of comparing 4 bytes at a time.\n", "@adityakishore PTAL https://github.com/netty/netty/pull/5732\n", "fixed by #5732\n", "Thanks for fixing this.\n" ]
[ "nit: remove space before `=`\n", "if you change `uintCount` to something like `aEnd` you could get rid of `i` and a decrement on every iteration.\n\n``` java\nfor (;aIndex < aEnd; aIndex += 4, bIndex +=4) {\n..\n}\n```\n", "consider removing `va` and `vb` for the following ... trade an aritmatic operation for potentially a conditional\n\n``` java\nlong cmp = bufferA.getUnsignedInt(aIndex) - bufferB.getUnsignedInt(bIndex);\nif (cmp == 0) {\n continue;\n}\nreturn cmp;\n```\n", "I think this could overflow ...\n\n> Am 23.08.2016 um 19:36 schrieb Scott Mitchell [email protected]:\n> \n> In buffer/src/main/java/io/netty/buffer/ByteBufUtil.java:\n> \n> > ```\n> > }\n> > \n> > return aLen - bLen;\n> > }\n> > ```\n> > - private static int compareUintBigEndian(\n> > - ByteBuf bufferA, ByteBuf bufferB, int aIndex, int bIndex, int uintCount) {\n> > - for (int i = uintCount; i > 0; --i, aIndex += 4, bIndex += 4) {\n> > - long va = bufferA.getUnsignedInt(aIndex);\n> > - long vb = bufferB.getUnsignedInt(bIndex);\n> > - if (va == vb) {\n> > - // fast-path\n> > - continue;\n> > - }\n> > - if (va > vb) {\n> > consider removing va and vb for the following ... trade an aritmatic operation for potentially a conditional\n> \n> long cmp = bufferA.getUnsignedInt(aIndex) - bufferB.getUnsignedInt(bIndex);\n> if (cmp == 0) {\n> continue;\n> }\n> return cmp;\n> —\n> You are receiving this because you were assigned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "we are using `long` to represent an \"unsigned int\" ... seems like we should not overflow in this situation?\n", "is `aEnd` correct here? does `uintCount` need to be multiplied by 4 to be in unit of `bytes`?\n", "maybe consider passing in `uintCountIncrement` from outside?\n", "@Scottmitch good catch... Let me fix it.\n" ]
"2016-08-22T05:53:01Z"
[ "defect" ]
Result of ByteBufUtil.compare(ByteBuf a, ByteBuf b) is dependent on ByteOrder of supplied ByteBufs
This should be similar to memcmp() or strcmp() as described in the java-doc. This is because the [ByteBufUtil.compare(ByteBuf, ByteBuf)](https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L226) compare the `ByteBuf`s as integers and not raw bytes. ``` Sample Code ======== public static void main(String[] args) { ByteBuf buf1 = Unpooled.wrappedBuffer("1234".getBytes()).order(ByteOrder.LITTLE_ENDIAN); ByteBuf buf2 = Unpooled.wrappedBuffer("4321".getBytes()).order(ByteOrder.LITTLE_ENDIAN); System.out.println("\"1234\".compareTo(\"4321\") = " + buf1.compareTo(buf2)); buf1 = Unpooled.wrappedBuffer("1234".getBytes()).order(ByteOrder.BIG_ENDIAN); buf2 = Unpooled.wrappedBuffer("4321".getBytes()).order(ByteOrder.BIG_ENDIAN); System.out.println("\"1234\".compareTo(\"4321\") = " + buf1.compareTo(buf2)); } Output ===== "1234".compareTo("4321") = 1 "1234".compareTo("4321") = -1 ```
[ "buffer/src/main/java/io/netty/buffer/ByteBuf.java", "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[ "buffer/src/main/java/io/netty/buffer/ByteBuf.java", "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[ "buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/ByteBuf.java b/buffer/src/main/java/io/netty/buffer/ByteBuf.java index e72fb21f34c..f51caabd627 100644 --- a/buffer/src/main/java/io/netty/buffer/ByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/ByteBuf.java @@ -2332,7 +2332,7 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> { /** * Compares the content of the specified buffer to the content of this - * buffer. Comparison is performed in the same manner with the string + * buffer. Comparison is performed in the same manner with the string * comparison functions of various languages such as {@code strcmp}, * {@code memcmp} and {@link String#compareTo(String)}. */ diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java index 591bc7db911..79403c61be1 100644 --- a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java +++ b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java @@ -229,54 +229,83 @@ public static int compare(ByteBuf bufferA, ByteBuf bufferB) { final int minLength = Math.min(aLen, bLen); final int uintCount = minLength >>> 2; final int byteCount = minLength & 3; - int aIndex = bufferA.readerIndex(); int bIndex = bufferB.readerIndex(); - if (bufferA.order() == bufferB.order()) { - for (int i = uintCount; i > 0; i --) { - long va = bufferA.getUnsignedInt(aIndex); - long vb = bufferB.getUnsignedInt(bIndex); - if (va > vb) { - return 1; - } - if (va < vb) { - return -1; - } - aIndex += 4; - bIndex += 4; + if (uintCount > 0) { + boolean bufferAIsBigEndian = bufferA.order() == ByteOrder.BIG_ENDIAN; + final long res; + int uintCountIncrement = uintCount << 2; + + if (bufferA.order() == bufferB.order()) { + res = bufferAIsBigEndian ? compareUintBigEndian(bufferA, bufferB, aIndex, bIndex, uintCountIncrement) : + compareUintLittleEndian(bufferA, bufferB, aIndex, bIndex, uintCountIncrement); + } else { + res = bufferAIsBigEndian ? 
compareUintBigEndianA(bufferA, bufferB, aIndex, bIndex, uintCountIncrement) : + compareUintBigEndianB(bufferA, bufferB, aIndex, bIndex, uintCountIncrement); } - } else { - for (int i = uintCount; i > 0; i --) { - long va = bufferA.getUnsignedInt(aIndex); - long vb = swapInt(bufferB.getInt(bIndex)) & 0xFFFFFFFFL; - if (va > vb) { - return 1; - } - if (va < vb) { - return -1; - } - aIndex += 4; - bIndex += 4; + if (res != 0) { + // Ensure we not overflow when cast + return (int) Math.min(Integer.MAX_VALUE, res); } + aIndex += uintCountIncrement; + bIndex += uintCountIncrement; } - for (int i = byteCount; i > 0; i --) { - short va = bufferA.getUnsignedByte(aIndex); - short vb = bufferB.getUnsignedByte(bIndex); - if (va > vb) { - return 1; - } - if (va < vb) { - return -1; + for (int aEnd = aIndex + byteCount; aIndex < aEnd; ++aIndex, ++bIndex) { + int comp = bufferA.getUnsignedByte(aIndex) - bufferB.getUnsignedByte(bIndex); + if (comp != 0) { + return comp; } - aIndex ++; - bIndex ++; } return aLen - bLen; } + private static long compareUintBigEndian( + ByteBuf bufferA, ByteBuf bufferB, int aIndex, int bIndex, int uintCountIncrement) { + for (int aEnd = aIndex + uintCountIncrement; aIndex < aEnd; aIndex += 4, bIndex += 4) { + long comp = bufferA.getUnsignedInt(aIndex) - bufferB.getUnsignedInt(bIndex); + if (comp != 0) { + return comp; + } + } + return 0; + } + + private static long compareUintLittleEndian( + ByteBuf bufferA, ByteBuf bufferB, int aIndex, int bIndex, int uintCountIncrement) { + for (int aEnd = aIndex + uintCountIncrement; aIndex < aEnd; aIndex += 4, bIndex += 4) { + long comp = bufferA.getUnsignedIntLE(aIndex) - bufferB.getUnsignedIntLE(bIndex); + if (comp != 0) { + return comp; + } + } + return 0; + } + + private static long compareUintBigEndianA( + ByteBuf bufferA, ByteBuf bufferB, int aIndex, int bIndex, int uintCountIncrement) { + for (int aEnd = aIndex + uintCountIncrement; aIndex < aEnd; aIndex += 4, bIndex += 4) { + long comp = bufferA.getUnsignedInt(aIndex) - bufferB.getUnsignedIntLE(bIndex); + if (comp != 0) { + return comp; + } + } + return 0; + } + + private static long compareUintBigEndianB( + ByteBuf bufferA, ByteBuf bufferB, int aIndex, int bIndex, int uintCountIncrement) { + for (int aEnd = aIndex + uintCountIncrement; aIndex < aEnd; aIndex += 4, bIndex += 4) { + long comp = bufferA.getUnsignedIntLE(aIndex) - bufferB.getUnsignedInt(bIndex); + if (comp != 0) { + return comp; + } + } + return 0; + } + /** * The default implementation of {@link ByteBuf#indexOf(int, int, byte)}. * This method is useful when implementing a new buffer type.
diff --git a/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java b/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java index 47bca1f5ebd..f4e6b90c799 100644 --- a/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java +++ b/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java @@ -31,6 +31,7 @@ import java.io.IOException; import java.io.RandomAccessFile; import java.nio.ByteBuffer; +import java.nio.ByteOrder; import java.nio.ReadOnlyBufferException; import java.nio.channels.Channels; import java.nio.channels.FileChannel; @@ -1822,6 +1823,28 @@ public void testCompareTo() { buffer.retainedSlice(0, 31).compareTo(wrappedBuffer(value).order(LITTLE_ENDIAN)) < 0); } + @Test + public void testCompareTo2() { + byte[] bytes = {1, 2, 3, 4}; + byte[] bytesReversed = {4, 3, 2, 1}; + + ByteBuf buf1 = newBuffer(4).clear().writeBytes(bytes).order(ByteOrder.LITTLE_ENDIAN); + ByteBuf buf2 = newBuffer(4).clear().writeBytes(bytesReversed).order(ByteOrder.LITTLE_ENDIAN); + ByteBuf buf3 = newBuffer(4).clear().writeBytes(bytes).order(ByteOrder.BIG_ENDIAN); + ByteBuf buf4 = newBuffer(4).clear().writeBytes(bytesReversed).order(ByteOrder.BIG_ENDIAN); + try { + assertEquals(buf1.compareTo(buf2), buf3.compareTo(buf4)); + assertEquals(buf2.compareTo(buf1), buf4.compareTo(buf3)); + assertEquals(buf1.compareTo(buf3), buf2.compareTo(buf4)); + assertEquals(buf3.compareTo(buf1), buf4.compareTo(buf2)); + } finally { + buf1.release(); + buf2.release(); + buf3.release(); + buf4.release(); + } + } + @Test public void testToString() { buffer.clear();
test
train
2016-08-24T15:58:02
"2016-08-19T07:17:10Z"
adityakishore
val
netty/netty/5736_5737
netty/netty
netty/netty/5736
netty/netty/5737
[ "timestamp(timedelta=2141.0, similarity=0.9192202716298955)" ]
9bc3e56647e4794178a721f2d06128a4ca868b79
972ecf2e3c7094a3ebfd3bba3a208e514680e964
[ "Yep, looks like a bug to me. The issue lays in `io.netty.handler.codec.http.cookie.DefaultCookie#equals` implementation that compares names with `equalsIgnoreCase`.\n\nWould you like to contribute a fix?\n", "yes, I'd like\n", "https://github.com/netty/netty/pull/5737\n", "Can be closed, as of #5737\n" ]
[ "I know the spec doesn't say anything here, but it feels very weird to me that domain would be case sensitive. After all, it's more or less the host name of the server that originally set the cookie. And host names are definitively case insensitive. I would vote to keep case insensitivity here.\n", "same as above\n", "ping @normanmaurer @Scottmitch \n", "revert it back\n" ]
"2016-08-22T16:39:24Z"
[ "defect" ]
Why cookie name is case-insensitive
I fall in such situation when HTTP-server (implemented with Netty) receives two cookies: `session_id` and `Session_id`. But after parsing such header only one cookie is returned from method `decode`. Example: ``` java Set<Cookie> cookies = ServerCookieDecoder.STRICT.decode("session_id=a; Session_id=b"); for (Cookie cookie: cookies) { System.out.println(cookie.name()); } ``` will print only `session_id`. Is this a bug? [RFC 6265](https://tools.ietf.org/search/rfc6265) does not state that cookie names must be case insensitive.
[ "codec-http/src/main/java/io/netty/handler/codec/http/cookie/DefaultCookie.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/cookie/DefaultCookie.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/cookie/DefaultCookie.java b/codec-http/src/main/java/io/netty/handler/codec/http/cookie/DefaultCookie.java index fb98555bb85..17fe5c43688 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/cookie/DefaultCookie.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/cookie/DefaultCookie.java @@ -135,7 +135,7 @@ public boolean equals(Object o) { } Cookie that = (Cookie) o; - if (!name().equalsIgnoreCase(that.name())) { + if (!name().equals(that.name())) { return false; } @@ -164,7 +164,7 @@ public boolean equals(Object o) { @Override public int compareTo(Cookie c) { - int v = name().compareToIgnoreCase(c.name()); + int v = name().compareTo(c.name()); if (v != 0) { return v; }
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java index d0c07646a55..fe9df5ec300 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java @@ -72,6 +72,10 @@ public void testDecodingGoogleAnalyticsCookie() { Iterator<Cookie> it = cookies.iterator(); Cookie c; + c = it.next(); + assertEquals("ARPT", c.name()); + assertEquals("LWUKQPSWRTUN04CKKJI", c.value()); + c = it.next(); assertEquals("__utma", c.name()); assertEquals("48461872.1094088325.1258140131.1258140131.1258140131.1", c.value()); @@ -90,10 +94,6 @@ public void testDecodingGoogleAnalyticsCookie() { "utmccn=(referral)|utmcmd=referral|utmcct=/Home-Garden/Furniture/Clearance/clearance/32/dept.html", c.value()); - c = it.next(); - assertEquals("ARPT", c.name()); - assertEquals("LWUKQPSWRTUN04CKKJI", c.value()); - c = it.next(); assertEquals("kw-2E343B92-B097-442c-BFA5-BE371E0325A2", c.name()); assertEquals("unfinished_furniture", c.value()); @@ -182,4 +182,21 @@ public void testRejectCookieValueWithSemicolon() { Set<Cookie> cookies = ServerCookieDecoder.STRICT.decode("name=\"foo;bar\";"); assertTrue(cookies.isEmpty()); } + + @Test + public void testCaseSensitiveNames() { + Set<Cookie> cookies = ServerCookieDecoder.STRICT.decode("session_id=a; Session_id=b;"); + Iterator<Cookie> it = cookies.iterator(); + Cookie c; + + c = it.next(); + assertEquals("Session_id", c.name()); + assertEquals("b", c.value()); + + c = it.next(); + assertEquals("session_id", c.name()); + assertEquals("a", c.value()); + + assertFalse(it.hasNext()); + } }
test
train
2016-08-18T21:03:39
"2016-08-22T16:02:09Z"
jamel
val
netty/netty/5678_5749
netty/netty
netty/netty/5678
netty/netty/5749
[ "timestamp(timedelta=1555.0, similarity=0.8681263283293253)" ]
4accd300e55b9f234ef5f460a7f4243bb19178e5
25e3c91c618da6574c66af1b1277235e6068c564
[ "thanks for reporting @rkapsi ! let me come up with a PR.\n", "Thanks @Scottmitch and @rkapsi \n" ]
[ "Maybe even fire a custom `SniHandlerException(hostname, context, cause)` to give the user every possible opportunity not to lose track of the SslContext.\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove this unused method parameter \"hostname\". [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1172)\n", "Firing exceptions with reference counted objects can create confusion in my experience. IMO lets avoid this and push folks toward `replaceHandler`.\n" ]
"2016-08-25T12:51:32Z"
[ "defect" ]
SniHandler#replaceHandler() may leak (reference counted) SslContexts
The SniHandler#select()/AsyncMapper#map() methods execute asynchronously and by the time the returned Future completes the client is possibly no longer connected. The #selection field will be set but probably after #channelInactive() (i.e. can't really do anything with it) and the #replace() call on the ChannelPipeline will probably throw a NoSuchElementException. I have no good suggestion how to fix it. There is an opportunity to cancel the future and it could try to decrement the refCnt if the Future completes successfully but the client has disconnected in the meantime. Given SniHandler is a one-shot handler maybe something along the lines of this?! ``` java class SniHandler { private Promsie<SslContext> promise; void handlerAdded(ChannelHandlerContext ctx) { promsie = ctx.newPromise(); } void channelInactive(ChannelHandlerContext ctx) { promsie.cancel(true); promsie.addFutureListener(... release() on success ...); ctx.fireChannelInactive(); } void select(...) { mapping.map(hostname, promsie).addFutureListener(... replaceHandler() ...); } } ```
[ "handler/src/main/java/io/netty/handler/ssl/SniHandler.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/SniHandler.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/SniHandler.java b/handler/src/main/java/io/netty/handler/ssl/SniHandler.java index 7fc70d1258a..6b404d3763c 100644 --- a/handler/src/main/java/io/netty/handler/ssl/SniHandler.java +++ b/handler/src/main/java/io/netty/handler/ssl/SniHandler.java @@ -303,8 +303,24 @@ public void operationComplete(Future<SslContext> future) throws Exception { } private void replaceHandler(ChannelHandlerContext ctx, Selection selection) { - this.selection = selection; - SslHandler sslHandler = selection.context.newHandler(ctx.alloc()); + try { + this.selection = selection; + replaceHandler(ctx, selection.hostname, selection.context); + } catch (Throwable cause) { + this.selection = EMPTY_SELECTION; + ctx.fireExceptionCaught(cause); + } + } + + /** + * Replaces the {@link SniHandler} with a {@link SslHandler} as provided by the + * given {@link SslContext}. + * + * Users may override this method to implement custom behavior and to get a + * hold onto the {@link SslContext}. + */ + protected void replaceHandler(ChannelHandlerContext ctx, String hostname, SslContext context) throws Exception { + SslHandler sslHandler = context.newHandler(ctx.alloc()); ctx.pipeline().replace(this, SslHandler.class.getName(), sslHandler); }
null
train
train
2016-08-24T15:58:02
"2016-08-10T22:40:39Z"
rkapsi
val
netty/netty/5763_5782
netty/netty
netty/netty/5763
netty/netty/5782
[ "timestamp(timedelta=21.0, similarity=0.902178681582907)" ]
3051df8961273fb39fe282a46c7f76124b39112f
89ee71e131f0ffdd999d544d95f58591e8b8ad05
[ "@rkapsi oversight... let me fix it.\n", "Fixed by https://github.com/netty/netty/pull/5782\n" ]
[]
"2016-08-31T20:02:59Z"
[]
DefaultEventLoopGroup doesn't expose ctor variant that accepts custom Executor
The `DefaultEventLoopGroup` class extends `MultithreadEventExecutorGroup` but doesn't expose the ctor variants that accept a custom Executor like `NioEventLoopGroup` and `EpollEventLoopGroup` do. Is that on purpose or just an oversight?
[ "transport/src/main/java/io/netty/channel/DefaultEventLoopGroup.java" ]
[ "transport/src/main/java/io/netty/channel/DefaultEventLoopGroup.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/DefaultEventLoopGroup.java b/transport/src/main/java/io/netty/channel/DefaultEventLoopGroup.java index 6e8ba13452f..ee5eeea6418 100644 --- a/transport/src/main/java/io/netty/channel/DefaultEventLoopGroup.java +++ b/transport/src/main/java/io/netty/channel/DefaultEventLoopGroup.java @@ -36,7 +36,7 @@ public DefaultEventLoopGroup() { * @param nThreads the number of threads to use */ public DefaultEventLoopGroup(int nThreads) { - this(nThreads, null); + this(nThreads, (ThreadFactory) null); } /** @@ -49,6 +49,16 @@ public DefaultEventLoopGroup(int nThreads, ThreadFactory threadFactory) { super(nThreads, threadFactory); } + /** + * Create a new instance + * + * @param nThreads the number of threads to use + * @param executor the Executor to use, or {@code null} if the default should be used. + */ + public DefaultEventLoopGroup(int nThreads, Executor executor) { + super(nThreads, executor); + } + @Override protected EventLoop newChild(Executor executor, Object... args) throws Exception { return new DefaultEventLoop(this, executor);
null
test
train
2016-08-31T14:01:05
"2016-08-29T22:16:55Z"
rkapsi
val
netty/netty/5773_5783
netty/netty
netty/netty/5773
netty/netty/5783
[ "timestamp(timedelta=14.0, similarity=0.8610196742337676)" ]
3051df8961273fb39fe282a46c7f76124b39112f
a17eebbcdc597795803b640d50f3e5489e267b7c
[ "Will have a look\n\n> Am 30.08.2016 um 18:22 schrieb Nick Pordash [email protected]:\n> \n> After updating from 4.1.4.Final to 4.1.5.Final I have failing tests because of the following:\n> \n> java.lang.IndexOutOfBoundsException: index: 21, length: 1 (expected: range(0, 21))\n> at io.netty.buffer.AbstractByteBuf.checkIndex0(AbstractByteBuf.java:1359)\n> at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1354)\n> at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1349)\n> at io.netty.buffer.CompositeByteBuf.findComponent(CompositeByteBuf.java:1407)\n> at io.netty.buffer.CompositeByteBuf._getByte(CompositeByteBuf.java:751)\n> at io.netty.buffer.AbstractByteBuf.forEachByteDesc0(AbstractByteBuf.java:1297)\n> at io.netty.buffer.AbstractByteBuf.forEachByteDesc(AbstractByteBuf.java:1277)\n> This seems to be specific to CompositeByteBuf:\n> \n> // Doesn't throw\n> Unpooled.copiedBuffer(\"test\", StandardCharsets.UTF_8)\n> .forEachByteDesc(ByteProcessor.FIND_CR);\n> \n> // Throws (forEachByteDesc)\n> Unpooled.compositeBuffer()\n> .addComponent(true, Unpooled.copiedBuffer(\"test\", StandardCharsets.UTF_8))\n> .forEachByteDesc(ByteProcessor.FIND_CR);\n> \n> // Doesn't throw (forEachByte)\n> Unpooled.compositeBuffer()\n> .addComponent(true, Unpooled.copiedBuffer(\"test\", StandardCharsets.UTF_8))\n> .forEachByte(ByteProcessor.FIND_CR);\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@npordash I think I have an idea... stay tuned\n", "thanks for the repro @npordash \n", "See https://github.com/netty/netty/pull/5783\n", "Fixed by https://github.com/netty/netty/pull/5783\n" ]
[]
"2016-08-31T20:03:58Z"
[ "defect" ]
IndexOutOfBoundsException from CompositeByteBuf.forEachByteDesc
After updating from 4.1.4.Final to 4.1.5.Final I have failing tests because of the following: ``` java java.lang.IndexOutOfBoundsException: index: 21, length: 1 (expected: range(0, 21)) at io.netty.buffer.AbstractByteBuf.checkIndex0(AbstractByteBuf.java:1359) at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1354) at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1349) at io.netty.buffer.CompositeByteBuf.findComponent(CompositeByteBuf.java:1407) at io.netty.buffer.CompositeByteBuf._getByte(CompositeByteBuf.java:751) at io.netty.buffer.AbstractByteBuf.forEachByteDesc0(AbstractByteBuf.java:1297) at io.netty.buffer.AbstractByteBuf.forEachByteDesc(AbstractByteBuf.java:1277) ``` This seems to be specific to CompositeByteBuf: ``` java // Doesn't throw Unpooled.copiedBuffer("test", StandardCharsets.UTF_8) .forEachByteDesc(ByteProcessor.FIND_CR); // Throws (forEachByteDesc) Unpooled.compositeBuffer() .addComponent(true, Unpooled.copiedBuffer("test", StandardCharsets.UTF_8)) .forEachByteDesc(ByteProcessor.FIND_CR); // Doesn't throw (forEachByte) Unpooled.compositeBuffer() .addComponent(true, Unpooled.copiedBuffer("test", StandardCharsets.UTF_8)) .forEachByte(ByteProcessor.FIND_CR); ```
[ "buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java" ]
[ "buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java" ]
[ "buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java", "buffer/src/test/java/io/netty/buffer/SlicedByteBufTest.java", "buffer/src/test/java/io/netty/buffer/WrappedUnpooledUnsafeByteBufTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java b/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java index 3b9d2b1137f..5c8c837b024 100644 --- a/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java @@ -1274,7 +1274,7 @@ private int forEachByteAsc0(int start, int end, ByteProcessor processor) throws public int forEachByteDesc(ByteProcessor processor) { ensureAccessible(); try { - return forEachByteDesc0(writerIndex, readerIndex, processor); + return forEachByteDesc0(writerIndex - 1, readerIndex, processor); } catch (Exception e) { PlatformDependent.throwException(e); return -1;
diff --git a/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java b/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java index f4e6b90c799..53ec4b11560 100644 --- a/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java +++ b/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java @@ -3203,6 +3203,52 @@ public void testReadBytes() { assertEquals(0, buffer2.refCnt()); } + @Test + public void testForEachByteDesc2() { + byte[] expected = {1, 2, 3, 4}; + ByteBuf buf = newBuffer(expected.length); + try { + buf.writeBytes(expected); + final byte[] bytes = new byte[expected.length]; + int i = buf.forEachByteDesc(new ByteProcessor() { + private int index = bytes.length - 1; + + @Override + public boolean process(byte value) throws Exception { + bytes[index--] = value; + return true; + } + }); + assertEquals(-1, i); + assertArrayEquals(expected, bytes); + } finally { + buf.release(); + } + } + + @Test + public void testForEachByte2() { + byte[] expected = {1, 2, 3, 4}; + ByteBuf buf = newBuffer(expected.length); + try { + buf.writeBytes(expected); + final byte[] bytes = new byte[expected.length]; + int i = buf.forEachByte(new ByteProcessor() { + private int index; + + @Override + public boolean process(byte value) throws Exception { + bytes[index++] = value; + return true; + } + }); + assertEquals(-1, i); + assertArrayEquals(expected, bytes); + } finally { + buf.release(); + } + } + private static void assertTrueAndRelease(ByteBuf buf, boolean actual) { try { assertTrue(actual); diff --git a/buffer/src/test/java/io/netty/buffer/SlicedByteBufTest.java b/buffer/src/test/java/io/netty/buffer/SlicedByteBufTest.java index 1ab56d49043..f126138233a 100644 --- a/buffer/src/test/java/io/netty/buffer/SlicedByteBufTest.java +++ b/buffer/src/test/java/io/netty/buffer/SlicedByteBufTest.java @@ -126,6 +126,18 @@ public void testReadBytes() { // ignore for SlicedByteBuf } + @Test + @Override + public void testForEachByteDesc2() { + // Ignore for SlicedByteBuf + } + + @Test + @Override + public void testForEachByte2() { + // Ignore for SlicedByteBuf + } + @Test(expected = UnsupportedOperationException.class) @Override public void testDuplicateCapacityChange() { diff --git a/buffer/src/test/java/io/netty/buffer/WrappedUnpooledUnsafeByteBufTest.java b/buffer/src/test/java/io/netty/buffer/WrappedUnpooledUnsafeByteBufTest.java index 645074b3842..81480afb19c 100644 --- a/buffer/src/test/java/io/netty/buffer/WrappedUnpooledUnsafeByteBufTest.java +++ b/buffer/src/test/java/io/netty/buffer/WrappedUnpooledUnsafeByteBufTest.java @@ -123,4 +123,16 @@ public void testRetainedDuplicateCapacityChange() { public void testLittleEndianWithExpand() { super.testLittleEndianWithExpand(); } + + @Test + @Override + public void testForEachByteDesc2() { + // Ignore + } + + @Test + @Override + public void testForEachByte2() { + // Ignore + } }
test
train
2016-08-31T14:01:05
"2016-08-30T16:22:13Z"
npordash
val
netty/netty/5712_5791
netty/netty
netty/netty/5712
netty/netty/5791
[ "timestamp(timedelta=12.0, similarity=0.8946751578239116)" ]
30fe2e868fcc99d24046e505f9406067ba6e9d07
5ef068cd8d627b808a691b46f919075d38a14645
[ "/cc @fredericBregier \n", "Let me take care\n", "@normanmaurer Sorry to be late again (very busy those last 9 months). I look at your commit, fine for me.\n", "Fixed by https://github.com/netty/netty/pull/5791\n", "Thanks guys! I could've sent you a poll request but wasn't sure if this is the right design. \n" ]
[]
"2016-09-02T16:55:22Z"
[ "improvement" ]
Allow clients to override userDefinedWritabilityIndex from AbstractTrafficShapingHandler
Hello, AbstractTrafficShapingHandler has a package-private method called "userDefinedWritabilityIndex()". If I create two sub-classes of ChannelTrafficShapingHandler (one for throttling on bandwidth and the other for throttling on message size), I cannot make them independent from each other in terms of writability, because they will all have the same index as ChannelTrafficShapingHandler, which cannot be overridden. Can we change the access of userDefinedWritabilityIndex() to _protected_ such that it can be overridden by clients? Thanks Michael
[ "handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java", "handler/src/main/java/io/netty/handler/traffic/GlobalChannelTrafficShapingHandler.java" ]
[ "handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java", "handler/src/main/java/io/netty/handler/traffic/GlobalChannelTrafficShapingHandler.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java b/handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java index 9cff40f0cef..fb4cdce3e22 100644 --- a/handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java +++ b/handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java @@ -145,7 +145,7 @@ void setTrafficCounter(TrafficCounter newTrafficCounter) { * for GlobalChannel TSH it is defined as * {@value #GLOBALCHANNEL_DEFAULT_USER_DEFINED_WRITABILITY_INDEX}. */ - int userDefinedWritabilityIndex() { + protected int userDefinedWritabilityIndex() { if (this instanceof GlobalChannelTrafficShapingHandler) { return GLOBALCHANNEL_DEFAULT_USER_DEFINED_WRITABILITY_INDEX; } else if (this instanceof GlobalTrafficShapingHandler) { diff --git a/handler/src/main/java/io/netty/handler/traffic/GlobalChannelTrafficShapingHandler.java b/handler/src/main/java/io/netty/handler/traffic/GlobalChannelTrafficShapingHandler.java index 91a68f0a0e0..8b257eb7f05 100644 --- a/handler/src/main/java/io/netty/handler/traffic/GlobalChannelTrafficShapingHandler.java +++ b/handler/src/main/java/io/netty/handler/traffic/GlobalChannelTrafficShapingHandler.java @@ -156,7 +156,7 @@ void createGlobalTrafficCounter(ScheduledExecutorService executor) { } @Override - int userDefinedWritabilityIndex() { + protected int userDefinedWritabilityIndex() { return AbstractTrafficShapingHandler.GLOBALCHANNEL_DEFAULT_USER_DEFINED_WRITABILITY_INDEX; }
null
train
train
2016-09-01T08:55:02
"2016-08-16T22:09:22Z"
jqmichael
val
netty/netty/5762_5798
netty/netty
netty/netty/5762
netty/netty/5798
[ "timestamp(timedelta=15.0, similarity=0.9568037118728966)" ]
67d3a78123fa3faa85c1a150bd4ee69425079b3d
512a6e6cc3eea548514d81662a84c2404ff2f007
[ "@buchgr will look into it.\n", "Should be fixed by https://github.com/netty/netty/pull/5798\n", "Fixed by https://github.com/netty/netty/pull/5798\n" ]
[ "This will first cast `capacity` to int, then do the division. I think you need to wrap the division with parantheses?\n", "doh! right... \n", "This cast doesn't seem safe, for `maxHeaderTableSize` gt `MAX_INT`?\n", "@buchgr why ? Shouldn't it just be converted to a signed int ?\n", "@normanmaurer what if it doesn't fit a signed int?\n", "@buchgr good point... Maybe we should do a range check in `setMaxHeaderTableSize(...)` and check its not bigger then `0xFFFFFFFFL`?\n", "@normanmaurer I think that would be good.\n", "done\n", "Does this need to check if capacity is greater than 1<<32 ?\n", "yep let me check for uint as well\n", "Does here need to check as well?\n", "done\n", "done\n", "@normanmaurer You should probably also add a comment to this line, explaining why it's safe.\n", "@normanmaurer - it seems like this may be problematic if `maxHeaderTableSize > Integer.MAX_INT`. Do we have a unit test covering this case?\n", "@normanmaurer - should this be of type `long` now?\n", "should we instead check around the bounds `MAX_HEADER_TABLE_SIZE` and `MAX_HEADER_TABLE_SIZE+1`?\n" ]
"2016-09-07T04:53:12Z"
[ "defect" ]
HTTP/2: SETTINGS_HEADER_TABLE_SIZE should be an unsigned int
The HTTP/2 spec demands that the max value for `SETTINGS_HEADER_TABLE_SIZE` should be an unsigned 32-bit integer. However, it seems that some [limitations in HPACK](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java#L90) prevent us from supporting that. h2spec test suite 6.5 fails due to this limitation. The exception is `Setting HEADER_TABLE_SIZE is invalid: 4294967295`. ``` × Sends a SETTINGS frame - The endpoint MUST sends a SETTINGS frame with ACK. Expected: SETTINGS frame (Flags: 1) Actual: GOAWAY frame (Length: 56, Flags: 0, ErrorCode: PROTOCOL_ERROR) ```
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersEncoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeaderTable.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/DynamicTable.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersEncoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeaderTable.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/DynamicTable.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java index 037cd9c9038..a20ecda44ca 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java @@ -23,6 +23,8 @@ import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_HEADER_TABLE_SIZE; import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_MAX_HEADER_SIZE; +import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_HEADER_TABLE_SIZE; +import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_HEADER_TABLE_SIZE; import static io.netty.handler.codec.http2.Http2Error.COMPRESSION_ERROR; import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR; import static io.netty.handler.codec.http2.Http2Exception.connectionError; @@ -99,9 +101,10 @@ public Http2Headers decodeHeaders(ByteBuf headerBlock) throws Http2Exception { */ private final class Http2HeaderTableDecoder extends DefaultHttp2HeaderTableListSize implements Http2HeaderTable { @Override - public void maxHeaderTableSize(int max) throws Http2Exception { - if (max < 0) { - throw connectionError(PROTOCOL_ERROR, "Header Table Size must be non-negative but was %d", max); + public void maxHeaderTableSize(long max) throws Http2Exception { + if (max < MIN_HEADER_TABLE_SIZE || max > MAX_HEADER_TABLE_SIZE) { + throw connectionError(PROTOCOL_ERROR, "Header Table Size must be >= %d and <= %d but was %d", + MIN_HEADER_TABLE_SIZE, MAX_HEADER_TABLE_SIZE, max); } try { decoder.setMaxHeaderTableSize(max); @@ -111,7 +114,7 @@ public void maxHeaderTableSize(int max) throws Http2Exception { } @Override - public int maxHeaderTableSize() { + public long maxHeaderTableSize() { return decoder.getMaxHeaderTableSize(); } } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersEncoder.java index 534e37b4d92..2157fc9b404 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersEncoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersEncoder.java @@ -23,6 +23,8 @@ import java.util.Map.Entry; import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_HEADER_TABLE_SIZE; +import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_HEADER_TABLE_SIZE; +import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_HEADER_TABLE_SIZE; import static io.netty.handler.codec.http2.Http2Error.COMPRESSION_ERROR; import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR; import static io.netty.handler.codec.http2.Http2Exception.connectionError; @@ -89,9 +91,10 @@ private void encodeHeader(ByteBuf out, CharSequence key, CharSequence value) { */ private final class Http2HeaderTableEncoder extends DefaultHttp2HeaderTableListSize implements Http2HeaderTable { @Override - public void maxHeaderTableSize(int max) throws Http2Exception { - if (max < 0) { - throw connectionError(PROTOCOL_ERROR, "Header Table Size must be non-negative but was %d", max); + public void maxHeaderTableSize(long max) throws Http2Exception { + if (max < MIN_HEADER_TABLE_SIZE || max > MAX_HEADER_TABLE_SIZE) { + throw connectionError(PROTOCOL_ERROR, "Header Table Size must be >= %d and <= %d but was %d", + MIN_HEADER_TABLE_SIZE, MAX_HEADER_TABLE_SIZE, max); } try 
{ // No headers should be emitted. If they are, we throw. @@ -102,7 +105,7 @@ public void maxHeaderTableSize(int max) throws Http2Exception { } @Override - public int maxHeaderTableSize() { + public long maxHeaderTableSize() { return encoder.getMaxHeaderTableSize(); } } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java index 8079bd00f7a..5a4c91ee89b 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java @@ -87,7 +87,7 @@ public final class Http2CodecUtil { public static final char SETTINGS_MAX_HEADER_LIST_SIZE = 6; public static final int NUM_STANDARD_SETTINGS = 6; - public static final int MAX_HEADER_TABLE_SIZE = Integer.MAX_VALUE; // Size limited by HPACK library + public static final long MAX_HEADER_TABLE_SIZE = MAX_UNSIGNED_INT; public static final long MAX_CONCURRENT_STREAMS = MAX_UNSIGNED_INT; public static final int MAX_INITIAL_WINDOW_SIZE = Integer.MAX_VALUE; public static final int MAX_FRAME_SIZE_LOWER_BOUND = 0x4000; diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeaderTable.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeaderTable.java index b9636471db1..b009a7fe2e7 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeaderTable.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeaderTable.java @@ -24,12 +24,12 @@ public interface Http2HeaderTable { /** * Sets the maximum size of the HPACK header table used for decoding HTTP/2 headers. */ - void maxHeaderTableSize(int max) throws Http2Exception; + void maxHeaderTableSize(long max) throws Http2Exception; /** * Gets the maximum size of the HPACK header table used for decoding HTTP/2 headers. */ - int maxHeaderTableSize(); + long maxHeaderTableSize(); /** * Sets the maximum allowed header elements. diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java index 39695942057..7ef0c4c8e8d 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java @@ -87,7 +87,7 @@ public Long headerTableSize() { * * @throws IllegalArgumentException if verification of the setting fails. */ - public Http2Settings headerTableSize(int value) { + public Http2Settings headerTableSize(long value) { put(SETTINGS_HEADER_TABLE_SIZE, Long.valueOf(value)); return this; } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java index b67a9dc4ca2..a4dc6c0be22 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java @@ -74,7 +74,7 @@ public final class Decoder { private final DynamicTable dynamicTable; private final HuffmanDecoder huffmanDecoder; private final int maxHeadersLength; - private int maxDynamicTableSize; + private long maxDynamicTableSize; private int encoderMaxDynamicTableSize; private boolean maxDynamicTableSizeChangeRequired; @@ -305,7 +305,7 @@ public void decode(ByteBuf in, Http2Headers headers) throws Http2Exception { * Set the maximum table size. 
If this is below the maximum size of the dynamic table used by * the encoder, the beginning of the next header block MUST signal this change. */ - public void setMaxHeaderTableSize(int maxHeaderTableSize) { + public void setMaxHeaderTableSize(long maxHeaderTableSize) { maxDynamicTableSize = maxHeaderTableSize; if (maxDynamicTableSize < encoderMaxDynamicTableSize) { // decoder requires less space than encoder @@ -319,7 +319,7 @@ public void setMaxHeaderTableSize(int maxHeaderTableSize) { * Return the maximum table size. This is the maximum size allowed by both the encoder and the * decoder. */ - public int getMaxHeaderTableSize() { + public long getMaxHeaderTableSize() { return dynamicTable.capacity(); } @@ -333,7 +333,7 @@ int length() { /** * Return the size of the dynamic table. Exposed for testing. */ - int size() { + long size() { return dynamicTable.size(); } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/DynamicTable.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/DynamicTable.java index a623b5b29a5..a8a67c037a2 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/DynamicTable.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/DynamicTable.java @@ -31,6 +31,8 @@ */ package io.netty.handler.codec.http2.internal.hpack; +import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_HEADER_TABLE_SIZE; +import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_HEADER_TABLE_SIZE; import static io.netty.handler.codec.http2.internal.hpack.HeaderField.HEADER_ENTRY_OVERHEAD; final class DynamicTable { @@ -40,7 +42,7 @@ final class DynamicTable { int head; int tail; private int size; - private int capacity = -1; // ensure setCapacity creates the array + private long capacity = -1; // ensure setCapacity creates the array /** * Creates a new dynamic table with the specified initial capacity. @@ -65,14 +67,14 @@ public int length() { /** * Return the current size of the dynamic table. This is the sum of the size of the entries. */ - public int size() { + public long size() { return size; } /** * Return the maximum allowable size of the dynamic table. */ - public int capacity() { + public long capacity() { return capacity; } @@ -149,11 +151,10 @@ public void clear() { * Set the maximum size of the dynamic table. Entries are evicted from the dynamic table until * the size of the table is less than or equal to the maximum size. 
*/ - public void setCapacity(int capacity) { - if (capacity < 0) { - throw new IllegalArgumentException("Illegal Capacity: " + capacity); + public void setCapacity(long capacity) { + if (capacity < MIN_HEADER_TABLE_SIZE || capacity > MAX_HEADER_TABLE_SIZE) { + throw new IllegalArgumentException("capacity is invalid: " + capacity); } - // initially capacity will be -1 so init won't return here if (this.capacity == capacity) { return; @@ -169,7 +170,7 @@ public void setCapacity(int capacity) { } } - int maxEntries = capacity / HEADER_ENTRY_OVERHEAD; + int maxEntries = (int) (capacity / HEADER_ENTRY_OVERHEAD); if (capacity % HEADER_ENTRY_OVERHEAD != 0) { maxEntries++; } @@ -192,8 +193,8 @@ public void setCapacity(int capacity) { } } - this.tail = 0; - this.head = tail + len; - this.headerFields = tmp; + tail = 0; + head = tail + len; + headerFields = tmp; } } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java index a4c9e86140a..c28a950e05e 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java @@ -37,6 +37,8 @@ import java.util.Arrays; +import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_HEADER_TABLE_SIZE; +import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_HEADER_TABLE_SIZE; import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.IndexType.INCREMENTAL; import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.IndexType.NEVER; import static io.netty.handler.codec.http2.internal.hpack.HpackUtil.IndexType.NONE; @@ -52,22 +54,22 @@ public final class Encoder { AsciiString.EMPTY_STRING, Integer.MAX_VALUE, null); private final HuffmanEncoder huffmanEncoder = new HuffmanEncoder(); private final byte hashMask; - private int size; - private int capacity; + private long size; + private long capacity; /** * Creates a new encoder. */ - public Encoder(int maxHeaderTableSize) { + public Encoder(long maxHeaderTableSize) { this(maxHeaderTableSize, 16); } /** * Creates a new encoder. */ - public Encoder(int maxHeaderTableSize, int arraySizeHint) { - if (maxHeaderTableSize < 0) { - throw new IllegalArgumentException("Illegal Capacity: " + maxHeaderTableSize); + public Encoder(long maxHeaderTableSize, int arraySizeHint) { + if (maxHeaderTableSize < MIN_HEADER_TABLE_SIZE || maxHeaderTableSize > MAX_HEADER_TABLE_SIZE) { + throw new IllegalArgumentException("maxHeaderTableSize is invalid: " + maxHeaderTableSize); } capacity = maxHeaderTableSize; // Enforce a bound of [2, 128] because hashMask is a byte. The max possible value of hashMask is one less @@ -133,22 +135,23 @@ public void encodeHeader(ByteBuf out, CharSequence name, CharSequence value, boo /** * Set the maximum table size. 
*/ - public void setMaxHeaderTableSize(ByteBuf out, int maxHeaderTableSize) { - if (maxHeaderTableSize < 0) { - throw new IllegalArgumentException("Illegal Capacity: " + maxHeaderTableSize); + public void setMaxHeaderTableSize(ByteBuf out, long maxHeaderTableSize) { + if (maxHeaderTableSize < MIN_HEADER_TABLE_SIZE || maxHeaderTableSize > MAX_HEADER_TABLE_SIZE) { + throw new IllegalArgumentException("maxHeaderTableSize is invalid: " + maxHeaderTableSize); } if (capacity == maxHeaderTableSize) { return; } capacity = maxHeaderTableSize; ensureCapacity(0); - encodeInteger(out, 0x20, 5, maxHeaderTableSize); + // Casting to integer is safe as we verified the maxHeaderTableSize is a valid unsigned int. + encodeInteger(out, 0x20, 5, (int) maxHeaderTableSize); } /** * Return the maximum table size. */ - public int getMaxHeaderTableSize() { + public long getMaxHeaderTableSize() { return capacity; } @@ -252,7 +255,7 @@ int length() { /** * Return the size of the dynamic table. Exposed for testing. */ - int size() { + long size() { return size; }
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java index cb33ca136e0..514d53e8770 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java @@ -20,6 +20,7 @@ import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; +import io.netty.handler.codec.http.HttpUtil; import org.junit.Before; import org.junit.Test; @@ -86,4 +87,21 @@ public void boundarySettingsShouldBeSet() { settings.maxHeaderListSize((int) settingsValueUpperBound); assertEquals(Integer.MAX_VALUE, (long) settings.maxHeaderListSize()); } + + @Test + public void headerTableSizeUnsignedInt() { + final long value = 1L << 31; + settings.put(Http2CodecUtil.SETTINGS_HEADER_TABLE_SIZE, (Long) value); + assertEquals(value, (long) settings.get(Http2CodecUtil.SETTINGS_HEADER_TABLE_SIZE)); + } + + @Test(expected = IllegalArgumentException.class) + public void headerTableSizeBoundCheck() { + settings.put(Http2CodecUtil.SETTINGS_HEADER_TABLE_SIZE, (Long) Long.MAX_VALUE); + } + + @Test(expected = IllegalArgumentException.class) + public void headerTableSizeBoundCheck2() { + settings.put(Http2CodecUtil.SETTINGS_HEADER_TABLE_SIZE, Long.valueOf(-1L)); + } }
train
train
2016-09-09T07:57:41
"2016-08-29T16:10:30Z"
buchgr
val
netty/netty/5800_5825
netty/netty
netty/netty/5800
netty/netty/5825
[ "timestamp(timedelta=24.0, similarity=0.8764989131870482)" ]
0b5e75a614ca29810c7ef1b695c0ca962ec7b004
5273802db25aed721a06c2bc6273575df022b749
[ "@akurilov let me fix this.\n", "also FYI Netty 5.x has been discontinued.\n", "Fixed by https://github.com/netty/netty/pull/5825\n" ]
[ "src.limit() -> limit\n", "Can you add a comment why you explicitly use `Unpooled` here instead of just using `alloc`?\n", "on line 402 ... can you change `region.transfered()` -> `region.transferred()` while you are here?\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Add the missing @deprecated Javadoc tag. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AMissingDeprecatedCheck)\n", "actually I should use the allocator.\n", "false positive\n", "fixed\n", "fixed\n" ]
"2016-09-15T16:20:52Z"
[ "defect" ]
Epoll transport doesn't work with custom FileRegion implementation
- Netty version: 4.1.5.Final, 5.0.0.Alpha2 - Tried to use custom FileRegion implementation with epoll transport - Expected: any FileRegion DMA implementation works with epoll transport as with NIO transport - Observed: epoll transport expects only the particular DefaultFileRegion implementation: ``` java.lang.UnsupportedOperationException: unsupported message type: DataItemFileRegion (expected: ByteBuf, DefaultFileRegion) at io.netty.channel.epoll.AbstractEpollStreamChannel.filterOutboundMessage(AbstractEpollStreamChannel.java:539) at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:799) at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1291) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:748) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:740) at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:826) at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:733) at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.write(CombinedChannelDuplexHandler.java:525) at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:111) at io.netty.channel.CombinedChannelDuplexHandler.write(CombinedChannelDuplexHandler.java:345) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:748) at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:740) at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:826) at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:733) at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:714) ```
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java" ]
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java" ]
[ "testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java" ]
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java index f18e4c74303..78507d70e28 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java @@ -17,7 +17,9 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.ByteBufUtil; import io.netty.buffer.CompositeByteBuf; +import io.netty.buffer.Unpooled; import io.netty.channel.Channel; import io.netty.channel.ChannelConfig; import io.netty.channel.ChannelFuture; @@ -29,6 +31,7 @@ import io.netty.channel.ConnectTimeoutException; import io.netty.channel.DefaultFileRegion; import io.netty.channel.EventLoop; +import io.netty.channel.FileRegion; import io.netty.channel.RecvByteBufAllocator; import io.netty.channel.socket.DuplexChannel; import io.netty.channel.unix.FileDescriptor; @@ -44,6 +47,7 @@ import java.nio.ByteBuffer; import java.nio.channels.ClosedChannelException; import java.nio.channels.ConnectionPendingException; +import java.nio.channels.WritableByteChannel; import java.util.Queue; import java.util.concurrent.Executor; import java.util.concurrent.ScheduledFuture; @@ -83,6 +87,8 @@ public abstract class AbstractEpollStreamChannel extends AbstractEpollChannel im private FileDescriptor pipeIn; private FileDescriptor pipeOut; + private WritableByteChannel byteChannel; + /** * @deprecated Use {@link #AbstractEpollStreamChannel(Channel, Socket)}. */ @@ -372,7 +378,7 @@ private boolean writeBytesMultiple( * @param region the {@link DefaultFileRegion} from which the bytes should be written * @return amount the amount of written bytes */ - private boolean writeFileRegion( + private boolean writeDefaultFileRegion( ChannelOutboundBuffer in, DefaultFileRegion region, int writeSpinCount) throws Exception { final long regionCount = region.count(); if (region.transferred() >= regionCount) { @@ -393,7 +399,43 @@ private boolean writeFileRegion( } flushedAmount += localFlushedAmount; - if (region.transfered() >= regionCount) { + if (region.transferred() >= regionCount) { + done = true; + break; + } + } + + if (flushedAmount > 0) { + in.progress(flushedAmount); + } + + if (done) { + in.remove(); + } + return done; + } + + private boolean writeFileRegion( + ChannelOutboundBuffer in, FileRegion region, final int writeSpinCount) throws Exception { + if (region.transferred() >= region.count()) { + in.remove(); + return true; + } + + boolean done = false; + long flushedAmount = 0; + + if (byteChannel == null) { + byteChannel = new SocketWritableByteChannel(); + } + for (int i = writeSpinCount - 1; i >= 0; i--) { + final long localFlushedAmount = region.transferTo(byteChannel, region.transferred()); + if (localFlushedAmount == 0) { + break; + } + + flushedAmount += localFlushedAmount; + if (region.transferred() >= region.count()) { done = true; break; } @@ -448,15 +490,19 @@ protected boolean doWriteSingle(ChannelOutboundBuffer in, int writeSpinCount) th // The outbound buffer contains only one message or it contains a file region. 
Object msg = in.current(); if (msg instanceof ByteBuf) { - ByteBuf buf = (ByteBuf) msg; - if (!writeBytes(in, buf, writeSpinCount)) { + if (!writeBytes(in, (ByteBuf) msg, writeSpinCount)) { // was not able to write everything so break here we will get notified later again once // the network stack can handle more writes. return false; } } else if (msg instanceof DefaultFileRegion) { - DefaultFileRegion region = (DefaultFileRegion) msg; - if (!writeFileRegion(in, region, writeSpinCount)) { + if (!writeDefaultFileRegion(in, (DefaultFileRegion) msg, writeSpinCount)) { + // was not able to write everything so break here we will get notified later again once + // the network stack can handle more writes. + return false; + } + } else if (msg instanceof FileRegion) { + if (!writeFileRegion(in, (FileRegion) msg, writeSpinCount)) { // was not able to write everything so break here we will get notified later again once // the network stack can handle more writes. return false; @@ -533,7 +579,7 @@ protected Object filterOutboundMessage(Object msg) { return buf; } - if (msg instanceof DefaultFileRegion || msg instanceof SpliceOutTask) { + if (msg instanceof FileRegion || msg instanceof SpliceOutTask) { return msg; } @@ -1211,4 +1257,56 @@ public boolean spliceIn(RecvByteBufAllocator.Handle handle) { } } } + + private final class SocketWritableByteChannel implements WritableByteChannel { + + @Override + public int write(ByteBuffer src) throws IOException { + final int written; + int position = src.position(); + int limit = src.limit(); + if (src.isDirect()) { + written = fd().write(src, position, limit); + } else { + final int readableBytes = limit - position; + ByteBuf buffer = null; + try { + if (readableBytes == 0) { + buffer = Unpooled.EMPTY_BUFFER; + } else { + final ByteBufAllocator alloc = alloc(); + if (alloc.isDirectBufferPooled()) { + buffer = alloc.directBuffer(readableBytes); + } else { + buffer = ByteBufUtil.threadLocalDirectBuffer(); + if (buffer == null) { + buffer = alloc.directBuffer(readableBytes); + } + } + } + buffer.writeBytes(src.duplicate()); + ByteBuffer nioBuffer = buffer.internalNioBuffer(buffer.readerIndex(), readableBytes); + written = fd().write(nioBuffer, nioBuffer.position(), nioBuffer.limit()); + } finally { + if (buffer != null) { + buffer.release(); + } + } + } + if (written > 0) { + src.position(position + written); + } + return written; + } + + @Override + public boolean isOpen() { + return fd().isOpen(); + } + + @Override + public void close() throws IOException { + fd().close(); + } + } }
diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java index 4a44c62171c..6b7ad3ca0e5 100644 --- a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java +++ b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java @@ -33,6 +33,7 @@ import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; +import java.nio.channels.WritableByteChannel; import java.util.Random; import java.util.concurrent.atomic.AtomicReference; @@ -52,6 +53,11 @@ public void testFileRegion() throws Throwable { run(); } + @Test + public void testCustomFileRegion() throws Throwable { + run(); + } + @Test public void testFileRegionNotAutoRead() throws Throwable { run(); @@ -68,23 +74,28 @@ public void testFileRegionVoidPromiseNotAutoRead() throws Throwable { } public void testFileRegion(ServerBootstrap sb, Bootstrap cb) throws Throwable { - testFileRegion0(sb, cb, false, true); + testFileRegion0(sb, cb, false, true, true); + } + + public void testCustomFileRegion(ServerBootstrap sb, Bootstrap cb) throws Throwable { + testFileRegion0(sb, cb, false, true, false); } public void testFileRegionVoidPromise(ServerBootstrap sb, Bootstrap cb) throws Throwable { - testFileRegion0(sb, cb, true, true); + testFileRegion0(sb, cb, true, true, true); } public void testFileRegionNotAutoRead(ServerBootstrap sb, Bootstrap cb) throws Throwable { - testFileRegion0(sb, cb, false, false); + testFileRegion0(sb, cb, false, false, true); } public void testFileRegionVoidPromiseNotAutoRead(ServerBootstrap sb, Bootstrap cb) throws Throwable { - testFileRegion0(sb, cb, true, false); + testFileRegion0(sb, cb, true, false, true); } private static void testFileRegion0( - ServerBootstrap sb, Bootstrap cb, boolean voidPromise, final boolean autoRead) throws Throwable { + ServerBootstrap sb, Bootstrap cb, boolean voidPromise, final boolean autoRead, boolean defaultFileRegion) + throws Throwable { sb.childOption(ChannelOption.AUTO_READ, autoRead); cb.option(ChannelOption.AUTO_READ, autoRead); @@ -140,6 +151,10 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E new FileInputStream(file).getChannel(), startOffset, data.length - bufferSize); FileRegion emptyRegion = new DefaultFileRegion(new FileInputStream(file).getChannel(), 0, 0); + if (!defaultFileRegion) { + region = new FileRegionWrapper(region); + emptyRegion = new FileRegionWrapper(emptyRegion); + } // Do write ByteBuf and then FileRegion to ensure that mixed writes work // Also, write an empty FileRegion to test if writing an empty FileRegion does not cause any issues. 
// @@ -229,4 +244,77 @@ public void exceptionCaught(ChannelHandlerContext ctx, } } } + + private static final class FileRegionWrapper implements FileRegion { + private final FileRegion region; + + FileRegionWrapper(FileRegion region) { + this.region = region; + } + + @Override + public int refCnt() { + return region.refCnt(); + } + + @Override + public long position() { + return region.position(); + } + + @Override + @Deprecated + public long transfered() { + return region.transfered(); + } + + @Override + public boolean release() { + return region.release(); + } + + @Override + public long transferred() { + return region.transferred(); + } + + @Override + public long count() { + return region.count(); + } + + @Override + public boolean release(int decrement) { + return region.release(decrement); + } + + @Override + public long transferTo(WritableByteChannel target, long position) throws IOException { + return region.transferTo(target, position); + } + + @Override + public FileRegion retain() { + region.retain(); + return this; + } + + @Override + public FileRegion retain(int increment) { + region.retain(increment); + return this; + } + + @Override + public FileRegion touch() { + region.touch(); + return this; + } + + @Override + public FileRegion touch(Object hint) { + region.touch(hint); + return this; + } + } }
train
train
2016-09-15T19:27:29
"2016-09-07T09:02:51Z"
akurilov
val
netty/netty/5821_5844
netty/netty
netty/netty/5821
netty/netty/5844
[ "timestamp(timedelta=70.0, similarity=0.9335052059039113)" ]
9b6cbf8ab1b3cd651e9ad6ef19af21605f270dac
516dab90085198035ae3e1ffd63d12b03d2a2208
[ "@henrik-lindqvist maybe you can provide a PR ?\n", "or provide a link to an RFC or specification which describes expected behavior and an input which doesn't produce the expected output.\n", "NetUtil.bytesToIpAddress(new byte[] { 83, -1, -20, 16 }, 0, 4) returns \"0.255.255.16\" not \"83.255.236.16\" as expected. See initial post for cause.\n" ]
[ "shouldnt this be 16 ?\n", "I think 15 should be correct. `\"000.000.000.000\".length() == 15`\n", "doh... right\n" ]
"2016-09-21T20:36:23Z"
[ "defect" ]
NetUtil.bytesToIpAddress broken for IPv4
For textual IPv4 representation the method incorrectly performs bit shifts on the array bytes. Netty version: 4.1.5.Final
[ "codec-socks/src/main/java/io/netty/handler/codec/socksx/v5/Socks5AddressDecoder.java", "common/src/main/java/io/netty/util/NetUtil.java" ]
[ "codec-socks/src/main/java/io/netty/handler/codec/socksx/v5/Socks5AddressDecoder.java", "common/src/main/java/io/netty/util/NetUtil.java" ]
[ "codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandRequestDecoderTest.java", "codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandResponseDecoderTest.java", "common/src/test/java/io/netty/util/NetUtilTest.java" ]
diff --git a/codec-socks/src/main/java/io/netty/handler/codec/socksx/v5/Socks5AddressDecoder.java b/codec-socks/src/main/java/io/netty/handler/codec/socksx/v5/Socks5AddressDecoder.java index 83b15c42f1c..bc37469edb7 100644 --- a/codec-socks/src/main/java/io/netty/handler/codec/socksx/v5/Socks5AddressDecoder.java +++ b/codec-socks/src/main/java/io/netty/handler/codec/socksx/v5/Socks5AddressDecoder.java @@ -52,7 +52,7 @@ public String decodeAddress(Socks5AddressType addrType, ByteBuf in) throws Excep } else { byte[] tmp = new byte[IPv6_LEN]; in.readBytes(tmp); - return NetUtil.bytesToIpAddress(tmp, 0, IPv6_LEN); + return NetUtil.bytesToIpAddress(tmp); } } else { throw new DecoderException("unsupported address type: " + (addrType.byteValue() & 0xFF)); diff --git a/common/src/main/java/io/netty/util/NetUtil.java b/common/src/main/java/io/netty/util/NetUtil.java index b990d0e9889..a645cd70d63 100644 --- a/common/src/main/java/io/netty/util/NetUtil.java +++ b/common/src/main/java/io/netty/util/NetUtil.java @@ -16,7 +16,6 @@ package io.netty.util; import io.netty.util.internal.PlatformDependent; -import io.netty.util.internal.StringUtil; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; @@ -506,35 +505,33 @@ public static String intToIpAddress(int i) { * @throws IllegalArgumentException * if {@code length} is not {@code 4} nor {@code 16} */ - public static String bytesToIpAddress(byte[] bytes, int offset, int length) { - if (length == 4) { - StringBuilder buf = new StringBuilder(15); - - buf.append(bytes[offset ++] >> 24 & 0xff); - buf.append('.'); - buf.append(bytes[offset ++] >> 16 & 0xff); - buf.append('.'); - buf.append(bytes[offset ++] >> 8 & 0xff); - buf.append('.'); - buf.append(bytes[offset] & 0xff); - - return buf.toString(); - } - - if (length == 16) { - final StringBuilder sb = new StringBuilder(39); - final int endOffset = offset + 14; + public static String bytesToIpAddress(byte[] bytes) { + return bytesToIpAddress(bytes, 0, bytes.length); + } - for (; offset < endOffset; offset += 2) { - StringUtil.toHexString(sb, bytes, offset, 2); - sb.append(':'); + /** + * Converts 4-byte or 16-byte data into an IPv4 or IPv6 string respectively. 
+ * + * @throws IllegalArgumentException + * if {@code length} is not {@code 4} nor {@code 16} + */ + public static String bytesToIpAddress(byte[] bytes, int offset, int length) { + switch (length) { + case 4: { + return new StringBuilder(15) + .append(bytes[offset] & 0xff) + .append('.') + .append(bytes[offset + 1] & 0xff) + .append('.') + .append(bytes[offset + 2] & 0xff) + .append('.') + .append(bytes[offset + 3] & 0xff).toString(); } - StringUtil.toHexString(sb, bytes, offset, 2); - - return sb.toString(); + case 16: + return toAddressString(bytes, offset, false); + default: + throw new IllegalArgumentException("length: " + length + " (expected: 4 or 16)"); } - - throw new IllegalArgumentException("length: " + length + " (expected: 4 or 16)"); } public static boolean isValidIpV6Address(String ipAddress) { @@ -981,13 +978,17 @@ public static String toAddressString(InetAddress ip, boolean ipv4Mapped) { return ip.getHostAddress(); } if (!(ip instanceof Inet6Address)) { - throw new IllegalArgumentException("Unhandled type: " + ip.getClass()); + throw new IllegalArgumentException("Unhandled type: " + ip); } - final byte[] bytes = ip.getAddress(); + return toAddressString(ip.getAddress(), 0, ipv4Mapped); + } + + private static String toAddressString(byte[] bytes, int offset, boolean ipv4Mapped) { final int[] words = new int[IPV6_WORD_COUNT]; int i; - for (i = 0; i < words.length; ++i) { + final int end = offset + words.length; + for (i = offset; i < end; ++i) { words[i] = ((bytes[i << 1] & 0xff) << 8) | (bytes[(i << 1) + 1] & 0xff); }
diff --git a/codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandRequestDecoderTest.java b/codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandRequestDecoderTest.java index 8b0478a4b15..728ee1eb7a1 100755 --- a/codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandRequestDecoderTest.java +++ b/codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandRequestDecoderTest.java @@ -67,7 +67,7 @@ public void testCmdRequestDecoderIPv4() { @Test public void testCmdRequestDecoderIPv6() { String[] hosts = { - NetUtil.bytesToIpAddress(IPAddressUtil.textToNumericFormatV6("::1"), 0, 16) }; + NetUtil.bytesToIpAddress(IPAddressUtil.textToNumericFormatV6("::1")) }; int[] ports = {1, 32769, 65535}; for (Socks5CommandType cmdType: Arrays.asList(Socks5CommandType.BIND, Socks5CommandType.CONNECT, diff --git a/codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandResponseDecoderTest.java b/codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandResponseDecoderTest.java index 150a4169a1b..23f8cbe589f 100755 --- a/codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandResponseDecoderTest.java +++ b/codec-socks/src/test/java/io/netty/handler/codec/socksx/v5/Socks5CommandResponseDecoderTest.java @@ -22,7 +22,8 @@ import java.util.Arrays; -import static org.junit.Assert.*; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNull; public class Socks5CommandResponseDecoderTest { @@ -92,7 +93,7 @@ public void testSocksCmdResponseDecoderIncludingHost() { test(cmdStatus, Socks5AddressType.IPv6, "2001:db8:85a3:42:1000:8a2e:370:7334", 80); test(cmdStatus, Socks5AddressType.IPv6, - "1111:111:11:1:0:0:0:1", 80); + "1111:111:11:1::1", 80); } } } diff --git a/common/src/test/java/io/netty/util/NetUtilTest.java b/common/src/test/java/io/netty/util/NetUtilTest.java index f0f600cf8a2..ecd60d5d3c1 100644 --- a/common/src/test/java/io/netty/util/NetUtilTest.java +++ b/common/src/test/java/io/netty/util/NetUtilTest.java @@ -17,12 +17,17 @@ import org.junit.Test; +import java.net.Inet6Address; import java.net.InetAddress; import java.net.UnknownHostException; import java.util.HashMap; import java.util.Map; import java.util.Map.Entry; +import static io.netty.util.NetUtil.bytesToIpAddress; +import static io.netty.util.NetUtil.createByteArrayFromIpAddressString; +import static io.netty.util.NetUtil.getByName; +import static io.netty.util.NetUtil.toAddressString; import static org.junit.Assert.*; public class NetUtilTest { @@ -44,7 +49,8 @@ private static final class TestMap extends HashMap<String, String> { "10.255.255.254", "0afffffe", "172.18.5.4", "ac120504", "0.0.0.0", "00000000", - "127.0.0.1", "7f000001"); + "127.0.0.1", "7f000001", + "1.2.3.4", "01020304"); private static final Map<String, String> invalidIpV4Hosts = new TestMap( "1.256.3.4", null, @@ -438,30 +444,40 @@ public void testIsValidIpV6Address() { @Test public void testCreateByteArrayFromIpAddressString() { for (Entry<String, String> e : validIpV4Hosts.entrySet()) { - assertHexDumpEquals(e.getValue(), NetUtil.createByteArrayFromIpAddressString(e.getKey())); + assertHexDumpEquals(e.getValue(), createByteArrayFromIpAddressString(e.getKey())); } for (Entry<String, String> e : invalidIpV4Hosts.entrySet()) { - assertHexDumpEquals(e.getValue(), NetUtil.createByteArrayFromIpAddressString(e.getKey())); + assertHexDumpEquals(e.getValue(), createByteArrayFromIpAddressString(e.getKey())); } for (Entry<String, String> e : 
validIpV6Hosts.entrySet()) { - assertHexDumpEquals(e.getValue(), NetUtil.createByteArrayFromIpAddressString(e.getKey())); + assertHexDumpEquals(e.getValue(), createByteArrayFromIpAddressString(e.getKey())); } for (Entry<String, String> e : invalidIpV6Hosts.entrySet()) { - assertHexDumpEquals(e.getValue(), NetUtil.createByteArrayFromIpAddressString(e.getKey())); + assertHexDumpEquals(e.getValue(), createByteArrayFromIpAddressString(e.getKey())); + } + } + + @Test + public void testBytesToIpAddress() throws UnknownHostException { + for (Entry<String, String> e : validIpV4Hosts.entrySet()) { + assertEquals(e.getKey(), bytesToIpAddress(createByteArrayFromIpAddressString(e.getKey()))); + } + for (Entry<byte[], String> testEntry : ipv6ToAddressStrings.entrySet()) { + assertEquals(testEntry.getValue(), bytesToIpAddress(testEntry.getKey())); } } @Test public void testIp6AddressToString() throws UnknownHostException { for (Entry<byte[], String> testEntry : ipv6ToAddressStrings.entrySet()) { - assertEquals(testEntry.getValue(), NetUtil.toAddressString(InetAddress.getByAddress(testEntry.getKey()))); + assertEquals(testEntry.getValue(), toAddressString(InetAddress.getByAddress(testEntry.getKey()))); } } @Test public void testIp4AddressToString() throws UnknownHostException { for (Entry<String, String> e : validIpV4Hosts.entrySet()) { - assertEquals(e.getKey(), NetUtil.toAddressString(InetAddress.getByAddress(unhex(e.getValue())))); + assertEquals(e.getKey(), toAddressString(InetAddress.getByAddress(unhex(e.getValue())))); } } @@ -470,18 +486,18 @@ public void testIpv4MappedIp6GetByName() { for (Entry<String, String> testEntry : ipv4MappedToIPv6AddressStrings.entrySet()) { assertEquals( testEntry.getValue(), - NetUtil.toAddressString(NetUtil.getByName(testEntry.getKey(), true), true)); + toAddressString(getByName(testEntry.getKey(), true), true)); } } @Test public void testinvalidIpv4MappedIp6GetByName() { for (String testEntry : invalidIpV4Hosts.keySet()) { - assertNull(NetUtil.getByName(testEntry, true)); + assertNull(getByName(testEntry, true)); } for (String testEntry : invalidIpV6Hosts.keySet()) { - assertNull(NetUtil.getByName(testEntry, true)); + assertNull(getByName(testEntry, true)); } }
train
train
2016-09-22T08:15:27
"2016-09-14T01:31:07Z"
henrik-lindqvist
val
netty/netty/5831_5846
netty/netty
netty/netty/5831
netty/netty/5846
[ "timestamp(timedelta=30.0, similarity=0.8635042964389493)" ]
4639d56596f5dd023bcb515d3d9e3f16cbc3217e
3a5c8029863b004a9b62af3542d4b445175d71f9
[ "@bryce-anderson interested to provide a PR + unit test ?\n", "@normanmaurer Do you mean PR to fix or PR to demonstrate? I'm happy to provide a PR either way, I just want to be clear on what PR we want. 😄\n\nIf we are talking about a fix, the easy solution is to adopt the netty3 behavior of resetting the encoder on every new `HttpMessage`, but before making the change I think it's worth quickly discussing if that is an acceptable solution: in my minds eye someone introduced that `IllegalStateException` for a reason.\n", "let me fix this...\n", "@bryce-anderson actually I not understand how this happens. Can you provide a unit test that shows it ?\n", "@bryce-anderson nevermind... I think I understand it now\n", "Ah, and I just hacked together an example! Just in case, here it is: https://github.com/bryce-anderson/netty/commit/0fdfcacb67eff649699d266ea6425cb2b93b1218\n\nI apologize in advance, there is probably a cleaner way to do it.\n", "@bryce-anderson PTAL at https://github.com/netty/netty/pull/5846\n", "Fixed by https://github.com/netty/netty/pull/5846" ]
[ "move inner `if` to this level to make an `} else if { ... } else { ... }`\n", "done\n", "Could we define an overridable protected method in `HttpObjectEncoder` rather than duplicating HEAD check in `encode*Content()` methods? That way, `HttpObjectEncoder` could keep `encode*Content()` private. i.e.\n\n``` java\nprotected boolean isContentAlwaysEmpty(HttpMessage msg) { // subclasses will override.\n return false;\n}\n```\n\nActually, this is what we are doing for `HttpObjectDecoder`, so it's probably good to be consistent?\n", "@trustin actually I think this will not work as we need to map request to response which we only want to do in the `HttpServerCodec`. Or I am missing something ?\n", "@trustin ping\n", "@trustin ping again... sorry for being a PITA.", "@Scottmitch as @trustin seems to be busy wdyt ? I think what @trustin suggested can't be done so I think we should pull it in as it is.", "@normanmaurer - I think what @trustin was suggesting was to do something like this:\r\n\r\n\r\n```java\r\nclass HttpObjectEncoder {\r\n final boolean encodeNonChunkedContent(ByteBuf buf, Object msg, long contentLength, List<Object> out) {\r\n // use isContentAlwaysEmpty, release the msg && add empty buff if true\r\n }\r\n\r\n final boolean encodeChunkedContent(ChannelHandlerContext ctx, Object msg, long contentLength, List<Object> out) {\r\n // use isContentAlwaysEmpty, release the msg && add empty buff if true\r\n }\r\n\r\n protected boolean isContentAlwaysEmpty(HttpMessage msg) {\r\n return false;\r\n }\r\n}\r\n\r\n// In HttpServerCodec.java\r\nprivate final class HttpServerResponseEncoder extends HttpResponseEncoder {\r\n @Override\r\n protected boolean isContentAlwaysEmpty(HttpMessage msg) {\r\n if (msg instanceof LastHttpContent) {\r\n HttpMethod method = queue.poll();\r\n return HttpMethod.HEAD.equals(method);\r\n }\r\n return false;\r\n }\r\n}\r\n```", "@Scottmitch Ah got it! Let me update the PR, this is a good idea.", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Remove this unused method parameter \"msg\". [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1172)\n", "we have a bunch of conditionals on `state` ... can we make this a `switch`?", "the `List` interface allows base classes to do some interesting things which may make this iteration not do exactly what we want. Maybe its OK to assume the super class behavior only adds to the list, and doesn't change indexes though bcz its an internal class? This method is also used in HttpClientCodec too. We have discussed before but it seems like a good idea to remove ambiguity and overhead by removing the the list interface (for future releases).", "add a call to `assertFalse(ch.finishAndReleaseAll());` or something similar to assert there is no more data and we clean everything up.", "yep for future releases, for now its ok.", "+1" ]
"2016-09-21T22:05:12Z"
[ "defect" ]
HttpServerCodec cannot encode a response to HEAD request with a 'content-encoding: chunked' header
Netty versions: 4.x ### Context It is valid to send a response to a HEAD request that contains a `transfer-encoding: chunked` header, but there is no way to do this using the netty4 `HttpServerCodec`. ### Root of the problem The root cause is that the netty4 `HttpObjectEncoder` will [transition to the state](https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectEncoder.java#L80) `ST_CONTENT_CHUNK` and the only way to transition back to `ST_INIT` is through the `encodeChunkedContent` method which [will write the terminating length](https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectEncoder.java#L163-L180) (`0\r\n\r\n\r\n`), a protocol error when responding to a HEAD request. I don't believe this is a problem with netty3: it looks like submitting a new `HttpMessage` [effectively resets the encoder](https://github.com/netty/netty/blob/3.10/src/main/java/org/jboss/netty/handler/codec/http/HttpMessageEncoder.java#L62-L101), while the same strategy with netty4 [results in an illegal state exception](https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectEncoder.java#L68-L70). ### Workaround The `content-encoding` MAY be omitted ([RFC 7231](https://tools.ietf.org/html/rfc7231#section-4.3.2)), but it would be nice to have.
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectEncoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectEncoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectEncoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectEncoder.java index dba6f01de2e..10794983fdc 100755 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectEncoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectEncoder.java @@ -57,6 +57,7 @@ public abstract class HttpObjectEncoder<H extends HttpMessage> extends MessageTo private static final int ST_INIT = 0; private static final int ST_CONTENT_NON_CHUNK = 1; private static final int ST_CONTENT_CHUNK = 2; + private static final int ST_CONTENT_ALWAYS_EMPTY = 3; @SuppressWarnings("RedundantFieldInitialization") private int state = ST_INIT; @@ -77,7 +78,8 @@ protected void encode(ChannelHandlerContext ctx, Object msg, List<Object> out) t encodeInitialLine(buf, m); encodeHeaders(m.headers(), buf); buf.writeBytes(CRLF); - state = HttpUtil.isTransferEncodingChunked(m) ? ST_CONTENT_CHUNK : ST_CONTENT_NON_CHUNK; + state = isContentAlwaysEmpty(m) ? ST_CONTENT_ALWAYS_EMPTY : + HttpUtil.isTransferEncodingChunked(m) ? ST_CONTENT_CHUNK : ST_CONTENT_NON_CHUNK; } // Bypass the encoder in case of an empty buffer, so that the following idiom works: @@ -92,44 +94,50 @@ protected void encode(ChannelHandlerContext ctx, Object msg, List<Object> out) t } if (msg instanceof HttpContent || msg instanceof ByteBuf || msg instanceof FileRegion) { - - if (state == ST_INIT) { - throw new IllegalStateException("unexpected message type: " + StringUtil.simpleClassName(msg)); - } - - final long contentLength = contentLength(msg); - if (state == ST_CONTENT_NON_CHUNK) { - if (contentLength > 0) { - if (buf != null && buf.writableBytes() >= contentLength && msg instanceof HttpContent) { - // merge into other buffer for performance reasons - buf.writeBytes(((HttpContent) msg).content()); - out.add(buf); + switch (state) { + case ST_INIT: + throw new IllegalStateException("unexpected message type: " + StringUtil.simpleClassName(msg)); + case ST_CONTENT_ALWAYS_EMPTY: + out.add(EMPTY_BUFFER); + if (msg instanceof LastHttpContent) { + state = ST_INIT; + } + return; + case ST_CONTENT_NON_CHUNK: + final long contentLength = contentLength(msg); + if (contentLength > 0) { + if (buf != null && buf.writableBytes() >= contentLength && msg instanceof HttpContent) { + // merge into other buffer for performance reasons + buf.writeBytes(((HttpContent) msg).content()); + out.add(buf); + } else { + if (buf != null) { + out.add(buf); + } + out.add(encodeAndRetain(msg)); + } } else { if (buf != null) { out.add(buf); + } else { + // Need to produce some output otherwise an + // IllegalStateException will be thrown + out.add(EMPTY_BUFFER); } - out.add(encodeAndRetain(msg)); } - } else { + + if (msg instanceof LastHttpContent) { + state = ST_INIT; + } + return; + case ST_CONTENT_CHUNK: if (buf != null) { out.add(buf); - } else { - // Need to produce some output otherwise an - // IllegalStateException will be thrown - out.add(EMPTY_BUFFER); } - } - - if (msg instanceof LastHttpContent) { - state = ST_INIT; - } - } else if (state == ST_CONTENT_CHUNK) { - if (buf != null) { - out.add(buf); - } - encodeChunkedContent(ctx, msg, contentLength, out); - } else { - throw new Error(); + encodeChunkedContent(ctx, msg, contentLength(msg), out); + return; + default: + throw new Error(); } } else { if (buf != null) { @@ -187,6 +195,10 @@ private void encodeChunkedContent(ChannelHandlerContext ctx, Object msg, long co } } + boolean 
isContentAlwaysEmpty(@SuppressWarnings("unused") H msg) { + return false; + } + @Override public boolean acceptOutboundMessage(Object msg) throws Exception { return msg instanceof HttpObject || msg instanceof ByteBuf || msg instanceof FileRegion; diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java index 684af0b67a0..29a11992ad5 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java @@ -15,9 +15,14 @@ */ package io.netty.handler.codec.http; +import io.netty.buffer.ByteBuf; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.CombinedChannelDuplexHandler; +import java.util.ArrayDeque; +import java.util.List; +import java.util.Queue; + /** * A combination of {@link HttpRequestDecoder} and {@link HttpResponseEncoder} * which enables easier server side HTTP implementation. @@ -27,6 +32,9 @@ public final class HttpServerCodec extends CombinedChannelDuplexHandler<HttpRequestDecoder, HttpResponseEncoder> implements HttpServerUpgradeHandler.SourceCodec { + /** A queue that is used for correlating a request and a response. */ + private final Queue<HttpMethod> queue = new ArrayDeque<HttpMethod>(); + /** * Creates a new instance with the default decoder options * ({@code maxInitialLineLength (4096}}, {@code maxHeaderSize (8192)}, and @@ -40,15 +48,16 @@ public HttpServerCodec() { * Creates a new instance with the specified decoder options. */ public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize) { - super(new HttpRequestDecoder(maxInitialLineLength, maxHeaderSize, maxChunkSize), new HttpResponseEncoder()); + init(new HttpServerRequestDecoder(maxInitialLineLength, maxHeaderSize, maxChunkSize), + new HttpServerResponseEncoder()); } /** * Creates a new instance with the specified decoder options. 
*/ public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders) { - super(new HttpRequestDecoder(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders), - new HttpResponseEncoder()); + init(new HttpServerRequestDecoder(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders), + new HttpServerResponseEncoder()); } /** @@ -56,9 +65,10 @@ public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, int maxChunk */ public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders, int initialBufferSize) { - super( - new HttpRequestDecoder(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders, initialBufferSize), - new HttpResponseEncoder()); + init( + new HttpServerRequestDecoder(maxInitialLineLength, maxHeaderSize, maxChunkSize, + validateHeaders, initialBufferSize), + new HttpServerResponseEncoder()); } /** @@ -69,4 +79,41 @@ public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, int maxChunk public void upgradeFrom(ChannelHandlerContext ctx) { ctx.pipeline().remove(this); } + + private final class HttpServerRequestDecoder extends HttpRequestDecoder { + public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize) { + super(maxInitialLineLength, maxHeaderSize, maxChunkSize); + } + + public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, + boolean validateHeaders) { + super(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders); + } + + public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, + boolean validateHeaders, int initialBufferSize) { + super(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders, initialBufferSize); + } + + @Override + protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out) throws Exception { + int oldSize = out.size(); + super.decode(ctx, buffer, out); + int size = out.size(); + for (int i = oldSize; i < size; i++) { + Object obj = out.get(i); + if (obj instanceof HttpRequest) { + queue.add(((HttpRequest) obj).method()); + } + } + } + } + + private final class HttpServerResponseEncoder extends HttpResponseEncoder { + + @Override + boolean isContentAlwaysEmpty(@SuppressWarnings("unused") HttpResponse msg) { + return HttpMethod.HEAD.equals(queue.poll()); + } + } }
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java index 345e9e821fc..30878198947 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java @@ -114,6 +114,37 @@ public void test100Continue() throws Exception { ch.finish(); } + @Test + public void testChunkedHeadResponse() { + EmbeddedChannel ch = new EmbeddedChannel(new HttpServerCodec()); + + // Send the request headers. + assertTrue(ch.writeInbound(Unpooled.copiedBuffer( + "HEAD / HTTP/1.1\r\n\r\n", CharsetUtil.UTF_8))); + + HttpRequest request = ch.readInbound(); + assertEquals(HttpMethod.HEAD, request.method()); + LastHttpContent content = ch.readInbound(); + assertFalse(content.content().isReadable()); + content.release(); + + HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK); + HttpUtil.setTransferEncodingChunked(response, true); + assertTrue(ch.writeOutbound(response)); + assertTrue(ch.writeOutbound(LastHttpContent.EMPTY_LAST_CONTENT)); + assertTrue(ch.finish()); + + ByteBuf buf = ch.readOutbound(); + assertEquals("HTTP/1.1 200 OK\r\ntransfer-encoding: chunked\r\n\r\n", buf.toString(CharsetUtil.US_ASCII)); + buf.release(); + + buf = ch.readOutbound(); + assertFalse(buf.isReadable()); + buf.release(); + + assertFalse(ch.finishAndReleaseAll()); + } + private static ByteBuf prepareDataChunk(int size) { StringBuilder sb = new StringBuilder(); for (int i = 0; i < size; ++i) {
test
train
2016-12-08T19:51:42
"2016-09-16T16:31:49Z"
bryce-anderson
val
netty/netty/5862_5876
netty/netty
netty/netty/5862
netty/netty/5876
[ "timestamp(timedelta=15.0, similarity=0.9115880020539914)" ]
149916d052180dd2b0e0e98598f349454ea200ba
8a2b91153bb11413e6383a251a5cc0e4b67ce425
[ "Let me fix it\n", "Fixes by https://github.com/netty/netty/pull/5876\n" ]
[]
"2016-09-29T06:21:12Z"
[ "defect" ]
Http2EventAdapter's onUnknownFrame() is missing throws Http2Exception
`Http2EventAdapter` implements the `Http2FrameListener` interface but implements the `#onUnknownFrame(...)` method without the interface's `throws Http2Exception`. ... causing compiler errors down the road if you're trying to implement a delegate or something like that.
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java" ]
[]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java index 3f5665a40eb..b51e083f150 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java @@ -82,7 +82,7 @@ public void onWindowUpdateRead(ChannelHandlerContext ctx, int streamId, int wind @Override public void onUnknownFrame(ChannelHandlerContext ctx, byte frameType, int streamId, Http2Flags flags, - ByteBuf payload) { + ByteBuf payload) throws Http2Exception { } @Override
null
train
train
2016-09-24T21:33:04
"2016-09-25T19:09:09Z"
rkapsi
val
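As an illustration of the compile error described in the record above, the following sketch (class and field names are made up for illustration; it assumes only the `Http2EventAdapter` and `Http2FrameListener` signatures visible in the gold patch) compiles only once `onUnknownFrame` is declared with `throws Http2Exception`:

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http2.Http2EventAdapter;
import io.netty.handler.codec.http2.Http2Exception;
import io.netty.handler.codec.http2.Http2Flags;
import io.netty.handler.codec.http2.Http2FrameListener;

// Hypothetical adapter that forwards unknown frames to a wrapped Http2FrameListener.
final class DelegatingHttp2EventAdapter extends Http2EventAdapter {
    private final Http2FrameListener delegate;

    DelegatingHttp2EventAdapter(Http2FrameListener delegate) {
        this.delegate = delegate;
    }

    @Override
    public void onUnknownFrame(ChannelHandlerContext ctx, byte frameType, int streamId,
                               Http2Flags flags, ByteBuf payload) throws Http2Exception {
        // The delegate's method declares `throws Http2Exception`, so this override must too.
        // Before the patch the superclass omitted the throws clause, which made adding it in
        // an override (and therefore this kind of delegation) a compile error.
        delegate.onUnknownFrame(ctx, frameType, streamId, flags, payload);
    }
}
```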
netty/netty/5861_5877
netty/netty
netty/netty/5861
netty/netty/5877
[ "timestamp(timedelta=18.0, similarity=0.9318689056094328)" ]
149916d052180dd2b0e0e98598f349454ea200ba
d1381b687fab6f5d3020be4b12392b174a2eacbb
[ "@ewie sounds like a bug... let me fix it.\n", "@ewie should be fixed by https://github.com/netty/netty/pull/5877\n", "Fixed by https://github.com/netty/netty/pull/5877\n" ]
[]
"2016-09-29T06:22:14Z"
[ "defect" ]
HttpUtil.getContentLength(HttpMessage, long) throws unexpected NumberFormatException
The Javadocs of `HttpUtil.getContentLength(HttpMessage, long)` and its `int` overload state that the provided default value is returned if the Content-Length value is not a number. `NumberFormatException` is thrown instead [1] causing the method to behave like `HttpUtil.getContentLength(HttpMessage)` where the exception is explicitly documented for this case. [1] https://github.com/netty/netty/blob/149916d052180dd2b0e0e98598f349454ea200ba/codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java#L187
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java index 0d9433c698b..4d165088e67 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java @@ -184,7 +184,11 @@ public static long getContentLength(HttpMessage message) { public static long getContentLength(HttpMessage message, long defaultValue) { String value = message.headers().get(HttpHeaderNames.CONTENT_LENGTH); if (value != null) { - return Long.parseLong(value); + try { + return Long.parseLong(value); + } catch (NumberFormatException ignore) { + return defaultValue; + } } // We know the content length if it's a Web Socket message even if
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java index 1e9615ce8f5..93478d645d0 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java @@ -100,4 +100,13 @@ public void testGetMimeType() { message.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/html; charset=utf-8"); assertEquals("text/html", HttpUtil.getMimeType(message)); } + + @Test + public void testGetContentLengthDefaultValue() { + HttpMessage message = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK); + assertNull(message.headers().get(HttpHeaderNames.CONTENT_LENGTH)); + message.headers().set(HttpHeaderNames.CONTENT_LENGTH, "bar"); + assertEquals("bar", message.headers().get(HttpHeaderNames.CONTENT_LENGTH)); + assertEquals(1L, HttpUtil.getContentLength(message, 1L)); + } }
val
train
2016-09-24T21:33:04
"2016-09-25T15:20:00Z"
ewie
val
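A minimal sketch of the documented behaviour restored by the record above (assuming the Netty 4.1 codec-http API used in its gold patch; the class name is invented for illustration):

```java
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public final class ContentLengthDefaultExample {
    public static void main(String[] args) {
        HttpMessage message = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        // A Content-Length value that is not a number.
        message.headers().set(HttpHeaderNames.CONTENT_LENGTH, "bar");
        // With the fix this prints the supplied default (1); before it, the call threw
        // NumberFormatException despite what the Javadoc promised.
        System.out.println(HttpUtil.getContentLength(message, 1L));
    }
}
```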
netty/netty/5933_5939
netty/netty
netty/netty/5933
netty/netty/5939
[ "timestamp(timedelta=62.0, similarity=0.8592571639964799)" ]
9cfa4675548e368f3c905c0df31d18912d43cf08
cb7a5c9fd552997ecb3ffd9be859c18f79601d1c
[ "Adding some new details. Turns out we only need to issue one read to trigger the handshake (no matter how many messages it takes to complete for HTTP and SOCKS5) given that [`ProxyHandler.readComplete()`](https://github.com/netty/netty/blob/4.1/handler-proxy/src/main/java/io/netty/handler/proxy/ProxyHandler.java#L387-L389) will keep reading for us.\n", "Fixed by https://github.com/netty/netty/pull/5939\n" ]
[ "ThreadLocalRandom.current().nextBoolean()\n", "Thanks @normanmaurer! Just updated the branch.\n" ]
"2016-10-24T18:01:45Z"
[ "defect" ]
ProxyHandler stalls the proxy handshake when auto-read=false
We're (Finagle) using the N4 tooling around proxies and just discovered a bug in [`ProxyHandler`](https://github.com/netty/netty/blob/4.1/handler-proxy/src/main/java/io/netty/handler/proxy/ProxyHandler.java). When auto-read is disabled (our default) a connection established with the proxy server goes into a stall state waiting for a first response and never receives that (fires a handshake timeout). This is a common problem and might be reproduced with both HTTP and SOCKS5. While debugging this we noticed that a response was actually sent by a server, but was never read from the pipeline by a `ProxyHandler` since [it doesn't issue a read-request explicitly](https://github.com/netty/netty/blob/4.1/handler-proxy/src/main/java/io/netty/handler/proxy/ProxyHandler.java#L194-L211) and relies on auto-read (or previous handlers to issue the read). The solution to this bug is pretty straightforward - we have to issue a read request (once for HTTP and twice for SOCKS5) if the auto-read is disabled. We're working on the PR (with the fix) right now.
[ "handler-proxy/src/main/java/io/netty/handler/proxy/ProxyHandler.java" ]
[ "handler-proxy/src/main/java/io/netty/handler/proxy/ProxyHandler.java" ]
[ "handler-proxy/src/test/java/io/netty/handler/proxy/ProxyHandlerTest.java" ]
diff --git a/handler-proxy/src/main/java/io/netty/handler/proxy/ProxyHandler.java b/handler-proxy/src/main/java/io/netty/handler/proxy/ProxyHandler.java index 8d5b80b0aad..756de090a0a 100644 --- a/handler-proxy/src/main/java/io/netty/handler/proxy/ProxyHandler.java +++ b/handler-proxy/src/main/java/io/netty/handler/proxy/ProxyHandler.java @@ -208,6 +208,8 @@ public void run() { if (initialMessage != null) { sendToProxyServer(initialMessage); } + + readIfNeeded(ctx); } /** @@ -384,9 +386,7 @@ public final void channelReadComplete(ChannelHandlerContext ctx) throws Exceptio if (suppressChannelReadComplete) { suppressChannelReadComplete = false; - if (!ctx.channel().config().isAutoRead()) { - ctx.read(); - } + readIfNeeded(ctx); } else { ctx.fireChannelReadComplete(); } @@ -412,6 +412,12 @@ public final void flush(ChannelHandlerContext ctx) throws Exception { } } + private static void readIfNeeded(ChannelHandlerContext ctx) { + if (!ctx.channel().config().isAutoRead()) { + ctx.read(); + } + } + private void writePendingWrites() { if (pendingWrites != null) { pendingWrites.removeAndWriteAll();
diff --git a/handler-proxy/src/test/java/io/netty/handler/proxy/ProxyHandlerTest.java b/handler-proxy/src/test/java/io/netty/handler/proxy/ProxyHandlerTest.java index 1c5cb5ec06a..697b57bfe41 100644 --- a/handler-proxy/src/test/java/io/netty/handler/proxy/ProxyHandlerTest.java +++ b/handler-proxy/src/test/java/io/netty/handler/proxy/ProxyHandlerTest.java @@ -26,6 +26,7 @@ import io.netty.channel.ChannelHandler; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelInitializer; +import io.netty.channel.ChannelOption; import io.netty.channel.ChannelPipeline; import io.netty.channel.EventLoopGroup; import io.netty.channel.SimpleChannelInboundHandler; @@ -61,6 +62,7 @@ import java.util.Queue; import java.util.concurrent.CountDownLatch; import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.ThreadLocalRandom; import java.util.concurrent.TimeUnit; import static org.hamcrest.CoreMatchers.*; @@ -363,9 +365,16 @@ private static final class SuccessTestHandler extends SimpleChannelInboundHandle final Queue<Throwable> exceptions = new LinkedBlockingQueue<Throwable>(); volatile int eventCount; + private static void readIfNeeded(ChannelHandlerContext ctx) { + if (!ctx.channel().config().isAutoRead()) { + ctx.read(); + } + } + @Override public void channelActive(ChannelHandlerContext ctx) throws Exception { ctx.writeAndFlush(Unpooled.copiedBuffer("A\n", CharsetUtil.US_ASCII)); + readIfNeeded(ctx); } @Override @@ -378,6 +387,7 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc // ProxyHandlers in the pipeline. Therefore, we send the 'B' message only on the first event. ctx.writeAndFlush(Unpooled.copiedBuffer("B\n", CharsetUtil.US_ASCII)); } + readIfNeeded(ctx); } } @@ -388,6 +398,7 @@ protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Except if ("2".equals(str)) { ctx.writeAndFlush(Unpooled.copiedBuffer("C\n", CharsetUtil.US_ASCII)); } + readIfNeeded(ctx); } @Override @@ -523,6 +534,7 @@ protected void test() throws Exception { Bootstrap b = new Bootstrap(); b.group(group); b.channel(NioSocketChannel.class); + b.option(ChannelOption.AUTO_READ, ThreadLocalRandom.current().nextBoolean()); b.resolver(NoopAddressResolverGroup.INSTANCE); b.handler(new ChannelInitializer<SocketChannel>() { @Override
train
train
2016-10-24T18:23:56
"2016-10-21T20:40:06Z"
vkostyukov
val
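To show the configuration that triggered the stall in the record above, here is a hedged sketch of a client bootstrap with auto-read disabled and an HTTP proxy handler first in the pipeline. The proxy address, target host, and event-loop sizing are placeholders, and it assumes `HttpProxyHandler` from the handler-proxy module; it is not the reproduction used in the issue.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.proxy.HttpProxyHandler;

import java.net.InetSocketAddress;

public final class ProxyAutoReadOffExample {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup(1);
        try {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    // Auto-read off: nothing pulls the proxy's handshake response off the wire
                    // unless ProxyHandler itself issues a read(), which is what the fix adds.
                    .option(ChannelOption.AUTO_READ, false)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // Placeholder proxy address for illustration only.
                            ch.pipeline().addFirst(
                                    new HttpProxyHandler(new InetSocketAddress("127.0.0.1", 8080)));
                        }
                    });
            b.connect("example.com", 80).sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```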
netty/netty/5954_5956
netty/netty
netty/netty/5954
netty/netty/5956
[ "timestamp(timedelta=4.0, similarity=0.87246764186294)" ]
b2379e62f4ae50b55bce7c43e5bd008cf9aacfd1
d1d6ddeb9e51a00ff3ae146040500244991cef70
[ "Fixed by https://github.com/netty/netty/pull/5956\n" ]
[ "@mosesn please add missing `@Override`\n", "private final static\n", "private final static\n", "private final static\n", "@mosesn please add missing `@Override`\n", "assert return value\n", "assert return value\n", "assert return value\n", "assert return value\n", "ensure you call `channel.finish()` and assert the return value. Also ensure you consume all messages and call `release()` on them.\n", "same comments as above\n", "same comments as above\n", "@mosesn just a NIT but I think the methodname is kind of misleading as we not really check anything but just return the event. \n", "Isnt this a `FullHttpResponse` ? If so you should also call `release()` on it.\n", "Isnt this a `FullHttpRequest` ? If so you should also call `release()` on it.\n", "Isnt this a `FullHttpRequest` ? If so you should also call `release()` on it.\n", "`twitter.com` -> `netty.io` ;)\n", "call `last.release()`\n", "Isnt this a `FullHttpRequest` ? If so you should also call `release()` on it.\n", "whoops (◕ ω ◕✿)\n" ]
"2016-10-31T03:55:18Z"
[ "improvement" ]
HttpClientUpgradeHandler waits for the fully buffered stream
The HttpClientUpgradeHandler should only need the headers to decide whether to upgrade or not, but it waits for the entire response. This means that servers that respond with infinite streams will never know that their clients are only buffering the response. We should instead change it to only buffer once it has decided that it is switching protocols.
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpClientUpgradeHandlerTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java index ca4651023e5..235dbb1b5d8 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java @@ -195,6 +195,20 @@ protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> ou throw new IllegalStateException("Read HTTP response without requesting protocol switch"); } + if (msg instanceof HttpResponse) { + HttpResponse rep = (HttpResponse) msg; + if (!SWITCHING_PROTOCOLS.equals(rep.status())) { + // The server does not support the requested protocol, just remove this handler + // and continue processing HTTP. + // NOTE: not releasing the response since we're letting it propagate to the + // next handler. + ctx.fireUserEventTriggered(UpgradeEvent.UPGRADE_REJECTED); + removeThisHandler(ctx); + ctx.fireChannelRead(msg); + return; + } + } + if (msg instanceof FullHttpResponse) { response = (FullHttpResponse) msg; // Need to retain since the base class will release after returning from this method. @@ -212,16 +226,6 @@ protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> ou response = (FullHttpResponse) out.get(0); } - if (!SWITCHING_PROTOCOLS.equals(response.status())) { - // The server does not support the requested protocol, just remove this handler - // and continue processing HTTP. - // NOTE: not releasing the response since we're letting it propagate to the - // next handler. - ctx.fireUserEventTriggered(UpgradeEvent.UPGRADE_REJECTED); - removeThisHandler(ctx); - return; - } - CharSequence upgradeHeader = response.headers().get(HttpHeaderNames.UPGRADE); if (upgradeHeader != null && !AsciiString.contentEqualsIgnoreCase(upgradeCodec.protocol(), upgradeHeader)) { throw new IllegalStateException(
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpClientUpgradeHandlerTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpClientUpgradeHandlerTest.java new file mode 100644 index 00000000000..905556acc7a --- /dev/null +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpClientUpgradeHandlerTest.java @@ -0,0 +1,178 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.codec.http; + +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.embedded.EmbeddedChannel; + +import java.util.Collection; +import java.util.Collections; +import java.util.Map; + +import org.junit.Test; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertTrue; + +public class HttpClientUpgradeHandlerTest { + + private static final class FakeSourceCodec implements HttpClientUpgradeHandler.SourceCodec { + @Override + public void prepareUpgradeFrom(ChannelHandlerContext ctx) { + } + + @Override + public void upgradeFrom(ChannelHandlerContext ctx) { + } + } + + private static final class FakeUpgradeCodec implements HttpClientUpgradeHandler.UpgradeCodec { + @Override + public CharSequence protocol() { + return "fancyhttp"; + } + + @Override + public Collection<CharSequence> setUpgradeHeaders(ChannelHandlerContext ctx, HttpRequest upgradeRequest) { + return Collections.emptyList(); + } + + @Override + public void upgradeTo(ChannelHandlerContext ctx, FullHttpResponse upgradeResponse) throws Exception { + } + } + + private static final class UserEventCatcher extends ChannelInboundHandlerAdapter { + private Object evt; + + public Object getUserEvent() { + return evt; + } + + @Override + public void userEventTriggered(ChannelHandlerContext ctx, Object evt) { + this.evt = evt; + } + } + + @Test + public void testSuccessfulUpgrade() { + HttpClientUpgradeHandler.SourceCodec sourceCodec = new FakeSourceCodec(); + HttpClientUpgradeHandler.UpgradeCodec upgradeCodec = new FakeUpgradeCodec(); + HttpClientUpgradeHandler handler = new HttpClientUpgradeHandler(sourceCodec, upgradeCodec, 1024); + UserEventCatcher catcher = new UserEventCatcher(); + EmbeddedChannel channel = new EmbeddedChannel(catcher); + channel.pipeline().addFirst("upgrade", handler); + + assertTrue( + channel.writeOutbound(new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "netty.io"))); + FullHttpRequest request = channel.readOutbound(); + + assertEquals(request.headers().size(), 2); + assertTrue(request.headers().contains(HttpHeaderNames.UPGRADE, "fancyhttp", false)); + assertTrue(request.headers().contains("connection", "upgrade", false)); + assertTrue(request.release()); + assertEquals(catcher.getUserEvent(), HttpClientUpgradeHandler.UpgradeEvent.UPGRADE_ISSUED); + + HttpResponse upgradeResponse = + new 
DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.SWITCHING_PROTOCOLS); + + upgradeResponse.headers().add(HttpHeaderNames.UPGRADE, "fancyhttp"); + assertFalse(channel.writeInbound(upgradeResponse)); + assertFalse(channel.writeInbound(LastHttpContent.EMPTY_LAST_CONTENT)); + + assertEquals(catcher.getUserEvent(), HttpClientUpgradeHandler.UpgradeEvent.UPGRADE_SUCCESSFUL); + assertNull(channel.pipeline().get("upgrade")); + + assertTrue(channel.writeInbound(new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK))); + FullHttpResponse response = channel.readInbound(); + assertEquals(response.status(), HttpResponseStatus.OK); + assertTrue(response.release()); + assertFalse(channel.finish()); + } + + @Test + public void testUpgradeRejected() { + HttpClientUpgradeHandler.SourceCodec sourceCodec = new FakeSourceCodec(); + HttpClientUpgradeHandler.UpgradeCodec upgradeCodec = new FakeUpgradeCodec(); + HttpClientUpgradeHandler handler = new HttpClientUpgradeHandler(sourceCodec, upgradeCodec, 1024); + UserEventCatcher catcher = new UserEventCatcher(); + EmbeddedChannel channel = new EmbeddedChannel(catcher); + channel.pipeline().addFirst("upgrade", handler); + + assertTrue( + channel.writeOutbound(new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "netty.io"))); + FullHttpRequest request = channel.readOutbound(); + + assertEquals(request.headers().size(), 2); + assertTrue(request.headers().contains(HttpHeaderNames.UPGRADE, "fancyhttp", false)); + assertTrue(request.headers().contains("connection", "upgrade", false)); + assertTrue(request.release()); + assertEquals(catcher.getUserEvent(), HttpClientUpgradeHandler.UpgradeEvent.UPGRADE_ISSUED); + + HttpResponse upgradeResponse = + new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.SWITCHING_PROTOCOLS); + upgradeResponse.headers().add(HttpHeaderNames.UPGRADE, "fancyhttp"); + assertTrue(channel.writeInbound(new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK))); + assertTrue(channel.writeInbound(LastHttpContent.EMPTY_LAST_CONTENT)); + + assertEquals(catcher.getUserEvent(), HttpClientUpgradeHandler.UpgradeEvent.UPGRADE_REJECTED); + assertNull(channel.pipeline().get("upgrade")); + + HttpResponse response = channel.readInbound(); + assertEquals(response.status(), HttpResponseStatus.OK); + + LastHttpContent last = channel.readInbound(); + assertEquals(last, LastHttpContent.EMPTY_LAST_CONTENT); + assertFalse(last.release()); + assertFalse(channel.finish()); + } + + @Test + public void testEarlyBailout() { + HttpClientUpgradeHandler.SourceCodec sourceCodec = new FakeSourceCodec(); + HttpClientUpgradeHandler.UpgradeCodec upgradeCodec = new FakeUpgradeCodec(); + HttpClientUpgradeHandler handler = new HttpClientUpgradeHandler(sourceCodec, upgradeCodec, 1024); + UserEventCatcher catcher = new UserEventCatcher(); + EmbeddedChannel channel = new EmbeddedChannel(catcher); + channel.pipeline().addFirst("upgrade", handler); + + assertTrue( + channel.writeOutbound(new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "netty.io"))); + FullHttpRequest request = channel.readOutbound(); + + assertEquals(request.headers().size(), 2); + assertTrue(request.headers().contains(HttpHeaderNames.UPGRADE, "fancyhttp", false)); + assertTrue(request.headers().contains("connection", "upgrade", false)); + assertTrue(request.release()); + assertEquals(catcher.getUserEvent(), HttpClientUpgradeHandler.UpgradeEvent.UPGRADE_ISSUED); + + HttpResponse upgradeResponse = + new DefaultHttpResponse(HttpVersion.HTTP_1_1, 
HttpResponseStatus.SWITCHING_PROTOCOLS); + upgradeResponse.headers().add(HttpHeaderNames.UPGRADE, "fancyhttp"); + assertTrue(channel.writeInbound(new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK))); + + assertEquals(catcher.getUserEvent(), HttpClientUpgradeHandler.UpgradeEvent.UPGRADE_REJECTED); + assertNull(channel.pipeline().get("upgrade")); + + HttpResponse response = channel.readInbound(); + assertEquals(response.status(), HttpResponseStatus.OK); + assertFalse(channel.finish()); + } +}
test
train
2016-10-30T08:32:49
"2016-10-31T02:25:00Z"
mosesn
val
netty/netty/5945_5966
netty/netty
netty/netty/5945
netty/netty/5966
[ "timestamp(timedelta=98856.0, similarity=0.9859878293651412)" ]
b2379e62f4ae50b55bce7c43e5bd008cf9aacfd1
112c011df7c4bd36d6387e4a217095d06d5ac21b
[ "Maybe want to provide a PR with a fix?\n\n> Am 26.10.2016 um 21:25 schrieb ichaki5748 [email protected]:\n> \n> netty 4.1.6\n> \n> io.netty.handler.ssl.ReferenceCountedOpenSslEngine.OpenSslSession#initPeerCerts\n> \n> final byte[] clientCert;\n> ...\n> int len = clientCert.length + 1; // <------ BUG: raw certificate byte array size is used\n> peerCerts = new Certificate[len];\n> x509PeerCerts = new X509Certificate[len];\n> \n> But should probably be\n> \n> int len = chain.length + 1; // <------ so we have first entry as client cert + chain.length\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Yes, sure, will try to do it this week \n", "Thx\n\n> Am 26.10.2016 um 23:54 schrieb ichaki5748 [email protected]:\n> \n> Yes, sure, will try to do it this week\n> \n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Also `io.netty.handler.ssl.OpenSslJavaxX509Certificate.*`\ncould theoretically blow in concurrent usage because of lazy init in `io.netty.handler.ssl.OpenSslJavaxX509Certificate#unwrap` because \n`io.netty.handler.ssl.OpenSslJavaxX509Certificate#wrapped` is not `volatile` (certificate would not be fully initialized)? \n", "@iainmcgin please open separate PRs for the separate issues and ping me once you are ready. I will review then\n", "You probably meant to reference @ichaki5748\n", "@iainmcgin ups yes... sorry I need more ☕️ \n", "@normanmaurer Hi, please see https://github.com/netty/netty/pull/5967 https://github.com/netty/netty/pull/5966\n", "Fixed by https://github.com/netty/netty/pull/5967\n" ]
[]
"2016-10-31T21:45:48Z"
[ "defect" ]
OpenSslSession#initPeerCerts creates too long almost empty arrays
netty 4.1.6 io.netty.handler.ssl.ReferenceCountedOpenSslEngine.OpenSslSession#initPeerCerts ``` final byte[] clientCert; ... int len = clientCert.length + 1; // <------ BUG: raw certificate byte array size is used peerCerts = new Certificate[len]; x509PeerCerts = new X509Certificate[len]; ``` But should probably be `int len = chain.length + 1; // <------ so we have first entry as client cert + chain.length`
[ "handler/src/main/java/io/netty/handler/ssl/OpenSslJavaxX509Certificate.java", "handler/src/main/java/io/netty/handler/ssl/OpenSslX509Certificate.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/OpenSslJavaxX509Certificate.java", "handler/src/main/java/io/netty/handler/ssl/OpenSslX509Certificate.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslJavaxX509Certificate.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslJavaxX509Certificate.java index da10dedf0c1..ad4447f34b7 100644 --- a/handler/src/main/java/io/netty/handler/ssl/OpenSslJavaxX509Certificate.java +++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslJavaxX509Certificate.java @@ -30,7 +30,7 @@ final class OpenSslJavaxX509Certificate extends X509Certificate { private final byte[] bytes; - private X509Certificate wrapped; + private volatile X509Certificate wrapped; public OpenSslJavaxX509Certificate(byte[] bytes) { this.bytes = bytes; diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslX509Certificate.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslX509Certificate.java index 77d0713613a..faaac551a41 100644 --- a/handler/src/main/java/io/netty/handler/ssl/OpenSslX509Certificate.java +++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslX509Certificate.java @@ -34,7 +34,7 @@ final class OpenSslX509Certificate extends X509Certificate { private final byte[] bytes; - private X509Certificate wrapped; + private volatile X509Certificate wrapped; public OpenSslX509Certificate(byte[] bytes) { this.bytes = bytes;
null
train
train
2016-10-30T08:32:49
"2016-10-26T13:25:01Z"
ichaki5748
val
netty/netty/5945_5967
netty/netty
netty/netty/5945
netty/netty/5967
[ "timestamp(timedelta=12.0, similarity=0.9859878293651412)" ]
42fba015ce82ab4ab30e547c888db82fe74094e9
9d6548b72d9f2a416ba694ff894524c43526a9f8
[ "Maybe want to provide a PR with a fix?\n\n> Am 26.10.2016 um 21:25 schrieb ichaki5748 [email protected]:\n> \n> netty 4.1.6\n> \n> io.netty.handler.ssl.ReferenceCountedOpenSslEngine.OpenSslSession#initPeerCerts\n> \n> final byte[] clientCert;\n> ...\n> int len = clientCert.length + 1; // <------ BUG: raw certificate byte array size is used\n> peerCerts = new Certificate[len];\n> x509PeerCerts = new X509Certificate[len];\n> \n> But should probably be\n> \n> int len = chain.length + 1; // <------ so we have first entry as client cert + chain.length\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Yes, sure, will try to do it this week \n", "Thx\n\n> Am 26.10.2016 um 23:54 schrieb ichaki5748 [email protected]:\n> \n> Yes, sure, will try to do it this week\n> \n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Also `io.netty.handler.ssl.OpenSslJavaxX509Certificate.*`\ncould theoretically blow in concurrent usage because of lazy init in `io.netty.handler.ssl.OpenSslJavaxX509Certificate#unwrap` because \n`io.netty.handler.ssl.OpenSslJavaxX509Certificate#wrapped` is not `volatile` (certificate would not be fully initialized)? \n", "@iainmcgin please open separate PRs for the separate issues and ping me once you are ready. I will review then\n", "You probably meant to reference @ichaki5748\n", "@iainmcgin ups yes... sorry I need more ☕️ \n", "@normanmaurer Hi, please see https://github.com/netty/netty/pull/5967 https://github.com/netty/netty/pull/5966\n", "Fixed by https://github.com/netty/netty/pull/5967\n" ]
[ "@ichaki5748 can you make this method static\n", "please make static\n", "sure\n", "![MINOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-minor.png) Consider using varargs for methods or constructors which take an array the last parameter. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=pmd%3AUseVarargs)\n" ]
"2016-10-31T22:26:12Z"
[ "defect" ]
OpenSslSession#initPeerCerts creates too long almost empty arrays
netty 4.1.6 io.netty.handler.ssl.ReferenceCountedOpenSslEngine.OpenSslSession#initPeerCerts ``` final byte[] clientCert; ... int len = clientCert.length + 1; // <------ BUG: raw certificate byte array size is used peerCerts = new Certificate[len]; x509PeerCerts = new X509Certificate[len]; ``` But should probably be `int len = chain.length + 1; // <------ so we have first entry as client cert + chain.length`
[ "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java index 382f8606fc6..fce7410bb4d 100644 --- a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java +++ b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java @@ -59,6 +59,8 @@ import javax.security.cert.X509Certificate; import static io.netty.handler.ssl.OpenSsl.memoryAddress; +import static io.netty.util.internal.EmptyArrays.EMPTY_CERTIFICATES; +import static io.netty.util.internal.EmptyArrays.EMPTY_JAVAX_X509_CERTIFICATES; import static io.netty.util.internal.ObjectUtil.checkNotNull; import static javax.net.ssl.SSLEngineResult.HandshakeStatus.FINISHED; import static javax.net.ssl.SSLEngineResult.HandshakeStatus.NEED_UNWRAP; @@ -1308,6 +1310,14 @@ private static SSLEngineResult.HandshakeStatus pendingStatus(int pendingStatus) return pendingStatus > 0 ? NEED_WRAP : NEED_UNWRAP; } + private static boolean isEmpty(Object[] arr) { + return arr == null || arr.length == 0; + } + + private static boolean isEmpty(byte[] cert) { + return cert == null || cert.length == 0; + } + private SSLEngineResult.HandshakeStatus handshake() throws SSLException { if (handshakeState == HandshakeState.FINISHED) { return FINISHED; @@ -1595,9 +1605,9 @@ private final class OpenSslSession implements SSLSession, ApplicationProtocolAcc // These are guarded by synchronized(OpenSslEngine.this) as handshakeFinished() may be triggered by any // thread. private X509Certificate[] x509PeerCerts; + private Certificate[] peerCerts; private String protocol; private String applicationProtocol; - private Certificate[] peerCerts; private String cipher; private byte[] id; private long creationTime; @@ -1747,51 +1757,45 @@ void handshakeFinished() throws SSLException { private void initPeerCerts() { // Return the full chain from the JNI layer. byte[][] chain = SSL.getPeerCertChain(ssl); - final byte[] clientCert; - if (!clientMode) { + if (clientMode) { + if (isEmpty(chain)) { + peerCerts = EMPTY_CERTIFICATES; + x509PeerCerts = EMPTY_JAVAX_X509_CERTIFICATES; + } else { + peerCerts = new Certificate[chain.length]; + x509PeerCerts = new X509Certificate[chain.length]; + initCerts(chain, 0); + } + } else { // if used on the server side SSL_get_peer_cert_chain(...) will not include the remote peer // certificate. We use SSL_get_peer_certificate to get it in this case and add it to our // array later. 
// // See https://www.openssl.org/docs/ssl/SSL_get_peer_cert_chain.html - clientCert = SSL.getPeerCertificate(ssl); - } else { - clientCert = null; - } - - if (chain == null || chain.length == 0) { - if (clientCert == null || clientCert.length == 0) { - peerCerts = EmptyArrays.EMPTY_CERTIFICATES; - x509PeerCerts = EmptyArrays.EMPTY_JAVAX_X509_CERTIFICATES; + byte[] clientCert = SSL.getPeerCertificate(ssl); + if (isEmpty(clientCert)) { + peerCerts = EMPTY_CERTIFICATES; + x509PeerCerts = EMPTY_JAVAX_X509_CERTIFICATES; } else { - peerCerts = new Certificate[1]; - x509PeerCerts = new X509Certificate[1]; - - peerCerts[0] = new OpenSslX509Certificate(clientCert); - x509PeerCerts[0] = new OpenSslJavaxX509Certificate(clientCert); - } - } else if (clientCert == null || clientCert.length == 0) { - peerCerts = new Certificate[chain.length]; - x509PeerCerts = new X509Certificate[chain.length]; - - for (int a = 0; a < chain.length; ++a) { - byte[] bytes = chain[a]; - peerCerts[a] = new OpenSslX509Certificate(bytes); - x509PeerCerts[a] = new OpenSslJavaxX509Certificate(bytes); + if (isEmpty(chain)) { + peerCerts = new Certificate[] {new OpenSslX509Certificate(clientCert)}; + x509PeerCerts = new X509Certificate[] {new OpenSslJavaxX509Certificate(clientCert)}; + } else { + peerCerts = new Certificate[chain.length + 1]; + x509PeerCerts = new X509Certificate[chain.length + 1]; + peerCerts[0] = new OpenSslX509Certificate(clientCert); + x509PeerCerts[0] = new OpenSslJavaxX509Certificate(clientCert); + initCerts(chain, 1); + } } - } else { - int len = clientCert.length + 1; - peerCerts = new Certificate[len]; - x509PeerCerts = new X509Certificate[len]; - - peerCerts[0] = new OpenSslX509Certificate(clientCert); - x509PeerCerts[0] = new OpenSslJavaxX509Certificate(clientCert); + } + } - for (int a = 0, i = 1; a < chain.length; ++a, ++i) { - byte[] bytes = chain[a]; - peerCerts[i] = new OpenSslX509Certificate(bytes); - x509PeerCerts[i] = new OpenSslJavaxX509Certificate(bytes); - } + private void initCerts(byte[][] chain, int startPos) { + for (int i = 0; i < chain.length; i++) { + int certPos = startPos + i; + peerCerts[certPos] = new OpenSslX509Certificate(chain[i]); + x509PeerCerts[certPos] = new OpenSslJavaxX509Certificate(chain[i]); } } @@ -1859,7 +1863,7 @@ private String selectApplicationProtocol(List<String> protocols, @Override public Certificate[] getPeerCertificates() throws SSLPeerUnverifiedException { synchronized (ReferenceCountedOpenSslEngine.this) { - if (peerCerts == null || peerCerts.length == 0) { + if (isEmpty(peerCerts)) { throw new SSLPeerUnverifiedException("peer not verified"); } return peerCerts.clone(); @@ -1877,7 +1881,7 @@ public Certificate[] getLocalCertificates() { @Override public X509Certificate[] getPeerCertificateChain() throws SSLPeerUnverifiedException { synchronized (ReferenceCountedOpenSslEngine.this) { - if (x509PeerCerts == null || x509PeerCerts.length == 0) { + if (isEmpty(x509PeerCerts)) { throw new SSLPeerUnverifiedException("peer not verified"); } return x509PeerCerts.clone();
diff --git a/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java index 32f9c9345ae..b565b3da09d 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java @@ -53,6 +53,7 @@ import java.io.InputStream; import java.net.InetSocketAddress; import java.nio.ByteBuffer; +import java.security.cert.Certificate; import java.security.cert.CertificateException; import java.util.List; import java.util.concurrent.CountDownLatch; @@ -64,6 +65,7 @@ import javax.net.ssl.SSLException; import javax.net.ssl.SSLHandshakeException; import javax.net.ssl.SSLSession; +import javax.security.cert.X509Certificate; import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; @@ -866,7 +868,28 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc if (evt instanceof SslHandshakeCompletionEvent) { Throwable cause = ((SslHandshakeCompletionEvent) evt).cause(); if (cause == null) { - promise.setSuccess(null); + SSLSession session = ((SslHandler) ctx.pipeline().first()).engine().getSession(); + X509Certificate[] peerCertificateChain = session.getPeerCertificateChain(); + Certificate[] peerCertificates = session.getPeerCertificates(); + if (peerCertificateChain == null) { + promise.setFailure(new NullPointerException("peerCertificateChain")); + } else if (peerCertificates == null) { + promise.setFailure(new NullPointerException("peerCertificates")); + } else if (peerCertificateChain.length + peerCertificates.length != 4) { + String excTxtFmt = "peerCertificateChain.length:%s, peerCertificates.length:%s"; + promise.setFailure(new IllegalStateException(String.format(excTxtFmt, + peerCertificateChain.length, + peerCertificates.length))); + } else { + for (int i = 0; i < peerCertificateChain.length; i++) { + if (peerCertificateChain[i] == null || peerCertificates[i] == null) { + promise.setFailure( + new IllegalStateException("Certificate in chain is null")); + return; + } + } + promise.setSuccess(null); + } } else { promise.setFailure(cause); }
val
train
2016-11-02T08:14:48
"2016-10-26T13:25:01Z"
ichaki5748
val
netty/netty/5982_5983
netty/netty
netty/netty/5982
netty/netty/5983
[ "timestamp(timedelta=20.0, similarity=0.9229476379369648)" ]
5eebe9a06c6873e797b5f8c847086c35ae990ef5
f15732d2ab965aa1955ffeaf0c60cd6da2be5130
[ "Fixed by https://github.com/netty/netty/pull/5983\n" ]
[ "just make this one listener which always close after logging the error ? No need to have to listeners and at worst have both call close. After this I will merge.\n\n@bryce-anderson thanks!\n", "Done. Let me know if I misunderstood your comment. 😄 \n", "@bryce-anderson remove empty line and we are good :)\n", "Done. Sorry for the noise.\n" ]
"2016-11-04T21:33:59Z"
[ "defect" ]
HttpObjectAggregator closes the http/1.x connection without a 'Connection: close' header
Netty version: 4.1.6.Final The [HttpObjectAggregator](https://github.com/netty/netty/blob/eb7f751ba519cbcab47d640cd18757f09d077b55/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java) will take it upon itself to return a 413 response when the message has too large of body [here](https://github.com/netty/netty/blob/eb7f751ba519cbcab47d640cd18757f09d077b55/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java#L216-L249). On the response it sends back, it does not add a `Connection: close` header, presumably to place nice with 100-continue requests. However, for the common case the connection _is_ closed, and by unexpectedly closing the connection, netty can trigger problems for clients which attempt to reuse the connection since it has not received notice that its being closed.
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java index 908ac8d12b4..4e8455236cf 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java @@ -28,6 +28,7 @@ import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; +import static io.netty.handler.codec.http.HttpHeaderNames.CONNECTION; import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_LENGTH; import static io.netty.handler.codec.http.HttpUtil.getContentLength; @@ -89,12 +90,17 @@ public class HttpObjectAggregator new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.CONTINUE, Unpooled.EMPTY_BUFFER); private static final FullHttpResponse EXPECTATION_FAILED = new DefaultFullHttpResponse( HttpVersion.HTTP_1_1, HttpResponseStatus.EXPECTATION_FAILED, Unpooled.EMPTY_BUFFER); - private static final FullHttpResponse TOO_LARGE = new DefaultFullHttpResponse( + private static final FullHttpResponse TOO_LARGE_CLOSE = new DefaultFullHttpResponse( HttpVersion.HTTP_1_1, HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, Unpooled.EMPTY_BUFFER); + private static final FullHttpResponse TOO_LARGE = new DefaultFullHttpResponse( + HttpVersion.HTTP_1_1, HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, Unpooled.EMPTY_BUFFER); static { EXPECTATION_FAILED.headers().set(CONTENT_LENGTH, 0); TOO_LARGE.headers().set(CONTENT_LENGTH, 0); + + TOO_LARGE_CLOSE.headers().set(CONTENT_LENGTH, 0); + TOO_LARGE_CLOSE.headers().set(CONNECTION, HttpHeaderValues.CLOSE); } private final boolean closeOnExpectationFailed; @@ -216,22 +222,31 @@ protected void finishAggregation(FullHttpMessage aggregated) throws Exception { protected void handleOversizedMessage(final ChannelHandlerContext ctx, HttpMessage oversized) throws Exception { if (oversized instanceof HttpRequest) { // send back a 413 and close the connection - ChannelFuture future = ctx.writeAndFlush(TOO_LARGE.retainedDuplicate()).addListener( - new ChannelFutureListener() { - @Override - public void operationComplete(ChannelFuture future) throws Exception { - if (!future.isSuccess()) { - logger.debug("Failed to send a 413 Request Entity Too Large.", future.cause()); - ctx.close(); - } - } - }); // If the client started to send data already, close because it's impossible to recover. // If keep-alive is off and 'Expect: 100-continue' is missing, no need to leave the connection open. if (oversized instanceof FullHttpMessage || !HttpUtil.is100ContinueExpected(oversized) && !HttpUtil.isKeepAlive(oversized)) { - future.addListener(ChannelFutureListener.CLOSE); + ChannelFuture future = ctx.writeAndFlush(TOO_LARGE_CLOSE.retainedDuplicate()); + future.addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) throws Exception { + if (!future.isSuccess()) { + logger.debug("Failed to send a 413 Request Entity Too Large.", future.cause()); + } + ctx.close(); + } + }); + } else { + ctx.writeAndFlush(TOO_LARGE.retainedDuplicate()).addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) throws Exception { + if (!future.isSuccess()) { + logger.debug("Failed to send a 413 Request Entity Too Large.", future.cause()); + ctx.close(); + } + } + }); } // If an oversized request was handled properly and the connection is still alive
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java index ca4a5f3afa9..69464acf6aa 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java @@ -162,7 +162,7 @@ private static void checkOversizedRequest(HttpRequest message) { assertEquals(HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, response.status()); assertEquals("0", response.headers().get(HttpHeaderNames.CONTENT_LENGTH)); - if (serverShouldCloseConnection(message)) { + if (serverShouldCloseConnection(message, response)) { assertFalse(embedder.isOpen()); assertFalse(embedder.finish()); } else { @@ -170,7 +170,11 @@ private static void checkOversizedRequest(HttpRequest message) { } } - private static boolean serverShouldCloseConnection(HttpRequest message) { + private static boolean serverShouldCloseConnection(HttpRequest message, HttpResponse response) { + // If the response wasn't keep-alive, the server should close the connection. + if (!HttpUtil.isKeepAlive(response)) { + return true; + } // The connection should only be kept open if Expect: 100-continue is set, // or if keep-alive is on. if (HttpUtil.is100ContinueExpected(message)) {
train
train
2016-11-04T15:47:02
"2016-11-04T18:56:33Z"
bryce-anderson
val
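For reference, a short sketch of the response shape the record above argues for when the server intends to close the connection, built with the same codec-http constants the gold patch uses. This is construction only, not the aggregator's internal code, and the class name is invented.

```java
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;

public final class TooLargeCloseResponseExample {
    public static void main(String[] args) {
        // A 413 sent before closing: Content-Length: 0 plus an explicit Connection: close,
        // so the client is told the connection will not be reused.
        FullHttpResponse tooLargeClose = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, Unpooled.EMPTY_BUFFER);
        tooLargeClose.headers()
                .set(HttpHeaderNames.CONTENT_LENGTH, 0)
                .set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
        System.out.println(tooLargeClose.headers());
    }
}
```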
netty/netty/5987_5988
netty/netty
netty/netty/5987
netty/netty/5988
[ "timestamp(timedelta=34.0, similarity=0.909561793045737)" ]
5eebe9a06c6873e797b5f8c847086c35ae990ef5
34357e91b2813454fa376e8dffaa26b6cede0052
[ "One of the fields in PlatformDependent references PlatformDependent0 directly without being guarded by hasUnsafe.\n", "Is the issue you don't want to see the warning message, or is this causing a functional issue?\n\n@johnou - which field do you think is an issue?\n", "@johnou - your commit https://github.com/johnou/netty/commit/cff208eb2895134a487424b4b2dcd9c5cafc9f99 changes the initialization of `BYTE_ARRAY_BASE_OFFSET` ... is their a problem with just using the default as set by `PlatformDependent0` (currently `-1`)? IIRC it is assumed whenever this value is used it is guarded by an unsafe check ... did we miss a case?\n", "@Scottmitch BYTE_ARRAY_BASE_OFFSET is set to -1 in PlatformDependent0 if unsafe is unavailable, and if unsafe is unavailable, so is PlatformDependent0, therefore we need to use indirection when accessing it (cannot be directly how it is now otherwise PlatformDependent cannot be loaded)..\n\njava.lang.NoClassDefFoundError: Could not initialize class io.netty.util.internal.PlatformDependent0\n at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:101)\n", "@Scottmitch: this causes a functional issue in environments where sun.misc is not available, which is the default in OSGi frameworks\nI can make a pull request tomorrow, extracting the BYTE_ARRAY_BASE_OFFSET initialization into a method like addressSize0 does for ADDRESS_SIZE. Would that be something you could consider including?\n\nEdit: [johnou@cff208e](https://github.com/johnou/netty/commit/cff208eb2895134a487424b4b2dcd9c5cafc9f99) should fix the issue\n", "@jellenelis see https://github.com/netty/netty/pull/5988\n", "Thanks for the quick responses, BTW!\n", "thanks for clarifying ... we want to avoid loading `PlatformDependent0` if the `Unsafe` interface is not available.\n", "Fixed by https://github.com/netty/netty/pull/5988\n", "Hello I'm using netty version 4.1.8.Final on Apache Felix and still suffer from the same problem!\r\nlb:\r\n0|Active | 0|System Bundle (5.4.0)|5.4.0\r\n 1|Active | 1|Guava: Google Core Libraries for Java (18.0.0)|18.0.0\r\n 2|Active | 1|Jackson-annotations (2.7.0)|2.7.0\r\n 3|Active | 1|Jackson-core (2.7.4)|2.7.4\r\n 4|Active | 1|jackson-databind (2.7.4)|2.7.4\r\n 5|Active | 1|com.github.shumy.leffe-wg (1.0.0)|1.0.0\r\n 6|Active | 1|Netty/Buffer (4.1.8.Final)|4.1.8.Final\r\n 7|Active | 1|Netty/Codec (4.1.8.Final)|4.1.8.Final\r\n 8|Active | 1|Netty/Codec/DNS (4.1.8.Final)|4.1.8.Final\r\n 9|Active | 1|Netty/Codec/HTTP (4.1.8.Final)|4.1.8.Final\r\n 10|Active | 1|Netty/Codec/HTTP2 (4.1.8.Final)|4.1.8.Final\r\n 11|Active | 1|Netty/Codec/Socks (4.1.8.Final)|4.1.8.Final\r\n 12|Active | 1|Netty/Common (4.1.8.Final)|4.1.8.Final\r\n 13|Active | 1|Netty/Handler (4.1.8.Final)|4.1.8.Final\r\n 14|Active | 1|Netty/Handler/Proxy (4.1.8.Final)|4.1.8.Final\r\n 15|Active | 1|Netty/Resolver (4.1.8.Final)|4.1.8.Final\r\n 16|Active | 1|Netty/Resolver/DNS (4.1.8.Final)|4.1.8.Final\r\n 17|Active | 1|Netty/Transport (4.1.8.Final)|4.1.8.Final\r\n 18|Active | 1|Apache Felix Configuration Admin Service (1.8.14)|1.8.14\r\n 19|Active | 1|Apache Felix Gogo Command (0.16.0)|0.16.0\r\n 20|Active | 1|Apache Felix Gogo Runtime (0.16.2)|0.16.2\r\n 21|Active | 1|Apache Felix Gogo Shell (0.12.0)|0.12.0\r\n 22|Active | 1|Apache Felix Declarative Services (2.0.8)|2.0.8\r\n 23|Active | 1|Xtend Runtime Library (2.11.0.v20170124-1424)|2.11.0.v20170124-1424\r\n 24|Active | 1|Xtend Macro Interfaces (2.11.0.v20170124-1424)|2.11.0.v20170124-1424\r\n 25|Active | 1|Xbase Runtime Library 
(2.11.0.v20170124-1424)|2.11.0.v20170124-1424\r\n 26|Active | 1|OPS4J Pax Logging - API (1.9.1)|1.9.1\r\n 27|Active | 1|Vert.x Core (3.4.1)|3.4.1\r\n", "@shumy please open a new issue with an updated stacktrace.", "@shumy also a reproducer would be nice.", "New issue #6548", "I run netty4.1.42 under jdk11.0.6 and reports java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable, I don't know how to solve it?" ]
[]
"2016-11-07T21:59:54Z"
[ "defect" ]
Unable to initialize PlatformDependent0 due to lack of sun.misc.Unsafe in classpath
When trying to use [Vertx](http://vertx.io) in OSGi, I bumped into the following error. Initially I got: ```[FelixStartLevel] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework [FelixStartLevel] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple [FelixStartLevel] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxRecords: 4 [FelixStartLevel] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16 [FelixStartLevel] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available [FelixStartLevel] DEBUG io.netty.util.internal.PlatformDependent - Java version: 8 [FelixStartLevel] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false [FelixStartLevel] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: unavailable (io.netty.tryUnsafe/org.jboss.netty.tryUnsafe) [FelixStartLevel] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 3717201920 bytes (maybe) [FelixStartLevel] ERROR debug - [debug(1)] The debugNetty method has thrown an exception java.lang.NoClassDefFoundError: Could not initialize class io.netty.util.internal.PlatformDependent0 at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:101) at io.netty.util.ConstantPool.<init>(ConstantPool.java:32) at io.netty.util.Signal$1.<init>(Signal.java:27) at io.netty.util.Signal.<clinit>(Signal.java:27) at io.netty.util.concurrent.DefaultPromise.<clinit>(DefaultPromise.java:42) at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:36) at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58) at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47) at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:58) at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:77) at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72) at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:59) at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:133) at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:122) at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:118) at io.vertx.core.impl.VertxFactoryImpl.vertx(VertxFactoryImpl.java:34) at io.vertx.core.Vertx.vertx(Vertx.java:80) ``` I tried to debug the situation and found that the fact that sun.misc.Unsafe is not available could be an issue (see [#272](https://github.com/netty/netty/issues/272)), I tried to set the system property `io.netty.tryUnsafe` to `false`, to no avail (the stacktrace above has `io.netty.tryUnsafe` set to `false`). 
When I copy the PlatformDependent0 class to the debug package so I have control, I get the following: ``` [FelixStartLevel] DEBUG debug.PlatformDependent0 - java.nio.Buffer.address: available [FelixStartLevel] ERROR debug - [debug(1)] Error during instantiation of the implementation object java.lang.NoClassDefFoundError: sun/misc/Unsafe at debug.PlatformDependent0$2.run(PlatformDependent0.java:86) at java.security.AccessController.doPrivileged(Native Method) at debug.PlatformDependent0.<clinit>(PlatformDependent0.java:82) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at DebugNetty.<init>(DebugNetty.java:23) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at java.lang.Class.newInstance(Class.java:442) at org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:236) at org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:108) at org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:906) at org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:879) at org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:748) at org.apache.felix.scr.impl.manager.AbstractComponentManager.enableInternal(AbstractComponentManager.java:674) at org.apache.felix.scr.impl.manager.AbstractComponentManager.enable(AbstractComponentManager.java:429) at org.apache.felix.scr.impl.manager.ConfigurableComponentHolder.enableComponents(ConfigurableComponentHolder.java:657) at org.apache.felix.scr.impl.BundleComponentActivator.initialEnable(BundleComponentActivator.java:341) at org.apache.felix.scr.impl.Activator.loadComponents(Activator.java:403) at org.apache.felix.scr.impl.Activator.access$200(Activator.java:54) at org.apache.felix.scr.impl.Activator$ScrExtension.start(Activator.java:278) at org.apache.felix.utils.extender.AbstractExtender.createExtension(AbstractExtender.java:259) at org.apache.felix.utils.extender.AbstractExtender.modifiedBundle(AbstractExtender.java:232) at org.osgi.util.tracker.BundleTracker$Tracked.customizerModified(BundleTracker.java:482) at org.osgi.util.tracker.BundleTracker$Tracked.customizerModified(BundleTracker.java:415) at org.osgi.util.tracker.AbstractTracked.track(AbstractTracked.java:232) at org.osgi.util.tracker.BundleTracker$Tracked.bundleChanged(BundleTracker.java:444) at org.apache.felix.framework.util.EventDispatcher.invokeBundleListenerCallback(EventDispatcher.java:916) at org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:835) at org.apache.felix.framework.util.EventDispatcher.fireBundleEvent(EventDispatcher.java:517) at org.apache.felix.framework.Felix.fireBundleEvent(Felix.java:4541) at org.apache.felix.framework.Felix.startBundle(Felix.java:2172) at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1371) at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.ClassNotFoundException: sun.misc.Unsafe not found by debug [29] at 
org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1574) at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79) at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 37 more ``` which indeed shows the root cause is sun.misc.Unsafe not being found. The version of Netty used is 4.1.6.Final. Exposing sun.misc through OSGi solves the problem, but that is not a viable solution in the long term. Please advise.
[ "common/src/main/java/io/netty/util/internal/PlatformDependent.java" ]
[ "common/src/main/java/io/netty/util/internal/PlatformDependent.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/PlatformDependent.java b/common/src/main/java/io/netty/util/internal/PlatformDependent.java index 81348a06b67..320a63819e3 100644 --- a/common/src/main/java/io/netty/util/internal/PlatformDependent.java +++ b/common/src/main/java/io/netty/util/internal/PlatformDependent.java @@ -98,7 +98,7 @@ public final class PlatformDependent { private static final int DEFAULT_MAX_MPSC_CAPACITY = MPSC_CHUNK_SIZE * MPSC_CHUNK_SIZE; private static final int MAX_ALLOWED_MPSC_CAPACITY = Pow2.MAX_POW2; - private static final long BYTE_ARRAY_BASE_OFFSET = PlatformDependent0.byteArrayBaseOffset(); + private static final long BYTE_ARRAY_BASE_OFFSET = byteArrayBaseOffset0(); private static final boolean HAS_JAVASSIST = hasJavassist0(); @@ -1378,6 +1378,13 @@ private static int addressSize0() { return PlatformDependent0.addressSize(); } + private static long byteArrayBaseOffset0() { + if (!hasUnsafe()) { + return -1; + } + return PlatformDependent0.byteArrayBaseOffset(); + } + private static boolean equalsSafe(byte[] bytes1, int startPos1, byte[] bytes2, int startPos2, int length) { final int end = startPos1 + length; for (int i = startPos1, j = startPos2; i < end; ++i, ++j) {
null
train
train
2016-11-04T15:47:02
"2016-11-07T21:25:49Z"
jellenelis
val
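Editorial note on the record ending above (fixed by netty/netty#5988): the gold patch moves the BYTE_ARRAY_BASE_OFFSET initialization behind a hasUnsafe() guard so that loading PlatformDependent never triggers class initialization of PlatformDependent0 when sun.misc.Unsafe is absent (the OSGi case in the report). Below is a minimal, self-contained sketch of that guarded-initialization pattern. It is illustrative only and is not Netty code; the class name UnsafeBackedConstants and the placeholder offset value are hypothetical stand-ins for PlatformDependent0 and its real constant.

```java
public final class GuardedInitDemo {

    // Stand-in for PlatformDependent0: its static initializer touches a class that
    // may be missing at runtime, so merely referencing it can fail.
    static final class UnsafeBackedConstants {
        static final long BYTE_ARRAY_BASE_OFFSET;

        static {
            try {
                // Fails with ClassNotFoundException on class loaders that do not expose
                // sun.misc.Unsafe; the JVM then reports every later reference to this
                // class as NoClassDefFoundError.
                Class.forName("sun.misc.Unsafe");
                BYTE_ARRAY_BASE_OFFSET = 16; // placeholder value, not a real offset
            } catch (ClassNotFoundException e) {
                throw new ExceptionInInitializerError(e);
            }
        }
    }

    // Stand-in for PlatformDependent.hasUnsafe(): the availability check must not
    // reference UnsafeBackedConstants, otherwise the guarded class is initialized anyway.
    private static boolean hasUnsafe() {
        try {
            Class.forName("sun.misc.Unsafe", false, GuardedInitDemo.class.getClassLoader());
            return true;
        } catch (Throwable t) {
            return false;
        }
    }

    // Mirrors byteArrayBaseOffset0() from the patch: consult the guard first and fall
    // back to a -1 sentinel, so the holder class is never touched when Unsafe is absent.
    static long byteArrayBaseOffset() {
        if (!hasUnsafe()) {
            return -1;
        }
        return UnsafeBackedConstants.BYTE_ARRAY_BASE_OFFSET;
    }

    public static void main(String[] args) {
        System.out.println("byte[] base offset (or -1 sentinel): " + byteArrayBaseOffset());
    }
}
```

The essential design point is that the guard itself must stay independent of the guarded holder class; once the JVM records a failed static initialization, every subsequent reference fails with NoClassDefFoundError, which is exactly the stack trace in the report.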
netty/netty/5947_5991
netty/netty
netty/netty/5947
netty/netty/5991
[ "timestamp(timedelta=40.0, similarity=0.8763541037290001)" ]
895a92cb2208906e0cabba21063e604c10c4d5ae
e43a9d32e4b8b70648b46a94a1c409defbcdb61f
[ "@nhnFreespirit so just to ensure I understand you right you suggest that we should for example allow to pass it in into the constructor ?\n", "Yeah, having it in a constructor of Http2Codec would solve my issues nicely.\n\nWhen constructing an http/2 client, it is possible to specify non default initial settings by subclassing AbstractHttp2ConnectionHandlerBuilder and invoking the initialSettings method, but unless I am missing something, there is no reasonable way to accomplish the same when using Http2Codec to build an http/2 server.\n", "I have a version of Netty with this added.\n\nOnly problem is I don't really like it, as to keep the initial settings optional, I have to add 2 new constructors to Http2Codec, so hesitant to create a pull request for this.\n", "created https://github.com/netty/netty/pull/5991\n", "Fixed by https://github.com/netty/netty/pull/5991#issuecomment-259270529\n" ]
[]
"2016-11-08T18:35:08Z"
[ "improvement" ]
Make it possible to specify initial settings for Http2Codec
Currently, when instantiating an Http2Codec, there does not appear to be any way of specifying non default initial settings.
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Codec.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Codec.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Codec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Codec.java index a61428d96d8..6a6ee39cc36 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Codec.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Codec.java @@ -44,6 +44,19 @@ public Http2Codec(boolean server, ChannelHandler streamHandler) { this(server, new Http2StreamChannelBootstrap().handler(streamHandler), HTTP2_FRAME_LOGGER); } + /** + * Construct a new handler whose child channels run in the same event loop as this handler. + * + * @param server {@code true} this is a server + * @param streamHandler the handler added to channels for remotely-created streams. It must be + * {@link ChannelHandler.Sharable}. {@code null} if the event loop from the parent channel should be used. + * @param initialSettings non default initial settings to send to peer + */ + public Http2Codec(boolean server, ChannelHandler streamHandler, Http2Settings initialSettings) { + this(server, new Http2StreamChannelBootstrap().handler(streamHandler), HTTP2_FRAME_LOGGER, + initialSettings); + } + /** * Construct a new handler whose child channels run in a different event loop. * @@ -51,13 +64,25 @@ public Http2Codec(boolean server, ChannelHandler streamHandler) { * @param bootstrap bootstrap used to instantiate child channels for remotely-created streams. */ public Http2Codec(boolean server, Http2StreamChannelBootstrap bootstrap, Http2FrameLogger frameLogger) { - this(server, bootstrap, new DefaultHttp2FrameWriter(), frameLogger); + this(server, bootstrap, new DefaultHttp2FrameWriter(), frameLogger, new Http2Settings()); + } + + /** + * Construct a new handler whose child channels run in a different event loop. + * + * @param server {@code true} this is a server + * @param bootstrap bootstrap used to instantiate child channels for remotely-created streams. 
+ * @param initialSettings non default initial settings to send to peer + */ + public Http2Codec(boolean server, Http2StreamChannelBootstrap bootstrap, Http2FrameLogger frameLogger, + Http2Settings initialSettings) { + this(server, bootstrap, new DefaultHttp2FrameWriter(), frameLogger, initialSettings); } // Visible for testing Http2Codec(boolean server, Http2StreamChannelBootstrap bootstrap, Http2FrameWriter frameWriter, - Http2FrameLogger frameLogger) { - frameCodec = new Http2FrameCodec(server, frameWriter, frameLogger); + Http2FrameLogger frameLogger, Http2Settings initialSettings) { + frameCodec = new Http2FrameCodec(server, frameWriter, frameLogger, initialSettings); multiplexCodec = new Http2MultiplexCodec(server, bootstrap); } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java index 2a941866f6c..a9f2f725efc 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java @@ -127,11 +127,12 @@ public Http2FrameCodec(boolean server) { * @param server {@code true} this is a server */ public Http2FrameCodec(boolean server, Http2FrameLogger frameLogger) { - this(server, new DefaultHttp2FrameWriter(), frameLogger); + this(server, new DefaultHttp2FrameWriter(), frameLogger, new Http2Settings()); } // Visible for testing - Http2FrameCodec(boolean server, Http2FrameWriter frameWriter, Http2FrameLogger frameLogger) { + Http2FrameCodec(boolean server, Http2FrameWriter frameWriter, Http2FrameLogger frameLogger, + Http2Settings initialSettings) { Http2Connection connection = new DefaultHttp2Connection(server); frameWriter = new Http2OutboundFrameLogger(frameWriter, frameLogger); Http2ConnectionEncoder encoder = new DefaultHttp2ConnectionEncoder(connection, frameWriter); @@ -139,7 +140,7 @@ public Http2FrameCodec(boolean server, Http2FrameLogger frameLogger) { Http2FrameReader reader = new Http2InboundFrameLogger(frameReader, frameLogger); Http2ConnectionDecoder decoder = new DefaultHttp2ConnectionDecoder(connection, encoder, reader); decoder.frameListener(new FrameListener()); - http2Handler = new InternalHttp2ConnectionHandler(decoder, encoder, new Http2Settings()); + http2Handler = new InternalHttp2ConnectionHandler(decoder, encoder, initialSettings); http2Handler.connection().addListener(new ConnectionListener()); this.server = server; }
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java index 260227690ca..d798d22eff4 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java @@ -70,7 +70,8 @@ public class Http2FrameCodecTest { @Before public void setUp() throws Exception { frameWriter = spy(new VerifiableHttp2FrameWriter()); - framingCodec = new Http2FrameCodec(true, frameWriter, new Http2FrameLogger(LogLevel.TRACE)); + framingCodec = new Http2FrameCodec(true, frameWriter, new Http2FrameLogger(LogLevel.TRACE), + new Http2Settings()); frameListener = ((DefaultHttp2ConnectionDecoder) framingCodec.connectionHandler().decoder()) .internalFrameListener(); inboundHandler = new LastInboundHandler();
train
train
2016-11-08T17:08:17
"2016-10-27T00:49:34Z"
nhnFreespirit
val
netty/netty/5975_5995
netty/netty
netty/netty/5975
netty/netty/5995
[ "timestamp(timedelta=31.0, similarity=0.8573709676194836)" ]
3f20b8adee1d86285acc508308c07d67b36c1fc6
f5ea111ae850b4430873b8e09f4b33975a18c176
[ "Related? https://github.com/netty/netty/pull/4254\n", "Yes, seems so. Thanks for the pointer.\n", "@jrudolph would be possible to submit a small unit test that works with JDK SSLEngine and fails with the OpenSslEngine ?\n", "@normanmaurer yes, will try to build one, shouldn't be too hard.\n", "Thanks a lot!\n\n> Am 03.11.2016 um 15:19 schrieb Johannes Rudolph [email protected]:\n> \n> @normanmaurer yes, will try to build one, shouldn't be too hard.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@jrudolph also when you say \"the JDK SSLEngine doesn't buffer encrypted data internally between calls to unwrap but \"puts them back\" into the src buffer\" do you mean that if you not completely drain stuff into the dst buffer it will just adjust the position of the src buffer and so try to process it on the next call again ?\n", "Here's a test I added to SslEngineTest: https://github.com/jrudolph/netty/tree/w/5975-SslEngine-test\n\nIt seems I was wrong about my first point. The JDK SSLEngine also only decrypts only one frame per unwrap call. Still it doesn't consume data that it hasn't read so far.\n\n> @jrudolph also when you say \"the JDK SSLEngine doesn't buffer encrypted data internally between calls to unwrap but \"puts them back\" into the src buffer\" do you mean that if you not completely drain stuff into the dst buffer it will just adjust the position of the src buffer and so try to process it on the next call again ?\n\nHmm, no, I originally don't meant to imply that case. But, indeed, the difference can be seen here as well. JDK returns BUFFER_OVERFLOW without consuming the src buffer while openssl consumes the buffer. I added a commented out block in the test to check this behavior as well.\n", "@jrudolph let me look into it\n", "I think the test case is not much different than @ijuma's [here](https://github.com/ijuma/netty/commit/0f6eff24885bfbe965d0cabd61115067d14feedc#diff-08da41869dfc1d842df441440b1c605bR232), so it might indeed just be a duplicate of #4238. Hi, @ijuma btw ;)\n", "@jrudolph are you license the shared code as ASL2 so I can include it in netty when I have a fix ?\n", "@jrudolph so after some digging I found out that the SSLEngineImpl (the JDK one) always only consume if a full packet is present while we even consume when it is only partly present. Both implementations correctly return `BUFFER_OVERFLOW` tho. I wonder if what we do is permitted.. I could not see details in the javadocs of `SSLEngine` which state that you can not consume a partial record. I am missing anything ?\n\nThe other \"problem\" is that our implementation will consume multiple records in one unwrap call but not write multiple out to the dsts buffers. The JDK implementation only consume one one record at a time (per unwrap call). Again I am not sure if what we do is allowed in terms of the the API and is just an implementation detail. That said I think we should also try to fill more then one record into the dsts buffers if we also consume more then one. \n\n@jrudolph WDYT ?\n", "@jrudolph actually I think this sentence may state we are not allowed to consume and still return a `BUFFER_OVERFLOW`:\n\n```\nThe SSLEngine produces/consumes complete SSL/TLS packets only, and does not store application data internally between calls to wrap()/unwrap().\n```\n\nAgree ?\n", "@jrudolph ok nevermind.. I think I know how to \"fix\" the problems stated here. 
Will need to check what performance impact this has and if it is too much I may introduce a \"strict\" and non \"strict\" mode for the `OpenSslEngine`. Will keep you posted.\n", "> @jrudolph are you license the shared code as ASL2 so I can include it in netty when I have a fix ?\n\nYes, here as well ;)\n", "> @jrudolph actually I think this sentence may state we are not allowed to consume and still return a BUFFER_OVERFLOW:\n> \n> The SSLEngine produces/consumes complete SSL/TLS packets only, and does not store application data internally between calls to wrap()/unwrap().\n> Agree ?\n\nYes, exactly. (I quoted that sentence above as well.) Hard to say if this is a requirement or just a description of the implementation. It's probably impossible to take the javadoc as a water-tight specification (I guess you know best about that...).\n\nIMO it's nice if you can rely on the openssl engine not to keep user data in internal buffers (if possible). I haven't looked into it in detail but how does the openssl engine provide backpressure? What happens if you always call `unwrap` e.g. with input buffers of size 30000 and output buffers of size 20000? Will data start to accumulate internally in the openssl engine or will it stop consuming the input buffer at some point?\n", "I have a fix locally here so it works as the JDK impl :) will open a PR later today 👍\n", "@normanmaurer with the strict / non-strict as you previously mentioned? or did you manage to keep it simple?\n", "Was able to keep it simple \n\n> Am 08.11.2016 um 15:51 schrieb Johno Crawford [email protected]:\n> \n> @normanmaurer with the strict / non-strict as you previously mentioned? or did you manage to keep it simple?\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@jrudolph should be fixed by https://github.com/netty/netty/pull/5995\n", "Fixed by https://github.com/netty/netty/pull/5995\n", "@jrudolph - I know it's not nice to comment on an ancient closed topic especially when it's off-topic, but I will like to say a BIG TANK YOU. Your findings and description solved my 2 days no-sleep pain with SSLEngine, where the unconsumed data was copied back in the src, leaving my implementation hanging and waiting for more data. You are a life saver, so again a big THANKS!" ]
[ "@normanmaurer Have you verified that doing this each time will not impact performance? SslHandler would cache the packetLength so that it wouldn't recalculate it each time. I suspect it's not an issue.\n", "Is it correct to use a constant here or should we get this value from the session? Or does the session always return this constant?\n", "It always return the constants\n", "I ran a benchmark and the perf was the same\n" ]
"2016-11-09T18:28:20Z"
[ "defect" ]
Openssl-based SSLEngine differs slightly in unwrap behavior
We are experimenting with netty's openssl based SSLEngines as an alternative to the JDK one ([for our effort](https://github.com/akka/akka-http/tree/jr/http2-on-netty-openssl) to produce an HTTP/2 implementation for akka-http). We use netty-tcnative-boringssl-static, version 1.1.33.Fork23. Thanks for providing that! When we talked at reactive summit you mentioned that you put much effort into making the openssl-based SSLEngines compatible with the JDK ones. So, you might be interested in the difference we found (knowing that the SSLEngine API has enough degrees of freedom to warrant a lot of behavior...). I found that our stream-based wrapper around SSLEngine doesn't read all incoming data from the SSLEngine when there is no new incoming data from the network, so the application gets stuck waiting for incoming data. After some debugging I found out that with netty's openssl SSLEngine unwrap seems to produce plain text data in smaller chunks (probably single TLS frames) and keeps buffering the rest of the data internally. The src buffer is fully consumed but calling unwrap again with an empty src buffer will still produce more data. This differs from what the JDK SSLEngine does in two points: * the JDK SSLEngine consumes and produces as much data as possible in one go while the openssl one produces less output in one call * the JDK SSLEngine doesn't buffer encrypted data internally between calls to `unwrap` but "puts them back" into the src buffer This also somewhat documented with this comment from https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLEngine.html: > The SSLEngine produces/consumes complete SSL/TLS packets only, and does not store application data internally between calls to wrap()/unwrap(). Consequently, we [implemented the `unwrap` loop](https://github.com/akka/akka/blob/master/akka-stream/src/main/scala/akka/stream/impl/io/TLSActor.scala#L402) as ```scala private def doUnwrap(ignoreOutput: Boolean = false): Unit = { val result = engine.unwrap(transportInBuffer, userOutBuffer) // ... result.getStatus match { case OK ⇒ result.getHandshakeStatus match { case NEED_WRAP ⇒ // ... case FINISHED ⇒ // ... case _ ⇒ if (transportInBuffer.hasRemaining) doUnwrap() // loop here only if there's still src data available else flushToUser() ``` With the openssl SSLEngine we are seeing the src buffer fully consumed after the first call to `unwrap` and we won't continue to call `unwrap` in a loop. We can fix it for now by changing the condition to ```scala if (transportInBuffer.hasRemaining || (result.bytesProduced() > 0)) doUnwrap() ``` to keep looping while `unwrap` produces output. Wouldn't it be more efficient to do this loop internally in the openssl implementation of `unwrap` to write as much data as possible into the output buffer?
[ "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java", "handler/src/main/java/io/netty/handler/ssl/SslHandler.java", "handler/src/main/java/io/netty/handler/ssl/SslUtils.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java", "handler/src/main/java/io/netty/handler/ssl/SslHandler.java", "handler/src/main/java/io/netty/handler/ssl/SslUtils.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java index fce7410bb4d..76cc0dda9b8 100644 --- a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java +++ b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java @@ -67,6 +67,7 @@ import static javax.net.ssl.SSLEngineResult.HandshakeStatus.NEED_WRAP; import static javax.net.ssl.SSLEngineResult.HandshakeStatus.NOT_HANDSHAKING; import static javax.net.ssl.SSLEngineResult.Status.BUFFER_OVERFLOW; +import static javax.net.ssl.SSLEngineResult.Status.BUFFER_UNDERFLOW; import static javax.net.ssl.SSLEngineResult.Status.CLOSED; import static javax.net.ssl.SSLEngineResult.Status.OK; @@ -416,9 +417,8 @@ private int writePlaintextData(final ByteBuffer src) { /** * Write encrypted data to the OpenSSL network BIO. */ - private int writeEncryptedData(final ByteBuffer src) { + private int writeEncryptedData(final ByteBuffer src, int len) { final int pos = src.position(); - final int len = src.remaining(); final int netWrote; if (src.isDirect()) { final long addr = Buffer.address(src) + pos; @@ -430,8 +430,12 @@ private int writeEncryptedData(final ByteBuffer src) { final ByteBuf buf = alloc.directBuffer(len); try { final long addr = memoryAddress(buf); - - buf.setBytes(0, src); + int newLimit = pos + len; + if (newLimit != src.remaining()) { + buf.setBytes(0, (ByteBuffer) src.duplicate().position(pos).limit(newLimit)); + } else { + buf.setBytes(0, src); + } netWrote = SSL.writeToBIO(networkBIO, addr, len); if (netWrote >= 0) { @@ -601,6 +605,11 @@ public final SSLEngineResult wrap( } } + if (dst.remaining() < MAX_ENCRYPTED_PACKET_LENGTH) { + // Can not hold the maximum packet so we need to tell the caller to use a bigger destination + // buffer. + return new SSLEngineResult(BUFFER_OVERFLOW, getHandshakeStatus(), 0, 0); + } // There was no pending data in the network BIO -- encrypt any application data int bytesProduced = 0; int bytesConsumed = 0; @@ -775,9 +784,29 @@ public final SSLEngineResult unwrap( } } - // Write encrypted data to network BIO + if (len < SslUtils.SSL_RECORD_HEADER_LENGTH) { + return new SSLEngineResult(BUFFER_UNDERFLOW, getHandshakeStatus(), 0, 0); + } + + int packetLength = SslUtils.getEncryptedPacketLength(srcs, srcsOffset); + if (packetLength - SslUtils.SSL_RECORD_HEADER_LENGTH > capacity) { + // No enough space in the destination buffer so signal the caller + // that the buffer needs to be increased. + return new SSLEngineResult(BUFFER_OVERFLOW, getHandshakeStatus(), 0, 0); + } + + if (len < packetLength) { + // We either have no enough data to read the packet length at all or not enough for reading + // the whole packet. + return new SSLEngineResult(BUFFER_UNDERFLOW, getHandshakeStatus(), 0, 0); + } + int bytesConsumed = 0; if (srcsOffset < srcsEndOffset) { + + // Write encrypted data to network BIO + int packetLengthRemaining = packetLength; + do { ByteBuffer src = srcs[srcsOffset]; int remaining = src.remaining(); @@ -787,9 +816,15 @@ public final SSLEngineResult unwrap( srcsOffset++; continue; } - int written = writeEncryptedData(src); + // Write more encrypted data into the BIO. Ensure we only read one packet at a time as + // stated in the SSLEngine javadocs. 
+ int written = writeEncryptedData(src, Math.min(packetLengthRemaining, src.remaining())); if (written > 0) { - bytesConsumed += written; + packetLengthRemaining -= written; + if (packetLengthRemaining == 0) { + // A whole packet has been consumed. + break; + } if (written == remaining) { srcsOffset++; @@ -808,6 +843,7 @@ public final SSLEngineResult unwrap( break; } } while (srcsOffset < srcsEndOffset); + bytesConsumed = packetLength - packetLengthRemaining; } // Number of produced bytes diff --git a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java index 55c09aef438..b06e3a7a52f 100644 --- a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java +++ b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java @@ -197,14 +197,6 @@ public class SslHandler extends ByteToMessageDecoder implements ChannelOutboundH * {@code true} if and only if {@link SSLEngine} expects a direct buffer. */ private final boolean wantsDirectBuffer; - /** - * {@code true} if and only if {@link SSLEngine#wrap(ByteBuffer, ByteBuffer)} requires the output buffer - * to be always as large as {@link #maxPacketBufferSize} even if the input buffer contains small amount of data. - * <p> - * If this flag is {@code false}, we allocate a smaller output buffer. - * </p> - */ - private final boolean wantsLargeOutboundNetworkBuffer; // END Platform-dependent flags @@ -283,7 +275,6 @@ public SslHandler(SSLEngine engine, boolean startTls, Executor delegatedTaskExec boolean opensslEngine = engine instanceof OpenSslEngine; wantsDirectBuffer = opensslEngine; - wantsLargeOutboundNetworkBuffer = !opensslEngine; /** * When using JDK {@link SSLEngine}, we use {@link #MERGE_CUMULATOR} because it works only with @@ -516,7 +507,7 @@ private void wrap(ChannelHandlerContext ctx, boolean inUnwrap) throws SSLExcepti ByteBuf buf = (ByteBuf) msg; if (out == null) { - out = allocateOutNetBuf(ctx, buf.readableBytes()); + out = allocateOutNetBuf(ctx); } SSLEngineResult result = wrap(alloc, engine, buf, out); @@ -599,7 +590,7 @@ private void wrapNonAppData(ChannelHandlerContext ctx, boolean inUnwrap) throws // See https://github.com/netty/netty/issues/5860 while (!ctx.isRemoved()) { if (out == null) { - out = allocateOutNetBuf(ctx, 0); + out = allocateOutNetBuf(ctx); } SSLEngineResult result = wrap(alloc, engine, Unpooled.EMPTY_BUFFER, out); @@ -1477,14 +1468,8 @@ private ByteBuf allocate(ChannelHandlerContext ctx, int capacity) { * Allocates an outbound network buffer for {@link SSLEngine#wrap(ByteBuffer, ByteBuffer)} which can encrypt * the specified amount of pending bytes. 
*/ - private ByteBuf allocateOutNetBuf(ChannelHandlerContext ctx, int pendingBytes) { - if (wantsLargeOutboundNetworkBuffer) { - return allocate(ctx, maxPacketBufferSize); - } else { - return allocate(ctx, Math.min( - pendingBytes + OpenSslEngine.MAX_ENCRYPTION_OVERHEAD_LENGTH, - maxPacketBufferSize)); - } + private ByteBuf allocateOutNetBuf(ChannelHandlerContext ctx) { + return allocate(ctx, maxPacketBufferSize); } private final class LazyChannelPromise extends DefaultPromise<Channel> { diff --git a/handler/src/main/java/io/netty/handler/ssl/SslUtils.java b/handler/src/main/java/io/netty/handler/ssl/SslUtils.java index a7ad75e3e32..3c222b93dda 100644 --- a/handler/src/main/java/io/netty/handler/ssl/SslUtils.java +++ b/handler/src/main/java/io/netty/handler/ssl/SslUtils.java @@ -21,6 +21,8 @@ import io.netty.handler.codec.base64.Base64; import io.netty.handler.codec.base64.Base64Dialect; +import java.nio.ByteBuffer; + /** * Constants for SSL packets. */ @@ -120,6 +122,92 @@ static int getEncryptedPacketLength(ByteBuf buffer, int offset) { return packetLength; } + private static short unsignedByte(byte b) { + return (short) (b & 0xFF); + } + + private static int unsignedShort(short s) { + return s & 0xFFFF; + } + + static int getEncryptedPacketLength(ByteBuffer[] buffers, int offset) { + ByteBuffer buffer = buffers[offset]; + + // Check if everything we need is in one ByteBuffer. If so we can make use of the fast-path. + if (buffer.remaining() >= SslUtils.SSL_RECORD_HEADER_LENGTH) { + return getEncryptedPacketLength(buffer); + } + + // We need to copy 5 bytes into a temporary buffer so we can parse out the packet length easily. + ByteBuffer tmp = ByteBuffer.allocate(5); + + do { + buffer = buffers[offset++].duplicate(); + if (buffer.remaining() > tmp.remaining()) { + buffer.limit(buffer.position() + tmp.remaining()); + } + tmp.put(buffer); + } while (tmp.hasRemaining()); + + // Done, flip the buffer so we can read from it. + tmp.flip(); + return getEncryptedPacketLength(tmp); + } + + private static int getEncryptedPacketLength(ByteBuffer buffer) { + int packetLength = 0; + int pos = buffer.position(); + // SSLv3 or TLS - Check ContentType + boolean tls; + switch (unsignedByte(buffer.get(pos))) { + case SslUtils.SSL_CONTENT_TYPE_CHANGE_CIPHER_SPEC: + case SslUtils.SSL_CONTENT_TYPE_ALERT: + case SslUtils.SSL_CONTENT_TYPE_HANDSHAKE: + case SslUtils.SSL_CONTENT_TYPE_APPLICATION_DATA: + tls = true; + break; + default: + // SSLv2 or bad data + tls = false; + } + + if (tls) { + // SSLv3 or TLS - Check ProtocolVersion + int majorVersion = unsignedByte(buffer.get(pos + 1)); + if (majorVersion == 3) { + // SSLv3 or TLS + packetLength = unsignedShort(buffer.getShort(pos + 3)) + SslUtils.SSL_RECORD_HEADER_LENGTH; + if (packetLength <= SslUtils.SSL_RECORD_HEADER_LENGTH) { + // Neither SSLv3 or TLSv1 (i.e. SSLv2 or bad data) + tls = false; + } + } else { + // Neither SSLv3 or TLSv1 (i.e. SSLv2 or bad data) + tls = false; + } + } + + if (!tls) { + // SSLv2 or bad data - Check the version + int headerLength = (unsignedByte(buffer.get(pos)) & 0x80) != 0 ? 
2 : 3; + int majorVersion = unsignedByte(buffer.get(pos + headerLength + 1)); + if (majorVersion == 2 || majorVersion == 3) { + // SSLv2 + if (headerLength == 2) { + packetLength = (buffer.getShort(pos) & 0x7FFF) + 2; + } else { + packetLength = (buffer.getShort(pos) & 0x3FFF) + 3; + } + if (packetLength <= headerLength) { + return -1; + } + } else { + return -1; + } + } + return packetLength; + } + static void notifyHandshakeFailure(ChannelHandlerContext ctx, Throwable cause) { // We have may haven written some parts of data before an exception was thrown so ensure we always flush. // See https://github.com/netty/netty/issues/3900#issuecomment-172481830
diff --git a/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java index b565b3da09d..8f1cbee3f00 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java @@ -678,9 +678,8 @@ protected void testEnablingAnAlreadyDisabledSslProtocol(String[] protocols1, Str } protected static void handshake(SSLEngine clientEngine, SSLEngine serverEngine) throws SSLException { - int netBufferSize = 17 * 1024; - ByteBuffer cTOs = ByteBuffer.allocateDirect(netBufferSize); - ByteBuffer sTOc = ByteBuffer.allocateDirect(netBufferSize); + ByteBuffer cTOs = ByteBuffer.allocateDirect(clientEngine.getSession().getPacketBufferSize()); + ByteBuffer sTOc = ByteBuffer.allocateDirect(serverEngine.getSession().getPacketBufferSize()); ByteBuffer serverAppReadBuffer = ByteBuffer.allocateDirect( serverEngine.getSession().getApplicationBufferSize()); @@ -915,4 +914,84 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc promise.syncUninterruptibly(); } + + @Test + public void testUnwrapBehavior() throws Exception { + SelfSignedCertificate cert = new SelfSignedCertificate(); + + clientSslCtx = SslContextBuilder + .forClient() + .trustManager(cert.cert()) + .sslProvider(sslClientProvider()) + .build(); + SSLEngine client = clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT); + + serverSslCtx = SslContextBuilder + .forServer(cert.certificate(), cert.privateKey()) + .sslProvider(sslServerProvider()) + .build(); + SSLEngine server = serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT); + + byte[] bytes = "Hello World".getBytes(CharsetUtil.US_ASCII); + + try { + ByteBuffer plainClientOut = ByteBuffer.allocate(client.getSession().getApplicationBufferSize()); + ByteBuffer encryptedClientToServer = ByteBuffer.allocate(server.getSession().getPacketBufferSize() * 2); + ByteBuffer plainServerIn = ByteBuffer.allocate(server.getSession().getApplicationBufferSize()); + + handshake(client, server); + + // create two TLS frames + + // first frame + plainClientOut.put(bytes, 0, 5); + plainClientOut.flip(); + + SSLEngineResult result = client.wrap(plainClientOut, encryptedClientToServer); + assertEquals(SSLEngineResult.Status.OK, result.getStatus()); + assertEquals(5, result.bytesConsumed()); + assertTrue(result.bytesProduced() > 0); + + assertFalse(plainClientOut.hasRemaining()); + + // second frame + plainClientOut.clear(); + plainClientOut.put(bytes, 5, 6); + plainClientOut.flip(); + + result = client.wrap(plainClientOut, encryptedClientToServer); + assertEquals(SSLEngineResult.Status.OK, result.getStatus()); + assertEquals(6, result.bytesConsumed()); + assertTrue(result.bytesProduced() > 0); + + // send over to server + encryptedClientToServer.flip(); + + // try with too small output buffer first (to check BUFFER_OVERFLOW case) + int remaining = encryptedClientToServer.remaining(); + ByteBuffer small = ByteBuffer.allocate(3); + result = server.unwrap(encryptedClientToServer, small); + assertEquals(SSLEngineResult.Status.BUFFER_OVERFLOW, result.getStatus()); + assertEquals(remaining, encryptedClientToServer.remaining()); + + // now with big enough buffer + result = server.unwrap(encryptedClientToServer, plainServerIn); + assertEquals(SSLEngineResult.Status.OK, result.getStatus()); + + assertEquals(5, result.bytesProduced()); + assertTrue(encryptedClientToServer.hasRemaining()); + + result = server.unwrap(encryptedClientToServer, plainServerIn); + 
assertEquals(SSLEngineResult.Status.OK, result.getStatus()); + assertEquals(6, result.bytesProduced()); + assertFalse(encryptedClientToServer.hasRemaining()); + + plainServerIn.flip(); + + assertEquals(ByteBuffer.wrap(bytes), plainServerIn); + } finally { + cleanupClientSslEngine(client); + cleanupServerSslEngine(server); + } + } }
test
train
2016-11-09T10:58:35
"2016-11-03T09:39:43Z"
jrudolph
val
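Editorial note on the record ending above: the discussion centers on how many TLS records a single unwrap() call may consume and produce, and on not assuming the engine leaves unconsumed bytes in the source buffer. Below is a provider-agnostic sketch (neither Netty nor akka-http code) of the defensive read loop described in the report: keep calling unwrap() while the engine still makes progress, not only while the encrypted source buffer has bytes left. It assumes the handshake has already completed and that the caller sizes plainOut from getApplicationBufferSize(); renegotiation and delegated tasks are deliberately ignored.

```java
import java.nio.ByteBuffer;

import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLException;

public final class UnwrapLoop {

    private UnwrapLoop() { }

    /**
     * Drains as much plaintext as possible from {@code engine} into {@code plainOut} and
     * returns the last status so the caller can react to OVERFLOW/UNDERFLOW/CLOSED.
     */
    public static SSLEngineResult.Status drain(SSLEngine engine,
                                               ByteBuffer encryptedIn,
                                               ByteBuffer plainOut) throws SSLException {
        for (;;) {
            SSLEngineResult result = engine.unwrap(encryptedIn, plainOut);
            switch (result.getStatus()) {
                case OK:
                    // Loop while the engine makes progress: it either consumed more
                    // ciphertext, or it produced plaintext from data it buffered
                    // internally (the behaviour observed with the OpenSSL-based engine).
                    if (result.bytesProduced() > 0
                            || (encryptedIn.hasRemaining() && result.bytesConsumed() > 0)) {
                        continue;
                    }
                    return result.getStatus();
                case BUFFER_OVERFLOW:   // plainOut too small: grow to getApplicationBufferSize()
                case BUFFER_UNDERFLOW:  // need more ciphertext from the network
                case CLOSED:            // peer closed the TLS session
                    return result.getStatus();
                default:
                    throw new IllegalStateException("unexpected status: " + result.getStatus());
            }
        }
    }
}
```

Looping on bytesProduced() as well as hasRemaining() is exactly the workaround quoted in the problem statement; the extra bytesConsumed() check merely guards against spinning when an engine accepts no input and emits no output.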
netty/netty/6037_6059
netty/netty
netty/netty/6037
netty/netty/6059
[ "timestamp(timedelta=13.0, similarity=0.8963185923985323)" ]
ba95c401a7cf8c7923fce660e16c8ba567d62f30
9b6f0c7a94b9a2beec2074d0fee2bdcfb502411b
[ "Sounds ok to me... @Scottmitch ?\n", "sgtm", "@derbylock could you provide a PR ?", "Fixed by https://github.com/netty/netty/pull/6059" ]
[]
"2016-11-23T11:16:41Z"
[]
Make FixedChannelPool non final
We use **FixedChannelPool** in our project. But we have some problems adopting it to our needs. For example, we want to override methods **_pollChannel_** and **_offerChannel_** for monitoring purposes. We want to monitor count of channel objects in the pool. But it is difficult because **FixedChannelPool** is final. Could you consider removing final keyword from it, please?
[ "transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java" ]
[ "transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java index cefedee89fb..8c69dc7b0da 100644 --- a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java +++ b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java @@ -34,7 +34,7 @@ * {@link ChannelPool} implementation that takes another {@link ChannelPool} implementation and enforce a maximum * number of concurrent connections. */ -public final class FixedChannelPool extends SimpleChannelPool { +public class FixedChannelPool extends SimpleChannelPool { private static final IllegalStateException FULL_EXCEPTION = ThrowableUtil.unknownStackTrace( new IllegalStateException("Too many outstanding acquire operations"), FixedChannelPool.class, "acquire0(...)");
null
train
train
2016-11-23T00:17:05
"2016-11-18T14:57:22Z"
derbylock
val
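Editorial note on the record ending above: the patch only removes the final modifier from FixedChannelPool, which is what makes the override the reporter asks for possible. Below is a hedged sketch of such a monitoring subclass; the class name and counter are hypothetical, and it assumes a Netty version that already contains this change. pollChannel() and offerChannel(Channel) are the protected queue hooks inherited from SimpleChannelPool, and the three-argument constructor shown is an existing FixedChannelPool constructor.

```java
import java.util.concurrent.atomic.AtomicInteger;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.pool.ChannelPoolHandler;
import io.netty.channel.pool.FixedChannelPool;

public class MonitoredChannelPool extends FixedChannelPool {

    private final AtomicInteger idleChannels = new AtomicInteger();

    public MonitoredChannelPool(Bootstrap bootstrap, ChannelPoolHandler handler, int maxConnections) {
        super(bootstrap, handler, maxConnections);
    }

    @Override
    protected Channel pollChannel() {
        Channel ch = super.pollChannel();
        if (ch != null) {
            idleChannels.decrementAndGet(); // a channel left the idle queue
        }
        return ch;
    }

    @Override
    protected boolean offerChannel(Channel channel) {
        boolean offered = super.offerChannel(channel);
        if (offered) {
            idleChannels.incrementAndGet(); // a channel was returned to the idle queue
        }
        return offered;
    }

    /** Approximate number of idle (pooled) channels, for metrics and monitoring. */
    public int idleChannelCount() {
        return idleChannels.get();
    }
}
```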
netty/netty/6061_6078
netty/netty
netty/netty/6061
netty/netty/6078
[ "timestamp(timedelta=20.0, similarity=0.8506734955420215)" ]
ea0ddc0ea2cd11071bb960c7ff1aeade7bc1c1cb
08f4d7c07db31970ca18a27b8f926b24681e496b
[ "Will check later today\n\n> Am 25.11.2016 um 04:45 schrieb byeongguk.gim <[email protected]>:\n> \n> Netty version: 4.1.0.Final\n> \n> Context: I found bug on EpollServerDomainSocketChannel class. Its isActive() method always return false. It is required to set isActive = true in doBind() operation. (same as EpollServerSocketChannel)\n> \n> Steps to reproduce:\n> \n> Use EpollServerDomainSocketChannel class to server bootstrap's channel class\n> Bind any unix domain socket.\n> Server Channel's channelActive() is not called due to EpollServerDomainSocketChannel.isActive() return false.\n> $ java -version\n> java version \"1.8.0_45\"\n> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)\n> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)\n> \n> Operating system: Cent OS 7.2 64-bit\n> \n> $ uname -a\n> Linux 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n> \n", "@byoengguk thanks for reporting... you are right. Let me fix this", "Fixed by https://github.com/netty/netty/pull/6078" ]
[]
"2016-12-01T06:46:25Z"
[ "defect" ]
EpollServerDomainSocketChannel.isActive() return false after bind operation succeed
Netty version: 4.1.0.Final Context: I found bug on EpollServerDomainSocketChannel class. Its isActive() method always return false. It is required to set isActive = true in doBind() operation like EpollServerSocketChannel. Steps to reproduce: 1. Use EpollServerDomainSocketChannel class to server bootstrap's channel class 2. Bind any unix domain socket. 3. Server Channel's channelActive() is not called due to EpollServerDomainSocketChannel.isActive() return false. $ java -version java version "1.8.0_45" Java(TM) SE Runtime Environment (build 1.8.0_45-b14) Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode) Operating system: Cent OS 7.2 64-bit $ uname -a Linux 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerDomainSocketChannel.java" ]
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerDomainSocketChannel.java" ]
[]
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerDomainSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerDomainSocketChannel.java index 958d85a4240..c533d24ea1c 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerDomainSocketChannel.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerDomainSocketChannel.java @@ -75,6 +75,7 @@ protected void doBind(SocketAddress localAddress) throws Exception { fd().bind(localAddress); fd().listen(config.getBacklog()); local = (DomainSocketAddress) localAddress; + active = true; } @Override
null
train
train
2016-12-01T06:54:51
"2016-11-25T03:45:25Z"
thywhite
val
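Editorial note on the record ending above: the one-line fix sets active = true in doBind() so that isActive() reports true and the parent channel's channelActive() can fire after a successful bind. The sketch below follows the reported reproduction steps; it is illustrative only, needs Linux with netty-transport-native-epoll on the classpath, and the socket path is a placeholder (bind fails if the file already exists). On affected versions the parent channelActive() never fires and isActive() prints false; with the fix both should behave as expected.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerDomainSocketChannel;
import io.netty.channel.unix.DomainSocketAddress;

public final class DomainSocketActiveRepro {

    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new EpollEventLoopGroup(1);
        try {
            Channel server = new ServerBootstrap()
                    .group(group)
                    .channel(EpollServerDomainSocketChannel.class)
                    // Parent handler: channelActive() should fire once the bind succeeds.
                    .handler(new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelActive(ChannelHandlerContext ctx) {
                            System.out.println("server channel active: " + ctx.channel());
                        }
                    })
                    // Child handler for accepted connections (unused in this repro).
                    .childHandler(new ChannelInboundHandlerAdapter())
                    .bind(new DomainSocketAddress("/tmp/netty-repro.sock"))
                    .sync()
                    .channel();

            System.out.println("isActive() after bind: " + server.isActive());
            server.close().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```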
netty/netty/6074_6097
netty/netty
netty/netty/6074
netty/netty/6097
[ "timestamp(timedelta=20.0, similarity=0.8803081878243956)" ]
002c99e751ec24e80236a310bd46e0098bdb636d
4b0f1020c2d6e8b55aac468113c9a48e1e499781
[ "@maseev - I had not heard about this ... but this sounds great thanks!" ]
[ "typo ihttp -> http", "Why not copy direct from http://netty.io/wiki/writing-a-commit-message.html? (I think the sentences under each header help a lot when thinking of what to write).", "I'm currently experimenting with the format. Markdown looks nice in UIs that support it and helps visually separate different sections/titles and still looks reasonable as plain text.", "nit caps? Minimal yet complete reproducer code", "as @johnou pointed out this is currently different from http://netty.io/wiki/writing-a-commit-message.html ... @normanmaurer @trustin @nmittler do you have a preference or objection to this (I will update the wiki if we are OK with this). Markdown looks nice in UIs that support it and helps visually separate different sections/titles and still looks reasonable as plain text.", "Maybe something like this?\r\n\r\n```text\r\n**Motivation:**\r\n\r\nExplain here the context, and why you're making that change.\r\nWhat is the problem you're trying to solve.\r\n\r\n**Modifications:**\r\n\r\nDescribe the modifications you've done.\r\n\r\n**Result:**\r\n\r\nAfter your change, what will change.\r\n```", "not strong here but the intention is that this is a \"section header/title\" (not a sentence)", "I'll include the descriptions for each section, but I prefer the Markdown section header. That way extra \"hard coded\" newlines aren't necessary ... the spacing is determined by the styles.", "I would prefer to not use markdown in commit messages honestly.", "For Netty version, you can use something very similar to [this](https://github.com/orientechnologies/orientdb/blob/master/.github/ISSUE_TEMPLATE.md#orientdb-version-operating-system-or-hardware). You can also declare that Netty 3.x is EOL and no longer supported and a person needs to upgrade to 4.0.x or 4.1.x. WDYT?", "I would prefer to not update this template with every new release. I'm fine with leaving the field free form for now and updating it later if the field is being abused.\r\n\r\nWe could add a note that 3.x is EOL but it may be useful for folks to still report the issue for knowledge sharing, and in-case someone is motivated enough they can fix it.", "@normanmaurer - You don't like the way it looks in UIs, just think it is too noisy as plain text, or something else?", "Fair enough.", "I don't like to include markdown as not all tools support it and it looks ugly if they not do. For example I often use `git log` on the cmdline etc.", "To keep it consistent with the Netty wiki I would remove caps from every word even if it's a section header / title.", "@normanmaurer You're not a fan of hashtags? :)\r\n\r\nHonestly, the basic formatting used by this template wouldn't be that ugly with pure text-based tools. The only ugly bit would be if the user added a bunch of markdown within the comments. ", "I agree with @nmittler, but I don't want to get hung up on this cause I think its roughly the same either way ... @normanmaurer do you still want me to remove the markdown for section headers?", "sgtm", "I just like to keep it simple in commit messages which basically means no markdown ;) But if you guys really love it I am ok with it. ", "@normanmaurer - I'll just stay consistent with the wiki ... we can update later if we want.", "this extra vertical space hurts my eyes 😜 " ]
"2016-12-02T22:47:19Z"
[ "documentation" ]
Consider using an issue template
I've noticed that despite the fact there are the [guidelines for contributing](https://github.com/netty/netty/blob/4.1/CONTRIBUTING.md), people still keep creating issues that contain so little information that is sometimes absolutely impossible to do anything without asking additional questions like: - What Netty's version is a person using? - What steps do I have to perform in order to reproduce the problem? - What's the context of the problem (i.e expected behavior - actual behavior)? - Any additional information: network configuration, JVM version, OS type/version, heap dump, stack trace, etc. You might have heard about these features that GitHub [provides](https://github.com/blog/2111-issue-and-pull-request-templates). By creating a simple *.md file in the repository, GitHub will automatically use it as a template for any new issues/PRs (different *.md file, though). So, instead of asking people all these questions you might consider using the issue template. Chances are that you'll save a hell of a lot of time by integrating this feature into the current workflow. [Here's](https://github.com/orientechnologies/orientdb/issues) an example of a project which uses this technique and here are the examples how to create [the pull request template](https://help.github.com/articles/creating-a-pull-request-template-for-your-repository/) and [the issue template](https://help.github.com/articles/creating-an-issue-template-for-your-repository/).
[]
[ ".github/CONTRIBUTING.md", ".github/ISSUE_TEMPLATE.md", ".github/PULL_REQUEST_TEMPLATE.md" ]
[]
diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md new file mode 100644 index 00000000000..81c2ceabb93 --- /dev/null +++ b/.github/CONTRIBUTING.md @@ -0,0 +1,1 @@ +Please review the [guidelines for contributing](http://netty.io/wiki/developer-guide.html) for this repository. diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md new file mode 100644 index 00000000000..a7a6188615e --- /dev/null +++ b/.github/ISSUE_TEMPLATE.md @@ -0,0 +1,13 @@ +### Expected behavior + +### Actual behavior + +### Steps to reproduce + +### Minimal yet complete reproducer code (or URL to code) + +### Netty version + +### JVM version (e.g. `java -version`) + +### OS version (e.g. `uname -a`) diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md new file mode 100644 index 00000000000..17b37258154 --- /dev/null +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -0,0 +1,14 @@ +Motivation: + +Explain here the context, and why you're making that change. +What is the problem you're trying to solve. + +Modification: + +Describe the modifications you've done. + +Result: + +Fixes #<GitHub issue number>. + +If there is no issue then describe the changes introduced by this PR.
null
train
train
2016-12-06T05:51:05
"2016-11-30T22:16:01Z"
maseev
val
netty/netty/6132_6147
netty/netty
netty/netty/6132
netty/netty/6147
[ "timestamp(timedelta=126.0, similarity=0.8475656572580857)" ]
3f82b53bae4120eaf452269f204595b78353ec2b
82f548e9a9a8d61a4c9844b518619b333aa6a80c
[ "@jo-kin - Thanks for reporting. Stand by for a PR.", "Thanx a lot.\r\nDeeply impressed by speed and thoroughness. Will report how it works for me soon.\r\nWishing you a very Happy Christmas and New Year 2017. " ]
[ "@Scottmitch just remove the `@return` as it is the same as what is stated already ? ", "@Scottmitch `:-]` looks nice and friendly ;)", "sure" ]
"2016-12-20T00:43:40Z"
[ "defect" ]
Wrong regular expression to check MACAdresses in DefaultChannelId
Using Netty on a Java-VM running on IBM z/OS it occurs, that creating a unique channel-id leads to problems. It seems to me, that this process bases at least on the MAC-Address of networking hardware. On IBM z/OS this information is not available with the usual JAVA-methods in NetworkInterface. Netty logs always a warning to System.err running on z/OS that it fell back to a random MAC. So I decided to use the VM-Option io.netty.machineId=xxxxxxxx to inject here a valid MAC-Address retrieved from non-java resources on z/OS. **To my regrets it does not work. The problem is not platform-dependent, even with in a SUN-VM running on Windows it is not possible to use this option.** Probably the Regular Expression, used to check the input, is not correct. I would recommend to change the used RegExp ` private static final Pattern MACHINE_ID_PATTERN = Pattern.compile("^(?:[0-9a-fA-F][:-]?){6,8}$");` to an expression more like `([0-9a-fA-F]{2}[:-]){5,7}[0-9a-fA-F]{2}` ### Expected behavior DefaultChannelId allows to customize the used MACAddress via System.property. Using -Dio.netty.machineId=00-14-5E-5A-AF-77 as Java-VM-option should be accepted as valid MACAddress. ### Actual behavior Netty rejects this MACAddress as malformed: `Dez 14, 2016 2:10:49 PM io.netty.channel.DefaultChannelId <clinit> WARNING: -Dio.netty.machineId: 00-14-5E-5A-AF-77 (malformed) Dez 14, 2016 2:10:49 PM io.netty.handler.logging.LoggingHandler channelRegistered INFO: [id: 0xa9354b7b] REGISTERED Dez 14, 2016 2:10:49 PM io.netty.handler.logging.LoggingHandler bind INFO: [id: 0xa9354b7b] BIND: 0.0.0.0/0.0.0.0:8007 Dez 14, 2016 2:10:49 PM io.netty.handler.logging.LoggingHandler channelActive INFO: [id: 0xa9354b7b, L:/0:0:0:0:0:0:0:0:8007] ACTIVE` ### Steps to reproduce Start any Netty-using application (i.e. your sample io.netty.example.echo.EchoServer) with VM-option -Dio.netty.machineId=xxxxxxxx to overrule the standard MAC-Address ascertaining in MacAddressUtil.bestAvailableMac(), where xxxxxxx is any MAC-Address in standard (IEEE 802) format for printing MAC-48 addresses in human-friendly form. ### Minimal yet complete reproducer code (or URL to code) see above. Reproducible via VM-option ### Netty version 4.1.5 ### JVM version (e.g. `java -version`) java version "1.8.0_111" Java(TM) SE Runtime Environment (build 1.8.0_111-b14) Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode) ### OS version (e.g. `uname -a`) Windows 7 Professional Service Pack 1
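Editorial note: the two regular expressions discussed in the problem statement above can be checked directly against the example address from the report with plain java.util.regex and no Netty dependency. The original DefaultChannelId pattern can consume at most 16 characters, so it can never match the full 17-character human-readable MAC-48 form, while the reporter's suggested pattern accepts it.

```java
import java.util.regex.Pattern;

public final class MacPatternCheck {

    // Pattern used by DefaultChannelId at the time of the report.
    private static final Pattern ORIGINAL =
            Pattern.compile("^(?:[0-9a-fA-F][:-]?){6,8}$");

    // Pattern suggested in the report: pairs of hex digits separated by ':' or '-'
    // (matches() implicitly anchors the whole input).
    private static final Pattern SUGGESTED =
            Pattern.compile("([0-9a-fA-F]{2}[:-]){5,7}[0-9a-fA-F]{2}");

    public static void main(String[] args) {
        String mac = "00-14-5E-5A-AF-77"; // EUI-48 in the common human-readable form
        System.out.println("original pattern matches:  " + ORIGINAL.matcher(mac).matches());  // false
        System.out.println("suggested pattern matches: " + SUGGESTED.matcher(mac).matches()); // true
    }
}
```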
[ "common/src/main/java/io/netty/util/internal/MacAddressUtil.java", "transport/src/main/java/io/netty/channel/DefaultChannelId.java" ]
[ "common/src/main/java/io/netty/util/internal/MacAddressUtil.java", "transport/src/main/java/io/netty/channel/DefaultChannelId.java" ]
[ "common/src/test/java/io/netty/util/internal/MacAddressUtilTest.java" ]
diff --git a/common/src/main/java/io/netty/util/internal/MacAddressUtil.java b/common/src/main/java/io/netty/util/internal/MacAddressUtil.java index 858368d6dd9..49559b175e2 100644 --- a/common/src/main/java/io/netty/util/internal/MacAddressUtil.java +++ b/common/src/main/java/io/netty/util/internal/MacAddressUtil.java @@ -16,6 +16,10 @@ package io.netty.util.internal; +import io.netty.util.NetUtil; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + import java.net.InetAddress; import java.net.NetworkInterface; import java.net.SocketException; @@ -25,21 +29,14 @@ import java.util.Map; import java.util.Map.Entry; -import io.netty.util.NetUtil; -import io.netty.util.internal.logging.InternalLogger; -import io.netty.util.internal.logging.InternalLoggerFactory; +import static io.netty.util.internal.EmptyArrays.EMPTY_BYTES; public final class MacAddressUtil { - - /** - * Length of a valid MAC address. - */ - public static final int MAC_ADDRESS_LENGTH = 8; - - private static final byte[] NOT_FOUND = { -1 }; - private static final InternalLogger logger = InternalLoggerFactory.getInstance(MacAddressUtil.class); + private static final int EUI64_MAC_ADDRESS_LENGTH = 8; + private static final int EUI48_MAC_ADDRESS_LENGTH = 6; + /** * Obtains the best MAC address found on local network interfaces. * Generally speaking, an active network interface used on public @@ -49,7 +46,7 @@ public final class MacAddressUtil { */ public static byte[] bestAvailableMac() { // Find the best MAC address available. - byte[] bestMacAddr = NOT_FOUND; + byte[] bestMacAddr = EMPTY_BYTES; InetAddress bestInetAddr = NetUtil.LOCALHOST4; // Retrieve the list of available network interfaces. @@ -110,13 +107,13 @@ public static byte[] bestAvailableMac() { } } - if (bestMacAddr == NOT_FOUND) { + if (bestMacAddr == EMPTY_BYTES) { return null; } switch (bestMacAddr.length) { - case 6: // EUI-48 - convert to EUI-64 - byte[] newAddr = new byte[MAC_ADDRESS_LENGTH]; + case EUI48_MAC_ADDRESS_LENGTH: // EUI-48 - convert to EUI-64 + byte[] newAddr = new byte[EUI64_MAC_ADDRESS_LENGTH]; System.arraycopy(bestMacAddr, 0, newAddr, 0, 3); newAddr[3] = (byte) 0xFF; newAddr[4] = (byte) 0xFE; @@ -124,12 +121,73 @@ public static byte[] bestAvailableMac() { bestMacAddr = newAddr; break; default: // Unknown - bestMacAddr = Arrays.copyOf(bestMacAddr, MAC_ADDRESS_LENGTH); + bestMacAddr = Arrays.copyOf(bestMacAddr, EUI64_MAC_ADDRESS_LENGTH); } return bestMacAddr; } + /** + * Returns the result of {@link #bestAvailableMac()} if non-{@code null} otherwise returns a random EUI-64 MAC + * address. + */ + public static byte[] defaultMachineId() { + byte[] bestMacAddr = MacAddressUtil.bestAvailableMac(); + if (bestMacAddr == null) { + bestMacAddr = new byte[EUI64_MAC_ADDRESS_LENGTH]; + ThreadLocalRandom.current().nextBytes(bestMacAddr); + logger.warn( + "Failed to find a usable hardware address from the network interfaces; using random bytes: {}", + MacAddressUtil.formatAddress(bestMacAddr)); + } + return bestMacAddr; + } + + /** + * Parse a EUI-48, MAC-48, or EUI-64 MAC address from a {@link String} and return it as a {@code byte[]}. + * @param value The string representation of the MAC address. + * @return The byte representation of the MAC address. 
+ */ + public static byte[] parseMAC(String value) { + final byte[] machineId; + final char separator; + switch (value.length()) { + case 17: + separator = value.charAt(2); + validateMacSeparator(separator); + machineId = new byte[EUI48_MAC_ADDRESS_LENGTH]; + break; + case 23: + separator = value.charAt(2); + validateMacSeparator(separator); + machineId = new byte[EUI64_MAC_ADDRESS_LENGTH]; + break; + default: + throw new IllegalArgumentException("value is not supported [MAC-48, EUI-48, EUI-64]"); + } + + final int end = machineId.length - 1; + int j = 0; + for (int i = 0; i < end; ++i, j += 3) { + final int sIndex = j + 2; + machineId[i] = (byte) Integer.parseInt(value.substring(j, sIndex), 16); + if (value.charAt(sIndex) != separator) { + throw new IllegalArgumentException("expected separator '" + separator + " but got '" + + value.charAt(sIndex) + "' at index: " + sIndex); + } + } + + machineId[end] = (byte) Integer.parseInt(value.substring(j, value.length()), 16); + + return machineId; + } + + private static void validateMacSeparator(char separator) { + if (separator != ':' && separator != '-') { + throw new IllegalArgumentException("unsupported seperator: " + separator + " (expected: [:-])"); + } + } + /** * @param addr byte array of a MAC address. * @return hex formatted MAC address. @@ -146,12 +204,7 @@ public static String formatAddress(byte[] addr) { * @return positive - current is better, 0 - cannot tell from MAC addr, negative - candidate is better. */ private static int compareAddresses(byte[] current, byte[] candidate) { - if (candidate == null) { - return 1; - } - - // Must be EUI-48 or longer. - if (candidate.length < 6) { + if (candidate == null || candidate.length < EUI48_MAC_ADDRESS_LENGTH) { return 1; } @@ -174,7 +227,7 @@ private static int compareAddresses(byte[] current, byte[] candidate) { } // Prefer globally unique address. - if ((current[0] & 2) == 0) { + if (current.length == 0 || (current[0] & 2) == 0) { if ((candidate[0] & 2) == 0) { // Both current and candidate are globally unique addresses. return 0; @@ -182,15 +235,12 @@ private static int compareAddresses(byte[] current, byte[] candidate) { // Only current is globally unique. return 1; } - } else { - if ((candidate[0] & 2) == 0) { - // Only candidate is globally unique. - return -1; - } else { - // Both current and candidate are non-unique. - return 0; - } + } else if ((candidate[0] & 2) == 0) { + // Only candidate is globally unique. + return -1; } + // Both current and candidate are non-unique. + return 0; } /** diff --git a/transport/src/main/java/io/netty/channel/DefaultChannelId.java b/transport/src/main/java/io/netty/channel/DefaultChannelId.java index 0bb2a8afa41..734094f91c2 100644 --- a/transport/src/main/java/io/netty/channel/DefaultChannelId.java +++ b/transport/src/main/java/io/netty/channel/DefaultChannelId.java @@ -28,7 +28,9 @@ import java.lang.reflect.Method; import java.util.Arrays; import java.util.concurrent.atomic.AtomicInteger; -import java.util.regex.Pattern; + +import static io.netty.util.internal.MacAddressUtil.defaultMachineId; +import static io.netty.util.internal.MacAddressUtil.parseMAC; /** * The default {@link ChannelId} implementation. 
@@ -38,9 +40,6 @@ public final class DefaultChannelId implements ChannelId { private static final long serialVersionUID = 3884076183504074063L; private static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultChannelId.class); - - private static final Pattern MACHINE_ID_PATTERN = Pattern.compile("^(?:[0-9a-fA-F][:-]?){6,8}$"); - private static final int MACHINE_ID_LEN = MacAddressUtil.MAC_ADDRESS_LENGTH; private static final byte[] MACHINE_ID; private static final int PROCESS_ID_LEN = 4; private static final int PROCESS_ID; @@ -54,9 +53,7 @@ public final class DefaultChannelId implements ChannelId { * Returns a new {@link DefaultChannelId} instance. */ public static DefaultChannelId newInstance() { - DefaultChannelId id = new DefaultChannelId(); - id.init(); - return id; + return new DefaultChannelId(); } static { @@ -89,11 +86,13 @@ public static DefaultChannelId newInstance() { byte[] machineId = null; String customMachineId = SystemPropertyUtil.get("io.netty.machineId"); if (customMachineId != null) { - if (MACHINE_ID_PATTERN.matcher(customMachineId).matches()) { - machineId = parseMachineId(customMachineId); + try { + machineId = parseMAC(customMachineId); + } catch (Exception e) { + logger.warn("-Dio.netty.machineId: {} (malformed)", customMachineId, e); + } + if (machineId != null) { logger.debug("-Dio.netty.machineId: {} (user-set)", customMachineId); - } else { - logger.warn("-Dio.netty.machineId: {} (malformed)", customMachineId); } } @@ -107,31 +106,6 @@ public static DefaultChannelId newInstance() { MACHINE_ID = machineId; } - @SuppressWarnings("DynamicRegexReplaceableByCompiledPattern") - private static byte[] parseMachineId(String value) { - // Strip separators. - value = value.replaceAll("[:-]", ""); - - byte[] machineId = new byte[MACHINE_ID_LEN]; - for (int i = 0; i < value.length(); i += 2) { - machineId[i] = (byte) Integer.parseInt(value.substring(i, i + 2), 16); - } - - return machineId; - } - - private static byte[] defaultMachineId() { - byte[] bestMacAddr = MacAddressUtil.bestAvailableMac(); - if (bestMacAddr == null) { - bestMacAddr = new byte[MacAddressUtil.MAC_ADDRESS_LENGTH]; - ThreadLocalRandom.current().nextBytes(bestMacAddr); - logger.warn( - "Failed to find a usable hardware address from the network interfaces; using random bytes: {}", - MacAddressUtil.formatAddress(bestMacAddr)); - } - return bestMacAddr; - } - private static int defaultProcessId() { final ClassLoader loader = PlatformDependent.getClassLoader(DefaultChannelId.class); String value; @@ -178,20 +152,19 @@ private static int defaultProcessId() { return pid; } - private final byte[] data = new byte[MACHINE_ID_LEN + PROCESS_ID_LEN + SEQUENCE_LEN + TIMESTAMP_LEN + RANDOM_LEN]; + private final byte[] data; private int hashCode; private transient String shortValue; private transient String longValue; - private DefaultChannelId() { } - - private void init() { + private DefaultChannelId() { + data = new byte[MACHINE_ID.length + PROCESS_ID_LEN + SEQUENCE_LEN + TIMESTAMP_LEN + RANDOM_LEN]; int i = 0; // machineId - System.arraycopy(MACHINE_ID, 0, data, i, MACHINE_ID_LEN); - i += MACHINE_ID_LEN; + System.arraycopy(MACHINE_ID, 0, data, i, MACHINE_ID.length); + i += MACHINE_ID.length; // processId i = writeInt(i, PROCESS_ID); @@ -234,8 +207,7 @@ private int writeLong(int i, long value) { public String asShortText() { String shortValue = this.shortValue; if (shortValue == null) { - this.shortValue = shortValue = ByteBufUtil.hexDump( - data, MACHINE_ID_LEN + PROCESS_ID_LEN + SEQUENCE_LEN + 
TIMESTAMP_LEN, RANDOM_LEN); + this.shortValue = shortValue = ByteBufUtil.hexDump(data, data.length - RANDOM_LEN, RANDOM_LEN); } return shortValue; } @@ -252,7 +224,7 @@ public String asLongText() { private String newLongValue() { StringBuilder buf = new StringBuilder(2 * data.length + 5); int i = 0; - i = appendHexDumpField(buf, i, MACHINE_ID_LEN); + i = appendHexDumpField(buf, i, MACHINE_ID.length); i = appendHexDumpField(buf, i, PROCESS_ID_LEN); i = appendHexDumpField(buf, i, SEQUENCE_LEN); i = appendHexDumpField(buf, i, TIMESTAMP_LEN);
diff --git a/common/src/test/java/io/netty/util/internal/MacAddressUtilTest.java b/common/src/test/java/io/netty/util/internal/MacAddressUtilTest.java new file mode 100644 index 00000000000..84d4e1229c7 --- /dev/null +++ b/common/src/test/java/io/netty/util/internal/MacAddressUtilTest.java @@ -0,0 +1,99 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.util.internal; + +import org.junit.Test; + +import static io.netty.util.internal.MacAddressUtil.parseMAC; +import static org.junit.Assert.assertArrayEquals; + +public class MacAddressUtilTest { + @Test + public void testParseMacEUI48() { + assertArrayEquals(new byte[]{0, (byte) 0xaa, 0x11, (byte) 0xbb, 0x22, (byte) 0xcc}, + parseMAC("00-AA-11-BB-22-CC")); + assertArrayEquals(new byte[]{0, (byte) 0xaa, 0x11, (byte) 0xbb, 0x22, (byte) 0xcc}, + parseMAC("00:AA:11:BB:22:CC")); + } + + @Test + public void testParseMacMAC48ToEUI64() { + // MAC-48 into an EUI-64 + assertArrayEquals(new byte[]{0, (byte) 0xaa, 0x11, (byte) 0xff, (byte) 0xff, (byte) 0xbb, 0x22, (byte) 0xcc}, + parseMAC("00-AA-11-FF-FF-BB-22-CC")); + assertArrayEquals(new byte[]{0, (byte) 0xaa, 0x11, (byte) 0xff, (byte) 0xff, (byte) 0xbb, 0x22, (byte) 0xcc}, + parseMAC("00:AA:11:FF:FF:BB:22:CC")); + } + + @Test + public void testParseMacEUI48ToEUI64() { + // EUI-48 into an EUI-64 + assertArrayEquals(new byte[]{0, (byte) 0xaa, 0x11, (byte) 0xff, (byte) 0xfe, (byte) 0xbb, 0x22, (byte) 0xcc}, + parseMAC("00-AA-11-FF-FE-BB-22-CC")); + assertArrayEquals(new byte[]{0, (byte) 0xaa, 0x11, (byte) 0xff, (byte) 0xfe, (byte) 0xbb, 0x22, (byte) 0xcc}, + parseMAC("00:AA:11:FF:FE:BB:22:CC")); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalid7HexGroupsA() { + parseMAC("00-AA-11-BB-22-CC-FF"); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalid7HexGroupsB() { + parseMAC("00:AA:11:BB:22:CC:FF"); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalidEUI48MixedSeparatorA() { + parseMAC("00-AA:11-BB-22-CC"); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalidEUI48MixedSeparatorB() { + parseMAC("00:AA-11:BB:22:CC"); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalidEUI64MixedSeparatorA() { + parseMAC("00-AA-11-FF-FE-BB-22:CC"); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalidEUI64MixedSeparatorB() { + parseMAC("00:AA:11:FF:FE:BB:22-CC"); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalidEUI48TrailingSeparatorA() { + parseMAC("00-AA-11-BB-22-CC-"); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalidEUI48TrailingSeparatorB() { + parseMAC("00:AA:11:BB:22:CC:"); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalidEUI64TrailingSeparatorA() { + 
parseMAC("00-AA-11-FF-FE-BB-22-CC-"); + } + + @Test(expected = IllegalArgumentException.class) + public void testParseMacInvalidEUI64TrailingSeparatorB() { + parseMAC("00:AA:11:FF:FE:BB:22:CC:"); + } +}
train
train
2016-12-20T21:59:00
"2016-12-14T13:40:40Z"
jo-kin
val
netty/netty/6150_6160
netty/netty
netty/netty/6150
netty/netty/6160
[ "timestamp(timedelta=59.0, similarity=0.9681198608469817)" ]
56ddc47f23611f6e1c8009caf0b67f535a41802c
3b9887c549d0ecf40176e2ea412946b8957630f1
[ "Whether this is a bug or missing feature is open to interpretation ... however if we do want to provide this functionality I don't think we should change the existing default behavior for `IdleStateHandler` for backwards compatibility reasons.\r\n\r\n`ChannelOutboundBuffer` may be able to provide the tools we need...here are some random thoughts:\r\n- The ChannelOutboundBuffer provides a counter of \"pending bytes\" but it is currently only updated at the \"message\" (aka `ByteBuf`, `FileRegion`, etc...) granularity). Even if this was updated at a finer granularity it will not work to answer the \"has something changed\" between successive calls (ABA situation may occur).\r\n- An alternative approach of \"save a reference to the head of the queue, and current size then check these changed after the timeout\" may also be problematic because we pool objects (again ABA). However if we couple this second approach with the completion of the future we may be able to overcome this ABA type problem. The completion of the future would mean we are now looking at a \"different\" object (even if it happens to be the same object because of pooling).\r\n\r\nIt would be good to explore if we can pull this off with existing tools rather than add explicit support for it in the core (unless we think this will drive other useful features).", "@rkapsi @Scottmitch why can't this be fixed by using a `ChannelProgressivePromise` ?", "@normanmaurer it'd work at the expense of garbage. `IdleStateHandler` (or some other class) would have to create and swap out the promise in the `write(...)` method and a listener that will notify the original promise that was passed into `write(...)`. A technique I'd be fine using in my project where I know the nature of the ChannelPipeline. Adding something like that to Netty could be risky. Ideally I'd like to avoid the garbage though.\r\n\r\nI think for pure idless detection there is a difference in measuring \"change\" and \"progress\". And then there's also overall progress and progress in the context of a `write(...)`. The `ChannelProgressivePromise` covers the latter but it comes at an expense. I'd be happy with an overall progress indicator (a `long` value that says: \"This Channel has transferred this many bytes in its lifetime\"). Lastly something that indicates change would be fine too (for this purpose) but I don't think it'd be a good API addition.\r\n\r\n\r\n\r\n", "@rkapsi - does the approach I mentioned in https://github.com/netty/netty/issues/6150#issuecomment-268414509 work for you?", "@Scottmitch - I think No. 1 wouldn't work for the reason you described. No. 2 should work. We'd get the `ChannelOutboundBuffer` via `Channel#unsafe()`?", "> We'd get the ChannelOutboundBuffer via Channel#unsafe()?\r\n\r\nYip. Yah the second bullet is the approach I was referring to ... the first bullet was just to clarify that only using the existing \"pending bytes\" counter won't be sufficient.", "@Scottmitch want me to look into this? I think it can be added to `IdleStateHandler` with a configuration option (i.e. default will retain the current behavior).", "@rkapsi - Sounds good to me. Thanks!" ]
[ "maybe not when this method is currently called but it is possible the `ChannelOutboundBuffer` may be `null` (e.g. channel is closed) .. consider protecting against this or adding a comment as to why it isn't necessary.", "protect against `buf == null`.", "nit: put on previous line?", "consider just setting the variable unconditionally. conditionals can be expensive and this variable is not volatile.", "can we make this method always return NS granularity and do the conversion in the unit tests?", "use `channel.finishAndReleaseAll();` instead for `EmbeddedChannel`.", "use `channel.finishAndReleaseAll();` instead for `EmbeddedChannel`.", "use `channel.finishAndReleaseAll();` instead for `EmbeddedChannel`.", "we should take care to release any object returned by `channel.consume()`", "hah mathematical unit test proof done I guess", "Can be done but is there a downside with the current impl? `TimeUnit#convert(...)` will be a ~noop if source and destination units are the same (which is the case inside the IdleStateHandler class).", "My preference is to keep the production code minimal, and not have to think/worry about implementation details and/or what optimizations may or may not be made if possible. In this case it looks like the extra complexity can be isolated in the unit tests and thus simplify the production code, right?", "Certainly and fair enough. I'm personally of the opinion that time without units is among some of the most useless pieces of information one can have in computer programs and I never write code without stating it explicitly (encoding the unit in the field/method name being my least favorite but still better than noting stated at all).", "Agreed that time (or values in general) without units can be error prone. However not accepting `TimeUnit` as an argument doesn't mean you cannot indicate the units in other ways (as you indicated). For example in the function name (e.g. `timeNanos()`, `nanoTime()`) and javadocs. This function is also \"internal\" to this class and it not intended to be part of the public API of this class so the risk of confusion due to units is lower in this case. This is effectively equivalent to `System.nanoTime()`.", "Done." ]
"2016-12-28T16:37:25Z"
[]
Detecting actual Channels idleness vs. slowness
I don't know if you'd consider it a missing feature or a bug but there appears to be no effective way of measuring write idleness. The natural choice is to use Netty's own `IdleStateHandler` but it may not work as expected for `write()`. For the sake of simplicity I'm providing a HTTP example but we encountered it with H2. Basically... The user's (our) intention is to close an idle connection. The `IdleStateHandler` uses the write's `ChannelFuture` to determine if a client is idle but it doesn't take into consideration that the client might be just slow and we've just thrown a large `ByteBuf` at it. I think we ran into this with H2 due to things like `CoalescingBufferQueue` which try to aggregate multiple writes into a single one. The repro is very simple. You just need to throttle your client a little bit. I'm using Chrome's [built-in](http://imgur.com/a/WfcZQ) throtting feature for it. On the server side (the code below) we want to close the connection if the client appears to be idle for 30 seconds. The problem is that `IdleStateHandler` will make a pure binary decision on the completeness of the `ChannelFuture` vs. taking actual progression into consideration and it appears there is no API for it. I haven't looked at all `Channel` implementations but `NioSocketChannel` and `EpollSocketChannel` (via `AbstractEpollStreamChannel`) do have knowledge about the number of bytes written in each cycle. An easy fix could be that channel's sum up these values in a `volatile long` field and expose it to the user. Or it could be a simple `++` for each write cycle where more than 1 byte was written to the underlying socket. The user could then use it to assess whether or not any or some progress was made between two calls. ```java public class IdleChannelTest { private static final int PORT = 8080; public static void main(String[] args) throws Exception { ChannelHandler handler = new ChannelInitializer<Channel>() { @Override protected void initChannel(Channel ch) throws Exception { ChannelPipeline pipeline = ch.pipeline(); pipeline.addLast(new HttpServerCodec()); pipeline.addLast(new IdleStateHandler(0L, 0L, 30L, TimeUnit.SECONDS)); pipeline.addLast(new HttpRequestAndIdleHandler()); } }; EventLoopGroup group = new NioEventLoopGroup(); try { ServerBootstrap bootstrap = new ServerBootstrap() .channel(NioServerSocketChannel.class) .group(group) .childHandler(handler); Channel channel = bootstrap.bind(PORT) .syncUninterruptibly() .channel(); try { System.out.println(new Date() + ": http://localhost:" + PORT + "/"); Thread.sleep(Long.MAX_VALUE); } finally { channel.close(); } } finally { group.shutdownGracefully(); } } private static class HttpRequestAndIdleHandler extends ChannelInboundHandlerAdapter { @Override public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception { try { if (!(msg instanceof HttpRequest)) { return; } HttpRequest request = (HttpRequest)msg; ByteBuf content = Unpooled.wrappedBuffer(new byte[64*1024*1024]); // 64MB FullHttpResponse response = new DefaultFullHttpResponse( HttpVersion.HTTP_1_1, HttpResponseStatus.OK, content); HttpHeaders headers = response.headers(); headers.set(HttpHeaderNames.CONTENT_LENGTH, content.readableBytes()); headers.set(HttpHeaderNames.CONTENT_TYPE, "application/octet-stream"); System.out.println(new Date() + ": Received request, writing response: " + ctx.channel() + ", " + request.uri()); ChannelFuture future = ctx.writeAndFlush(response); future.addListener(new ChannelFutureListener() { @Override public void 
operationComplete(ChannelFuture future) throws Exception { System.out.println(new Date() + ": Write complete: " + ctx.channel()); } }); } finally { ReferenceCountUtil.release(msg); } } @Override public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { if (evt instanceof IdleStateEvent) { IdleStateEvent event = (IdleStateEvent)evt; System.out.println(new Date() + ": Channel is idle, closing it: " + ctx.channel() + ", " + event.state()); ctx.close(); } ctx.fireUserEventTriggered(evt); } } } ```
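The approach the maintainers converge on (snapshotting the `ChannelOutboundBuffer` between idle checks) can also be prototyped outside of `IdleStateHandler`. Below is a hedged sketch assuming Netty 4.1's public `Channel.unsafe()`/`ChannelOutboundBuffer` API; the handler name and heuristics are illustrative only, and the identity/pending-byte comparison carries the ABA caveats discussed above. It would be installed right after the `IdleStateHandler` so it can veto write-idle events while bytes are still draining.

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOutboundBuffer;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;

/**
 * Swallows write/all idle events while the outbound buffer still shows progress,
 * so a slow peer is not mistaken for an idle one.
 */
public class WriteProgressAwareIdleHandler extends ChannelInboundHandlerAdapter {

    // Only touched from the channel's event loop, so plain fields are sufficient.
    private int lastMessageHashCode;
    private long lastPendingWriteBytes = -1;

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent
                && ((IdleStateEvent) evt).state() != IdleState.READER_IDLE
                && outputChangedSinceLastCheck(ctx.channel())) {
            // Bytes (or whole messages) were drained since the previous idle event:
            // the peer is slow, not idle, so do not propagate the event.
            // Note: the very first event only establishes the baseline snapshot.
            return;
        }
        ctx.fireUserEventTriggered(evt);
    }

    private boolean outputChangedSinceLastCheck(Channel channel) {
        ChannelOutboundBuffer buf = channel.unsafe().outboundBuffer();
        if (buf == null) {
            return false; // Channel already closed; nothing left to observe.
        }
        int messageHashCode = System.identityHashCode(buf.current());
        long pendingWriteBytes = buf.totalPendingWriteBytes();
        boolean changed = messageHashCode != lastMessageHashCode
                || pendingWriteBytes != lastPendingWriteBytes;
        lastMessageHashCode = messageHashCode;
        lastPendingWriteBytes = pendingWriteBytes;
        return changed;
    }
}
```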
[ "handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java" ]
[ "handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java" ]
[ "handler/src/test/java/io/netty/handler/timeout/IdleStateHandlerTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java b/handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java index 3d2acd11650..6d6229c0e64 100644 --- a/handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java +++ b/handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java @@ -17,11 +17,13 @@ import io.netty.bootstrap.ServerBootstrap; import io.netty.channel.Channel; +import io.netty.channel.Channel.Unsafe; import io.netty.channel.ChannelDuplexHandler; import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelFutureListener; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelInitializer; +import io.netty.channel.ChannelOutboundBuffer; import io.netty.channel.ChannelPromise; import io.netty.util.concurrent.EventExecutor; @@ -101,11 +103,12 @@ public class IdleStateHandler extends ChannelDuplexHandler { private final ChannelFutureListener writeListener = new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { - lastWriteTime = System.nanoTime(); + lastWriteTime = ticksInNanos(); firstWriterIdleEvent = firstAllIdleEvent = true; } }; + private final boolean observeOutput; private final long readerIdleTimeNanos; private final long writerIdleTimeNanos; private final long allIdleTimeNanos; @@ -124,6 +127,10 @@ public void operationComplete(ChannelFuture future) throws Exception { private byte state; // 0 - none, 1 - initialized, 2 - destroyed private boolean reading; + private long lastChangeCheckTimeStamp; + private int lastMessageHashCode; + private long lastPendingWriteBytes; + /** * Creates a new instance firing {@link IdleStateEvent}s. * @@ -149,9 +156,21 @@ public IdleStateHandler( TimeUnit.SECONDS); } + /** + * @see #IdleStateHandler(boolean, long, long, long, TimeUnit) + */ + public IdleStateHandler( + long readerIdleTime, long writerIdleTime, long allIdleTime, + TimeUnit unit) { + this(false, readerIdleTime, writerIdleTime, allIdleTime, unit); + } + /** * Creates a new instance firing {@link IdleStateEvent}s. * + * @param observeOutput + * whether or not the consumption of {@code bytes} should be taken into + * consideration when assessing write idleness. The default is {@code false}. 
* @param readerIdleTime * an {@link IdleStateEvent} whose state is {@link IdleState#READER_IDLE} * will be triggered when no read was performed for the specified @@ -168,13 +187,15 @@ public IdleStateHandler( * the {@link TimeUnit} of {@code readerIdleTime}, * {@code writeIdleTime}, and {@code allIdleTime} */ - public IdleStateHandler( + public IdleStateHandler(boolean observeOutput, long readerIdleTime, long writerIdleTime, long allIdleTime, TimeUnit unit) { if (unit == null) { throw new NullPointerException("unit"); } + this.observeOutput = observeOutput; + if (readerIdleTime <= 0) { readerIdleTimeNanos = 0; } else { @@ -269,7 +290,7 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception @Override public void channelReadComplete(ChannelHandlerContext ctx) throws Exception { if ((readerIdleTimeNanos > 0 || allIdleTimeNanos > 0) && reading) { - lastReadTime = System.nanoTime(); + lastReadTime = ticksInNanos(); reading = false; } ctx.fireChannelReadComplete(); @@ -297,27 +318,37 @@ private void initialize(ChannelHandlerContext ctx) { } state = 1; + initOutputChanged(ctx); - EventExecutor loop = ctx.executor(); - - lastReadTime = lastWriteTime = System.nanoTime(); + lastReadTime = lastWriteTime = ticksInNanos(); if (readerIdleTimeNanos > 0) { - readerIdleTimeout = loop.schedule( - new ReaderIdleTimeoutTask(ctx), + readerIdleTimeout = schedule(ctx, new ReaderIdleTimeoutTask(ctx), readerIdleTimeNanos, TimeUnit.NANOSECONDS); } if (writerIdleTimeNanos > 0) { - writerIdleTimeout = loop.schedule( - new WriterIdleTimeoutTask(ctx), + writerIdleTimeout = schedule(ctx, new WriterIdleTimeoutTask(ctx), writerIdleTimeNanos, TimeUnit.NANOSECONDS); } if (allIdleTimeNanos > 0) { - allIdleTimeout = loop.schedule( - new AllIdleTimeoutTask(ctx), + allIdleTimeout = schedule(ctx, new AllIdleTimeoutTask(ctx), allIdleTimeNanos, TimeUnit.NANOSECONDS); } } + /** + * This method is visible for testing! + */ + long ticksInNanos() { + return System.nanoTime(); + } + + /** + * This method is visible for testing! + */ + ScheduledFuture<?> schedule(ChannelHandlerContext ctx, Runnable task, long delay, TimeUnit unit) { + return ctx.executor().schedule(task, delay, unit); + } + private void destroy() { state = 2; @@ -355,15 +386,77 @@ protected IdleStateEvent newIdleStateEvent(IdleState state, boolean first) { case WRITER_IDLE: return first ? IdleStateEvent.FIRST_WRITER_IDLE_STATE_EVENT : IdleStateEvent.WRITER_IDLE_STATE_EVENT; default: - throw new Error(); + throw new IllegalArgumentException("Unhandled: state=" + state + ", first=" + first); } } - private final class ReaderIdleTimeoutTask implements Runnable { + /** + * @see #hasOutputChanged(ChannelHandlerContext, boolean) + */ + private void initOutputChanged(ChannelHandlerContext ctx) { + if (observeOutput) { + Channel channel = ctx.channel(); + Unsafe unsafe = channel.unsafe(); + ChannelOutboundBuffer buf = unsafe.outboundBuffer(); + + if (buf != null) { + lastMessageHashCode = System.identityHashCode(buf.current()); + lastPendingWriteBytes = buf.totalPendingWriteBytes(); + } + } + } + + /** + * Returns {@code true} if and only if the {@link IdleStateHandler} was constructed + * with {@link #observeOutput} enabled and there has been an observed change in the + * {@link ChannelOutboundBuffer} between two consecutive calls of this method. 
+ * + * https://github.com/netty/netty/issues/6150 + */ + private boolean hasOutputChanged(ChannelHandlerContext ctx, boolean first) { + if (observeOutput) { + + // We can take this shortcut if the ChannelPromises that got passed into write() + // appear to complete. It indicates "change" on message level and we simply assume + // that there's change happening on byte level. If the user doesn't observe channel + // writability events then they'll eventually OOME and there's clearly a different + // problem and idleness is least of their concerns. + if (lastChangeCheckTimeStamp != lastWriteTime) { + lastChangeCheckTimeStamp = lastWriteTime; + + // But this applies only if it's the non-first call. + if (!first) { + return true; + } + } + + Channel channel = ctx.channel(); + Unsafe unsafe = channel.unsafe(); + ChannelOutboundBuffer buf = unsafe.outboundBuffer(); + + if (buf != null) { + int messageHashCode = System.identityHashCode(buf.current()); + long pendingWriteBytes = buf.totalPendingWriteBytes(); + + if (messageHashCode != lastMessageHashCode || pendingWriteBytes != lastPendingWriteBytes) { + lastMessageHashCode = messageHashCode; + lastPendingWriteBytes = pendingWriteBytes; + + if (!first) { + return true; + } + } + } + } + + return false; + } + + private abstract static class AbstractIdleTask implements Runnable { private final ChannelHandlerContext ctx; - ReaderIdleTimeoutTask(ChannelHandlerContext ctx) { + AbstractIdleTask(ChannelHandlerContext ctx) { this.ctx = ctx; } @@ -373,98 +466,107 @@ public void run() { return; } + run(ctx); + } + + protected abstract void run(ChannelHandlerContext ctx); + } + + private final class ReaderIdleTimeoutTask extends AbstractIdleTask { + + ReaderIdleTimeoutTask(ChannelHandlerContext ctx) { + super(ctx); + } + + @Override + protected void run(ChannelHandlerContext ctx) { long nextDelay = readerIdleTimeNanos; if (!reading) { - nextDelay -= System.nanoTime() - lastReadTime; + nextDelay -= ticksInNanos() - lastReadTime; } if (nextDelay <= 0) { // Reader is idle - set a new timeout and notify the callback. - readerIdleTimeout = - ctx.executor().schedule(this, readerIdleTimeNanos, TimeUnit.NANOSECONDS); - try { - IdleStateEvent event = newIdleStateEvent(IdleState.READER_IDLE, firstReaderIdleEvent); - if (firstReaderIdleEvent) { - firstReaderIdleEvent = false; - } + readerIdleTimeout = schedule(ctx, this, readerIdleTimeNanos, TimeUnit.NANOSECONDS); + boolean first = firstReaderIdleEvent; + firstReaderIdleEvent = false; + + try { + IdleStateEvent event = newIdleStateEvent(IdleState.READER_IDLE, first); channelIdle(ctx, event); } catch (Throwable t) { ctx.fireExceptionCaught(t); } } else { // Read occurred before the timeout - set a new timeout with shorter delay. 
- readerIdleTimeout = ctx.executor().schedule(this, nextDelay, TimeUnit.NANOSECONDS); + readerIdleTimeout = schedule(ctx, this, nextDelay, TimeUnit.NANOSECONDS); } } } - private final class WriterIdleTimeoutTask implements Runnable { - - private final ChannelHandlerContext ctx; + private final class WriterIdleTimeoutTask extends AbstractIdleTask { WriterIdleTimeoutTask(ChannelHandlerContext ctx) { - this.ctx = ctx; + super(ctx); } @Override - public void run() { - if (!ctx.channel().isOpen()) { - return; - } + protected void run(ChannelHandlerContext ctx) { long lastWriteTime = IdleStateHandler.this.lastWriteTime; - long nextDelay = writerIdleTimeNanos - (System.nanoTime() - lastWriteTime); + long nextDelay = writerIdleTimeNanos - (ticksInNanos() - lastWriteTime); if (nextDelay <= 0) { // Writer is idle - set a new timeout and notify the callback. - writerIdleTimeout = ctx.executor().schedule( - this, writerIdleTimeNanos, TimeUnit.NANOSECONDS); + writerIdleTimeout = schedule(ctx, this, writerIdleTimeNanos, TimeUnit.NANOSECONDS); + + boolean first = firstWriterIdleEvent; + firstWriterIdleEvent = false; + try { - IdleStateEvent event = newIdleStateEvent(IdleState.WRITER_IDLE, firstWriterIdleEvent); - if (firstWriterIdleEvent) { - firstWriterIdleEvent = false; + if (hasOutputChanged(ctx, first)) { + return; } + IdleStateEvent event = newIdleStateEvent(IdleState.WRITER_IDLE, first); channelIdle(ctx, event); } catch (Throwable t) { ctx.fireExceptionCaught(t); } } else { // Write occurred before the timeout - set a new timeout with shorter delay. - writerIdleTimeout = ctx.executor().schedule(this, nextDelay, TimeUnit.NANOSECONDS); + writerIdleTimeout = schedule(ctx, this, nextDelay, TimeUnit.NANOSECONDS); } } } - private final class AllIdleTimeoutTask implements Runnable { - - private final ChannelHandlerContext ctx; + private final class AllIdleTimeoutTask extends AbstractIdleTask { AllIdleTimeoutTask(ChannelHandlerContext ctx) { - this.ctx = ctx; + super(ctx); } @Override - public void run() { - if (!ctx.channel().isOpen()) { - return; - } + protected void run(ChannelHandlerContext ctx) { long nextDelay = allIdleTimeNanos; if (!reading) { - nextDelay -= System.nanoTime() - Math.max(lastReadTime, lastWriteTime); + nextDelay -= ticksInNanos() - Math.max(lastReadTime, lastWriteTime); } if (nextDelay <= 0) { // Both reader and writer are idle - set a new timeout and // notify the callback. - allIdleTimeout = ctx.executor().schedule( - this, allIdleTimeNanos, TimeUnit.NANOSECONDS); + allIdleTimeout = schedule(ctx, this, allIdleTimeNanos, TimeUnit.NANOSECONDS); + + boolean first = firstAllIdleEvent; + firstAllIdleEvent = false; + try { - IdleStateEvent event = newIdleStateEvent(IdleState.ALL_IDLE, firstAllIdleEvent); - if (firstAllIdleEvent) { - firstAllIdleEvent = false; + if (hasOutputChanged(ctx, first)) { + return; } + IdleStateEvent event = newIdleStateEvent(IdleState.ALL_IDLE, first); channelIdle(ctx, event); } catch (Throwable t) { ctx.fireExceptionCaught(t); @@ -472,7 +574,7 @@ public void run() { } else { // Either read or write occurred before the timeout - set a new // timeout with shorter delay. - allIdleTimeout = ctx.executor().schedule(this, nextDelay, TimeUnit.NANOSECONDS); + allIdleTimeout = schedule(ctx, this, nextDelay, TimeUnit.NANOSECONDS); } } }
diff --git a/handler/src/test/java/io/netty/handler/timeout/IdleStateHandlerTest.java b/handler/src/test/java/io/netty/handler/timeout/IdleStateHandlerTest.java new file mode 100644 index 00000000000..f7e1c42153e --- /dev/null +++ b/handler/src/test/java/io/netty/handler/timeout/IdleStateHandlerTest.java @@ -0,0 +1,395 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.handler.timeout; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotEquals; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertSame; +import static org.junit.Assert.assertTrue; + +import io.netty.buffer.Unpooled; +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.ChannelOutboundBuffer; +import io.netty.channel.embedded.EmbeddedChannel; +import io.netty.util.ReferenceCountUtil; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.ScheduledFuture; +import java.util.concurrent.TimeUnit; + +import org.junit.Test; + +public class IdleStateHandlerTest { + + @Test + public void testReaderIdle() throws Exception { + TestableIdleStateHandler idleStateHandler = new TestableIdleStateHandler( + false, 1L, 0L, 0L, TimeUnit.SECONDS); + + // We start with one FIRST_READER_IDLE_STATE_EVENT, followed by an infinite number of READER_IDLE_STATE_EVENTs + anyIdle(idleStateHandler, IdleStateEvent.FIRST_READER_IDLE_STATE_EVENT, + IdleStateEvent.READER_IDLE_STATE_EVENT, IdleStateEvent.READER_IDLE_STATE_EVENT); + } + + @Test + public void testWriterIdle() throws Exception { + TestableIdleStateHandler idleStateHandler = new TestableIdleStateHandler( + false, 0L, 1L, 0L, TimeUnit.SECONDS); + + anyIdle(idleStateHandler, IdleStateEvent.FIRST_WRITER_IDLE_STATE_EVENT, + IdleStateEvent.WRITER_IDLE_STATE_EVENT, IdleStateEvent.WRITER_IDLE_STATE_EVENT); + } + + @Test + public void testAllIdle() throws Exception { + TestableIdleStateHandler idleStateHandler = new TestableIdleStateHandler( + false, 0L, 0L, 1L, TimeUnit.SECONDS); + + anyIdle(idleStateHandler, IdleStateEvent.FIRST_ALL_IDLE_STATE_EVENT, + IdleStateEvent.ALL_IDLE_STATE_EVENT, IdleStateEvent.ALL_IDLE_STATE_EVENT); + } + + private void anyIdle(TestableIdleStateHandler idleStateHandler, Object... expected) throws Exception { + + assertTrue("The number of expected events must be >= 1", expected.length >= 1); + + final List<Object> events = new ArrayList<Object>(); + ChannelInboundHandlerAdapter handler = new ChannelInboundHandlerAdapter() { + @Override + public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { + events.add(evt); + } + }; + + EmbeddedChannel channel = new EmbeddedChannel(idleStateHandler, handler); + try { + // For each expected event advance the ticker and run() the task. 
Each + // step should yield in an IdleStateEvent because we haven't written + // or read anything from the channel. + for (int i = 0; i < expected.length; i++) { + idleStateHandler.tickRun(); + } + + assertEquals(expected.length, events.size()); + + // Compare the expected with the actual IdleStateEvents + for (int i = 0; i < expected.length; i++) { + Object evt = events.get(i); + assertSame("Element " + i + " is not matching", expected[i], evt); + } + } finally { + channel.finishAndReleaseAll(); + } + } + + @Test + public void testReaderNotIdle() throws Exception { + TestableIdleStateHandler idleStateHandler = new TestableIdleStateHandler( + false, 1L, 0L, 0L, TimeUnit.SECONDS); + + Action action = new Action() { + @Override + public void run(EmbeddedChannel channel) throws Exception { + channel.writeInbound("Hello, World!"); + } + }; + + anyNotIdle(idleStateHandler, action, IdleStateEvent.FIRST_READER_IDLE_STATE_EVENT); + } + + @Test + public void testWriterNotIdle() throws Exception { + TestableIdleStateHandler idleStateHandler = new TestableIdleStateHandler( + false, 0L, 1L, 0L, TimeUnit.SECONDS); + + Action action = new Action() { + @Override + public void run(EmbeddedChannel channel) throws Exception { + channel.writeAndFlush("Hello, World!"); + } + }; + + anyNotIdle(idleStateHandler, action, IdleStateEvent.FIRST_WRITER_IDLE_STATE_EVENT); + } + + @Test + public void testAllNotIdle() throws Exception { + // Reader... + TestableIdleStateHandler idleStateHandler = new TestableIdleStateHandler( + false, 0L, 0L, 1L, TimeUnit.SECONDS); + + Action reader = new Action() { + @Override + public void run(EmbeddedChannel channel) throws Exception { + channel.writeInbound("Hello, World!"); + } + }; + + anyNotIdle(idleStateHandler, reader, IdleStateEvent.FIRST_ALL_IDLE_STATE_EVENT); + + // Writer... + idleStateHandler = new TestableIdleStateHandler( + false, 0L, 0L, 1L, TimeUnit.SECONDS); + + Action writer = new Action() { + @Override + public void run(EmbeddedChannel channel) throws Exception { + channel.writeAndFlush("Hello, World!"); + } + }; + + anyNotIdle(idleStateHandler, writer, IdleStateEvent.FIRST_ALL_IDLE_STATE_EVENT); + } + + private void anyNotIdle(TestableIdleStateHandler idleStateHandler, + Action action, Object expected) throws Exception { + + final List<Object> events = new ArrayList<Object>(); + ChannelInboundHandlerAdapter handler = new ChannelInboundHandlerAdapter() { + @Override + public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { + events.add(evt); + } + }; + + EmbeddedChannel channel = new EmbeddedChannel(idleStateHandler, handler); + try { + idleStateHandler.tick(1L, TimeUnit.NANOSECONDS); + action.run(channel); + + // Advance the ticker by some fraction and run() the task. + // There shouldn't be an IdleStateEvent getting fired because + // we've just performed an action on the channel that is meant + // to reset the idle task. + long delayInNanos = idleStateHandler.delay(TimeUnit.NANOSECONDS); + assertNotEquals(0L, delayInNanos); + + idleStateHandler.tickRun(delayInNanos / 2L, TimeUnit.NANOSECONDS); + assertEquals(0, events.size()); + + // Advance the ticker by the full amount and it should yield + // in an IdleStateEvent. 
+ idleStateHandler.tickRun(); + assertEquals(1, events.size()); + assertSame(expected, events.get(0)); + } finally { + channel.finishAndReleaseAll(); + } + } + + @Test + public void testObserveWriterIdle() throws Exception { + observeOutputIdle(true); + } + + @Test + public void testObserveAllIdle() throws Exception { + observeOutputIdle(false); + } + + private void observeOutputIdle(boolean writer) throws Exception { + + long writerIdleTime = 0L; + long allIdleTime = 0L; + IdleStateEvent expeced = null; + + if (writer) { + writerIdleTime = 5L; + expeced = IdleStateEvent.FIRST_WRITER_IDLE_STATE_EVENT; + } else { + allIdleTime = 5L; + expeced = IdleStateEvent.FIRST_ALL_IDLE_STATE_EVENT; + } + + TestableIdleStateHandler idleStateHandler = new TestableIdleStateHandler( + true, 0L, writerIdleTime, allIdleTime, TimeUnit.SECONDS); + + final List<Object> events = new ArrayList<Object>(); + ChannelInboundHandlerAdapter handler = new ChannelInboundHandlerAdapter() { + @Override + public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { + events.add(evt); + } + }; + + ObservableChannel channel = new ObservableChannel(idleStateHandler, handler); + try { + // We're writing 3 messages that will be consumed at different rates! + channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[] { 1 })); + channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[] { 2 })); + channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[] { 3 })); + + // Establish a baseline. We're not consuming anything and let it idle once. + idleStateHandler.tickRun(); + assertEquals(1, events.size()); + assertSame(expeced, events.get(0)); + events.clear(); + + // Our ticker should be at second 5 + assertEquals(5L, idleStateHandler.tick(TimeUnit.SECONDS)); + + // Consume one message in 4 seconds, then be idle for 2 seconds, + // then run the task and we shouldn't get an IdleStateEvent because + // we haven't been idle for long enough! + idleStateHandler.tick(4L, TimeUnit.SECONDS); + assertNotNullAndRelease(channel.consume()); + + idleStateHandler.tickRun(2L, TimeUnit.SECONDS); + assertEquals(0, events.size()); + assertEquals(11L, idleStateHandler.tick(TimeUnit.SECONDS)); // 5s + 4s + 2s + + // Consume one message in 3 seconds, then be idle for 4 seconds, + // then run the task and we shouldn't get an IdleStateEvent because + // we haven't been idle for long enough! + idleStateHandler.tick(3L, TimeUnit.SECONDS); + assertNotNullAndRelease(channel.consume()); + + idleStateHandler.tickRun(4L, TimeUnit.SECONDS); + assertEquals(0, events.size()); + assertEquals(18L, idleStateHandler.tick(TimeUnit.SECONDS)); // 11s + 3s + 4s + + // Don't consume a message and be idle for 5 seconds. + // We should get an IdleStateEvent! + idleStateHandler.tickRun(5L, TimeUnit.SECONDS); + assertEquals(1, events.size()); + assertEquals(23L, idleStateHandler.tick(TimeUnit.SECONDS)); // 18s + 5s + events.clear(); + + // Consume one message in 2 seconds, then be idle for 1 seconds, + // then run the task and we shouldn't get an IdleStateEvent because + // we haven't been idle for long enough! + idleStateHandler.tick(2L, TimeUnit.SECONDS); + assertNotNullAndRelease(channel.consume()); + + idleStateHandler.tickRun(1L, TimeUnit.SECONDS); + assertEquals(0, events.size()); + assertEquals(26L, idleStateHandler.tick(TimeUnit.SECONDS)); // 23s + 2s + 1s + + // There are no messages left! 
Advance the ticker by 3 seconds, + // attempt a consume() but it will be null, then advance the + // ticker by an another 2 seconds and we should get an IdleStateEvent + // because we've been idle for 5 seconds. + idleStateHandler.tick(3L, TimeUnit.SECONDS); + assertNull(channel.consume()); + + idleStateHandler.tickRun(2L, TimeUnit.SECONDS); + assertEquals(1, events.size()); + assertEquals(31L, idleStateHandler.tick(TimeUnit.SECONDS)); // 26s + 3s + 2s + + // q.e.d. + } finally { + channel.finishAndReleaseAll(); + } + } + + private static void assertNotNullAndRelease(Object msg) { + assertNotNull(msg); + ReferenceCountUtil.release(msg); + } + + private interface Action { + void run(EmbeddedChannel channel) throws Exception; + } + + private static class TestableIdleStateHandler extends IdleStateHandler { + + private Runnable task; + + private long delayInNanos; + + private long ticksInNanos; + + public TestableIdleStateHandler(boolean observeOutput, + long readerIdleTime, long writerIdleTime, long allIdleTime, + TimeUnit unit) { + super(observeOutput, readerIdleTime, writerIdleTime, allIdleTime, unit); + } + + public long delay(TimeUnit unit) { + return unit.convert(delayInNanos, TimeUnit.NANOSECONDS); + } + + public void run() { + task.run(); + } + + public void tickRun() { + tickRun(delayInNanos, TimeUnit.NANOSECONDS); + } + + public void tickRun(long delay, TimeUnit unit) { + tick(delay, unit); + run(); + } + + /** + * Advances the current ticker by the given amount. + */ + public void tick(long delay, TimeUnit unit) { + ticksInNanos += unit.toNanos(delay); + } + + /** + * Returns {@link #ticksInNanos()} in the given {@link TimeUnit}. + */ + public long tick(TimeUnit unit) { + return unit.convert(ticksInNanos(), TimeUnit.NANOSECONDS); + } + + @Override + long ticksInNanos() { + return ticksInNanos; + } + + @Override + ScheduledFuture<?> schedule(ChannelHandlerContext ctx, Runnable task, long delay, TimeUnit unit) { + this.task = task; + this.delayInNanos = unit.toNanos(delay); + return null; + } + } + + private static class ObservableChannel extends EmbeddedChannel { + + public ObservableChannel(ChannelHandler... handlers) { + super(handlers); + } + + @Override + protected void doWrite(ChannelOutboundBuffer in) throws Exception { + // Overridden to change EmbeddedChannel's default behavior. We went to keep + // the messages in the ChannelOutboundBuffer. + } + + public Object consume() { + ChannelOutboundBuffer buf = unsafe().outboundBuffer(); + if (buf != null) { + Object msg = buf.current(); + if (msg != null) { + ReferenceCountUtil.retain(msg); + buf.remove(); + return msg; + } + } + return null; + } + } +}
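For context, a minimal sketch of how the `observeOutput` flag introduced by this patch is enabled from user code. The initializer class is hypothetical; the five-argument constructor (`boolean observeOutput` followed by the usual reader/writer/all timeouts and `TimeUnit`) is the one added above, and the timeout values simply mirror the 30-second all-idle setup from the issue report.

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.timeout.IdleStateHandler;

import java.util.concurrent.TimeUnit;

public class ObserveOutputInitializer extends ChannelInitializer<Channel> {
    @Override
    protected void initChannel(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        // observeOutput = true: write/all idle events are suppressed while the
        // ChannelOutboundBuffer still shows progress (a slow peer is not an idle peer).
        pipeline.addLast(new IdleStateHandler(true, 0L, 0L, 30L, TimeUnit.SECONDS));
        // ... application handlers that react to IdleStateEvent go after this.
    }
}
```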
test
train
2016-12-30T21:00:21
"2016-12-20T23:27:27Z"
rkapsi
val
netty/netty/6167_6189
netty/netty
netty/netty/6167
netty/netty/6189
[ "timestamp(timedelta=24.0, similarity=0.894930691689767)" ]
eb5dc4bcedc7503738e19ee8635f4df58f7062a2
f224ac2668d960900218441b29616f4d01225eff
[ "@markdascher I think this is fixed in 4.1.7.Final-SNAPSHOT (current 4.1 branch). Can you please test and let us know ?", "Wow, wasn't expecting an answer so fast. 👍\r\n\r\nI just downloaded [netty-all-4.1.7.Final-20161230.120821-77.jar](https://oss.sonatype.org/content/repositories/snapshots/io/netty/netty-all/4.1.7.Final-SNAPSHOT/netty-all-4.1.7.Final-20161230.120821-77.jar), but the behavior is the same.", "Ok thx will have a look", "thanks for the detailed issue description @markdascher! seems reasonable to me and very helpful.", "Yay!\r\n\r\nThe best workaround I've found, for the moment, is to add a listener to `SslHandler.sslCloseFuture()` to close the TCP connection immediately. For example, replace [line 47](https://github.com/netty/netty/blob/netty-4.1.6.Final/example/src/main/java/io/netty/example/securechat/SecureChatServerInitializer.java#L47) of SecureChatServerInitializer.java with:\r\n\r\n```\r\nSslHandler sslHnd = sslCtx.newHandler(ch.alloc());\r\nsslHnd.sslCloseFuture().addListener(\r\n\tnew GenericFutureListener<Future<Channel>>() {\r\n\t\t@Override\r\n\t\tpublic void operationComplete(Future<Channel> future) throws Exception {\r\n\t\tif (future.isSuccess()) {\r\n\t\t\tfuture.getNow().close();\r\n\t\t}\r\n\t}\r\n});\r\npipeline.addLast(sslHnd);\r\n```\r\n\r\nThat at least prevents the connection from sitting idle while the client waits for an answer, but it'll still be better to have the close_notify go out first. I'm pretty new to Netty, so please correct me if there's a better way to do it. (e.g. Would `.disconnect()` be better than `.close()`?)", "Looking now...", "Stay tuned for a PR... I think I know the best way to fix this.", "@markdascher https://github.com/netty/netty/pull/6189 should fix it... Can you check ?", "Looks like that does the trick! I built netty from the `fix_close_notify` branch, and now see the close_notify whether or not I use `netty-tcnative-boringssl-static-1.1.33.Fork25.jar`.\r\n\r\nThanks!", "@markdascher boom! Thanks for reporting back :)", "@markdascher and thanks again for the detailed description how to reproduce it easily. Without this it would have take a lot longer.", "Fixed by https://github.com/netty/netty/pull/6189" ]
[ "nit: revert to be consistent with other switch cases (and make merging easier :) )", "futher -> further\r\n\r\nor just `UNWRAP/WRAP are not expected after this point`", "fixed", "fixed" ]
"2017-01-10T07:21:46Z"
[ "defect" ]
OpenSslEngine doesn't respond to close_notify
### Expected behavior When a Netty server receives a TLS close_notify, it should respond with a close_notify. ### Actual behavior Netty's [OpenSslEngine](https://github.com/netty/netty/blob/netty-4.1.6.Final/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java) (actually [ReferenceCountedOpenSslEngine](https://github.com/netty/netty/blob/netty-4.1.6.Final/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java)) never sends the close_notify response. When a close_notify comes in, it's processed inside of [unwrap()](https://github.com/netty/netty/blob/netty-4.1.6.Final/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java#L699), eventually reaching this [closeAll()](https://github.com/netty/netty/blob/netty-4.1.6.Final/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java#L845) call on line 845. Then, [closeInbound()](https://github.com/netty/netty/blob/netty-4.1.6.Final/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java#L1007) calls [shutdown()](https://github.com/netty/netty/blob/netty-4.1.6.Final/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java#L359), which deallocates the OpenSSL data structures, before we have a chance for netty to grab the data. The native Java [SSLEngine](https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLEngine.html) works fine. This [unwrap()](https://github.com/netty/netty/blob/netty-4.1.6.Final/handler/src/main/java/io/netty/handler/ssl/SslHandler.java#L968) call returns an SSLEngineResult where `status == CLOSED`, `handshakeStatus == NEED_WRAP`. As a result, [wrapNonAppData(ctx, true)](https://github.com/netty/netty/blob/netty-4.1.6.Final/handler/src/main/java/io/netty/handler/ssl/SslHandler.java#L1005) sets [needsFlush = true](https://github.com/netty/netty/blob/netty-4.1.6.Final/handler/src/main/java/io/netty/handler/ssl/SslHandler.java#L609), which lets netty know to flush the close_notify out to the network. ### Steps to reproduce * In terminal window 1, run a tcpdump to monitor TLS traffic: `sudo tcpdump -lilo0 port 8992 | fgrep -v 'length 0'` * In terminal window 2, build the Secure Chat example, and run with `netty-tcnative-boringssl-static-1.1.33.Fork25.jar` in the CLASSPATH. Test from terminal window 3: `openssl s_client -connect 127.0.0.1:8992 -tls1`. Send a couple test messages, and then close the TLS connection by entering `Q` on a line by itself. The tcpdump shows a close_notify sent from the client, but nothing coming back from the server. This shows that close_notify goes unanswered when using OpenSSL. Terminate the server. * In terminal window 2, build the Secure Chat example, but run without `netty-tcnative-boringssl-static-1.1.33.Fork25.jar` in the CLASSPATH. Test from terminal window 3: `openssl s_client -connect 127.0.0.1:8992 -tls1`. Send a couple test messages, and then close the TLS connection by entering `Q` on a line by itself. The tcpdump shows a close_notify sent from the client, and a response sent from the server. This shows that close_notify works properly when using the native JDK implementation. Terminate the server. 
### Minimal yet complete reproducer code Secure Chat example from netty-4.1.6.Final: [SecureChatServer.java](https://github.com/netty/netty/blob/netty-4.1.6.Final/example/src/main/java/io/netty/example/securechat/SecureChatServer.java) [SecureChatServerHandler.java](https://github.com/netty/netty/blob/netty-4.1.6.Final/example/src/main/java/io/netty/example/securechat/SecureChatServerHandler.java) [SecureChatServerInitializer.java](https://github.com/netty/netty/blob/netty-4.1.6.Final/example/src/main/java/io/netty/example/securechat/SecureChatServerInitializer.java) ### Netty version [netty-all-4.1.6.Final.jar](https://repo1.maven.org/maven2/io/netty/netty-all/4.1.6.Final/netty-all-4.1.6.Final.jar) [netty-tcnative-boringssl-static-1.1.33.Fork25.jar](https://repo1.maven.org/maven2/io/netty/netty-tcnative-boringssl-static/1.1.33.Fork25/netty-tcnative-boringssl-static-1.1.33.Fork25.jar) ### JVM version ``` java version "1.8.0_111" Java(TM) SE Runtime Environment (build 1.8.0_111-b14) ``` ### OS version Mac OS X El Capitan v10.11.6 with latest updates
[ "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java index 5594f3a4e66..23c666796ff 100644 --- a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java +++ b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java @@ -544,11 +544,14 @@ private SSLEngineResult readPendingBytesFromBIO(ByteBuffer dst, int bytesConsume final SSLEngineResult.Status rs; // If isOutboundDone, then the data from the network BIO - // was the close_notify message and all was consumed we are not required to wait - // for the receipt the peer's close_notify message -- shutdown. + // was the close_notify message, see if we also received the response yet. if (isOutboundDone()) { rs = CLOSED; - shutdown(); + if (isInboundDone()) { + // If the inbound was done as well, we need to ensure we return NOT_HANDSHAKING to signal we are + // done. + hs = NOT_HANDSHAKING; + } } else { rs = OK; } @@ -1054,7 +1057,11 @@ public final synchronized void closeInbound() throws SSLException { isInboundDone = true; - shutdown(); + if (isOutboundDone()) { + // Only call shutdown if there is no outbound data pending. + // See https://github.com/netty/netty/issues/6167 + shutdown(); + } if (handshakeState != HandshakeState.NOT_STARTED && !receivedShutdown) { throw new SSLException(
diff --git a/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java index 98d629206b9..140d95829f2 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java @@ -1214,4 +1214,113 @@ private static void testCloseInboundAfterBeginHandshake(SSLEngine engine) throws // expected } } + + @Test + public void testCloseNotifySequence() throws Exception { + SelfSignedCertificate cert = new SelfSignedCertificate(); + + clientSslCtx = SslContextBuilder + .forClient() + .trustManager(cert.cert()) + .sslProvider(sslClientProvider()) + .build(); + SSLEngine client = clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT); + + serverSslCtx = SslContextBuilder + .forServer(cert.certificate(), cert.privateKey()) + .sslProvider(sslServerProvider()) + .build(); + SSLEngine server = serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT); + + try { + ByteBuffer plainClientOut = ByteBuffer.allocate(client.getSession().getApplicationBufferSize()); + ByteBuffer plainServerOut = ByteBuffer.allocate(server.getSession().getApplicationBufferSize()); + + ByteBuffer encryptedClientToServer = ByteBuffer.allocate(client.getSession().getPacketBufferSize()); + ByteBuffer encryptedServerToClient = ByteBuffer.allocate(server.getSession().getPacketBufferSize()); + ByteBuffer empty = ByteBuffer.allocate(0); + + handshake(client, server); + + // This will produce a close_notify + client.closeOutbound(); + + // Something still pending in the outbound buffer. + assertFalse(client.isOutboundDone()); + assertFalse(client.isInboundDone()); + + // Now wrap and so drain the outbound buffer. + SSLEngineResult result = client.wrap(empty, encryptedClientToServer); + encryptedClientToServer.flip(); + + assertEquals(SSLEngineResult.Status.CLOSED, result.getStatus()); + // Need an UNWRAP to read the response of the close_notify + assertEquals(SSLEngineResult.HandshakeStatus.NEED_UNWRAP, result.getHandshakeStatus()); + + int produced = result.bytesProduced(); + int consumed = result.bytesConsumed(); + int closeNotifyLen = produced; + + assertTrue(produced > 0); + assertEquals(0, consumed); + assertEquals(produced, encryptedClientToServer.remaining()); + // Outbound buffer should be drained now. 
+ assertTrue(client.isOutboundDone()); + assertFalse(client.isInboundDone()); + + assertFalse(server.isOutboundDone()); + assertFalse(server.isInboundDone()); + result = server.unwrap(encryptedClientToServer, plainServerOut); + plainServerOut.flip(); + + assertEquals(SSLEngineResult.Status.CLOSED, result.getStatus()); + // Need a WRAP to respond to the close_notify + assertEquals(SSLEngineResult.HandshakeStatus.NEED_WRAP, result.getHandshakeStatus()); + + produced = result.bytesProduced(); + consumed = result.bytesConsumed(); + assertEquals(closeNotifyLen, consumed); + assertEquals(0, produced); + // Should have consumed the complete close_notify + assertEquals(0, encryptedClientToServer.remaining()); + assertEquals(0, plainServerOut.remaining()); + + assertFalse(server.isOutboundDone()); + assertTrue(server.isInboundDone()); + + result = server.wrap(empty, encryptedServerToClient); + encryptedServerToClient.flip(); + assertEquals(SSLEngineResult.Status.CLOSED, result.getStatus()); + // UNWRAP/WRAP are not expected after this point + assertEquals(SSLEngineResult.HandshakeStatus.NOT_HANDSHAKING, result.getHandshakeStatus()); + + produced = result.bytesProduced(); + consumed = result.bytesConsumed(); + assertEquals(closeNotifyLen, produced); + assertEquals(0, consumed); + + assertEquals(produced, encryptedServerToClient.remaining()); + assertTrue(server.isOutboundDone()); + assertTrue(server.isInboundDone()); + + result = client.unwrap(encryptedServerToClient, plainClientOut); + plainClientOut.flip(); + assertEquals(SSLEngineResult.Status.CLOSED, result.getStatus()); + // UNWRAP/WRAP are not expected after this point + assertEquals(SSLEngineResult.HandshakeStatus.NOT_HANDSHAKING, result.getHandshakeStatus()); + + produced = result.bytesProduced(); + consumed = result.bytesConsumed(); + assertEquals(closeNotifyLen, consumed); + assertEquals(0, produced); + assertEquals(0, encryptedServerToClient.remaining()); + + assertTrue(client.isOutboundDone()); + assertTrue(client.isInboundDone()); + } finally { + cert.delete(); + cleanupClientSslEngine(client); + cleanupServerSslEngine(server); + } + } }
train
train
2017-01-12T07:51:37
"2016-12-30T18:08:38Z"
markdascher
val
netty/netty/6152_6195
netty/netty
netty/netty/6152
netty/netty/6195
[ "timestamp(timedelta=12.0, similarity=0.8699828685304778)" ]
d771526f8cfe262914ef40f0613e1e4b9e3a3aca
2d1970219a4a740da5e27c3acd97a80e0d3a36a9
[ "@ZuluForce maybe you can come up with a PR that fixes the issue for you ?", "We should consider using [toAddressString](https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/NetUtil.java#L948) for the address portion which follows the RFC 5952 formatting (last time I checked java didn't). We could also add a method to format a address+port which handles ipv6 and ipv4 correctly ... this is common enough.", "@Scottmitch +1 Want to take a stab ?", "Thanks for the replies @normanmaurer and @Scottmitch. I would need to get the time from my employer to work on this and I currently don't have the Netty development environment setup or approval for the CCLA. If I could submit something it wouldn't be until after the holidays (ie early to mid January).", "@ZuluForce @Scottmitch let me take care.", "@ZuluForce PTAL https://github.com/netty/netty/pull/6195", "Fixed by https://github.com/netty/netty/pull/6195" ]
[ "I wonder if it would should be more explicit about the string concatenation operations used here. For example it seems unlikely JIT will be able to combine all of these conditional concatenation operations into a single `StringBuilder`. Can we create a single `StringBuilder` and manually do the `append` calls (e.g. including what is now building `rhost`) to be sure we don't create multiple `String`/`StringBuilder` operations?", "`enought` -> `enough`", "also remove the space before the period?", "`enought` -> `enough`\r\n\r\nalso remove the space before the period?", "nit: no need for `else`", "I added the space before to make it clear it is not part of it... But I guess its ok to remove it as well." ]
"2017-01-10T13:22:07Z"
[ "defect" ]
HttpProxyHandler incorrectly formats IPv6 host and port in CONNECT request
### Expected behavior The HttpProxyHandler is expected to be capable of issuing a valid CONNECT request for a tunneled connection to an IPv6 host. In this case we are passing an IPv6 address (eg fd00:c0de:42::c:293a:5736) rather than a host name. ### Actual behavior The HttpProxyHandler does not properly concatenate the IPv6 address and port. The resulting error after we fail to connect will show you the problem: ``` io.netty.handler.proxy.ProxyConnectException: http, none, /fd00:c0de:42:0:5:562d:d54:1:3128 => /fd00:c0de:42:0:50:5694:2fda:1:4287, status: 503 Service Unavailable ``` In both cases you can see the IPv6 address is formatted without brackets: `fd00:c0de:42:0:50:5694:2fda:1:4287` should be `[fd00:c0de:42:0:50:5694:2fda:1]:4287`. This is just an exception message so it doesn't prove it's formatting incorrectly. However, if you look at the request on the wire you can see it is certainly wrong: ``` CONNECT fd00:c0de:42:0:50:5694:2fda:1:4287 HTTP/1.1\r\n host: fd00:c0de:42:0:50:5694:2fda:1:4287\r\n \r\n ``` Here is the problem method from HttpProxyHandler: ``` @Override protected Object newInitialMessage(ChannelHandlerContext ctx) throws Exception { InetSocketAddress raddr = destinationAddress(); String rhost; if (raddr.isUnresolved()) { rhost = raddr.getHostString(); } else { rhost = raddr.getAddress().getHostAddress(); } final String host = rhost + ':' + raddr.getPort(); FullHttpRequest req = new DefaultFullHttpRequest( HttpVersion.HTTP_1_1, HttpMethod.CONNECT, host, Unpooled.EMPTY_BUFFER, false); req.headers().set(HttpHeaderNames.HOST, host); if (authorization != null) { req.headers().set(HttpHeaderNames.PROXY_AUTHORIZATION, authorization); } return req; } ``` Specifically: ` final String host = rhost + ':' + raddr.getPort();` ### Steps to reproduce * Setup an HTTP Proxy with IPv6. * Setup a target server with an IPv6 address. * Attempt to establish a connection to the target server through the proxy, giving the HttpProxyHandler an IPv6 IP address for the target. ** We can successfully connect to the proxy server with IPv6. The problem seems to be specific to the target of the tunneled connection using an IPv6 address. ### Minimal yet complete reproducer code (or URL to code) * I am not able to create a reproducer or patch with my company at this time. I think the issue is relatively straightforward though. ### Netty version 4.1.4.Final ### JVM version (e.g. `java -version`) Java 8 update 92 ### OS version (e.g. `uname -a`) Attempted on Ubuntu Linux 14.04 and Mac OSX. I don't think the OS matters in this case.
[ "common/src/main/java/io/netty/util/NetUtil.java", "handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java" ]
[ "common/src/main/java/io/netty/util/NetUtil.java", "handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java" ]
[ "common/src/test/java/io/netty/util/NetUtilTest.java", "handler-proxy/src/test/java/io/netty/handler/proxy/HttpProxyHandlerTest.java" ]
diff --git a/common/src/main/java/io/netty/util/NetUtil.java b/common/src/main/java/io/netty/util/NetUtil.java index a645cd70d63..a442ff4ce9b 100644 --- a/common/src/main/java/io/netty/util/NetUtil.java +++ b/common/src/main/java/io/netty/util/NetUtil.java @@ -25,6 +25,7 @@ import java.net.Inet4Address; import java.net.Inet6Address; import java.net.InetAddress; +import java.net.InetSocketAddress; import java.net.NetworkInterface; import java.net.SocketException; import java.net.UnknownHostException; @@ -933,6 +934,38 @@ public static Inet6Address getByName(CharSequence ip, boolean ipv4Mapped) { } } + /** + * Returns the {@link String} representation of an {@link InetSocketAddress}. + * <p> + * The output does not include Scope ID. + * @param addr {@link InetSocketAddress} to be converted to an address string + * @return {@code String} containing the text-formatted IP address + */ + public static String toSocketAddressString(InetSocketAddress addr) { + String port = String.valueOf(addr.getPort()); + final StringBuilder sb; + + if (addr.isUnresolved()) { + String hostString = PlatformDependent.javaVersion() >= 7 ? addr.getHostString() : addr.getHostName(); + sb = newSocketAddressStringBuilder(hostString, port, !isValidIpV6Address(hostString)); + } else { + InetAddress address = addr.getAddress(); + String hostString = toAddressString(address); + sb = newSocketAddressStringBuilder(hostString, port, address instanceof Inet4Address); + } + return sb.append(':').append(port).toString(); + } + + private static StringBuilder newSocketAddressStringBuilder(String hostString, String port, boolean ipv4) { + if (ipv4) { + // Need to include enough space for hostString:port. + return new StringBuilder(hostString.length() + 1 + port.length()).append(hostString); + } + // Need to include enough space for [hostString]:port. + return new StringBuilder( + hostString.length() + 3 + port.length()).append('[').append(hostString).append(']'); + } + /** * Returns the {@link String} representation of an {@link InetAddress}. * <ul> diff --git a/handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java b/handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java index 53d6fbb823d..aecf5f3e7ee 100644 --- a/handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java +++ b/handler-proxy/src/main/java/io/netty/handler/proxy/HttpProxyHandler.java @@ -32,6 +32,7 @@ import io.netty.handler.codec.http.LastHttpContent; import io.netty.util.AsciiString; import io.netty.util.CharsetUtil; +import io.netty.util.NetUtil; import java.net.InetSocketAddress; import java.net.SocketAddress; @@ -112,14 +113,7 @@ protected void removeDecoder(ChannelHandlerContext ctx) throws Exception { @Override protected Object newInitialMessage(ChannelHandlerContext ctx) throws Exception { InetSocketAddress raddr = destinationAddress(); - String rhost; - if (raddr.isUnresolved()) { - rhost = raddr.getHostString(); - } else { - rhost = raddr.getAddress().getHostAddress(); - } - - final String host = rhost + ':' + raddr.getPort(); + final String host = NetUtil.toSocketAddressString(raddr); FullHttpRequest req = new DefaultFullHttpRequest( HttpVersion.HTTP_1_1, HttpMethod.CONNECT, host,
diff --git a/common/src/test/java/io/netty/util/NetUtilTest.java b/common/src/test/java/io/netty/util/NetUtilTest.java index aa04b2a4c04..09afacca357 100644 --- a/common/src/test/java/io/netty/util/NetUtilTest.java +++ b/common/src/test/java/io/netty/util/NetUtilTest.java @@ -18,6 +18,7 @@ import org.junit.Test; import java.net.InetAddress; +import java.net.InetSocketAddress; import java.net.UnknownHostException; import java.util.HashMap; import java.util.Map; @@ -27,6 +28,7 @@ import static io.netty.util.NetUtil.createByteArrayFromIpAddressString; import static io.netty.util.NetUtil.getByName; import static io.netty.util.NetUtil.toAddressString; +import static io.netty.util.NetUtil.toSocketAddressString; import static org.junit.Assert.*; public class NetUtilTest { @@ -500,6 +502,22 @@ public void testinvalidIpv4MappedIp6GetByName() { } } + @Test + public void testIp6InetSocketAddressToString() throws UnknownHostException { + for (Entry<byte[], String> testEntry : ipv6ToAddressStrings.entrySet()) { + assertEquals('[' + testEntry.getValue() + "]:9999", + toSocketAddressString(new InetSocketAddress(InetAddress.getByAddress(testEntry.getKey()), 9999))); + } + } + + @Test + public void testIp4SocketAddressToString() throws UnknownHostException { + for (Entry<String, String> e : validIpV4Hosts.entrySet()) { + assertEquals(e.getKey() + ":9999", + toSocketAddressString(new InetSocketAddress(InetAddress.getByAddress(unhex(e.getValue())), 9999))); + } + } + private static void assertHexDumpEquals(String expected, byte[] actual) { assertEquals(expected, hex(actual)); } diff --git a/handler-proxy/src/test/java/io/netty/handler/proxy/HttpProxyHandlerTest.java b/handler-proxy/src/test/java/io/netty/handler/proxy/HttpProxyHandlerTest.java new file mode 100644 index 00000000000..61fd6911a0a --- /dev/null +++ b/handler-proxy/src/test/java/io/netty/handler/proxy/HttpProxyHandlerTest.java @@ -0,0 +1,80 @@ +/* + * Copyright 2016 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.handler.proxy; + +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelPromise; +import io.netty.handler.codec.http.FullHttpRequest; +import io.netty.handler.codec.http.HttpHeaderNames; +import io.netty.handler.codec.http.HttpVersion; +import io.netty.util.NetUtil; +import org.junit.Test; + +import java.net.InetAddress; +import java.net.InetSocketAddress; + +import static org.junit.Assert.assertEquals; +import static org.mockito.Mockito.*; + +public class HttpProxyHandlerTest { + + @Test + public void testIpv6() throws Exception { + InetSocketAddress socketAddress = new InetSocketAddress(InetAddress.getByName("::1"), 8080); + testInitialMessage(socketAddress, "[::1]:8080"); + } + + @Test + public void testIpv6Unresolved() throws Exception { + InetSocketAddress socketAddress = InetSocketAddress.createUnresolved("::1", 8080); + testInitialMessage(socketAddress, "[::1]:8080"); + } + + @Test + public void testIpv4() throws Exception { + InetSocketAddress socketAddress = new InetSocketAddress(InetAddress.getByName("10.0.0.1"), 8080); + testInitialMessage(socketAddress, "10.0.0.1:8080"); + } + + @Test + public void testIpv4Unresolved() throws Exception { + InetSocketAddress socketAddress = InetSocketAddress.createUnresolved("10.0.0.1", 8080); + testInitialMessage(socketAddress, "10.0.0.1:8080"); + } + + private static void testInitialMessage(InetSocketAddress socketAddress, String expected) throws Exception { + InetSocketAddress proxyAddress = new InetSocketAddress(NetUtil.LOCALHOST, 8080); + + ChannelPromise promise = mock(ChannelPromise.class); + verifyNoMoreInteractions(promise); + + ChannelHandlerContext ctx = mock(ChannelHandlerContext.class); + when(ctx.connect(same(proxyAddress), isNull(InetSocketAddress.class), same(promise))).thenReturn(promise); + + HttpProxyHandler handler = new HttpProxyHandler(new InetSocketAddress(NetUtil.LOCALHOST, 8080)); + handler.connect(ctx, socketAddress, null, promise); + + FullHttpRequest request = (FullHttpRequest) handler.newInitialMessage(ctx); + try { + assertEquals(HttpVersion.HTTP_1_1, request.protocolVersion()); + assertEquals(expected, request.uri()); + assertEquals(expected, request.headers().get(HttpHeaderNames.HOST)); + } finally { + request.release(); + } + verify(ctx).connect(proxyAddress, null, promise); + } +}
val
train
2017-01-11T21:23:45
"2016-12-21T19:47:24Z"
ZuluForce
val
netty/netty/6169_6198
netty/netty
netty/netty/6169
netty/netty/6198
[ "timestamp(timedelta=30.0, similarity=0.8704484361788252)" ]
2368f238ad75935f68221f35330934c4ee734ee8
44c7fe73c0047de0188062697144b7525f5159bd
[ "Thanks for reporting @rkapsi ... standby for a PR." ]
[]
"2017-01-10T19:29:10Z"
[ "defect" ]
ByteBuf#compareTo() "violates its general contract"
### Expected behavior `ByteBuf` implements the Comparable interface but it appears to not work correctly. ### Actual behavior The following exception occurs: ```java Exception in thread "main" java.lang.IllegalArgumentException: Comparison method violates its general contract! at java.util.ComparableTimSort.mergeLo(ComparableTimSort.java:744) at java.util.ComparableTimSort.mergeAt(ComparableTimSort.java:481) at java.util.ComparableTimSort.mergeCollapse(ComparableTimSort.java:406) at java.util.ComparableTimSort.sort(ComparableTimSort.java:213) at java.util.Arrays.sort(Arrays.java:1312) at java.util.Arrays.sort(Arrays.java:1506) at java.util.ArrayList.sort(ArrayList.java:1454) at java.util.Collections.sort(Collections.java:141) at Main.main(Main.java:23) ``` ### Steps to reproduce See below. ### Minimal yet complete reproducer code (or URL to code) ```java public class Main { public static void main(String[] args) { Random generator = new Random(); List<ByteBuf> values = new ArrayList<>(); for (int i = 0; i < 10000; i++) { byte[] value = new byte[20]; generator.nextBytes(value); values.add(Unpooled.wrappedBuffer(value)); } Collections.sort(values); } } ``` ### Netty version 4.1.7.Final-SNAPSHOT ### JVM version (e.g. `java -version`) ``` $ java -version openjdk version "1.8.0_112" OpenJDK Runtime Environment (build 1.8.0_112-b15) OpenJDK 64-Bit Server VM (build 25.112-b15, mixed mode) ``` ### OS version (e.g. `uname -a`) Linux
[ "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[ "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[ "buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java index 79403c61be1..1ddcab5ab0e 100644 --- a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java +++ b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java @@ -246,7 +246,7 @@ public static int compare(ByteBuf bufferA, ByteBuf bufferB) { } if (res != 0) { // Ensure we not overflow when cast - return (int) Math.min(Integer.MAX_VALUE, res); + return (int) Math.min(Integer.MAX_VALUE, Math.max(Integer.MIN_VALUE, res)); } aIndex += uintCountIncrement; bIndex += uintCountIncrement;
diff --git a/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java b/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java index 35613786b1f..720894df296 100644 --- a/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java +++ b/buffer/src/test/java/io/netty/buffer/AbstractByteBufTest.java @@ -38,6 +38,7 @@ import java.nio.channels.ScatteringByteChannel; import java.nio.channels.WritableByteChannel; import java.util.Arrays; +import java.util.Collections; import java.util.HashSet; import java.util.Random; import java.util.Set; @@ -62,6 +63,8 @@ import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; +import static org.junit.Assume.assumeFalse; +import static org.junit.Assume.assumeTrue; /** * An abstract test class for channel buffers @@ -103,6 +106,24 @@ public void dispose() { } } + @Test + public void comparableInterfaceNotViolated() { + assumeFalse(buffer.isReadOnly()); + buffer.writerIndex(buffer.readerIndex()); + assumeTrue(buffer.writableBytes() >= 4); + + buffer.writeLong(0); + ByteBuf buffer2 = newBuffer(CAPACITY); + assumeFalse(buffer2.isReadOnly()); + buffer2.writerIndex(buffer2.readerIndex()); + // Write an unsigned integer that will cause buffer.getUnsignedInt() - buffer2.getUnsignedInt() to underflow the + // int type and wrap around on the negative side. + buffer2.writeLong(0xF0000000L); + assertTrue(buffer.compareTo(buffer2) < 0); + assertTrue(buffer2.compareTo(buffer) > 0); + buffer2.release(); + } + @Test public void initialState() { assertEquals(CAPACITY, buffer.capacity()); @@ -1909,7 +1930,7 @@ public void testIndexOf() { @Test public void testNioBuffer1() { - Assume.assumeTrue(buffer.nioBufferCount() == 1); + assumeTrue(buffer.nioBufferCount() == 1); byte[] value = new byte[buffer.capacity()]; random.nextBytes(value); @@ -1921,7 +1942,7 @@ public void testNioBuffer1() { @Test public void testToByteBuffer2() { - Assume.assumeTrue(buffer.nioBufferCount() == 1); + assumeTrue(buffer.nioBufferCount() == 1); byte[] value = new byte[buffer.capacity()]; random.nextBytes(value); @@ -1947,7 +1968,7 @@ private static void assertRemainingEquals(ByteBuffer expected, ByteBuffer actual @Test public void testToByteBuffer3() { - Assume.assumeTrue(buffer.nioBufferCount() == 1); + assumeTrue(buffer.nioBufferCount() == 1); assertEquals(buffer.order(), buffer.nioBuffer().order()); }
val
train
2017-01-10T13:49:52
"2017-01-02T19:22:45Z"
rkapsi
val
netty/netty/6209_6229
netty/netty
netty/netty/6209
netty/netty/6229
[ "timestamp(timedelta=57946.0, similarity=0.8625178401546824)" ]
c590e3bd63c43e9c1104ce2ab3a5307f14bee960
51e1e35a380612d9e87741ba9df9ed53fac86864
[ "@Scottmitch @nmittler can you take a look ?", "I've been helping @exell-christopher with debugging this.\r\n\r\nOur current understanding is that there really is no safe way to recover from an exception thrown in the Decoder except for tearing down the connection as the index tables in client and server can have gotten out of synch with no way to safely bring them back in sync.", "This is even called out in the RFC. Makes sense since HPACK is stateful.\r\n\r\nhttps://tools.ietf.org/html/rfc7540#section-10.5.1\r\n> The header block MUST be processed to ensure a consistent\r\n connection state, unless the connection is closed.\r\n\r\nIntroducing a \"limit for how much we allow exceeding another limit\" is starting to get intense. To make things interesting we enforce the limit at 2 levels (during accumulating the header blocks, and during the decompression). So implementing the \"continue to process and bit bucket\" approach will not be limited in scope to `hpack.Decoder`. A simpler alternative is just to GO_AWAY/close the connection if the peer violates the settings we advertise.", "For our use case, sending a go-away as the behavior for any overflow is not desirable. The client in our case is a proxy and the inbound connection is heavily multiplexed. With what you are suggesting, one request that exceeds the limit by even one byte causes all other requests to be dropped. Having a reasonable overflow amount allows us to continue servicing the other requests and reject just the one that was oversized, but not particularly malicious.", "@exell-christopher - Yes I understand the implications. Just requires a bit more effort to implement, and the memory upper bound would not be set by `SETTINGS_MAX_HEADER_LIST_SIZE` but some other implementation dependent limit. This is not necessarily a bad thing but just needs to be communicated clearly (via javadocs, docs, etc...).\r\n\r\nJust curious in your scenario what do you anticipate your \"malicious threshold\" to be vs. `SETTINGS_MAX_HEADER_LIST_SIZE`?", "I would likely use MAX_HEADER_LIST_SIZE + (MAX_HEADER_LIST_SIZE * .25) I would expect that the .25 would be user configurable", "I'll take a stab at this...there is some other related cruft that can be cleaned up too.", "Thanks. Happy to submit a PR as well.", "@exell-christopher - Already in progress :) I'll ping you for review when I have a PR ready.", "Some information about summarize the header length,\r\nIn [rfc7540](https://tools.ietf.org/html/rfc7540#section-6.5.2)\r\n> SETTINGS_MAX_HEADER_LIST_SIZE (0x6): This advisory setting informs a\r\n peer of the maximum size of header list that the sender is\r\n prepared to accept, in octets. The value is based on the\r\n uncompressed size of header fields, including the length of the\r\n name and value in octets plus an overhead of 32 octets for each\r\n header field.\r\n\r\nI think that means when we sum up all headers, we should plus 32 byte for each header.\r\nMaybe it could be implemented at the same time! [Decoder.jara#L404](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java#L404)", "Sorry, there is a better reference [rfc7541](https://tools.ietf.org/html/rfc7541#section-4.1)\r\n> The size of an entry is the sum of its name's length in octets (as\r\n defined in Section 5.2), its value's length in octets, and 32.", "@Scottmitch Let me know if there's anything you need on this. ", "@exell-christopher - All good for now thanks. 
I will ping you once the PR is pushed (should be soon).", "@chhsiao90 - We can consider this as a followup PR if you want. The `SETTINGS_MAX_HEADER_LIST_SIZE` is not required to be strictly enforced. We account for the `32` bits of overhead on the `encoding` size, but not on the decode size. So we are currently following the \"be liberal in what you accept and conservative in what you send\" principle which I'm not yet convinced is a problem in this case.", "@Scottmitch \r\nSure, I will rebase once the PR is merged.\r\n\r\nAbout the 32 bytes overhead,\r\nI found that encoder will calculate the header size by add 32 bytes for each header. [Encoder](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java#L122)\r\nBut the decoder didn't add the 32 bytes in current 4.1 branch.\r\nOr maybe I missed that the 32 bytes is added in somewhere when encoding.", "> I found that encoder will calculate the header size by add 32 ... But the decoder didn't add the 32 bytes\r\n\r\nYip this reflects the current state. This is what I described in https://github.com/netty/netty/issues/6209#issuecomment-273365577.", "OK, sorry, I missed read that.... :(\r\nSo, the 32 bytes overhead could implement or not depends on need!\r\nThanks for your clarify!" ]
[]
"2017-01-17T23:55:51Z"
[ "defect" ]
HTTP/2 HPACK decoder oversized header handling causes client/server Dynamic Table inconsistencies
### Problem Description When a client sends oversized headers that causes the decoder to exceed the MAX_HEADER_LIST_SIZE setting, the decoder stops processing headers, and throws a HeaderListSizeException which causes a 431, and a RST_STREAM. If the incoming request had additional headers after the header that caused the HeaderListSizeException to be thrown that would have been incrementally indexed, they are ignored. This results in subsequent requests from the same client using what the client thinks are indexed headers to be rejected with GO_AWAY COMPRESSION_ERROR as the server side decoder dynamic table does not contain the indexed headers. ### Expected behavior When oversized headers are received, netty will continue processing headers, and updating the dynamic table until all headers are received. Once header processing is complete the HeaderListSizeException should be thrown. To prevent malicious users from sending an infinite number of header frames, and tying up resources, there should be a capped MAX_HEADER_LIST_SIZE overflow that netty will accept then it should issue a GO_AWAY to the client and close the connection. ### Netty version 4.1.6 and later ### Repro Case See attached maven project zip ### Other things worth noting While this case occurs when oversized headers are received, any exception that occurs during header processing that is not a connection error (GO_AWAY) can potentially cause dynamic table corruption, and should result in a GO_AWAY I'm planning on working on a pull request to fix this, but figured this is worth some discussion, so I'm opening the issue first. [netty-decoder-debug.zip](https://github.com/netty/netty/files/702509/netty-decoder-debug.zip)
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HuffmanEncoder.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HuffmanEncoder.java", "microbench/src/main/java/io/netty/microbench/http2/internal/hpack/DecoderULE128Benchmark.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/DecoderTest.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/EncoderTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java index 509069fb87a..aea195cca00 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Decoder.java @@ -51,10 +51,12 @@ import static io.netty.util.internal.ThrowableUtil.unknownStackTrace; public final class Decoder { - private static final Http2Exception DECODE_DECOMPRESSION_EXCEPTION = unknownStackTrace( - connectionError(COMPRESSION_ERROR, "HPACK - decompression failure"), Decoder.class, "decode(...)"); private static final Http2Exception DECODE_ULE_128_DECOMPRESSION_EXCEPTION = unknownStackTrace( connectionError(COMPRESSION_ERROR, "HPACK - decompression failure"), Decoder.class, "decodeULE128(...)"); + private static final Http2Exception DECODE_ULE_128_TO_LONG_DECOMPRESSION_EXCEPTION = unknownStackTrace( + connectionError(COMPRESSION_ERROR, "HPACK - long overflow"), Decoder.class, "decodeULE128(...)"); + private static final Http2Exception DECODE_ULE_128_TO_INT_DECOMPRESSION_EXCEPTION = unknownStackTrace( + connectionError(COMPRESSION_ERROR, "HPACK - int overflow"), Decoder.class, "decodeULE128ToInt(...)"); private static final Http2Exception DECODE_ILLEGAL_INDEX_VALUE = unknownStackTrace( connectionError(COMPRESSION_ERROR, "HPACK - illegal index value"), Decoder.class, "decode(...)"); private static final Http2Exception INDEX_HEADER_ILLEGAL_INDEX_VALUE = unknownStackTrace( @@ -184,7 +186,7 @@ public void decode(int streamId, ByteBuf in, Http2Headers headers) throws Http2E break; case READ_MAX_DYNAMIC_TABLE_SIZE: - setDynamicTableSize(decodeULE128(in, index)); + setDynamicTableSize(decodeULE128(in, (long) index)); state = READ_HEADER_REPRESENTATION; break; @@ -346,7 +348,7 @@ HeaderField getHeaderField(int index) { return dynamicTable.getEntry(index + 1); } - private void setDynamicTableSize(int dynamicTableSize) throws Http2Exception { + private void setDynamicTableSize(long dynamicTableSize) throws Http2Exception { if (dynamicTableSize > maxDynamicTableSize) { throw INVALID_MAX_DYNAMIC_TABLE_SIZE; } @@ -422,26 +424,53 @@ private static IllegalArgumentException notEnoughDataException(ByteBuf in) { return new IllegalArgumentException("decode only works with an entire header block! " + in); } - // Unsigned Little Endian Base 128 Variable-Length Integer Encoding - private static int decodeULE128(ByteBuf in, int result) throws Http2Exception { + /** + * Unsigned Little Endian Base 128 Variable-Length Integer Encoding + * <p> + * Visible for testing only! + */ + static int decodeULE128(ByteBuf in, int result) throws Http2Exception { + final int readerIndex = in.readerIndex(); + final long v = decodeULE128(in, (long) result); + if (v > Integer.MAX_VALUE) { + // the maximum value that can be represented by a signed 32 bit number is: + // [0x1,0x7f] + 0x7f + (0x7f << 7) + (0x7f << 14) + (0x7f << 21) + (0x6 << 28) + // OR + // 0x0 + 0x7f + (0x7f << 7) + (0x7f << 14) + (0x7f << 21) + (0x7 << 28) + // we should reset the readerIndex if we overflowed the int type. + in.readerIndex(readerIndex); + throw DECODE_ULE_128_TO_INT_DECOMPRESSION_EXCEPTION; + } + return (int) v; + } + + /** + * Unsigned Little Endian Base 128 Variable-Length Integer Encoding + * <p> + * Visible for testing only! 
+ */ + static long decodeULE128(ByteBuf in, long result) throws Http2Exception { assert result <= 0x7f && result >= 0; + final boolean resultStartedAtZero = result == 0; final int writerIndex = in.writerIndex(); - for (int readerIndex = in.readerIndex(), shift = 0; - readerIndex < writerIndex; ++readerIndex, shift += 7) { + for (int readerIndex = in.readerIndex(), shift = 0; readerIndex < writerIndex; ++readerIndex, shift += 7) { byte b = in.getByte(readerIndex); - if (shift == 28 && ((b & 0x80) != 0 || b > 6)) { - // the maximum value that can be represented by a signed 32 bit number is: - // 0x7f + 0x7f + (0x7f << 7) + (0x7f << 14) + (0x7f << 21) + (0x6 << 28) + if (shift == 56 && ((b & 0x80) != 0 || b == 0x7F && !resultStartedAtZero)) { + // the maximum value that can be represented by a signed 64 bit number is: + // [0x01L, 0x7fL] + 0x7fL + (0x7fL << 7) + (0x7fL << 14) + (0x7fL << 21) + (0x7fL << 28) + (0x7fL << 35) + // + (0x7fL << 42) + (0x7fL << 49) + (0x7eL << 56) + // OR + // 0x0L + 0x7fL + (0x7fL << 7) + (0x7fL << 14) + (0x7fL << 21) + (0x7fL << 28) + (0x7fL << 35) + + // (0x7fL << 42) + (0x7fL << 49) + (0x7fL << 56) // this means any more shifts will result in overflow so we should break out and throw an error. - in.readerIndex(readerIndex + 1); - break; + throw DECODE_ULE_128_TO_LONG_DECOMPRESSION_EXCEPTION; } if ((b & 0x80) == 0) { in.readerIndex(readerIndex + 1); - return result + ((b & 0x7F) << shift); + return result + ((b & 0x7FL) << shift); } - result += (b & 0x7F) << shift; + result += (b & 0x7FL) << shift; } throw DECODE_ULE_128_DECOMPRESSION_EXCEPTION; diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java index 8c201590959..805f09530f9 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/Encoder.java @@ -204,7 +204,7 @@ public void setMaxHeaderTableSize(ByteBuf out, long maxHeaderTableSize) throws H this.maxHeaderTableSize = maxHeaderTableSize; ensureCapacity(0); // Casting to integer is safe as we verified the maxHeaderTableSize is a valid unsigned int. - encodeInteger(out, 0x20, 5, (int) maxHeaderTableSize); + encodeInteger(out, 0x20, 5, maxHeaderTableSize); } /** @@ -227,20 +227,27 @@ public long getMaxHeaderListSize() { } /** - * Encode integer according to Section 5.1. + * Encode integer according to <a href="https://tools.ietf.org/html/rfc7541#section-5.1">Section 5.1</a>. */ private static void encodeInteger(ByteBuf out, int mask, int n, int i) { + encodeInteger(out, mask, n, (long) i); + } + + /** + * Encode integer according to <a href="https://tools.ietf.org/html/rfc7541#section-5.1">Section 5.1</a>. 
+ */ + private static void encodeInteger(ByteBuf out, int mask, int n, long i) { assert n >= 0 && n <= 8 : "N: " + n; int nbits = 0xFF >>> (8 - n); if (i < nbits) { - out.writeByte(mask | i); + out.writeByte((int) (mask | i)); } else { out.writeByte(mask | nbits); - int length = i - nbits; + long length = i - nbits; for (; (length & ~0x7F) != 0; length >>>= 7) { - out.writeByte((length & 0x7F) | 0x80); + out.writeByte((int) ((length & 0x7F) | 0x80)); } - out.writeByte(length); + out.writeByte((int) length); } } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HuffmanEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HuffmanEncoder.java index 136f6c67c71..aad58198130 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HuffmanEncoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/internal/hpack/HuffmanEncoder.java @@ -118,7 +118,7 @@ private void encodeSlowPath(ByteBuf out, CharSequence data) { * @param data the string literal to be Huffman encoded * @return the number of bytes required to Huffman encode <code>data</code> */ - public int getEncodedLength(CharSequence data) { + int getEncodedLength(CharSequence data) { if (data instanceof AsciiString) { AsciiString string = (AsciiString) data; try { diff --git a/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/DecoderULE128Benchmark.java b/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/DecoderULE128Benchmark.java new file mode 100644 index 00000000000..a13c978f246 --- /dev/null +++ b/microbench/src/main/java/io/netty/microbench/http2/internal/hpack/DecoderULE128Benchmark.java @@ -0,0 +1,156 @@ +/* + * Copyright 2017 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.microbench.http2.internal.hpack; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; +import io.netty.handler.codec.http2.Http2Error; +import io.netty.handler.codec.http2.Http2Exception; +import io.netty.microbench.util.AbstractMicrobenchmark; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.annotations.OutputTimeUnit; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Threads; +import org.openjdk.jmh.annotations.Warmup; + +import java.util.concurrent.TimeUnit; + +@Threads(1) +@State(Scope.Benchmark) +@Fork(1) +@Warmup(iterations = 5) +@Measurement(iterations = 10) +@OutputTimeUnit(TimeUnit.NANOSECONDS) +public class DecoderULE128Benchmark extends AbstractMicrobenchmark { + private static final Http2Exception DECODE_ULE_128_TO_LONG_DECOMPRESSION_EXCEPTION = + new Http2Exception(Http2Error.COMPRESSION_ERROR); + private static final Http2Exception DECODE_ULE_128_TO_INT_DECOMPRESSION_EXCEPTION = + new Http2Exception(Http2Error.COMPRESSION_ERROR); + private static final Http2Exception DECODE_ULE_128_DECOMPRESSION_EXCEPTION = + new Http2Exception(Http2Error.COMPRESSION_ERROR); + + private ByteBuf longMaxBuf; + private ByteBuf intMaxBuf; + + @Setup + public void setup() { + byte[] longMax = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, + (byte) 0xFF, (byte) 0x7F}; + longMaxBuf = Unpooled.wrappedBuffer(longMax); + byte[] intMax = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0x07}; + intMaxBuf = Unpooled.wrappedBuffer(intMax); + } + + @Benchmark + @BenchmarkMode(Mode.AverageTime) + public long decodeMaxLong() throws Http2Exception { + long v = decodeULE128(longMaxBuf, 0L); + longMaxBuf.readerIndex(0); + return v; + } + + @Benchmark + @BenchmarkMode(Mode.AverageTime) + public long decodeMaxIntWithLong() throws Http2Exception { + long v = decodeULE128(intMaxBuf, 0L); + intMaxBuf.readerIndex(0); + return v; + } + + @Benchmark + @BenchmarkMode(Mode.AverageTime) + public int decodeMaxInt() throws Http2Exception { + int v = decodeULE128(intMaxBuf, 0); + intMaxBuf.readerIndex(0); + return v; + } + + @Benchmark + @BenchmarkMode(Mode.AverageTime) + public int decodeMaxIntUsingLong() throws Http2Exception { + int v = decodeULE128UsingLong(intMaxBuf, 0); + intMaxBuf.readerIndex(0); + return v; + } + + static int decodeULE128UsingLong(ByteBuf in, int result) throws Http2Exception { + final int readerIndex = in.readerIndex(); + final long v = decodeULE128(in, (long) result); + if (v > Integer.MAX_VALUE) { + in.readerIndex(readerIndex); + throw DECODE_ULE_128_TO_INT_DECOMPRESSION_EXCEPTION; + } + return (int) v; + } + + static long decodeULE128(ByteBuf in, long result) throws Http2Exception { + assert result <= 0x7f && result >= 0; + final boolean resultStartedAtZero = result == 0; + final int writerIndex = in.writerIndex(); + for (int readerIndex = in.readerIndex(), shift = 0; readerIndex < writerIndex; ++readerIndex, shift += 7) { + byte b = in.getByte(readerIndex); + if (shift == 56 && ((b & 0x80) != 0 || b == 0x7F && !resultStartedAtZero)) { + // the maximum value that can be represented by a signed 64 bit number is: + // [0x01L, 0x7fL] + 0x7fL + (0x7fL << 7) + (0x7fL << 14) + (0x7fL << 21) + 
(0x7fL << 28) + (0x7fL << 35) + // + (0x7fL << 42) + (0x7fL << 49) + (0x7eL << 56) + // OR + // 0x0L + 0x7fL + (0x7fL << 7) + (0x7fL << 14) + (0x7fL << 21) + (0x7fL << 28) + (0x7fL << 35) + + // (0x7fL << 42) + (0x7fL << 49) + (0x7fL << 56) + // this means any more shifts will result longMaxBuf overflow so we should break out and throw an error. + throw DECODE_ULE_128_TO_LONG_DECOMPRESSION_EXCEPTION; + } + + if ((b & 0x80) == 0) { + in.readerIndex(readerIndex + 1); + return result + ((b & 0x7FL) << shift); + } + result += (b & 0x7FL) << shift; + } + + throw DECODE_ULE_128_DECOMPRESSION_EXCEPTION; + } + + static int decodeULE128(ByteBuf in, int result) throws Http2Exception { + assert result <= 0x7f && result >= 0; + final boolean resultStartedAtZero = result == 0; + final int writerIndex = in.writerIndex(); + for (int readerIndex = in.readerIndex(), shift = 0; readerIndex < writerIndex; ++readerIndex, shift += 7) { + byte b = in.getByte(readerIndex); + if (shift == 28 && ((b & 0x80) != 0 || !resultStartedAtZero && b > 6 || resultStartedAtZero && b > 7)) { + // the maximum value that can be represented by a signed 32 bit number is: + // [0x1,0x7f] + 0x7f + (0x7f << 7) + (0x7f << 14) + (0x7f << 21) + (0x6 << 28) + // OR + // 0x0 + 0x7f + (0x7f << 7) + (0x7f << 14) + (0x7f << 21) + (0x7 << 28) + // this means any more shifts will result longMaxBuf overflow so we should break out and throw an error. + throw DECODE_ULE_128_TO_INT_DECOMPRESSION_EXCEPTION; + } + + if ((b & 0x80) == 0) { + in.readerIndex(readerIndex + 1); + return result + ((b & 0x7F) << shift); + } + result += (b & 0x7F) << shift; + } + + throw DECODE_ULE_128_DECOMPRESSION_EXCEPTION; + } +}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/DecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/DecoderTest.java index db282044026..76c070d91e4 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/DecoderTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/DecoderTest.java @@ -38,7 +38,7 @@ import org.junit.Before; import org.junit.Test; -import static io.netty.handler.codec.http2.Http2TestUtil.newTestDecoder; +import static io.netty.handler.codec.http2.internal.hpack.Decoder.decodeULE128; import static io.netty.util.AsciiString.EMPTY_STRING; import static io.netty.util.AsciiString.of; import static java.lang.Integer.MAX_VALUE; @@ -50,10 +50,6 @@ import static org.mockito.Mockito.verifyNoMoreInteractions; public class DecoderTest { - - private static final int MAX_HEADER_LIST_SIZE = 8192; - private static final int MAX_HEADER_TABLE_SIZE = 4096; - private Decoder decoder; private Http2Headers mockHeaders; @@ -77,6 +73,109 @@ public void setUp() throws Http2Exception { mockHeaders = mock(Http2Headers.class); } + @Test + public void testDecodeULE128IntMax() throws Http2Exception { + byte[] input = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0x07}; + ByteBuf in = Unpooled.wrappedBuffer(input); + try { + assertEquals(Integer.MAX_VALUE, decodeULE128(in, 0)); + } finally { + in.release(); + } + } + + @Test(expected = Http2Exception.class) + public void testDecodeULE128IntOverflow1() throws Http2Exception { + byte[] input = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0x07}; + ByteBuf in = Unpooled.wrappedBuffer(input); + final int readerIndex = in.readerIndex(); + try { + decodeULE128(in, 1); + } finally { + assertEquals(readerIndex, in.readerIndex()); + in.release(); + } + } + + @Test(expected = Http2Exception.class) + public void testDecodeULE128IntOverflow2() throws Http2Exception { + byte[] input = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0x08}; + ByteBuf in = Unpooled.wrappedBuffer(input); + final int readerIndex = in.readerIndex(); + try { + decodeULE128(in, 0); + } finally { + assertEquals(readerIndex, in.readerIndex()); + in.release(); + } + } + + @Test + public void testDecodeULE128LongMax() throws Http2Exception { + byte[] input = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, + (byte) 0xFF, (byte) 0x7F}; + ByteBuf in = Unpooled.wrappedBuffer(input); + try { + assertEquals(Long.MAX_VALUE, decodeULE128(in, 0L)); + } finally { + in.release(); + } + } + + @Test(expected = Http2Exception.class) + public void testDecodeULE128LongOverflow1() throws Http2Exception { + byte[] input = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, + (byte) 0xFF, (byte) 0xFF}; + ByteBuf in = Unpooled.wrappedBuffer(input); + final int readerIndex = in.readerIndex(); + try { + decodeULE128(in, 0L); + } finally { + assertEquals(readerIndex, in.readerIndex()); + in.release(); + } + } + + @Test(expected = Http2Exception.class) + public void testDecodeULE128LongOverflow2() throws Http2Exception { + byte[] input = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, + (byte) 0xFF, (byte) 0x7F}; + ByteBuf in = Unpooled.wrappedBuffer(input); + final int readerIndex = in.readerIndex(); + try { + decodeULE128(in, 1L); + } finally { + assertEquals(readerIndex, in.readerIndex()); + in.release(); + } + } + + @Test 
+ public void testSetTableSizeWithMaxUnsigned32BitValueSucceeds() throws Http2Exception { + byte[] input = {(byte) 0x3F, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0x0E}; + ByteBuf in = Unpooled.wrappedBuffer(input); + try { + final long expectedHeaderSize = 4026531870L; // based on the input above + decoder.setMaxHeaderTableSize(expectedHeaderSize); + decoder.decode(0, in, mockHeaders); + assertEquals(expectedHeaderSize, decoder.getMaxHeaderTableSize()); + } finally { + in.release(); + } + } + + @Test(expected = Http2Exception.class) + public void testSetTableSizeOverLimitFails() throws Http2Exception { + byte[] input = {(byte) 0x3F, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0x0E}; + ByteBuf in = Unpooled.wrappedBuffer(input); + try { + decoder.setMaxHeaderTableSize(4026531870L - 1); // based on the input above ... 1 less than is above. + decoder.decode(0, in, mockHeaders); + } finally { + in.release(); + } + } + @Test public void testLiteralHuffmanEncodedWithEmptyNameAndValue() throws Http2Exception { byte[] input = {0, (byte) 0x80, 0}; @@ -123,7 +222,7 @@ public void testLiteralHuffmanEncodedWithPaddingNotCorrespondingToMSBThrows() th } @Test(expected = Http2Exception.class) - public void testIncompleteIndex() throws Http2Exception, Http2Exception { + public void testIncompleteIndex() throws Http2Exception { byte[] compressed = Hex.decodeHex("FFF0".toCharArray()); ByteBuf in = Unpooled.wrappedBuffer(compressed); try { diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/EncoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/EncoderTest.java new file mode 100644 index 00000000000..6e72b41628d --- /dev/null +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/internal/hpack/EncoderTest.java @@ -0,0 +1,60 @@ +/* + * Copyright 2017 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.handler.codec.http2.internal.hpack; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; +import io.netty.handler.codec.http2.Http2Exception; +import io.netty.handler.codec.http2.Http2Headers; +import org.junit.Before; +import org.junit.Test; + +import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_HEADER_TABLE_SIZE; +import static org.junit.Assert.assertEquals; +import static org.mockito.Mockito.mock; + +public class EncoderTest { + private Decoder decoder; + private Encoder encoder; + private Http2Headers mockHeaders; + + @Before + public void setUp() throws Http2Exception { + encoder = new Encoder(); + decoder = new Decoder(); + mockHeaders = mock(Http2Headers.class); + } + + @Test + public void testSetMaxHeaderTableSizeToMaxValue() throws Http2Exception { + ByteBuf buf = Unpooled.buffer(); + encoder.setMaxHeaderTableSize(buf, MAX_HEADER_TABLE_SIZE); + decoder.setMaxHeaderTableSize(MAX_HEADER_TABLE_SIZE); + decoder.decode(0, buf, mockHeaders); + assertEquals(MAX_HEADER_TABLE_SIZE, decoder.getMaxHeaderTableSize()); + buf.release(); + } + + @Test(expected = Http2Exception.class) + public void testSetMaxHeaderTableSizeOverflow() throws Http2Exception { + ByteBuf buf = Unpooled.buffer(); + try { + encoder.setMaxHeaderTableSize(buf, MAX_HEADER_TABLE_SIZE + 1); + } finally { + buf.release(); + } + } +}
train
train
2017-01-18T08:02:15
"2017-01-12T18:20:35Z"
exell-christopher
val
netty/netty/6192_6235
netty/netty
netty/netty/6192
netty/netty/6235
[ "timestamp(timedelta=83.0, similarity=0.8546401581537069)" ]
c590e3bd63c43e9c1104ce2ab3a5307f14bee960
3e64b7393d27273b75fac2ebebf10ec5481716bf
[ "@wangyuntao we warn in `ServerBootstrap` and `Bootstrap`. The rest is up to the user when the methods return false.", "@normanmaurer , I checked the code and found that [Bootstrap](https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/bootstrap/Bootstrap.java#L272) has warn logs, but [ServerBootstrap](https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java#L144) does't.", "@wangyuntao ups... let me fix this then.", "Should be fixed by https://github.com/netty/netty/pull/6235", "Fixed by https://github.com/netty/netty/pull/6235" ]
[ "May be need rethrow or catch Exception instead of Throwable?", "@fenik17 I just moved the code around that was there before tho... @Scottmitch @nmittler @trustin WDYT ?", "fine for this PR. We will look at `Throwable` in a more broad sense for https://github.com/netty/netty/issues/6096", "@Scottmitch +1 :)" ]
"2017-01-18T07:19:01Z"
[ "defect" ]
Add some warn log about unknown channel option?
Should add some WARN log [at this point](https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/DefaultChannelConfig.java#L109) ? like [this one](https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java#L242) ?
[ "transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java", "transport/src/main/java/io/netty/bootstrap/Bootstrap.java", "transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java" ]
[ "transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java", "transport/src/main/java/io/netty/bootstrap/Bootstrap.java", "transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java" ]
[]
diff --git a/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java b/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java index d6d0e3efde7..8140ebf9d68 100644 --- a/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java +++ b/transport/src/main/java/io/netty/bootstrap/AbstractBootstrap.java @@ -30,6 +30,7 @@ import io.netty.util.concurrent.EventExecutor; import io.netty.util.concurrent.GlobalEventExecutor; import io.netty.util.internal.StringUtil; +import io.netty.util.internal.logging.InternalLogger; import java.net.InetAddress; import java.net.InetSocketAddress; @@ -436,6 +437,33 @@ final Map<AttributeKey<?>, Object> attrs() { return copiedMap(attrs); } + static void setChannelOptions( + Channel channel, Map<ChannelOption<?>, Object> options, InternalLogger logger) { + for (Map.Entry<ChannelOption<?>, Object> e: options.entrySet()) { + setChannelOption(channel, e.getKey(), e.getValue(), logger); + } + } + + static void setChannelOptions( + Channel channel, Map.Entry<ChannelOption<?>, Object>[] options, InternalLogger logger) { + for (Map.Entry<ChannelOption<?>, Object> e: options) { + setChannelOption(channel, e.getKey(), e.getValue(), logger); + } + } + + @SuppressWarnings("unchecked") + private static void setChannelOption( + Channel channel, ChannelOption<?> option, Object value, InternalLogger logger) { + try { + if (!channel.config().setOption((ChannelOption<Object>) option, value)) { + logger.warn("Unknown channel option '{}' for channel '{}'", option, channel); + } + } catch (Throwable t) { + logger.warn( + "Failed to set channel option '{}' with value '{}' for channel '{}'", option, channel, channel, t); + } + } + @Override public String toString() { StringBuilder buf = new StringBuilder() diff --git a/transport/src/main/java/io/netty/bootstrap/Bootstrap.java b/transport/src/main/java/io/netty/bootstrap/Bootstrap.java index 7e2716ec975..17a84d46d56 100644 --- a/transport/src/main/java/io/netty/bootstrap/Bootstrap.java +++ b/transport/src/main/java/io/netty/bootstrap/Bootstrap.java @@ -266,15 +266,7 @@ void init(Channel channel) throws Exception { final Map<ChannelOption<?>, Object> options = options0(); synchronized (options) { - for (Entry<ChannelOption<?>, Object> e: options.entrySet()) { - try { - if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) { - logger.warn("Unknown channel option: " + e); - } - } catch (Throwable t) { - logger.warn("Failed to set a channel option: " + channel, t); - } - } + setChannelOptions(channel, options, logger); } final Map<AttributeKey<?>, Object> attrs = attrs0(); diff --git a/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java b/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java index c142a1144e2..98730e9426b 100644 --- a/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java +++ b/transport/src/main/java/io/netty/bootstrap/ServerBootstrap.java @@ -141,7 +141,7 @@ public ServerBootstrap childHandler(ChannelHandler childHandler) { void init(Channel channel) throws Exception { final Map<ChannelOption<?>, Object> options = options0(); synchronized (options) { - channel.config().setOptions(options); + setChannelOptions(channel, options, logger); } final Map<AttributeKey<?>, Object> attrs = attrs0(); @@ -204,13 +204,13 @@ public ServerBootstrap validate() { } @SuppressWarnings("unchecked") - private static Entry<ChannelOption<?>, Object>[] newOptionArray(int size) { + private static Entry<AttributeKey<?>, Object>[] newAttrArray(int size) { return 
new Entry[size]; } @SuppressWarnings("unchecked") - private static Entry<AttributeKey<?>, Object>[] newAttrArray(int size) { - return new Entry[size]; + private static Map.Entry<ChannelOption<?>, Object>[] newOptionArray(int size) { + return new Map.Entry[size]; } private static class ServerBootstrapAcceptor extends ChannelInboundHandlerAdapter { @@ -237,13 +237,7 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) { child.pipeline().addLast(childHandler); for (Entry<ChannelOption<?>, Object> e: childOptions) { - try { - if (!child.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) { - logger.warn("Unknown channel option: " + e); - } - } catch (Throwable t) { - logger.warn("Failed to set a channel option: " + child, t); - } + setChannelOptions(child, childOptions, logger); } for (Entry<AttributeKey<?>, Object> e: childAttrs) {
null
test
train
2017-01-18T08:02:15
"2017-01-10T07:40:59Z"
ytcoode
val
netty/netty/6264_6267
netty/netty
netty/netty/6264
netty/netty/6267
[ "timestamp(timedelta=658.0, similarity=0.9354032902362656)" ]
7a39afd031accea9ee38653afbd58eb1c466deda
d171f82f96f2893eeb93fed0abb0cdbc6cc48e9b
[ "please submit a PR if you this this is an improvement. I don't see `wakeup` taken into account.", "@Scottmitch Thank you for your advice, here is the PR #6267 " ]
[ "This is a change in behavior, are you sure this is desired? Previously the state was changed to shutdown before the thread was started, now we unconditionally start the thread before attempting shutdown. Perhaps now the thread will be allowed to process jobs that it previously wouldn't have been allowed to.", "@Scottmitch , Please check it again.", "I think you just moved this check from line 611 ... however I wonder if the intention is that this method should be able to \"force the shutdown\" even if the state is `ST_SHUTTING_DOWN` (@normanmaurer - WDYT). If that is the case this method should only exit prematurely if `state >= ST_SHUTDOWN`.", "(for the line above): minor optimization ... `isShuttingDown()` is called above which will use the state, and you are also getting the state below in a loop. you could get the state outside the loop, use it above, and then refresh it at the end of the loop.\r\n\r\n```java\r\nint oldState = STATE_UPDATER.get(this);\r\nif (isShuttingDown(state)) {\r\n return terminationFuture();\r\n}\r\n\r\nfor(;;) {\r\n//try atomic set\r\n oldState = STATE_UPDATER.get(this);\r\n}\r\n\r\n@Override\r\npublic boolean isShuttingDown() {\r\n return isShuttingDown(STATE_UPDATER.get(this));\r\n}\r\n\r\nprivate static boolean isShuttingDown(int state) {\r\n return state >= ST_SHUTDOWN;\r\n}\r\n```", "Yes, maybe you are right!", "@Scottmitch , how about we just remove the shutdown check before the for loop?", "this is a change in behavior. the new code will always try to change the state but the old code, but the old code only changed the state in the event loop. @normanmaurer - WDYT (here is the original change [1])\r\n\r\n[1] https://github.com/netty/netty/commit/16a85e6cca46cfdcfe07c9e76a3b935c72c5ec1d#diff-cde1b8aa1fe0c3a083412f79421bafcfR501" ]
"2017-01-24T03:11:00Z"
[]
Code suggestion about shutdownGracefully
Here's the current [code](https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java#L557), and the following is a code snippet: ```java boolean inEventLoop = inEventLoop(); boolean wakeup; int oldState; for (;;) { if (isShuttingDown()) { return terminationFuture(); } int newState; wakeup = true; oldState = STATE_UPDATER.get(this); if (inEventLoop) { newState = ST_SHUTTING_DOWN; } else { switch (oldState) { case ST_NOT_STARTED: case ST_STARTED: newState = ST_SHUTTING_DOWN; break; default: newState = oldState; wakeup = false; } } if (STATE_UPDATER.compareAndSet(this, oldState, newState)) { break; } } gracefulShutdownQuietPeriod = unit.toNanos(quietPeriod); gracefulShutdownTimeout = unit.toNanos(timeout); ``` Would it be better to code like this? ```java int oldState; for (;;) { oldState = STATE_UPDATER.get(this); if (oldState >= ST_SHUTTING_DOWN) { return terminationFuture(); } if (STATE_UPDATER.compareAndSet(this, oldState, ST_SHUTTING_DOWN)) { break; } } gracefulShutdownQuietPeriod = unit.toNanos(quietPeriod); gracefulShutdownTimeout = unit.toNanos(timeout); ```
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java" ]
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java index e90358cffd0..22be86b9b0f 100644 --- a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java +++ b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java @@ -547,37 +547,18 @@ public Future<?> shutdownGracefully(long quietPeriod, long timeout, TimeUnit uni throw new NullPointerException("unit"); } - if (isShuttingDown()) { - return terminationFuture(); - } - - boolean inEventLoop = inEventLoop(); - boolean wakeup; int oldState; for (;;) { - if (isShuttingDown()) { - return terminationFuture(); - } - int newState; - wakeup = true; oldState = STATE_UPDATER.get(this); - if (inEventLoop) { - newState = ST_SHUTTING_DOWN; - } else { - switch (oldState) { - case ST_NOT_STARTED: - case ST_STARTED: - newState = ST_SHUTTING_DOWN; - break; - default: - newState = oldState; - wakeup = false; - } + if (oldState >= ST_SHUTTING_DOWN) { + return terminationFuture(); } - if (STATE_UPDATER.compareAndSet(this, oldState, newState)) { + + if (STATE_UPDATER.compareAndSet(this, oldState, ST_SHUTTING_DOWN)) { break; } } + gracefulShutdownQuietPeriod = unit.toNanos(quietPeriod); gracefulShutdownTimeout = unit.toNanos(timeout); @@ -585,10 +566,7 @@ public Future<?> shutdownGracefully(long quietPeriod, long timeout, TimeUnit uni doStartThread(); } - if (wakeup) { - wakeup(inEventLoop); - } - + wakeup(inEventLoop()); return terminationFuture(); } @@ -604,31 +582,14 @@ public void shutdown() { return; } - boolean inEventLoop = inEventLoop(); - boolean wakeup; int oldState; for (;;) { - if (isShuttingDown()) { - return; - } - int newState; - wakeup = true; oldState = STATE_UPDATER.get(this); - if (inEventLoop) { - newState = ST_SHUTDOWN; - } else { - switch (oldState) { - case ST_NOT_STARTED: - case ST_STARTED: - case ST_SHUTTING_DOWN: - newState = ST_SHUTDOWN; - break; - default: - newState = oldState; - wakeup = false; - } + if (oldState >= ST_SHUTTING_DOWN) { + return; } - if (STATE_UPDATER.compareAndSet(this, oldState, newState)) { + + if (STATE_UPDATER.compareAndSet(this, oldState, ST_SHUTDOWN)) { break; } } @@ -637,9 +598,7 @@ public void shutdown() { doStartThread(); } - if (wakeup) { - wakeup(inEventLoop); - } + wakeup(inEventLoop()); } @Override
null
train
train
2017-01-27T23:37:10
"2017-01-23T04:36:15Z"
ytcoode
val
netty/netty/6282_6288
netty/netty
netty/netty/6282
netty/netty/6288
[ "timestamp(timedelta=267.0, similarity=0.8924097440792926)" ]
e13da218e9509058d99ba2ec9b60bbf46d6d67ce
270aa1293a7bd73501248983ff4d87b4d201036a
[ "@isdom this is because of the caches... when you deallocate the buffers they will first end up in ThreadLocal caches. Only after these removed from the caches they are really deallocated. You can disable the caches to see it.", "@normanmaurer but I have already disable ThreadLocal caches by set JVM flag: \r\n```\r\n-Dio.netty.allocator.tinyCacheSize=0 \r\n-Dio.netty.allocator.smallCacheSize=0 \r\n-Dio.netty.allocator.normalCacheSize=0 \r\n```\r\nand Take a look at the following [test code](https://github.com/isdom/jocean-http/blob/88e15e8a3cecab118da550eaa991a9322035f99d/jocean-http/src/test/java/org/jocean/netty/buffer/PoolArenaTestCase.java):\r\n\r\n```\r\n public final void testPoolArenaAllocationCounter() {\r\n // create PooledByteBufAllocator instance with ThreadLocal Cache disabled\r\n final PooledByteBufAllocator allocator = new PooledByteBufAllocator(\r\n true, //boolean preferDirect, \r\n 0, //int nHeapArena, \r\n 1, //int nDirectArena, \r\n 8192, //int pageSize, \r\n 11, //int maxOrder,\r\n 0, //int tinyCacheSize, \r\n 0, //int smallCacheSize, \r\n 0, //int normalCacheSize,\r\n true //boolean useCacheForAllThreads\r\n );\r\n \r\n // alloc tiny buf\r\n final ByteBuf b1 = allocator.directBuffer(24);\r\n \r\n // alloc small buf\r\n final ByteBuf b2 = allocator.directBuffer(800);\r\n \r\n // alloc normal buf\r\n final ByteBuf b3 = allocator.directBuffer(8192 * 2);\r\n \r\n // make sure alloc buf success \r\n assertNotNull(b1);\r\n assertNotNull(b2);\r\n assertNotNull(b3);\r\n \r\n // then release buf to deallocated buffers memory, and bcs threadlocal cache has been disabled\r\n // allocations counter value must equals deallocations counter value\r\n assertTrue(b1.release());\r\n assertTrue(b2.release());\r\n assertTrue(b3.release());\r\n \r\n assertTrue(allocator.directArenas().size() >= 1);\r\n \r\n final PoolArenaMetric metric = allocator.directArenas().get(0);\r\n \r\n // 1 tiny allocation + 1 small allocation + 1 normal allocation\r\n assertEquals(3, metric.numDeallocations()); // assertion PASS\r\n assertEquals(3, metric.numAllocations()); // assertion PASS\r\n \r\n // BUT following assertion fail\r\n assertEquals(metric.numTinyDeallocations(), metric.numTinyAllocations());\r\n assertEquals(metric.numSmallDeallocations(), metric.numSmallAllocations());\r\n assertEquals(metric.numNormalDeallocations(), metric.numNormalAllocations());\r\n }\r\n```\r\n\r\nI think PoolArena try to allocate tiny or small buffer, when PoolSubpage is empty( [head.next == head](https://github.com/netty/netty/blob/5986c229c463f064e7b787a4547c0689d2302a03/buffer/src/main/java/io/netty/buffer/PoolArena.java#L199) ), tiny or small buffer initialized via function [allocateNormal(buf, reqCapacity, normCapacity) ](https://github.com/netty/netty/blob/5986c229c463f064e7b787a4547c0689d2302a03/buffer/src/main/java/io/netty/buffer/PoolArena.java#L228), but **allocateNormal** always increase normal's counter at [code1](https://github.com/netty/netty/blob/5986c229c463f064e7b787a4547c0689d2302a03/buffer/src/main/java/io/netty/buffer/PoolArena.java#L232) and [code2](https://github.com/netty/netty/blob/5986c229c463f064e7b787a4547c0689d2302a03/buffer/src/main/java/io/netty/buffer/PoolArena.java#L239), these two increment action could be replaced by following code:\r\n\r\n```\r\nprivate void updateAllocationCounter(final int normCapacity) {\r\n if (isTiny(normCapacity)) {\r\n allocationsTiny.increment();\r\n } else if (isTinyOrSmall(normCapacity)) {\r\n allocationsSmall.increment();\r\n } else {\r\n 
++allocationsNormal;\r\n }\r\n}\r\n```\r\n\r\nPlease check PoolArena's allocation increment code (tiny / small / normal ) again, thanks @normanmaurer \r\n", "@isdom doh! You are right. Want to do a PR or should I take care ?", "@normanmaurer OK,Let me try... ", "@isdom ok cool... Would be nice if just would calculate if its tiny/small/normal allocation one time though and not need to do multiple times 👍 ", "Fixed by https://github.com/netty/netty/pull/6288" ]
[ "@isdom I would prefer if we not need to acquire the `synchronized(this)` two times when we increment for normal allocations. Can you please refactor to ensure we not do ?", "OK", "@normanmaurer Could you give me some advice for my commit: [acquire the synchronized(this) once when we increment for normal allocations](https://github.com/netty/netty/pull/6288/commits/e0b79fe7e7e70ccaf5c3178be773c0a8a552a1fe)?" ]
"2017-01-27T08:38:44Z"
[ "defect" ]
Incorrect allocations value for PoolArena (tiny / small / normal)
I found some PoolArena allocations value was incorrect, more specifically is: long numTinyAllocations(); long numSmallAllocations(); long numNormalAllocations(); and long numNormalDeallocations(); When I alloc Pooled ByteBuf & and release all these Bufs, ### Expected behavior PoolArena's Metric SHOULD meet the following conditions: ``` numTinyAllocations() == numTinyDeallocations() && numSmallAllocations() == numSmallDeallocations() && numNormalAllocations() == numNormalDeallocations() ``` ### Actual behavior BUT now result: ``` numTinyAllocations() < numTinyDeallocations() && numSmallAllocations() < numSmallDeallocations() && numNormalAllocations() > numNormalDeallocations() ``` It seems some tiny & small allocations increase normal's counter, I export PoolArenaMetric as MBean by [code](https://github.com/isdom/jocean-http/blob/9d8d528a2a3ee6bf2ca12f04054523dec94d3507/jocean-http/src/main/java/org/jocean/http/util/NettyStats.java) and show MBean by Web using zkoss, see below: <img width="905" alt="2017-01-25 11 23 34" src="https://cloud.githubusercontent.com/assets/2010691/22338996/a21ee320-e423-11e6-8575-0fee497ae401.png"> In this case, `numAllocations(2404) == numDeallocations(2404)` BUT ``` (numTinyDeallocations - numTinyAllocations) // == 17 + (numSmallDeallocations - numSmallAllocations) // == 9 ``` equals ` (numNormalAllocations - numNormalDeallocations) // == 26` ### Steps to reproduce I start test code with VM flag (disable thread local cache): ``` -XX:MaxDirectMemorySize=96M -Dio.netty.allocator.tinyCacheSize=0 -Dio.netty.allocator.smallCacheSize=0 -Dio.netty.allocator.normalCacheSize=0 -Dio.netty.allocator.type=pooled ``` then alloc some ByteBuf and release Bufs. ### Minimal yet complete reproducer code (or URL to code) TestCase to reproduce: https://github.com/isdom/jocean-http/blob/6bc6cfa9ae1e71395a4dd52355b1b0a8365bb530/jocean-http/src/test/java/org/jocean/netty/buffer/PooledByteBufAllocatorTestCase.java and make sure run this test with VM flag: -XX:MaxDirectMemorySize=96M It fail at: ``` assertEquals(metric.numTinyDeallocations(), metric.numTinyAllocations()); assertEquals(metric.numSmallDeallocations(), metric.numSmallAllocations()); assertEquals(metric.numNormalDeallocations(), metric.numNormalAllocations()); ``` output : ``` java.lang.AssertionError: expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:88) ``` If this is confirmed as a issue,I can open a PR to fix it. ### Netty version netty-all-4.1.7.Final ### JVM version (e.g. `java -version`) java version "1.8.0_102" Java(TM) SE Runtime Environment (build 1.8.0_102-b14) Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode) ### OS version (e.g. `uname -a`) Linux xxx 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
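The patch below introduces a single helper that classifies the normalized capacity once and bumps exactly one counter; here is a minimal, hypothetical sketch of that bookkeeping idea (class name, method name and thresholds are placeholders and only approximate Netty's isTiny/isTinyOrSmall checks).

```java
import java.util.concurrent.atomic.LongAdder;

// Toy model of the size-class bookkeeping: the allocation path decides the
// size class once and increments exactly one counter, instead of letting the
// "normal" path (which also serves first-time tiny/small requests) bump the
// normal counter.
public class SizeClassCounters {
    private static final int PAGE_SIZE = 8192;       // assumed page size
    private static final int SMALL_THRESHOLD = 512;  // tiny < 512 <= small < pageSize

    private final LongAdder tiny = new LongAdder();
    private final LongAdder small = new LongAdder();
    private final LongAdder normal = new LongAdder();

    void recordAllocation(int normCapacity) {
        if (normCapacity < SMALL_THRESHOLD) {
            tiny.increment();
        } else if (normCapacity < PAGE_SIZE) {
            small.increment();
        } else {
            normal.increment();
        }
    }

    public static void main(String[] args) {
        SizeClassCounters counters = new SizeClassCounters();
        counters.recordAllocation(24);        // tiny
        counters.recordAllocation(800);       // small
        counters.recordAllocation(2 * 8192);  // normal
        System.out.printf("tiny=%d small=%d normal=%d%n",
                counters.tiny.sum(), counters.small.sum(), counters.normal.sum());
    }
}
```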
[ "buffer/src/main/java/io/netty/buffer/PoolArena.java" ]
[ "buffer/src/main/java/io/netty/buffer/PoolArena.java" ]
[ "buffer/src/test/java/io/netty/buffer/PoolArenaTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/PoolArena.java b/buffer/src/main/java/io/netty/buffer/PoolArena.java index 0b87a96f8df..2faeb09be1f 100644 --- a/buffer/src/main/java/io/netty/buffer/PoolArena.java +++ b/buffer/src/main/java/io/netty/buffer/PoolArena.java @@ -201,16 +201,14 @@ private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int req long handle = s.allocate(); assert handle >= 0; s.chunk.initBufWithSubpage(buf, handle, reqCapacity); - - if (tiny) { - allocationsTiny.increment(); - } else { - allocationsSmall.increment(); - } + incTinySmallNormalAllocation(normCapacity); return; } } - allocateNormal(buf, reqCapacity, normCapacity); + synchronized (this) { + allocateNormal(buf, reqCapacity, normCapacity); + incTinySmallNormalAllocation(normCapacity); + } return; } if (normCapacity <= chunkSize) { @@ -218,30 +216,41 @@ private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int req // was able to allocate out of the cache so move on return; } - allocateNormal(buf, reqCapacity, normCapacity); + synchronized (this) { + allocateNormal(buf, reqCapacity, normCapacity); + incTinySmallNormalAllocation(normCapacity); + } } else { // Huge allocations are never served via the cache so just call allocateHuge allocateHuge(buf, reqCapacity); } } - private synchronized void allocateNormal(PooledByteBuf<T> buf, int reqCapacity, int normCapacity) { + private void allocateNormal(PooledByteBuf<T> buf, int reqCapacity, int normCapacity) { if (q050.allocate(buf, reqCapacity, normCapacity) || q025.allocate(buf, reqCapacity, normCapacity) || q000.allocate(buf, reqCapacity, normCapacity) || qInit.allocate(buf, reqCapacity, normCapacity) || q075.allocate(buf, reqCapacity, normCapacity)) { - ++allocationsNormal; return; } // Add a new chunk. PoolChunk<T> c = newChunk(pageSize, maxOrder, pageShifts, chunkSize); long handle = c.allocate(normCapacity); - ++allocationsNormal; assert handle > 0; c.initBuf(buf, handle, reqCapacity); qInit.add(c); } + private void incTinySmallNormalAllocation(final int normCapacity) { + if (isTiny(normCapacity)) { + allocationsTiny.increment(); + } else if (isTinyOrSmall(normCapacity)) { + allocationsSmall.increment(); + } else { + ++allocationsNormal; + } + } + private void allocateHuge(PooledByteBuf<T> buf, int reqCapacity) { PoolChunk<T> chunk = newUnpooledChunk(reqCapacity); activeBytesHuge.add(chunk.chunkSize());
diff --git a/buffer/src/test/java/io/netty/buffer/PoolArenaTest.java b/buffer/src/test/java/io/netty/buffer/PoolArenaTest.java index f5acd2b4f22..f27c41ca3ef 100644 --- a/buffer/src/test/java/io/netty/buffer/PoolArenaTest.java +++ b/buffer/src/test/java/io/netty/buffer/PoolArenaTest.java @@ -32,4 +32,49 @@ public void testNormalizeCapacity() throws Exception { Assert.assertEquals(expectedResult[i], arena.normalizeCapacity(reqCapacities[i])); } } + + @Test + public final void testAllocationCounter() { + final PooledByteBufAllocator allocator = new PooledByteBufAllocator( + true, // preferDirect + 0, // nHeapArena + 1, // nDirectArena + 8192, // pageSize + 11, // maxOrder + 0, // tinyCacheSize + 0, // smallCacheSize + 0, // normalCacheSize + true // useCacheForAllThreads + ); + + // create tiny buffer + final ByteBuf b1 = allocator.directBuffer(24); + // create small buffer + final ByteBuf b2 = allocator.directBuffer(800); + // create normal buffer + final ByteBuf b3 = allocator.directBuffer(8192 * 2); + + Assert.assertNotNull(b1); + Assert.assertNotNull(b2); + Assert.assertNotNull(b3); + + // then release buffer to deallocated memory while threadlocal cache has been disabled + // allocations counter value must equals deallocations counter value + Assert.assertTrue(b1.release()); + Assert.assertTrue(b2.release()); + Assert.assertTrue(b3.release()); + + Assert.assertTrue(allocator.directArenas().size() >= 1); + final PoolArenaMetric metric = allocator.directArenas().get(0); + + Assert.assertEquals(3, metric.numDeallocations()); + Assert.assertEquals(3, metric.numAllocations()); + + Assert.assertEquals(1, metric.numTinyDeallocations()); + Assert.assertEquals(1, metric.numTinyAllocations()); + Assert.assertEquals(1, metric.numSmallDeallocations()); + Assert.assertEquals(1, metric.numSmallAllocations()); + Assert.assertEquals(1, metric.numNormalDeallocations()); + Assert.assertEquals(1, metric.numNormalAllocations()); + } }
train
train
2017-01-27T08:26:17
"2017-01-26T16:14:10Z"
isdom
val
netty/netty/5951_6295
netty/netty
netty/netty/5951
netty/netty/6295
[ "timestamp(timedelta=234.0, similarity=0.8622783563886524)" ]
d3581b575eea6c9cc2ab6f7ee7db9be965660fc7
7f1c51ee9d82f9df86e50158adfa50a891434801
[ "@nitsanw can you explain what the `advantages` of the new impl is ?\n", "IIUC Netty originally had an unbounded linked queue here. It then moved to using Chunked which is bounded and just set a very very high boundary on max size. Unbounded should offer a slightly faster and simpler resize logic, and a more accurate representation of the original intention.\nAlso note that this release fixes an issue with the Growable variant, but I don't think Netty uses it.\n" ]
[ "I plan to open an issue for these but wanted to discuss first.", "@nitsanw - FYI. I didn't use `MpscLinkedQueue` in this PR for GC concerns. I have not done any evaluation ... we can do this as a followup PR if necessary. I did want to upgrade to the latest version of JCTools though.", "+1", "+1", "@Scottmitch why not do this in `PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks);` itself ?", "see above", "we could however `MAX_VALUE` may not necessarily mean \"unlimited\" in practice. For these particular use cases that is the intention, but the goal was `newMpscQueue(int)` would enforce a limit while `newMpscQueue()` isn't required to enforce a limit." ]
"2017-01-30T18:45:55Z"
[ "feature" ]
Upgrade to JCTools v2.0, use the Unbounded MPSC to replace Chunked
[ "common/src/main/java/io/netty/util/internal/PlatformDependent.java", "pom.xml", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[ "common/src/main/java/io/netty/util/internal/PlatformDependent.java", "pom.xml", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/PlatformDependent.java b/common/src/main/java/io/netty/util/internal/PlatformDependent.java index 2fd3c985a94..e56ba5817cc 100644 --- a/common/src/main/java/io/netty/util/internal/PlatformDependent.java +++ b/common/src/main/java/io/netty/util/internal/PlatformDependent.java @@ -19,9 +19,11 @@ import io.netty.util.internal.logging.InternalLoggerFactory; import org.jctools.queues.MpscArrayQueue; import org.jctools.queues.MpscChunkedArrayQueue; +import org.jctools.queues.MpscUnboundedArrayQueue; import org.jctools.queues.SpscLinkedQueue; import org.jctools.queues.atomic.MpscAtomicArrayQueue; -import org.jctools.queues.atomic.MpscLinkedAtomicQueue; +import org.jctools.queues.atomic.MpscGrowableAtomicArrayQueue; +import org.jctools.queues.atomic.MpscUnboundedAtomicArrayQueue; import org.jctools.queues.atomic.SpscLinkedAtomicQueue; import org.jctools.util.Pow2; import org.jctools.util.UnsafeAccess; @@ -51,6 +53,8 @@ import static io.netty.util.internal.PlatformDependent0.HASH_CODE_C2; import static io.netty.util.internal.PlatformDependent0.hashCodeAsciiSanitize; import static io.netty.util.internal.PlatformDependent0.unalignedAccess; +import static java.lang.Math.max; +import static java.lang.Math.min; /** * Utility that detects various properties specific to the current runtime @@ -79,7 +83,6 @@ public final class PlatformDependent { private static final int MPSC_CHUNK_SIZE = 1024; private static final int MIN_MAX_MPSC_CAPACITY = MPSC_CHUNK_SIZE * 2; - private static final int DEFAULT_MAX_MPSC_CAPACITY = MPSC_CHUNK_SIZE * MPSC_CHUNK_SIZE; private static final int MAX_ALLOWED_MPSC_CAPACITY = Pow2.MAX_POW2; private static final long BYTE_ARRAY_BASE_OFFSET = byteArrayBaseOffset0(); @@ -826,25 +829,27 @@ public Object run() { } static <T> Queue<T> newMpscQueue(final int maxCapacity) { - if (USE_MPSC_CHUNKED_ARRAY_QUEUE) { - // Calculate the max capacity which can not be bigger then MAX_ALLOWED_MPSC_CAPACITY. - // This is forced by the MpscChunkedArrayQueue implementation as will try to round it - // up to the next power of two and so will overflow otherwise. - final int capacity = - Math.max(Math.min(maxCapacity, MAX_ALLOWED_MPSC_CAPACITY), MIN_MAX_MPSC_CAPACITY); - return new MpscChunkedArrayQueue<T>(MPSC_CHUNK_SIZE, capacity); - } else { - return new MpscLinkedAtomicQueue<T>(); - } + // Calculate the max capacity which can not be bigger then MAX_ALLOWED_MPSC_CAPACITY. + // This is forced by the MpscChunkedArrayQueue implementation as will try to round it + // up to the next power of two and so will overflow otherwise. + final int capacity = max(min(maxCapacity, MAX_ALLOWED_MPSC_CAPACITY), MIN_MAX_MPSC_CAPACITY); + return USE_MPSC_CHUNKED_ARRAY_QUEUE ? new MpscChunkedArrayQueue<T>(MPSC_CHUNK_SIZE, capacity) + : new MpscGrowableAtomicArrayQueue<T>(MPSC_CHUNK_SIZE, capacity); + } + + static <T> Queue<T> newMpscQueue() { + return USE_MPSC_CHUNKED_ARRAY_QUEUE ? new MpscUnboundedArrayQueue<T>(MPSC_CHUNK_SIZE) + : new MpscUnboundedAtomicArrayQueue<T>(MPSC_CHUNK_SIZE); } } /** * Create a new {@link Queue} which is safe to use for multiple producers (different threads) and a single * consumer (one thread!). + * @return A MPSC queue which may be unbounded. 
*/ public static <T> Queue<T> newMpscQueue() { - return newMpscQueue(DEFAULT_MAX_MPSC_CAPACITY); + return Mpsc.newMpscQueue(); } /** diff --git a/pom.xml b/pom.xml index 4be0e4421be..4deabc18bc8 100644 --- a/pom.xml +++ b/pom.xml @@ -370,7 +370,7 @@ <dependency> <groupId>org.jctools</groupId> <artifactId>jctools-core</artifactId> - <version>2.0.1</version> + <version>2.0.2</version> </dependency> <dependency> diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java index 909088fde4b..33e0dc72588 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java @@ -171,7 +171,8 @@ void remove(AbstractEpollChannel ch) throws IOException { @Override protected Queue<Runnable> newTaskQueue(int maxPendingTasks) { // This event loop never calls takeTask() - return PlatformDependent.newMpscQueue(maxPendingTasks); + return maxPendingTasks == Integer.MAX_VALUE ? PlatformDependent.<Runnable>newMpscQueue() + : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); } @Override diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java index fea086693b7..ccb91173ee1 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java @@ -298,7 +298,8 @@ protected void run() { @Override protected Queue<Runnable> newTaskQueue(int maxPendingTasks) { // This event loop never calls takeTask() - return PlatformDependent.newMpscQueue(maxPendingTasks); + return maxPendingTasks == Integer.MAX_VALUE ? PlatformDependent.<Runnable>newMpscQueue() + : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); } @Override diff --git a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java index 3de80fca120..331182e0e77 100644 --- a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java +++ b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java @@ -256,7 +256,8 @@ public SelectorProvider selectorProvider() { @Override protected Queue<Runnable> newTaskQueue(int maxPendingTasks) { // This event loop never calls takeTask() - return PlatformDependent.newMpscQueue(maxPendingTasks); + return maxPendingTasks == Integer.MAX_VALUE ? PlatformDependent.<Runnable>newMpscQueue() + : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); } @Override
null
test
train
2017-07-06T02:14:27
"2016-10-28T10:45:59Z"
nitsanw
val
netty/netty/6328_6340
netty/netty
netty/netty/6328
netty/netty/6340
[ "timestamp(timedelta=12.0, similarity=0.8957507968918899)" ]
d06990f43458e9ac7cea803269e83d072ada61f6
526e19f51e12167e28b85ac899617dc2486e2b8e
[ "I can provide a PR.\r\n\r\nThe question is: \"can we use CharSequence all the way down, meaning having CorsConfig::exposedHeaders and CorsConfig:: allowedRequestHeaders return `Set<CharSequence>` instead of `Set<String>`?\".", "> can we use CharSequence all the way down, meaning having CorsConfig::exposedHeaders and CorsConfig:: allowedRequestHeaders return Set<CharSequence> instead of Set<String>?\r\n\r\nAnswer my own question: no without breaking API (Java's Set is invariant).", "[AsciiString#toString()](https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/AsciiString.java#L1152) caches the string conversion. So with these static variables you should only have to pay the price 1 time. Also with the string improvements in jdk9 we may investigate removing `AsciiString` when we can support jdk9.", "@Scottmitch Got it. Closing then, thanks!", "Actually, in terms of API, we could just make the `CorsConfigBuilder` accept `CharSequence` and call `toString` internally. WDYT?", "I haven't thought about it too much but the class is final so on the surface that seems reasonable.", "I'll send a PR later tonight.", "@slandelle cool.. Just ensure you also keep the old methods to preserve binary compatibility. ", "@normanmaurer Well, I just plan to expand to super type in covariant position (`foo(String)` -> `foo(CharSequence)`), so it should be fine, shouldn't it?", "Nope this is not fine see also:\n\nhttps://github.com/netty/netty/issues/6283\n\n> Am 09.02.2017 um 20:07 schrieb Stephane Landelle <[email protected]>:\n> \n> @normanmaurer Well, I just plan to expand to super type in covariant position (foo(String) -> foo(CharSequence)), so it should be fine, shouldn't it?\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n> \n", "Mmm, #6283 looks very different to me (at least from a type system perspective).\r\n\r\n`addAll(Future[])` (varargs are arrays) is NOT a generalization of `addAll(Promise[])`, event though `Promise` extends `Future` because arrays are invariant, not covariant.\r\n\r\nMy first guess was that it wasn't necessary in #6292 to restore `add(Promise)`.\r\n\r\nThen, I'm not 100% sure of what happens in the Java bytecode, let me check.", "My bad, Java arrays are covariant (they are invariant in Scala).", "I stand corrected :)", "PR: #6340" ]
[]
"2017-02-09T20:49:50Z"
[]
CorsConfigBuilder exposeHeaders and allowedRequestHeaders should accept CharSequence
### Expected behavior In Netty 4.1, `HttpHeaderNames` constants are `AsciiString`s. It should be possible to directly pass those constants to `CorsConfigBuilder` methods such as `exposeHeaders`, `allowedRequestHeaders`. ### Actual behavior Those methods take `String`, so one has to convert and call `toString` on the `AsciiString`s. ### Netty version 4.1.8.Final
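From a caller's perspective the change looks roughly like this; the first snippet is the 4.1.8 workaround and the second assumes a Netty release that already contains the CharSequence overloads added by the patch below.

```java
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.cors.CorsConfig;
import io.netty.handler.codec.http.cors.CorsConfigBuilder;

public class CorsConfigExample {
    public static void main(String[] args) {
        // Workaround on 4.1.8: AsciiString constants must be converted explicitly.
        CorsConfig before = CorsConfigBuilder.forAnyOrigin()
                .exposeHeaders(HttpHeaderNames.CONTENT_TYPE.toString())
                .allowedRequestHeaders(HttpHeaderNames.CACHE_CONTROL.toString())
                .build();

        // With the CharSequence overloads, the constants can be passed directly
        // (this only compiles against a release containing the change below).
        CorsConfig after = CorsConfigBuilder.forAnyOrigin()
                .exposeHeaders(HttpHeaderNames.CONTENT_TYPE)
                .allowedRequestHeaders(HttpHeaderNames.CACHE_CONTROL)
                .build();

        System.out.println(before.exposedHeaders());
        System.out.println(after.exposedHeaders());
    }
}
```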
[ "codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java" ]
[]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java b/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java index 93d76ae82dc..3285581af77 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java @@ -148,6 +148,38 @@ public CorsConfigBuilder exposeHeaders(final String... headers) { return this; } + /** + * Specifies the headers to be exposed to calling clients. + * + * During a simple CORS request, only certain response headers are made available by the + * browser, for example using: + * <pre> + * xhr.getResponseHeader(HttpHeaderNames.CONTENT_TYPE); + * </pre> + * + * The headers that are available by default are: + * <ul> + * <li>Cache-Control</li> + * <li>Content-Language</li> + * <li>Content-Type</li> + * <li>Expires</li> + * <li>Last-Modified</li> + * <li>Pragma</li> + * </ul> + * + * To expose other headers they need to be specified which is what this method enables by + * adding the headers to the CORS 'Access-Control-Expose-Headers' response header. + * + * @param headers the values to be added to the 'Access-Control-Expose-Headers' response header + * @return {@link CorsConfigBuilder} to support method chaining. + */ + public CorsConfigBuilder exposeHeaders(final CharSequence... headers) { + for (CharSequence header: headers) { + exposeHeaders.add(header.toString()); + } + return this; + } + /** * By default cookies are not included in CORS requests, but this method will enable cookies to * be added to CORS requests. Calling this method will set the CORS 'Access-Control-Allow-Credentials' @@ -215,6 +247,29 @@ public CorsConfigBuilder allowedRequestHeaders(final String... headers) { return this; } + /** + * Specifies the if headers that should be returned in the CORS 'Access-Control-Allow-Headers' + * response header. + * + * If a client specifies headers on the request, for example by calling: + * <pre> + * xhr.setRequestHeader('My-Custom-Header', "SomeValue"); + * </pre> + * the server will receive the above header name in the 'Access-Control-Request-Headers' of the + * preflight request. The server will then decide if it allows this header to be sent for the + * real request (remember that a preflight is not the real request but a request asking the server + * if it allow a request). + * + * @param headers the headers to be added to the preflight 'Access-Control-Allow-Headers' response header. + * @return {@link CorsConfigBuilder} to support method chaining. + */ + public CorsConfigBuilder allowedRequestHeaders(final CharSequence... headers) { + for (CharSequence header: headers) { + requestHeaders.add(header.toString()); + } + return this; + } + /** * Returns HTTP response headers that should be added to a CORS preflight response. *
null
train
train
2017-02-09T18:50:55
"2017-02-07T16:23:43Z"
slandelle
val
netty/netty/6333_6352
netty/netty
netty/netty/6333
netty/netty/6352
[ "timestamp(timedelta=4617.0, similarity=0.8719420871899115)" ]
fe522fb18e829226ffcc4cdb92fe4b98e2664d08
205004d9d1bb77ee105f3785af9caf72eae59286
[ "@errandir `CombinedChannelDuplexHandler` implementation can't be `@Sharable` as it contains state. Let me ensure we throw an exception if someone tries to make a sub-class of `CombinedChannelDuplexHandler ` `@Sharable`.\r\n\r\nThanks for reporting", "@normanmaurer There is real lack of such exception and even `CombinedChannelDuplexHandler` comment.", "@errandir yep... will have a PR soon. ", "Fixed by #6352" ]
[]
"2017-02-13T18:34:26Z"
[ "defect" ]
@Sharable CombinedChannelDuplexHandler usage problems
Netty version: 4.1.8.Final @Sharable CombinedChannelDuplexHandler implementation shares ChannelHandlerContext among different connections. Problem could be reproduced with [this code](https://gist.github.com/errandir/af45f3559421e3d12ace556af3e4999b) _(compile and run with netty-all-4.1.8.Final.jar in classpath)_. Output will differ from time to time. For example I get: <pre> io.netty.channel.CombinedChannelDuplexHandler$1@3e234509: [1-1] io.netty.channel.CombinedChannelDuplexHandler$1@3e328346: [1-2] io.netty.channel.CombinedChannelDuplexHandler$1@3e328346: [2-1][2-2][2-3] io.netty.channel.CombinedChannelDuplexHandler$1@3e328346: [2-4] io.netty.channel.CombinedChannelDuplexHandler$1@3e328346: [1-3][1-4] </pre> but expect that `[1-*]` strings will be handled by the first handler (`3e234509`).
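The fix below makes the CombinedChannelDuplexHandler constructor reject @Sharable subclasses outright; on the usage side the safe pattern is simply to create a fresh handler instance per channel, for example (the initializer class name here is a placeholder):

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpClientCodec;

// Stateful handlers such as CombinedChannelDuplexHandler subclasses (e.g. the
// HTTP codecs) must be created per connection; this initializer runs for every
// channel, so each pipeline gets its own instance instead of a shared one.
public class PerChannelInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new HttpClientCodec()); // new instance per channel
    }
}
```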
[ "codec/src/main/java/io/netty/handler/codec/ByteToMessageCodec.java", "codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java", "codec/src/main/java/io/netty/handler/codec/CodecUtil.java", "transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java", "transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java" ]
[ "codec/src/main/java/io/netty/handler/codec/ByteToMessageCodec.java", "codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java", "transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java", "transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java" ]
[ "transport/src/test/java/io/netty/channel/CombinedChannelDuplexHandlerTest.java" ]
diff --git a/codec/src/main/java/io/netty/handler/codec/ByteToMessageCodec.java b/codec/src/main/java/io/netty/handler/codec/ByteToMessageCodec.java index c7c61f7cef2..7dd80f5d433 100644 --- a/codec/src/main/java/io/netty/handler/codec/ByteToMessageCodec.java +++ b/codec/src/main/java/io/netty/handler/codec/ByteToMessageCodec.java @@ -70,7 +70,7 @@ protected ByteToMessageCodec(Class<? extends I> outboundMessageType) { * {@link ByteBuf}, which is backed by an byte array. */ protected ByteToMessageCodec(boolean preferDirect) { - CodecUtil.ensureNotSharable(this); + ensureNotSharable(); outboundMsgMatcher = TypeParameterMatcher.find(this, ByteToMessageCodec.class, "I"); encoder = new Encoder(preferDirect); } @@ -84,7 +84,7 @@ protected ByteToMessageCodec(boolean preferDirect) { * {@link ByteBuf}, which is backed by an byte array. */ protected ByteToMessageCodec(Class<? extends I> outboundMessageType, boolean preferDirect) { - CodecUtil.ensureNotSharable(this); + ensureNotSharable(); outboundMsgMatcher = TypeParameterMatcher.get(outboundMessageType); encoder = new Encoder(preferDirect); } diff --git a/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java b/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java index a4ccb5aee4d..0c8a78365c0 100644 --- a/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java +++ b/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java @@ -138,7 +138,7 @@ public ByteBuf cumulate(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in) private int numReads; protected ByteToMessageDecoder() { - CodecUtil.ensureNotSharable(this); + ensureNotSharable(); } /** diff --git a/codec/src/main/java/io/netty/handler/codec/CodecUtil.java b/codec/src/main/java/io/netty/handler/codec/CodecUtil.java deleted file mode 100644 index 8526decba17..00000000000 --- a/codec/src/main/java/io/netty/handler/codec/CodecUtil.java +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Copyright 2014 The Netty Project - * - * The Netty Project licenses this file to you under the Apache License, - * version 2.0 (the "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - * License for the specific language governing permissions and limitations - * under the License. - */ -package io.netty.handler.codec; - -import io.netty.channel.ChannelHandlerAdapter; - -final class CodecUtil { - - /** - * Throws {@link IllegalStateException} if {@link ChannelHandlerAdapter#isSharable()} returns {@code true} - */ - static void ensureNotSharable(ChannelHandlerAdapter handler) { - if (handler.isSharable()) { - throw new IllegalStateException("@Sharable annotation is not allowed"); - } - } - - private CodecUtil() { } -} diff --git a/transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java b/transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java index 1aaf77b6c33..aadc691d669 100644 --- a/transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java +++ b/transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java @@ -28,6 +28,15 @@ public abstract class ChannelHandlerAdapter implements ChannelHandler { // Not using volatile because it's used only for a sanity check. 
boolean added; + /** + * Throws {@link IllegalStateException} if {@link ChannelHandlerAdapter#isSharable()} returns {@code true} + */ + protected void ensureNotSharable() { + if (isSharable()) { + throw new IllegalStateException("ChannelHandler " + getClass().getName() + " is not allowed to be shared"); + } + } + /** * Return {@code true} if the implementation is {@link Sharable} and so can be added * to different {@link ChannelPipeline}s. diff --git a/transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java b/transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java index c1f156080a2..5cb52668d7b 100644 --- a/transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java +++ b/transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java @@ -45,12 +45,15 @@ public class CombinedChannelDuplexHandler<I extends ChannelInboundHandler, O ext * {@link #init(ChannelInboundHandler, ChannelOutboundHandler)} before adding this handler into a * {@link ChannelPipeline}. */ - protected CombinedChannelDuplexHandler() { } + protected CombinedChannelDuplexHandler() { + ensureNotSharable(); + } /** * Creates a new instance that combines the specified two handlers into one. */ public CombinedChannelDuplexHandler(I inboundHandler, O outboundHandler) { + ensureNotSharable(); init(inboundHandler, outboundHandler); }
diff --git a/transport/src/test/java/io/netty/channel/CombinedChannelDuplexHandlerTest.java b/transport/src/test/java/io/netty/channel/CombinedChannelDuplexHandlerTest.java index 27f9b8d11c2..f697e0b50f4 100644 --- a/transport/src/test/java/io/netty/channel/CombinedChannelDuplexHandlerTest.java +++ b/transport/src/test/java/io/netty/channel/CombinedChannelDuplexHandlerTest.java @@ -387,4 +387,14 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) promise.syncUninterruptibly(); ch.finish(); } + + @Test(expected = IllegalStateException.class) + public void testNotSharable() { + new CombinedChannelDuplexHandler<ChannelInboundHandler, ChannelOutboundHandler>() { + @Override + public boolean isSharable() { + return true; + } + }; + } }
train
train
2017-02-11T14:09:15
"2017-02-08T14:36:02Z"
errandir
val
netty/netty/6345_6356
netty/netty
netty/netty/6345
netty/netty/6356
[ "timestamp(timedelta=18.0, similarity=1.0000000000000002)" ]
34ea09e5524fa9da4f482822bf7b1a16ef69731f
11a0c2b6b6c2193f58176d80e41fca2c255e38b3
[ "@mp911de thanks for reporting... let me fix this", "Started working on a fix... should have a PR soon", "Fixed by https://github.com/netty/netty/pull/6356" ]
[ "@normanmaurer +1 ... can't hurt" ]
"2017-02-13T18:45:29Z"
[ "defect" ]
Initialization of PlatformDependent0 fails on Java 9
Initialization of `PlatformDependent0` fails on Java 9 in static initializer when calling `setAccessible(true)`. ``` Field field = Buffer.class.getDeclaredField("address"); field.setAccessible(true); ``` ### Expected behavior Initialization should not fail without `--add-opens=java.base/java.nio=ALL-UNNAMED` ### Actual behavior Class initialization fails with: ``` java.lang.reflect.InaccessibleObjectException: Unable to make field long java.nio.Buffer.address accessible: module java.base does not "opens java.nio" to unnamed module @598067a5 at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:207) at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:171) at java.base/java.lang.reflect.Field.setAccessible(Field.java:165) at io.netty.util.internal.PlatformDependent0$1.run(PlatformDependent0.java:72) at java.base/java.security.AccessController.doPrivileged(Native Method) at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:67) at io.netty.util.internal.PlatformDependent.getSystemClassLoader(PlatformDependent.java:888) at io.netty.util.internal.PlatformDependent.isAndroid0(PlatformDependent.java:905) at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:81) at io.netty.buffer.PooledByteBufAllocator.<clinit>(PooledByteBufAllocator.java:87) ``` ### Steps to reproduce Trigger class initialization of `PlatformDependent0` or call to `PlatformDependent.getContextClassLoader()`. ### Minimal yet complete reproducer code (or URL to code) Variant 1: ``` PlatformDependent.getContextClassLoader() ``` Variant 2: ``` Version nettyBufferVersion = Version.identify().get("netty-buffer"); ``` ### Netty version 4.1.8 ### JVM version (e.g. `java -version`) Java(TM) SE Runtime Environment (build 9-ea+156) Java HotSpot(TM) 64-Bit Server VM (build 9-ea+156, mixed mode) ### OS version (e.g. `uname -a`) macOS Sierra (16.4.0 Darwin Kernel Version 16.4.0: Thu Dec 22 22:53:21 PST 2016; root:xnu-3789.41.3~3/RELEASE_X86_64 x86_64)
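The fix below wraps every setAccessible(true) call in a small helper; the sketch here mirrors that helper, catching SecurityException and matching JDK 9's InaccessibleObjectException by class name because the code base still compiles against older JDKs.

```java
import java.lang.reflect.AccessibleObject;

// Attempt setAccessible(true) and hand the failure back to the caller instead
// of letting it blow up static class initialization.
final class TrySetAccessible {
    static Throwable trySetAccessible(AccessibleObject object) {
        try {
            object.setAccessible(true);
            return null;
        } catch (SecurityException e) {
            return e;
        } catch (RuntimeException e) {
            // InaccessibleObjectException only exists on JDK 9+, so it is
            // matched by name rather than by type.
            if ("java.lang.reflect.InaccessibleObjectException".equals(e.getClass().getName())) {
                return e;
            }
            throw e;
        }
    }

    private TrySetAccessible() { }
}
```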
[ "common/src/main/java/io/netty/util/internal/PlatformDependent0.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[ "common/src/main/java/io/netty/util/internal/PlatformDependent0.java", "common/src/main/java/io/netty/util/internal/ReflectionUtil.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/PlatformDependent0.java b/common/src/main/java/io/netty/util/internal/PlatformDependent0.java index cbfd8ed4253..6abdb0035f8 100644 --- a/common/src/main/java/io/netty/util/internal/PlatformDependent0.java +++ b/common/src/main/java/io/netty/util/internal/PlatformDependent0.java @@ -36,7 +36,7 @@ final class PlatformDependent0 { private static final InternalLogger logger = InternalLoggerFactory.getInstance(PlatformDependent0.class); - static final Unsafe UNSAFE; + private static final Unsafe UNSAFE; private static final long ADDRESS_FIELD_OFFSET; private static final long BYTE_ARRAY_BASE_OFFSET; private static final Constructor<?> DIRECT_BUFFER_CONSTRUCTOR; @@ -69,7 +69,10 @@ final class PlatformDependent0 { public Object run() { try { final Field field = Buffer.class.getDeclaredField("address"); - field.setAccessible(true); + Throwable cause = ReflectionUtil.trySetAccessible(field); + if (cause != null) { + return cause; + } // if direct really is a direct buffer, address will be non-zero if (field.getLong(direct) == 0) { return null; @@ -102,7 +105,10 @@ public Object run() { public Object run() { try { final Field unsafeField = Unsafe.class.getDeclaredField("theUnsafe"); - unsafeField.setAccessible(true); + Throwable cause = ReflectionUtil.trySetAccessible(unsafeField); + if (cause != null) { + return cause; + } // the unsafe instance return unsafeField.get(null); } catch (NoSuchFieldException e) { @@ -179,7 +185,10 @@ public Object run() { try { final Constructor<?> constructor = direct.getClass().getDeclaredConstructor(long.class, int.class); - constructor.setAccessible(true); + Throwable cause = ReflectionUtil.trySetAccessible(constructor); + if (cause != null) { + return cause; + } return constructor; } catch (NoSuchMethodException e) { return e; @@ -226,10 +235,21 @@ public Object run() { Class<?> bitsClass = Class.forName("java.nio.Bits", false, PlatformDependent.getSystemClassLoader()); Method unalignedMethod = bitsClass.getDeclaredMethod("unaligned"); - unalignedMethod.setAccessible(true); + Throwable cause = ReflectionUtil.trySetAccessible(unalignedMethod); + if (cause != null) { + return cause; + } return unalignedMethod.invoke(null); - } catch (Throwable cause) { - return cause; + } catch (NoSuchMethodException e) { + return e; + } catch (SecurityException e) { + return e; + } catch (IllegalAccessException e) { + return e; + } catch (ClassNotFoundException e) { + return e; + } catch (InvocationTargetException e) { + return e; } } }); diff --git a/common/src/main/java/io/netty/util/internal/ReflectionUtil.java b/common/src/main/java/io/netty/util/internal/ReflectionUtil.java new file mode 100644 index 00000000000..ef39b352150 --- /dev/null +++ b/common/src/main/java/io/netty/util/internal/ReflectionUtil.java @@ -0,0 +1,49 @@ +/* + * Copyright 2017 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.util.internal; + +import java.lang.reflect.AccessibleObject; + +public final class ReflectionUtil { + + private ReflectionUtil() { } + + /** + * Try to call {@link AccessibleObject#setAccessible(boolean)} but will catch any {@link SecurityException} and + * {@link java.lang.reflect.InaccessibleObjectException} and return it. + * The caller must check if it returns {@code null} and if not handle the returned exception. + */ + public static Throwable trySetAccessible(AccessibleObject object) { + try { + object.setAccessible(true); + return null; + } catch (SecurityException e) { + return e; + } catch (RuntimeException e) { + return handleInaccessibleObjectException(e); + } + } + + private static RuntimeException handleInaccessibleObjectException(RuntimeException e) { + // JDK 9 can throw an inaccessible object exception here; since Netty compiles + // against JDK 7 and this exception was only added in JDK 9, we have to weakly + // check the type + if ("java.lang.reflect.InaccessibleObjectException".equals(e.getClass().getName())) { + return e; + } + throw e; + } +} diff --git a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java index ac25e425907..76f3fc79d63 100644 --- a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java +++ b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java @@ -24,6 +24,7 @@ import io.netty.util.IntSupplier; import io.netty.util.concurrent.RejectedExecutionHandler; import io.netty.util.internal.PlatformDependent; +import io.netty.util.internal.ReflectionUtil; import io.netty.util.internal.SystemPropertyUtil; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; @@ -195,8 +196,14 @@ public Object run() { Field selectedKeysField = selectorImplClass.getDeclaredField("selectedKeys"); Field publicSelectedKeysField = selectorImplClass.getDeclaredField("publicSelectedKeys"); - selectedKeysField.setAccessible(true); - publicSelectedKeysField.setAccessible(true); + Throwable cause = ReflectionUtil.trySetAccessible(selectedKeysField); + if (cause != null) { + return cause; + } + cause = ReflectionUtil.trySetAccessible(publicSelectedKeysField); + if (cause != null) { + return cause; + } selectedKeysField.set(selector, selectedKeySet); publicSelectedKeysField.set(selector, selectedKeySet); @@ -205,15 +212,6 @@ public Object run() { return e; } catch (IllegalAccessException e) { return e; - } catch (RuntimeException e) { - // JDK 9 can throw an inaccessible object exception here; since Netty compiles - // against JDK 7 and this exception was only added in JDK 9, we have to weakly - // check the type - if ("java.lang.reflect.InaccessibleObjectException".equals(e.getClass().getName())) { - return e; - } else { - throw e; - } } } });
null
train
train
2017-02-14T08:17:33
"2017-02-10T19:26:38Z"
mp911de
val
netty/netty/6336_6363
netty/netty
netty/netty/6336
netty/netty/6363
[ "timestamp(timedelta=18.0, similarity=0.9015923944025254)" ]
54c9ecf682eeafaaf7c826903c60f1c783b84dea
eb5abdfe9b691469fc0b747d61621390bfcaa9d2
[ "any idea what the parameter values are `static String toJava(String openSslCipherSuite, String protocol)`?", "Will find out tomorrow morning. ", "@rkapsi - Just realized you pushed your code so I can do it my self too :)\r\n\r\n```\r\nopenSslCipherSuite: \"(NONE)\"\r\nprotocol: \"TLS\"\r\n```" ]
[]
"2017-02-13T20:04:35Z"
[ "defect" ]
NPE in CipherSuiteConverter.toJava(CipherSuiteConverter.java:281)
Providing repro shortly but it's triggered by an Unit Test for the OCSP Stapling PR. Basically... OpenSSL (native) is calling my OCSP callback (java) which throws for the sake of the test an Exception. Instead of bubbling up the Exceptions it's running into an unhandled (?) NPE and my purposely thrown Exception gets swallowed somewhere. Netty 4.1.9.Final-SNAPSHOT The NPE's stack looks like this: ```java io.netty.handler.codec.DecoderException: java.lang.NullPointerException at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:442) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) at io.netty.channel.local.LocalChannel.finishPeerRead0(LocalChannel.java:443) at io.netty.channel.local.LocalChannel.access$11(LocalChannel.java:424) at io.netty.channel.local.LocalChannel$5.run(LocalChannel.java:397) at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:54) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NullPointerException at io.netty.handler.ssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:281) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.toJavaCipherSuite(ReferenceCountedOpenSslEngine.java:1539) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.access$4(ReferenceCountedOpenSslEngine.java:1533) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine$OpenSslSession.handshakeFinished(ReferenceCountedOpenSslEngine.java:1862) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.handshake(ReferenceCountedOpenSslEngine.java:1499) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.wrap(ReferenceCountedOpenSslEngine.java:640) at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:509) at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:836) at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:659) at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:631) at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1050) at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411) ... 
15 more ``` The throw stack looks like this (OcspTest.java:314 throws a IllegalStateException) ```java java.lang.Exception: Stack trace at java.lang.Thread.dumpStack(Thread.java:1333) at io.netty.handler.ssl.ocsp.OcspTest$13.staple(OcspTest.java:314) at io.netty.handler.ssl.ReferenceCountedOpenSslContext$ServerOcspCallback.callback(ReferenceCountedOpenSslContext.java:893) at io.netty.tcnative.jni.SSL.readFromSSL(Native Method) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.readPlaintextData(ReferenceCountedOpenSslEngine.java:479) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:936) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1042) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1085) at io.netty.handler.ssl.SslHandler$SslEngineType$1.unwrap(SslHandler.java:206) at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1123) at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1045) at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) at io.netty.channel.local.LocalChannel.finishPeerRead0(LocalChannel.java:443) at io.netty.channel.local.LocalChannel.access$11(LocalChannel.java:424) at io.netty.channel.local.LocalChannel$5.run(LocalChannel.java:397) at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:54) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) at java.lang.Thread.run(Thread.java:745) ```
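A toy model of the failing lookup: when OpenSSL has not negotiated a cipher yet it reports "(NONE)", so the per-protocol map is missing and the converter has to return null rather than dereference it. The class and field names below are illustrative stand-ins, not the real converter.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the OpenSSL-to-Java cipher suite lookup.
final class CipherSuiteLookup {
    private final Map<String, Map<String, String>> openSslToJava = new ConcurrentHashMap<>();

    String toJava(String openSslCipherSuite, String protocol) {
        Map<String, String> perProtocol = openSslToJava.get(openSslCipherSuite);
        if (perProtocol == null) {
            // e.g. "(NONE)" while the handshake callback is still in flight
            return null;
        }
        return perProtocol.get(protocol);
    }
}
```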
[ "handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java b/handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java index a3afeda3fe4..1737ad1e1d5 100644 --- a/handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java +++ b/handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java @@ -276,6 +276,11 @@ static String toJava(String openSslCipherSuite, String protocol) { Map<String, String> p2j = o2j.get(openSslCipherSuite); if (p2j == null) { p2j = cacheFromOpenSsl(openSslCipherSuite); + // This may happen if this method is queried when OpenSSL doesn't yet have a cipher setup. It will return + // "(NONE)" in this case. + if (p2j == null) { + return null; + } } String javaCipherSuite = p2j.get(protocol);
diff --git a/handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java b/handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java index 92e1aad49c6..11c6f2d2295 100644 --- a/handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java @@ -20,8 +20,10 @@ import io.netty.util.internal.logging.InternalLoggerFactory; import org.junit.Test; -import static org.hamcrest.Matchers.*; -import static org.junit.Assert.*; +import static org.hamcrest.Matchers.is; +import static org.hamcrest.Matchers.sameInstance; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertThat; public class CipherSuiteConverterTest { @@ -271,6 +273,34 @@ public void testCachedJ2OMappings() { testCachedJ2OMapping("TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256", "ECDHE-ECDSA-AES128-SHA256"); } + @Test + public void testUnknownOpenSSLCiphersToJava() { + testUnknownOpenSSLCiphersToJava("(NONE)"); + testUnknownOpenSSLCiphersToJava("unknown"); + testUnknownOpenSSLCiphersToJava(""); + } + + @Test + public void testUnknownJavaCiphersToOpenSSL() { + testUnknownJavaCiphersToOpenSSL("(NONE)"); + testUnknownJavaCiphersToOpenSSL("unknown"); + testUnknownJavaCiphersToOpenSSL(""); + } + + private void testUnknownOpenSSLCiphersToJava(String openSslCipherSuite) { + CipherSuiteConverter.clearCache(); + + assertNull(CipherSuiteConverter.toJava(openSslCipherSuite, "TLS")); + assertNull(CipherSuiteConverter.toJava(openSslCipherSuite, "SSL")); + } + + private void testUnknownJavaCiphersToOpenSSL(String javaCipherSuite) { + CipherSuiteConverter.clearCache(); + + assertNull(CipherSuiteConverter.toOpenSsl(javaCipherSuite)); + assertNull(CipherSuiteConverter.toOpenSsl(javaCipherSuite)); + } + private static void testCachedJ2OMapping(String javaCipherSuite, String openSslCipherSuite) { CipherSuiteConverter.clearCache();
test
train
2017-02-13T20:54:09
"2017-02-08T21:53:39Z"
rkapsi
val
netty/netty/6384_6389
netty/netty
netty/netty/6384
netty/netty/6389
[ "timestamp(timedelta=63.0, similarity=0.9471868767507539)" ]
847359fd36c28fa360090459d1243a9603e47786
032443fee09770fbc78a2acba7848b2a583babd0
[ "(derived from a bug report by a customer, not sure how they ended up having no network interfaces, also not sure if its relevant to use netty if you do not have network interfaces :))", "@CodingFabian let me fix this", "Fixed by https://github.com/netty/netty/issues/6389" ]
[ "remove?", "+1", "![MINOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-minor.png) Remove this unused import 'java.util.Collections'. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AUselessImportCheck)\n", "fixed" ]
"2017-02-15T18:58:07Z"
[]
NetworkInterface.getNetworkInterfaces() can return null
To my surprise NetworkInterface.getNetworkInterfaces() can return null: ``` public static Enumeration<NetworkInterface> getNetworkInterfaces() throws SocketException { final NetworkInterface[] netifs = getAll(); // specified to return null if no network interfaces if (netifs == null) return null; ``` (not sure where it's "specified") https://github.com/netty/netty/blob/42c035982093d6e08201af9572ea43c7cf269adf/common/src/main/java/io/netty/util/internal/MacAddressUtil.java#L55 https://github.com/netty/netty/blob/3344cd21acc61819c327976ed5281daa49b0c05e/common/src/main/java/io/netty/util/NetUtil.java#L166
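A defensive wrapper along the lines of the patch below; the helper class is hypothetical, but the null guard is exactly what the specification of getNetworkInterfaces() requires.

```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;

// getNetworkInterfaces() is specified to return null when the machine has no
// network interfaces at all, so callers must guard before iterating.
final class NetworkInterfaces {
    static List<NetworkInterface> all() throws SocketException {
        Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
        if (interfaces == null) {
            return Collections.emptyList();
        }
        List<NetworkInterface> result = new ArrayList<>();
        while (interfaces.hasMoreElements()) {
            result.add(interfaces.nextElement());
        }
        return result;
    }

    private NetworkInterfaces() { }
}
```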
[ "common/src/main/java/io/netty/util/NetUtil.java", "common/src/main/java/io/netty/util/internal/MacAddressUtil.java" ]
[ "common/src/main/java/io/netty/util/NetUtil.java", "common/src/main/java/io/netty/util/internal/MacAddressUtil.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/NetUtil.java b/common/src/main/java/io/netty/util/NetUtil.java index 97570fa165d..88d4cdd015c 100644 --- a/common/src/main/java/io/netty/util/NetUtil.java +++ b/common/src/main/java/io/netty/util/NetUtil.java @@ -163,11 +163,14 @@ public final class NetUtil { // Retrieve the list of available network interfaces. List<NetworkInterface> ifaces = new ArrayList<NetworkInterface>(); try { - for (Enumeration<NetworkInterface> i = NetworkInterface.getNetworkInterfaces(); i.hasMoreElements();) { - NetworkInterface iface = i.nextElement(); - // Use the interface with proper INET addresses only. - if (SocketUtils.addressesFromNetworkInterface(iface).hasMoreElements()) { - ifaces.add(iface); + Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces(); + if (interfaces != null) { + while (interfaces.hasMoreElements()) { + NetworkInterface iface = interfaces.nextElement(); + // Use the interface with proper INET addresses only. + if (SocketUtils.addressesFromNetworkInterface(iface).hasMoreElements()) { + ifaces.add(iface); + } } } } catch (SocketException e) { diff --git a/common/src/main/java/io/netty/util/internal/MacAddressUtil.java b/common/src/main/java/io/netty/util/internal/MacAddressUtil.java index 4ac088750c3..eaf7a40a64c 100644 --- a/common/src/main/java/io/netty/util/internal/MacAddressUtil.java +++ b/common/src/main/java/io/netty/util/internal/MacAddressUtil.java @@ -24,6 +24,7 @@ import java.net.NetworkInterface; import java.net.SocketException; import java.util.Arrays; +import java.util.Collections; import java.util.Enumeration; import java.util.LinkedHashMap; import java.util.Map; @@ -52,14 +53,17 @@ public static byte[] bestAvailableMac() { // Retrieve the list of available network interfaces. Map<NetworkInterface, InetAddress> ifaces = new LinkedHashMap<NetworkInterface, InetAddress>(); try { - for (Enumeration<NetworkInterface> i = NetworkInterface.getNetworkInterfaces(); i.hasMoreElements();) { - NetworkInterface iface = i.nextElement(); - // Use the interface with proper INET addresses only. - Enumeration<InetAddress> addrs = SocketUtils.addressesFromNetworkInterface(iface); - if (addrs.hasMoreElements()) { - InetAddress a = addrs.nextElement(); - if (!a.isLoopbackAddress()) { - ifaces.put(iface, a); + Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces(); + if (interfaces != null) { + while (interfaces.hasMoreElements()) { + NetworkInterface iface = interfaces.nextElement(); + // Use the interface with proper INET addresses only. + Enumeration<InetAddress> addrs = SocketUtils.addressesFromNetworkInterface(iface); + if (addrs.hasMoreElements()) { + InetAddress a = addrs.nextElement(); + if (!a.isLoopbackAddress()) { + ifaces.put(iface, a); + } } } }
null
val
train
2017-02-15T19:20:06
"2017-02-15T16:52:15Z"
CodingFabian
val
netty/netty/5458_6392
netty/netty
netty/netty/5458
netty/netty/6392
[ "timestamp(timedelta=85041.0, similarity=0.8428685248890636)" ]
1843b318850ac7a3990c13b1d275e0e37491301e
32cdfd7b323b4a23e30d83b2079d292fc779c797
[ "@CodingFabian I will fix this.\n", "@CodingFabian seems like there is no way to fix this as we still need to support java6 :(\n", "Huh? Cant you keep the current if else and just wrap with an if else that checks version and if >=8 delegates to the JDK Version? You might need an indirection to avoid classloading in <8\n", "@CodingFabian there are no specific intrinsics for `LongAdder`. However I agree that it is better to use JDK version as netty version differs from latest JDK version.\n\n> You might need an indirection to avoid classloading in <8\n\nCould you please show code example of what do you mean?\n", "@CodingFabian something like this? https://github.com/netty/netty/pull/5794\n", "Fixed by https://github.com/netty/netty/pull/6379" ]
[]
"2017-02-16T07:31:27Z"
[ "improvement" ]
PlatformDependent doesn't prefer JDK8 LongAdder
I am running Netty on Java 8. ``` /** * Creates a new fastest {@link LongCounter} implementaion for the current platform. */ public static LongCounter newLongCounter() { if (HAS_UNSAFE) { return new LongAdderV8(); } else { return new AtomicLongCounter(); } } ``` On a normal modern Java 8, where HAS_UNSAFE is true, this will always use the copied-and-pasted LongAdderV8, otherwise the JDK-shipped AtomicLongCounter. Assuming that the LongAdder contained in JDK 8 is "better" (e.g. it might use intrinsics), I was wondering why there is no Java >= 8 check that would directly use the JDK 8 LongAdder (wrapped to comply with the Netty interface). (Also note the c&p typo "implementaion" in many of the javadocs.)
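One shape the delegation discussed in the comments could take — a hedged sketch rather than Netty's actual implementation; the `Counter` interface and the explicit version parameter are stand-ins for Netty's `LongCounter` and its platform detection:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public final class Counters {

    /** Minimal counter abstraction, used only for this sketch. */
    public interface Counter {
        void add(long delta);
        long value();
    }

    // Prefer the JDK's LongAdder on Java 8+, otherwise fall back to AtomicLong.
    // The version argument stands in for whatever platform detection already exists.
    public static Counter newCounter(int javaMajorVersion) {
        if (javaMajorVersion >= 8) {
            final LongAdder adder = new LongAdder();
            return new Counter() {
                @Override public void add(long delta) { adder.add(delta); }
                @Override public long value() { return adder.sum(); }
            };
        }
        final AtomicLong atomic = new AtomicLong();
        return new Counter() {
            @Override public void add(long delta) { atomic.addAndGet(delta); }
            @Override public long value() { return atomic.get(); }
        };
    }

    private Counters() { }
}
```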
[ "pom.xml" ]
[ "pom.xml" ]
[]
diff --git a/pom.xml b/pom.xml index 28334609c54..76062141540 100644 --- a/pom.xml +++ b/pom.xml @@ -596,9 +596,9 @@ <configuration> <rules> <requireJavaVersion> - <!-- Enforce JDK 1.7+ for compilation. --> + <!-- Enforce JDK 1.8+ for compilation. --> <!-- This is needed because of java.util.zip.Deflater and NIO UDP multicast. --> - <version>[1.7.0,)</version> + <version>[1.8.0,)</version> </requireJavaVersion> <requireMavenVersion> <version>[3.1.1,)</version>
null
test
train
2017-02-16T07:59:31
"2016-06-28T05:57:50Z"
CodingFabian
val
netty/netty/6205_6404
netty/netty
netty/netty/6205
netty/netty/6404
[ "timestamp(timedelta=153941.0, similarity=0.8771457455701042)" ]
0623c6c5334bf43299e835cfcf86bfda19e2d4ce
8093c8fbf3beb852ffdbf77e5e73da038e371cba
[ "thanks for reporting!", "This can be closed as these are now merged: \r\n\r\n#6433\r\n#6404" ]
[ "Can't this be simplified to the following:\r\n\r\n```java\r\ninBuff = (value & 0xff0000) | (value & 0xff00) | (value & 0xff);\r\n```", "```java\r\ninBuff = (value & 0xff00) << 8 | (value & 0xff) << 8;\r\n```", "```java\r\ninBuff = (value & 0xff00) << 8 | (value & 0xff0000) >>> 8 | (value & 0xff000000) >>> 24;\r\n```", "if this is the case it makes bytes swapping more straightforward below", "```java\r\ninBuff = (value & 0xff) << 16 | (value & 0xff00);\r\n```", "`dest.order()` -> `src.order()`", "`(alphabet[inBuff >>> 18 ])` -> `alphabet[inBuff >>> 18 ]`", "general comment remove the unnecessary parenthesis", "is `default` (also `0`) an error, or a noop? Can we make this explicit?", "we already love bitshifting so why not `int len34 = (len * 3) >>> 2;` 😉 ", "also nit but you can remove this temporary variable bcz its only used in a single place", "nit: move onto previous line?", "nit: to avoid an additional variable you can simplify to:\r\n\r\n```java\r\n try {\r\n src.forEachByte(off, len, this);\r\n return dest.slice(0, outBuffPosn);\r\n } catch (Throwable cause) {\r\n dest.release();\r\n PlatformDependent.throwException(cause);\r\n return null;\r\n }\r\n```", "```java\r\ndest.setShort(destOffset, (outBuff & 0xff0000) >>> 8 | (outBuff & 0xff00) >>> 8);\r\n```", "```java\r\ndest.setShort(destOffset, (outBuff & 0xff00) | (outBuff & 0xff0000) >>> 16);\r\n```", "`final`", "nit: `else` is unnecessary ", "or may it's possible here `int len34 = len - (len >>> 2);`?", "does this buffer need to be released?", "we discussed this previously and the rational for not doing this was performance ... is this no longer a concern, or do you think it is unlikely that folks use `LE` buffers? The single mask and single shift per byte suggestions makes the endianness conversion more straightforward and potentially avoids 2 intermediate `int` swap operations.", "consider using a consistent approach with the other cases:\r\n\r\n```java\r\nrvalue = (src.getByte(srcOffset) & 0xff) << 16;\r\n```", "nit: kill extra space after `=`.", "you didn't like `int len34 = (len * 3) >>> 2;` with a comment above?\r\n\r\nAlso update the `int len43` in the `encode` method to be consistent.", "`slice()` shares the reference count of the \"parent\" so the ownership is transferred to the caller of `decode` as it was before.", "consider using the following (simpler and reduced casting)\r\n\r\n```java\r\nshort value = (short) ((outBuff & 0xff0000) >>> 8 | (outBuff & 0xff00) >>> 8);\r\n```", "You can save a shift operation and temp variable by doing the following:\r\n\r\n```java\r\ndest.setByte(destOffset, (byte) ((src[srcOffset] & 0xff) << 2 | (src[srcOffset + 1] & 0xff) >>> 4));\r\n```", "@Scottmitch I think this code is easier to understand and little-endian is not very likely. So after thinking more about it I thought we should just keep it simple for now and improve later if needed. WDYT ?", "good point", "missed this. Let me change", "actually he was correct. I missed to call `dest.release()` in the catch block (already fixed)", "+1", "Good idea", "oh it was there when I looked ... code looked like https://github.com/netty/netty/pull/6404#discussion_r101837914 ... good catch @johnou ", "I don't know how likely `LE` is but we can always improve it later if necessary. I may submit a followup PR while all this stuff is fresh on my mind :)", "Just let me know if you would prefer to not double swap and I can make it happen. Just thought it is easer to follow if I just not care too much for now. 
", "I would prefer avoiding the double swap but I agree it isn't necessary for this PR (fine for followup PR, or otherwise delay). Since we are already doing pretty low level byte/bit manipulation here and it has been shown to have a noticeable performance gain I think handling the endianness in `Base64` is reasonable.", "this is back :)", "we can reduce the operations if we do the byte swap and shifts in one operation:\r\n\r\n```java\r\nprivate static int toIntLE(short valueBE) {\r\n return (valueBE & 0xff) << 16 | (valueBE & 0xff00);\r\n}\r\n```", "```java\r\nprivate static int toIntLE(int mediumBE) {\r\n return (mediumBE & 0xff) << 16 | (mediumBE & 0xff00) | (mediumBE & 0xff0000) >>> 16;\r\n}\r\n```", "this is back again? (also update encode too).", "`len * 4` -> `len << 2` (not `<< 4`)", "do we have tests for the encode side?", "consider renaming `value` to `mediumValue` or somehow indicate the value only occupies 24 bits.", "```java\r\ndest.setShort(destOffset, (short) ((outBuff & 0xff0000) >>> 8 | (outBuff & 0xff00) >>> 8));\r\n```", "https://github.com/netty/netty/pull/6404#discussion_r102082437", "You can save a shift operation and temp variable by doing the following:\r\n\r\n```java\r\ndest.setByte(destOffset, (byte) ((decodabet[src[0]] & 0xff) << 2 | (decodabet[src[1]] & 0xff) >>> 4));\r\n```", "https://github.com/netty/netty/pull/6404#discussion_r102086391", "you could also get rid of the `outBuff` and just directly put the bytes where they need to be", "```java\r\ndest.setShort(destOffset, (short) ((outBuff & 0xff0000) >>> 16 | (outBuff & 0xff00)));\r\n```", "we could also just directly compute the desired value of `outBuf` here.", "ups... yes in Base64Test. ", "Actually I think it must be `(len << 2) / 3`", "![CRITICAL](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-critical.png) Bitwise OR of signed byte value computed in io.netty.handler.codec.base64.Base64.encode3to4LittleEndian(int, int, ByteBuf, int, byte[]) [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=findbugs%3ABIT_IOR_OF_SIGNED_BYTE)\n", "I wonder why our unit tests didn't catch this", "because it will just re-size under the hood ?", "should we re-catch the `IndexOutOfBoundsException` like above?", "also no need to compute`outBuff` above if in this conditional.", "nit: `return dest.slice(0, outBuffPosn);`" ]
"2017-02-17T16:32:12Z"
[]
Investigate performance of Base64
`Base64.encode(...)` / `Base64.decode(...)` is quite slow as it acts on the byte level. We may want to look into optimising this. See also: https://github.com/netty/netty/pull/6145#issuecomment-271899281
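To make the optimisation idea concrete, a stripped-down sketch of the "pack three bytes into one int, emit four alphabet characters" step on plain arrays; the real patch works on `ByteBuf` and additionally handles padding, line breaks and endianness:

```java
public final class Base64Pack {

    private static final char[] ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/".toCharArray();

    // Packs three input bytes into a single int and emits the four output
    // characters in one go, instead of reading and bounds-checking byte by byte.
    public static void encode3to4(byte[] src, int srcOff, char[] dst, int dstOff) {
        int inBuff = (src[srcOff] & 0xff) << 16
                | (src[srcOff + 1] & 0xff) << 8
                | src[srcOff + 2] & 0xff;
        dst[dstOff] = ALPHABET[inBuff >>> 18];
        dst[dstOff + 1] = ALPHABET[inBuff >>> 12 & 0x3f];
        dst[dstOff + 2] = ALPHABET[inBuff >>> 6 & 0x3f];
        dst[dstOff + 3] = ALPHABET[inBuff & 0x3f];
    }

    private Base64Pack() { }
}
```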
[ "codec/src/main/java/io/netty/handler/codec/base64/Base64.java" ]
[ "codec/src/main/java/io/netty/handler/codec/base64/Base64.java" ]
[ "codec/src/test/java/io/netty/handler/codec/base64/Base64Test.java" ]
diff --git a/codec/src/main/java/io/netty/handler/codec/base64/Base64.java b/codec/src/main/java/io/netty/handler/codec/base64/Base64.java index dc3f1aff801..770940854bd 100644 --- a/codec/src/main/java/io/netty/handler/codec/base64/Base64.java +++ b/codec/src/main/java/io/netty/handler/codec/base64/Base64.java @@ -21,6 +21,11 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.ByteBufAllocator; +import io.netty.buffer.ByteBufUtil; +import io.netty.util.ByteProcessor; +import io.netty.util.internal.PlatformDependent; + +import java.nio.ByteOrder; /** * Utility class for {@link ByteBuf} that encodes and decodes to and from @@ -109,7 +114,6 @@ public static ByteBuf encode( public static ByteBuf encode( ByteBuf src, int off, int len, boolean breakLines, Base64Dialect dialect, ByteBufAllocator allocator) { - if (src == null) { throw new NullPointerException("src"); } @@ -117,17 +121,18 @@ public static ByteBuf encode( throw new NullPointerException("dialect"); } - int len43 = len * 4 / 3; + int len43 = (len << 2) / 3; ByteBuf dest = allocator.buffer( len43 + (len % 3 > 0 ? 4 : 0) + // Account for padding (breakLines ? len43 / MAX_LINE_LENGTH : 0)).order(src.order()); // New lines + byte[] alphabet = alphabet(dialect); int d = 0; int e = 0; int len2 = len - 2; int lineLength = 0; for (; d < len2; d += 3, e += 4) { - encode3to4(src, d + off, 3, dest, e, dialect); + encode3to4(src, d + off, 3, dest, e, alphabet); lineLength += 4; @@ -139,7 +144,7 @@ public static ByteBuf encode( } // end for: each piece of array if (d < len) { - encode3to4(src, d + off, len - d, dest, e, dialect); + encode3to4(src, d + off, len - d, dest, e, alphabet); e += 4; } // end if: some padding needed @@ -152,11 +157,7 @@ public static ByteBuf encode( } private static void encode3to4( - ByteBuf src, int srcOffset, int numSigBytes, - ByteBuf dest, int destOffset, Base64Dialect dialect) { - - byte[] ALPHABET = alphabet(dialect); - + ByteBuf src, int srcOffset, int numSigBytes, ByteBuf dest, int destOffset, byte[] alphabet) { // 1 2 3 // 01234567890123456789012345678901 Bit position // --------000000001111111122222222 Array position from threeBytes @@ -168,30 +169,110 @@ private static void encode3to4( // significant bytes passed in the array. // We have to shift left 24 in order to flush out the 1's that appear // when Java treats a value as negative that is cast from a byte to an int. - int inBuff = - (numSigBytes > 0? src.getByte(srcOffset) << 24 >>> 8 : 0) | - (numSigBytes > 1? src.getByte(srcOffset + 1) << 24 >>> 16 : 0) | - (numSigBytes > 2? src.getByte(srcOffset + 2) << 24 >>> 24 : 0); + if (src.order() == ByteOrder.BIG_ENDIAN) { + final int inBuff; + switch (numSigBytes) { + case 1: + inBuff = toInt(src.getByte(srcOffset)); + break; + case 2: + inBuff = toIntBE(src.getShort(srcOffset)); + break; + default: + inBuff = numSigBytes <= 0 ? 0 : toIntBE(src.getMedium(srcOffset)); + break; + } + encode3to4BigEndian(inBuff, numSigBytes, dest, destOffset, alphabet); + } else { + final int inBuff; + switch (numSigBytes) { + case 1: + inBuff = toInt(src.getByte(srcOffset)); + break; + case 2: + inBuff = toIntLE(src.getShort(srcOffset)); + break; + default: + inBuff = numSigBytes <= 0 ? 
0 : toIntLE(src.getMedium(srcOffset)); + break; + } + encode3to4LittleEndian(inBuff, numSigBytes, dest, destOffset, alphabet); + } + } + + private static int toInt(byte value) { + return (value & 0xff) << 16; + } + private static int toIntBE(short value) { + return (value & 0xff00) << 8 | (value & 0xff) << 8; + } + + private static int toIntLE(short value) { + return (value & 0xff) << 16 | (value & 0xff00); + } + + private static int toIntBE(int mediumValue) { + return (mediumValue & 0xff0000) | (mediumValue & 0xff00) | (mediumValue & 0xff); + } + + private static int toIntLE(int mediumValue) { + return (mediumValue & 0xff) << 16 | (mediumValue & 0xff00) | (mediumValue & 0xff0000) >>> 16; + } + + private static void encode3to4BigEndian( + int inBuff, int numSigBytes, ByteBuf dest, int destOffset, byte[] alphabet) { + // Packing bytes into an int to reduce bound and reference count checking. switch (numSigBytes) { - case 3: - dest.setByte(destOffset , ALPHABET[inBuff >>> 18 ]); - dest.setByte(destOffset + 1, ALPHABET[inBuff >>> 12 & 0x3f]); - dest.setByte(destOffset + 2, ALPHABET[inBuff >>> 6 & 0x3f]); - dest.setByte(destOffset + 3, ALPHABET[inBuff & 0x3f]); - break; - case 2: - dest.setByte(destOffset , ALPHABET[inBuff >>> 18 ]); - dest.setByte(destOffset + 1, ALPHABET[inBuff >>> 12 & 0x3f]); - dest.setByte(destOffset + 2, ALPHABET[inBuff >>> 6 & 0x3f]); - dest.setByte(destOffset + 3, EQUALS_SIGN); - break; - case 1: - dest.setByte(destOffset , ALPHABET[inBuff >>> 18 ]); - dest.setByte(destOffset + 1, ALPHABET[inBuff >>> 12 & 0x3f]); - dest.setByte(destOffset + 2, EQUALS_SIGN); - dest.setByte(destOffset + 3, EQUALS_SIGN); - break; + case 3: + dest.setInt(destOffset, alphabet[inBuff >>> 18 ] << 24 | + alphabet[inBuff >>> 12 & 0x3f] << 16 | + alphabet[inBuff >>> 6 & 0x3f] << 8 | + alphabet[inBuff & 0x3f]); + break; + case 2: + dest.setInt(destOffset, alphabet[inBuff >>> 18 ] << 24 | + alphabet[inBuff >>> 12 & 0x3f] << 16 | + alphabet[inBuff >>> 6 & 0x3f] << 8 | + EQUALS_SIGN); + break; + case 1: + dest.setInt(destOffset, alphabet[inBuff >>> 18 ] << 24 | + alphabet[inBuff >>> 12 & 0x3f] << 16 | + EQUALS_SIGN << 8 | + EQUALS_SIGN); + break; + default: + // NOOP + break; + } + } + + private static void encode3to4LittleEndian( + int inBuff, int numSigBytes, ByteBuf dest, int destOffset, byte[] alphabet) { + // Packing bytes into an int to reduce bound and reference count checking. 
+ switch (numSigBytes) { + case 3: + dest.setInt(destOffset, alphabet[inBuff >>> 18 ] | + alphabet[inBuff >>> 12 & 0x3f] << 8 | + alphabet[inBuff >>> 6 & 0x3f] << 16 | + alphabet[inBuff & 0x3f] << 24); + break; + case 2: + dest.setInt(destOffset, alphabet[inBuff >>> 18 ] | + alphabet[inBuff >>> 12 & 0x3f] << 8 | + alphabet[inBuff >>> 6 & 0x3f] << 16 | + EQUALS_SIGN << 24); + break; + case 1: + dest.setInt(destOffset, alphabet[inBuff >>> 18 ] | + alphabet[inBuff >>> 12 & 0x3f] << 8 | + EQUALS_SIGN << 16 | + EQUALS_SIGN << 24); + break; + default: + // NOOP + break; } } @@ -200,7 +281,6 @@ public static ByteBuf decode(ByteBuf src) { } public static ByteBuf decode(ByteBuf src, Base64Dialect dialect) { - if (src == null) { throw new NullPointerException("src"); } @@ -222,7 +302,6 @@ public static ByteBuf decode( public static ByteBuf decode( ByteBuf src, int off, int len, Base64Dialect dialect, ByteBufAllocator allocator) { - if (src == null) { throw new NullPointerException("src"); } @@ -230,85 +309,96 @@ public static ByteBuf decode( throw new NullPointerException("dialect"); } - byte[] DECODABET = decodabet(dialect); + // Using a ByteProcessor to reduce bound and reference count checking. + return new Decoder().decode(src, off, len, allocator, dialect); + } + + private static final class Decoder implements ByteProcessor { + private final byte[] b4 = new byte[4]; + private int b4Posn; + private byte sbiCrop; + private byte sbiDecode; + private byte[] decodabet; + private int outBuffPosn; + private ByteBuf dest; + + ByteBuf decode(ByteBuf src, int off, int len, ByteBufAllocator allocator, Base64Dialect dialect) { + int len34 = (len * 3) >>> 2; + dest = allocator.buffer(len34).order(src.order()); // Upper limit on size of output - int len34 = len * 3 / 4; - ByteBuf dest = allocator.buffer(len34).order(src.order()); // Upper limit on size of output - int outBuffPosn = 0; + decodabet = decodabet(dialect); + try { + src.forEachByte(off, len, this); + return dest.slice(0, outBuffPosn); + } catch (Throwable cause) { + dest.release(); + PlatformDependent.throwException(cause); + return null; + } + } - byte[] b4 = new byte[4]; - int b4Posn = 0; - int i; - byte sbiCrop; - byte sbiDecode; - for (i = off; i < off + len; i ++) { - sbiCrop = (byte) (src.getByte(i) & 0x7f); // Only the low seven bits - sbiDecode = DECODABET[sbiCrop]; + @Override + public boolean process(byte value) throws Exception { + sbiCrop = (byte) (value & 0x7f); // Only the low seven bits + sbiDecode = decodabet[sbiCrop]; if (sbiDecode >= WHITE_SPACE_ENC) { // White space, Equals sign or better if (sbiDecode >= EQUALS_SIGN_ENC) { // Equals sign or better b4[b4Posn ++] = sbiCrop; if (b4Posn > 3) { // Quartet built - outBuffPosn += decode4to3( - b4, 0, dest, outBuffPosn, dialect); + outBuffPosn += decode4to3(b4, dest, outBuffPosn, decodabet); b4Posn = 0; // If that was the equals sign, break out of 'for' loop if (sbiCrop == EQUALS_SIGN) { - break; + return false; } } } - } else { - throw new IllegalArgumentException( - "bad Base64 input character at " + i + ": " + - src.getUnsignedByte(i) + " (decimal)"); + return true; } + throw new IllegalArgumentException( + "invalid bad Base64 input character: " + (short) (value & 0xFF) + " (decimal)"); } - return dest.slice(0, outBuffPosn); - } + private static int decode4to3(byte[] src, ByteBuf dest, int destOffset, byte[] decodabet) { + if (src[2] == EQUALS_SIGN) { + // Example: Dk== + dest.setByte(destOffset, (byte) ((decodabet[src[0]] & 0xff) << 2 | (decodabet[src[1]] & 0xff) >>> 4)); + return 
1; + } + + if (src[3] == EQUALS_SIGN) { + // Example: DkL= + int outBuff = (decodabet[src[0]] & 0xff) << 18 | + (decodabet[src[1]] & 0xff) << 12 | + (decodabet[src[2]] & 0xff) << 6; + + // Packing bytes into a short to reduce bound and reference count checking. + if (dest.order() == ByteOrder.BIG_ENDIAN) { + dest.setShort(destOffset, (short) ((outBuff & 0xff0000) >>> 8 | (outBuff & 0xff00) >>> 8)); + } else { + dest.setShort(destOffset, (short) ((outBuff & 0xff0000) >>> 16 | (outBuff & 0xff00))); + } + return 2; + } - private static int decode4to3( - byte[] src, int srcOffset, - ByteBuf dest, int destOffset, Base64Dialect dialect) { - - byte[] DECODABET = decodabet(dialect); - - if (src[srcOffset + 2] == EQUALS_SIGN) { - // Example: Dk== - int outBuff = - (DECODABET[src[srcOffset ]] & 0xFF) << 18 | - (DECODABET[src[srcOffset + 1]] & 0xFF) << 12; - - dest.setByte(destOffset, (byte) (outBuff >>> 16)); - return 1; - } else if (src[srcOffset + 3] == EQUALS_SIGN) { - // Example: DkL= - int outBuff = - (DECODABET[src[srcOffset ]] & 0xFF) << 18 | - (DECODABET[src[srcOffset + 1]] & 0xFF) << 12 | - (DECODABET[src[srcOffset + 2]] & 0xFF) << 6; - - dest.setByte(destOffset , (byte) (outBuff >>> 16)); - dest.setByte(destOffset + 1, (byte) (outBuff >>> 8)); - return 2; - } else { // Example: DkLE - int outBuff; + final int outBuff; try { - outBuff = - (DECODABET[src[srcOffset ]] & 0xFF) << 18 | - (DECODABET[src[srcOffset + 1]] & 0xFF) << 12 | - (DECODABET[src[srcOffset + 2]] & 0xFF) << 6 | - DECODABET[src[srcOffset + 3]] & 0xFF; + outBuff = (decodabet[src[0]] & 0xff) << 18 | + (decodabet[src[1]] & 0xff) << 12 | + (decodabet[src[2]] & 0xff) << 6 | + decodabet[src[3]] & 0xff; } catch (IndexOutOfBoundsException ignored) { throw new IllegalArgumentException("not encoded in Base64"); } - - dest.setByte(destOffset , (byte) (outBuff >> 16)); - dest.setByte(destOffset + 1, (byte) (outBuff >> 8)); - dest.setByte(destOffset + 2, (byte) outBuff); + // Just directly set it as medium + if (dest.order() == ByteOrder.BIG_ENDIAN) { + dest.setMedium(destOffset, outBuff); + } else { + dest.setMedium(destOffset, ByteBufUtil.swapMedium(outBuff)); + } return 3; } }
diff --git a/codec/src/test/java/io/netty/handler/codec/base64/Base64Test.java b/codec/src/test/java/io/netty/handler/codec/base64/Base64Test.java index 6b8622ac24a..4be23a8d864 100644 --- a/codec/src/test/java/io/netty/handler/codec/base64/Base64Test.java +++ b/codec/src/test/java/io/netty/handler/codec/base64/Base64Test.java @@ -18,10 +18,12 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.Unpooled; import io.netty.util.CharsetUtil; +import io.netty.util.internal.PlatformDependent; import org.junit.Test; import java.io.ByteArrayInputStream; +import java.nio.ByteOrder; import java.security.cert.CertificateFactory; import java.security.cert.X509Certificate; @@ -116,4 +118,42 @@ private static void testEncode(ByteBuf src, ByteBuf expectedEncoded) { encoded.release(); } } + + @Test + public void testEncodeDecodeBE() { + testEncodeDecode(ByteOrder.BIG_ENDIAN); + } + + @Test + public void testEncodeDecodeLE() { + testEncodeDecode(ByteOrder.LITTLE_ENDIAN); + } + + private static void testEncodeDecode(ByteOrder order) { + testEncodeDecode(64, order); + testEncodeDecode(128, order); + testEncodeDecode(512, order); + testEncodeDecode(1024, order); + testEncodeDecode(4096, order); + testEncodeDecode(8192, order); + testEncodeDecode(16384, order); + } + + private static void testEncodeDecode(int size, ByteOrder order) { + byte[] bytes = new byte[size]; + PlatformDependent.threadLocalRandom().nextBytes(bytes); + + ByteBuf src = Unpooled.wrappedBuffer(bytes).order(order); + ByteBuf encoded = Base64.encode(src); + ByteBuf decoded = Base64.decode(encoded); + ByteBuf expectedBuf = Unpooled.wrappedBuffer(bytes); + try { + assertEquals(expectedBuf, decoded); + } finally { + src.release(); + encoded.release(); + decoded.release(); + expectedBuf.release(); + } + } }
val
train
2017-02-22T07:31:07
"2017-01-11T20:11:41Z"
normanmaurer
val
netty/netty/5994_6412
netty/netty
netty/netty/5994
netty/netty/6412
[ "timestamp(timedelta=18.0, similarity=0.8557552059067816)" ]
d0a3877535814153306f8f934e2120204cd22fd7
7ca913908733cd05ac2056821ae1f12468849a45
[ "@flozano not sure what you are asking for... isnt it enough for you to import netty-all ?\n", "@normanmaurer \n\n> isnt it enough for you to import netty-all ?\n\nNo. netty-all is not a bom.\nhttp://howtodoinjava.com/maven/maven-bom-bill-of-materials-dependency/\nhttp://stackoverflow.com/questions/38496022/maven-bom-bill-of-materials-dependency\n", "~~I don't think this makes sense at all, Netty is \"zero\" dependency.~~\r\n ", "In the simplest use-case for a BOM, it's just very convenient to import it in the dependencyManagement version and be able to depend on any individual netty artifact without declaring a version. That capability by itself is useful and makes sense.\r\n\r\nBut there's more. Netty may be \"zero dependency\", but projects a project depends on may depend on netty themselves. This is not only about what netty depends on, this is about making it easy to depend on netty, be it directly os transitively. If you want to properly handle conflicting transitive dependencies in a maven project (and in gradle too, but differently), it's a good idea to use `<dependencyManagement>`. Imagine you are depending on two artifacts that depend themselves on some random netty artifacts, but you don't know exactly which ones. Maybe they use netty-codec-http, or netty-codec-mqtt, or something else. Therefore, to be safe, you would need to declare in `dependencyManagement` section *all* the depended artifacts. It's much more convenient and less error-prone to just import a BOM project.\r\n\r\nThere is a big list of common projects out there that produce BOMs, no matter if they have a lot of dependencies or not: Jackson, Spring, JAXB, Jersey, AWS SDK, Guice, HK2...\r\n\r\nExamples of very useful BOM projects:\r\nhttp://search.maven.org/#artifactdetails%7Corg.glassfish.hk2%7Chk2-bom%7C2.5.0-b28%7Cpom\r\nhttp://search.maven.org/#artifactdetails%7Ccom.amazonaws%7Caws-java-sdk-bom%7C1.11.58%7Cpom\r\n\r\nAbout netty-all, it's not a BOM, it's a JAR that comes with everything. In my opinion this is potentially dangerous, because it makes it possible that some of you projects depend on netty-all and some of your projects depend on netty-whatever, and they could be different versions, potentially making your classpath have two versions of same class. Maven can detect that two projects are depending on different versions of netty-codec-http, but it has no way to notice a version conflict between netty-all and netty-codec-http.\r\n\r\nAs an example, mockito had also a `mockito-all` and ditched it in latest version because of all the issues it comes with (https://github.com/mockito/mockito/issues/153). In my opinion, netty-all should be used only in build systems with manual dependencies (pure ant, bazel), but not in build systems with transitive dependencies (ivy, gradle, maven). (clarification: mockito-all also had other terrible problems, such as including external jars...).\r\n\r\nIf you would consider accepting it, I can try to contribute such artifact. It comes with very little maintenance - just add an entry to the BOM project when a new artifact is added.", "Fixed by https://github.com/netty/netty/pull/6412" ]
[ "@johnou Can you open another PR for this change as its not related ? ", "kill line", "kill line", "kill line" ]
"2017-02-19T10:42:51Z"
[]
Bill Of Materials (BOM) pom
Many projects these days have a bill of materials (BOM) project that makes it possible to import all the managed dependencies for a project with a single entry in dependencyManagement. It would be great to have such a BOM in Netty.
[ "pom.xml" ]
[ "bom/pom.xml", "pom.xml" ]
[]
diff --git a/bom/pom.xml b/bom/pom.xml new file mode 100644 index 00000000000..20d935d3bc5 --- /dev/null +++ b/bom/pom.xml @@ -0,0 +1,153 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!-- + ~ Copyright 2017 The Netty Project + ~ + ~ The Netty Project licenses this file to you under the Apache License, + ~ version 2.0 (the "License"); you may not use this file except in compliance + ~ with the License. You may obtain a copy of the License at: + ~ + ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ + ~ Unless required by applicable law or agreed to in writing, software + ~ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + ~ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + ~ License for the specific language governing permissions and limitations + ~ under the License. + --> +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" + xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> + + <modelVersion>4.0.0</modelVersion> + <parent> + <groupId>io.netty</groupId> + <artifactId>netty-parent</artifactId> + <version>4.1.9.Final-SNAPSHOT</version> + </parent> + + <artifactId>netty-bom</artifactId> + <packaging>pom</packaging> + + <name>Netty/BOM</name> + + <dependencyManagement> + <dependencies> + <!-- All release modules --> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-buffer</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-dns</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-haproxy</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-http</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-http2</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-memcache</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-mqtt</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-redis</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-smtp</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-socks</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-stomp</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-codec-xml</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-common</artifactId> + <version>${project.version}</version> + </dependency> + 
<dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-handler</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-handler-proxy</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-resolver</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-resolver-dns</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-transport</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-transport-rxtx</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-transport-sctp</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-transport-udt</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-example</artifactId> + <version>${project.version}</version> + </dependency> + </dependencies> + </dependencyManagement> + +</project> \ No newline at end of file diff --git a/pom.xml b/pom.xml index 528e97496c4..855aa521deb 100644 --- a/pom.xml +++ b/pom.xml @@ -284,6 +284,7 @@ <module>testsuite-osgi</module> <module>microbench</module> <module>all</module> + <module>bom</module> <module>tarball</module> </modules>
null
train
train
2017-02-19T13:41:11
"2016-11-09T12:52:34Z"
flozano
val
netty/netty/6436_6440
netty/netty
netty/netty/6436
netty/netty/6440
[ "timestamp(timedelta=22.0, similarity=0.8538178156079667)" ]
7d08b4fc357e12ee2487e87d8fdcbeee1152e5a0
4d3104dc4b0b82847298d0b6be9abb9afb489628
[ "Good catch let me fix it\n\n> Am 23.02.2017 um 02:54 schrieb Max Gortman <[email protected]>:\n> \n> Expected behavior\n> \n> .alloc() returns allocator that buffer originated from. Backed by following behavior:\n> allocator field is reset on deallocate().\n> allocator field is set properly in initUnpooled()\n> Actual behavior\n> \n> .alloc() returns null if buffer is initialized through initUnpooled() and was just created. It also may return wrong allocator if it was previously backed by chunk from different pooled buffer allocator.\n> \n> Steps to reproduce\n> \n> https://github.com/netty/netty/blob/66b9be3a469a2cdcc5d18a8b94c679940ce002a9/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java#L128 -- alloc() is backed by allocator field. As object is pooled it must be initialized during init() or initUnpooled()\n> \n> https://github.com/netty/netty/blob/66b9be3a469a2cdcc5d18a8b94c679940ce002a9/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java#L52 -- allocator initialized (as expected)\n> \n> https://github.com/netty/netty/blob/66b9be3a469a2cdcc5d18a8b94c679940ce002a9/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java#L60 -- allocator is not initialized.\n> \n> Netty version\n> \n> current 4.1 branch code.\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n> \n", "@nayato PTAL https://github.com/netty/netty/pull/6440", "Fixed by https://github.com/netty/netty/pull/6440" ]
[]
"2017-02-23T10:01:20Z"
[ "defect" ]
Unpooled buffer acquired from pooled allocator may have .alloc() returning null
### Expected behavior - .alloc() returns the allocator that the buffer originated from. Backed by the following behavior: - `allocator` field is reset on `deallocate()`. - `allocator` field is set properly in `initUnpooled()` ### Actual behavior `.alloc()` returns null if the buffer is initialized through `initUnpooled()` and was just created. It may also return the wrong allocator if it was previously backed by a chunk from a different pooled buffer allocator. ### Steps to reproduce https://github.com/netty/netty/blob/66b9be3a469a2cdcc5d18a8b94c679940ce002a9/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java#L128 -- `alloc()` is backed by the `allocator` field. As the object is pooled it must be initialized during `init()` or `initUnpooled()` https://github.com/netty/netty/blob/66b9be3a469a2cdcc5d18a8b94c679940ce002a9/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java#L52 -- `allocator` is initialized (as expected) https://github.com/netty/netty/blob/66b9be3a469a2cdcc5d18a8b94c679940ce002a9/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java#L60 -- `allocator` is not initialized. ### Netty version current 4.1 branch code.
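The consolidation the fix performs, reduced to a hedged toy example (types and names are illustrative, not Netty's): both init paths funnel through one private method, so a field such as the allocator reference cannot be forgotten on either path.

```java
public abstract class PooledObject<A> {

    protected A allocator;
    protected int length;
    protected int maxLength;

    final void init(A allocator, int length, int maxLength) {
        init0(allocator, length, maxLength);
    }

    final void initUnpooled(A allocator, int length) {
        init0(allocator, length, length);
    }

    // Single initialization path: every field is set here, on every path.
    private void init0(A allocator, int length, int maxLength) {
        this.allocator = allocator;
        this.length = length;
        this.maxLength = maxLength;
    }
}
```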
[ "buffer/src/main/java/io/netty/buffer/PooledByteBuf.java" ]
[ "buffer/src/main/java/io/netty/buffer/PooledByteBuf.java" ]
[ "buffer/src/test/java/io/netty/buffer/PooledByteBufAllocatorTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java b/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java index aac5f531cfa..56a4be38723 100644 --- a/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java @@ -43,30 +43,26 @@ protected PooledByteBuf(Recycler.Handle<? extends PooledByteBuf<T>> recyclerHand } void init(PoolChunk<T> chunk, long handle, int offset, int length, int maxLength, PoolThreadCache cache) { + init0(chunk, handle, offset, length, maxLength, cache); + } + + void initUnpooled(PoolChunk<T> chunk, int length) { + init0(chunk, 0, chunk.offset, length, length, null); + } + + private void init0(PoolChunk<T> chunk, long handle, int offset, int length, int maxLength, PoolThreadCache cache) { assert handle >= 0; assert chunk != null; this.chunk = chunk; - this.handle = handle; memory = chunk.memory; allocator = chunk.arena.parent; + this.cache = cache; + this.handle = handle; this.offset = offset; this.length = length; this.maxLength = maxLength; tmpNioBuf = null; - this.cache = cache; - } - - void initUnpooled(PoolChunk<T> chunk, int length) { - assert chunk != null; - - this.chunk = chunk; - handle = 0; - memory = chunk.memory; - offset = chunk.offset; - this.length = maxLength = length; - tmpNioBuf = null; - cache = null; } /**
diff --git a/buffer/src/test/java/io/netty/buffer/PooledByteBufAllocatorTest.java b/buffer/src/test/java/io/netty/buffer/PooledByteBufAllocatorTest.java index 695d99095be..12c76098729 100644 --- a/buffer/src/test/java/io/netty/buffer/PooledByteBufAllocatorTest.java +++ b/buffer/src/test/java/io/netty/buffer/PooledByteBufAllocatorTest.java @@ -35,6 +35,7 @@ import static java.util.concurrent.TimeUnit.MILLISECONDS; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertTrue; public class PooledByteBufAllocatorTest extends AbstractByteBufAllocatorTest { @@ -159,6 +160,26 @@ public void testTinySubpageMetric() { } } + @Test + public void testAllocNotNull() { + PooledByteBufAllocator allocator = new PooledByteBufAllocator(true, 1, 1, 8192, 11, 0, 0, 0); + // Huge allocation + testAllocNotNull(allocator, allocator.chunkSize() + 1); + // Normal allocation + testAllocNotNull(allocator, 1024); + // Small allocation + testAllocNotNull(allocator, 512); + // Tiny allocation + testAllocNotNull(allocator, 1); + } + + private static void testAllocNotNull(PooledByteBufAllocator allocator, int capacity) { + ByteBuf buffer = allocator.heapBuffer(capacity); + assertNotNull(buffer.alloc()); + assertTrue(buffer.release()); + assertNotNull(buffer.alloc()); + } + @Test public void testFreePoolChunk() { int chunkSize = 16 * 1024 * 1024;
train
train
2017-02-23T07:54:42
"2017-02-23T01:54:56Z"
nayato
val
netty/netty/6452_6502
netty/netty
netty/netty/6452
netty/netty/6502
[ "timestamp(timedelta=44.0, similarity=0.8964428327377687)" ]
2b8c8e0805e343f7c06d4fb81d958aed357ecc6a
1d11301e101d7b06c638f1c4ca7f2e1193b78535
[]
[ "better make `length != 1` as first condition bcz it has lower cost.\r\n\r\nUPD: Although may be not. One-char params is not very likely..\r\nsorry for noise.", "IMHO, make sense add comment that if the `value` don't contain non-OWS chars, it returns `length`, but not `-1`.", "If `quoted==true`, can we increment `start` and decrement `last` to exclude check `i == start || i == last` from the loop?", "`' '` -> `0x20`?", "I think `' '` is easier to read... ", "+1", "I just wanted to emphasize specific unicode value.\r\nNot everything can be distinguished in appearance )\r\nSpaces may be different: https://www.cs.tut.fi/~jkorpela/chars/spaces.html", "I honestly haven't found a way to make the code better while doing the suggested increment/decrement. There's more involved than just excluding the `i == start || i == last` from the loop, and I have tried to not alter too much the existing code in this PR.\r\n\r\n@fenik17 please send a follow-up PR to implement the change you envisioned.", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) The Cyclomatic Complexity of this method \"escapeCsv\" is 28 which is greater than 25 authorized. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AMethodCyclomaticComplexity)\n", "Ok. I'll do it soon. Thanks for an effort!", "@ddossot done in #6840" ]
"2017-03-07T02:19:09Z"
[]
CombinedHttpHeaders does not trim optional white space between comma separated values
### Expected behavior The updated HTTP/1.x RFC allows for header values to be CSV and separated by `OWS` [1]. `CombinedHttpHeaders` should remove this `OWS` on insertion. [1] https://tools.ietf.org/html/rfc7230#section-7 ### Actual behavior `CombinedHttpHeaders` doesn't account for the `OWS` and returns it back to the user as part of the value. ### Netty version 4.1.x
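For reference, a self-contained sketch of OWS trimming (space and horizontal tab, per RFC 7230); it mirrors the idea of the `StringUtil.trimOws` helper added by the fix but is not the actual Netty code:

```java
public final class Ows {

    // Trims optional whitespace (SP and HTAB, per RFC 7230) from both ends
    // of a header value, returning the original sequence when nothing changes.
    public static CharSequence trimOws(CharSequence value) {
        int start = 0;
        int end = value.length();
        while (start < end && isOws(value.charAt(start))) {
            start++;
        }
        while (end > start && isOws(value.charAt(end - 1))) {
            end--;
        }
        return start == 0 && end == value.length() ? value : value.subSequence(start, end);
    }

    private static boolean isOws(char c) {
        return c == ' ' || c == '\t';
    }

    private Ows() { }
}
```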
[ "codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java", "common/src/main/java/io/netty/util/internal/StringUtil.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java", "common/src/main/java/io/netty/util/internal/StringUtil.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java", "common/src/test/java/io/netty/util/internal/StringUtilTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java b/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java index 79c5fda97a0..d3c0ac72664 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java @@ -16,9 +16,9 @@ package io.netty.handler.codec.http; import io.netty.handler.codec.DefaultHeaders; +import io.netty.handler.codec.Headers; import io.netty.handler.codec.ValueConverter; import io.netty.util.HashingStrategy; -import io.netty.handler.codec.Headers; import io.netty.util.internal.StringUtil; import java.util.Collection; @@ -39,6 +39,11 @@ public CombinedHttpHeaders(boolean validate) { super(new CombinedHttpHeadersImpl(CASE_INSENSITIVE_HASHER, valueConverter(validate), nameValidator(validate))); } + @Override + public boolean containsValue(CharSequence name, CharSequence value, boolean ignoreCase) { + return super.containsValue(name, StringUtil.trimOws(value), ignoreCase); + } + private static final class CombinedHttpHeadersImpl extends DefaultHeaders<CharSequence, CharSequence, CombinedHttpHeadersImpl> { /** @@ -53,7 +58,7 @@ private CsvValueEscaper<Object> objectEscaper() { objectEscaper = new CsvValueEscaper<Object>() { @Override public CharSequence escape(Object value) { - return StringUtil.escapeCsv(valueConverter().convertObject(value)); + return StringUtil.escapeCsv(valueConverter().convertObject(value), true); } }; } @@ -65,7 +70,7 @@ private CsvValueEscaper<CharSequence> charSequenceEscaper() { charSequenceEscaper = new CsvValueEscaper<CharSequence>() { @Override public CharSequence escape(CharSequence value) { - return StringUtil.escapeCsv(value); + return StringUtil.escapeCsv(value, true); } }; } @@ -136,7 +141,7 @@ public CombinedHttpHeadersImpl setAll(Headers<? extends CharSequence, ? extends @Override public CombinedHttpHeadersImpl add(CharSequence name, CharSequence value) { - return addEscapedValue(name, StringUtil.escapeCsv(value)); + return addEscapedValue(name, charSequenceEscaper().escape(value)); } @Override @@ -149,6 +154,11 @@ public CombinedHttpHeadersImpl add(CharSequence name, Iterable<? extends CharSeq return addEscapedValue(name, commaSeparate(charSequenceEscaper(), values)); } + @Override + public CombinedHttpHeadersImpl addObject(CharSequence name, Object value) { + return addEscapedValue(name, commaSeparate(objectEscaper(), value)); + } + @Override public CombinedHttpHeadersImpl addObject(CharSequence name, Iterable<?> values) { return addEscapedValue(name, commaSeparate(objectEscaper(), values)); diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java index 2b5d75faec0..ed1d7763776 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java @@ -1573,7 +1573,7 @@ public boolean contains(String name, String value, boolean ignoreCase) { /** * Returns {@code true} if a header with the {@code name} and {@code value} exists, {@code false} otherwise. - * This also handles multiple values that are seperated with a {@code ,}. + * This also handles multiple values that are separated with a {@code ,}. * <p> * If {@code ignoreCase} is {@code true} then a case insensitive compare is done on the value. 
* @param name the name of the header to find diff --git a/common/src/main/java/io/netty/util/internal/StringUtil.java b/common/src/main/java/io/netty/util/internal/StringUtil.java index b18103bd849..ca45cb6a49e 100644 --- a/common/src/main/java/io/netty/util/internal/StringUtil.java +++ b/common/src/main/java/io/netty/util/internal/StringUtil.java @@ -34,6 +34,7 @@ public final class StringUtil { public static final char LINE_FEED = '\n'; public static final char CARRIAGE_RETURN = '\r'; public static final char TAB = '\t'; + public static final char SPACE = 0x20; private static final String[] BYTE2HEX_PAD = new String[256]; private static final String[] BYTE2HEX_NOPAD = new String[256]; @@ -48,16 +49,16 @@ public final class StringUtil { static { // Generate the lookup table that converts a byte into a 2-digit hexadecimal integer. int i; - for (i = 0; i < 10; i ++) { + for (i = 0; i < 10; i++) { BYTE2HEX_PAD[i] = "0" + i; BYTE2HEX_NOPAD[i] = String.valueOf(i); } - for (; i < 16; i ++) { + for (; i < 16; i++) { char c = (char) ('a' + i - 10); BYTE2HEX_PAD[i] = "0" + c; BYTE2HEX_NOPAD[i] = String.valueOf(c); } - for (; i < BYTE2HEX_PAD.length; i ++) { + for (; i < BYTE2HEX_PAD.length; i++) { String str = Integer.toHexString(i); BYTE2HEX_PAD[i] = str; BYTE2HEX_NOPAD[i] = str; @@ -84,8 +85,8 @@ public static String substringAfter(String value, char delim) { /** * Checks if two strings have the same suffix of specified length * - * @param s string - * @param p string + * @param s string + * @param p string * @param len length of the common suffix * @return true if both s and p are not null and both have the same suffix. Otherwise - false */ @@ -138,7 +139,7 @@ public static <T extends Appendable> T toHexStringPadded(T dst, byte[] src) { */ public static <T extends Appendable> T toHexStringPadded(T dst, byte[] src, int offset, int length) { final int end = offset + length; - for (int i = offset; i < end; i ++) { + for (int i = offset; i < end; i++) { byteToHexStringPadded(dst, src[i]); } return dst; @@ -198,13 +199,13 @@ public static <T extends Appendable> T toHexString(T dst, byte[] src, int offset int i; // Skip preceding zeroes. - for (i = offset; i < endMinusOne; i ++) { + for (i = offset; i < endMinusOne; i++) { if (src[i] != 0) { break; } } - byteToHexString(dst, src[i ++]); + byteToHexString(dst, src[i++]); int remaining = end - i; toHexStringPadded(dst, src, i, remaining); @@ -244,22 +245,51 @@ public static String simpleClassName(Class<?> clazz) { * @return {@link CharSequence} the escaped value if necessary, or the value unchanged */ public static CharSequence escapeCsv(CharSequence value) { + return escapeCsv(value, false); + } + + /** + * Escapes the specified value, if necessary according to + * <a href="https://tools.ietf.org/html/rfc4180#section-2">RFC-4180</a>. 
+ * + * @param value The value which will be escaped according to + * <a href="https://tools.ietf.org/html/rfc4180#section-2">RFC-4180</a> + * @param trimWhiteSpace The value will first be trimmed of its optional white-space characters, + * according to <a href="https://tools.ietf.org/html/rfc7230#section-7">RFC-7230</a> + * @return {@link CharSequence} the escaped value if necessary, or the value unchanged + */ + public static CharSequence escapeCsv(CharSequence value, boolean trimWhiteSpace) { int length = checkNotNull(value, "value").length(); if (length == 0) { return value; } + + int start = 0; int last = length - 1; - boolean quoted = isDoubleQuote(value.charAt(0)) && isDoubleQuote(value.charAt(last)) && length != 1; + boolean trimmed = false; + if (trimWhiteSpace) { + start = indexOfFirstNonOwsChar(value, length); + if (start == length) { + return EMPTY_STRING; + } + last = indexOfLastNonOwsChar(value, start, length); + trimmed = start > 0 || last < length - 1; + if (trimmed) { + length = last - start + 1; + } + } + + StringBuilder result = new StringBuilder(length + CSV_NUMBER_ESCAPE_CHARACTERS); + boolean quoted = isDoubleQuote(value.charAt(start)) && isDoubleQuote(value.charAt(last)) && length != 1; boolean foundSpecialCharacter = false; boolean escapedDoubleQuote = false; - StringBuilder escaped = new StringBuilder(length + CSV_NUMBER_ESCAPE_CHARACTERS).append(DOUBLE_QUOTE); - for (int i = 0; i < length; i++) { + for (int i = start; i <= last; i++) { char current = value.charAt(i); switch (current) { case DOUBLE_QUOTE: - if (i == 0 || i == last) { + if (i == start || i == last) { if (!quoted) { - escaped.append(DOUBLE_QUOTE); + result.append(DOUBLE_QUOTE); } else { continue; } @@ -267,7 +297,7 @@ public static CharSequence escapeCsv(CharSequence value) { boolean isNextCharDoubleQuote = isDoubleQuote(value.charAt(i + 1)); if (!isDoubleQuote(value.charAt(i - 1)) && (!isNextCharDoubleQuote || i + 1 == last)) { - escaped.append(DOUBLE_QUOTE); + result.append(DOUBLE_QUOTE); escapedDoubleQuote = true; } break; @@ -277,10 +307,20 @@ public static CharSequence escapeCsv(CharSequence value) { case COMMA: foundSpecialCharacter = true; } - escaped.append(current); + result.append(current); + } + + if (escapedDoubleQuote || foundSpecialCharacter && !quoted) { + return quote(result); } - return escapedDoubleQuote || foundSpecialCharacter && !quoted ? - escaped.append(DOUBLE_QUOTE) : value; + if (trimmed) { + return quoted ? quote(result) : result; + } + return value; + } + + private static StringBuilder quote(StringBuilder builder) { + return builder.insert(0, DOUBLE_QUOTE).append(DOUBLE_QUOTE); } /** @@ -390,7 +430,7 @@ public static List<CharSequence> unescapeCsvFields(CharSequence value) { return unescaped; } - /**s + /** * Validate if {@code value} is a valid csv field without double-quotes. * * @throws IllegalArgumentException if {@code value} needs to be encoded with double-quotes. @@ -430,7 +470,8 @@ public static boolean isNullOrEmpty(String s) { /** * Find the index of the first non-white space character in {@code s} starting at {@code offset}. - * @param seq The string to search. + * + * @param seq The string to search. * @param offset The offset to start searching at. * @return the index of the first non-white space character or &lt;{@code 0} if none was found. 
*/ @@ -446,6 +487,7 @@ public static int indexOfNonWhiteSpace(CharSequence seq, int offset) { /** * Determine if {@code c} lies within the range of values defined for * <a href="http://unicode.org/glossary/#surrogate_code_point">Surrogate Code Point</a>. + * * @param c the character to check. * @return {@code true} if {@code c} lies within the range of values defined for * <a href="http://unicode.org/glossary/#surrogate_code_point">Surrogate Code Point</a>. {@code false} otherwise. @@ -469,4 +511,47 @@ public static boolean endsWith(CharSequence s, char c) { int len = s.length(); return len > 0 && s.charAt(len - 1) == c; } + + /** + * Trim optional white-space characters from the specified value, + * according to <a href="https://tools.ietf.org/html/rfc7230#section-7">RFC-7230</a>. + * + * @param value the value to trim + * @return {@link CharSequence} the trimmed value if necessary, or the value unchanged + */ + public static CharSequence trimOws(CharSequence value) { + final int length = value.length(); + if (length == 0) { + return value; + } + int start = indexOfFirstNonOwsChar(value, length); + int end = indexOfLastNonOwsChar(value, start, length); + return start == 0 && end == length - 1 ? value : value.subSequence(start, end + 1); + } + + /** + * @return {@code length} if no OWS is found. + */ + private static int indexOfFirstNonOwsChar(CharSequence value, int length) { + int i = 0; + while (i < length && isOws(value.charAt(i))) { + i++; + } + return i; + } + + /** + * @return {@code start} if no OWS is found. + */ + private static int indexOfLastNonOwsChar(CharSequence value, int start, int length) { + int i = length - 1; + while (i > start && isOws(value.charAt(i))) { + i--; + } + return i; + } + + private static boolean isOws(char c) { + return c == SPACE || c == TAB; + } }
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java index 6ec9bfaf36a..258208f4625 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java @@ -16,6 +16,7 @@ package io.netty.handler.codec.http; import io.netty.handler.codec.http.HttpHeadersTestUtils.HeaderValue; +import io.netty.util.internal.StringUtil; import org.junit.Test; import java.util.Arrays; @@ -23,6 +24,8 @@ import static io.netty.util.AsciiString.contentEquals; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; public class CombinedHttpHeadersTest { @@ -271,4 +274,32 @@ public void testGetAll() { headers.set(HEADER_NAME, "\"a,b,c\""); assertEquals(Arrays.asList("a,b,c"), headers.getAll(HEADER_NAME)); } + + @Test + public void owsTrimming() { + final CombinedHttpHeaders headers = newCombinedHttpHeaders(); + headers.set(HEADER_NAME, Arrays.asList("\ta", " ", " b ", "\t \t")); + headers.add(HEADER_NAME, " c, d \t"); + + assertEquals(Arrays.asList("a", "", "b", "", "c, d"), headers.getAll(HEADER_NAME)); + assertEquals("a,,b,,\"c, d\"", headers.get(HEADER_NAME)); + + assertTrue(headers.containsValue(HEADER_NAME, "a", true)); + assertTrue(headers.containsValue(HEADER_NAME, " a ", true)); + assertTrue(headers.containsValue(HEADER_NAME, "a", true)); + assertFalse(headers.containsValue(HEADER_NAME, "a,b", true)); + + assertFalse(headers.containsValue(HEADER_NAME, " c, d ", true)); + assertFalse(headers.containsValue(HEADER_NAME, "c, d", true)); + assertTrue(headers.containsValue(HEADER_NAME, " c ", true)); + assertTrue(headers.containsValue(HEADER_NAME, "d", true)); + + assertTrue(headers.containsValue(HEADER_NAME, "\t", true)); + assertTrue(headers.containsValue(HEADER_NAME, "", true)); + + assertFalse(headers.containsValue(HEADER_NAME, "e", true)); + + HttpHeaders copiedHeaders = newCombinedHttpHeaders().add(headers); + assertEquals(Arrays.asList("a", "", "b", "", "c, d"), copiedHeaders.getAll(HEADER_NAME)); + } } diff --git a/common/src/test/java/io/netty/util/internal/StringUtilTest.java b/common/src/test/java/io/netty/util/internal/StringUtilTest.java index 13016259e04..c8a13f58c06 100644 --- a/common/src/test/java/io/netty/util/internal/StringUtilTest.java +++ b/common/src/test/java/io/netty/util/internal/StringUtilTest.java @@ -19,9 +19,22 @@ import java.util.Arrays; -import static io.netty.util.internal.StringUtil.*; -import static org.hamcrest.CoreMatchers.*; -import static org.junit.Assert.*; +import static io.netty.util.internal.StringUtil.NEWLINE; +import static io.netty.util.internal.StringUtil.commonSuffixOfLength; +import static io.netty.util.internal.StringUtil.simpleClassName; +import static io.netty.util.internal.StringUtil.substringAfter; +import static io.netty.util.internal.StringUtil.toHexString; +import static io.netty.util.internal.StringUtil.toHexStringPadded; +import static io.netty.util.internal.StringUtil.unescapeCsv; +import static io.netty.util.internal.StringUtil.unescapeCsvFields; +import static org.hamcrest.CoreMatchers.is; +import static org.junit.Assert.assertArrayEquals; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; +import static 
org.junit.Assert.assertSame; +import static org.junit.Assert.assertThat; +import static org.junit.Assert.assertTrue; public class StringUtilTest { @@ -331,13 +344,39 @@ public void escapeCsvWithCRLFCharacter() { } private static void escapeCsv(CharSequence value, CharSequence expected) { + escapeCsv(value, expected, false); + } + + private static void escapeCsvWithTrimming(CharSequence value, CharSequence expected) { + escapeCsv(value, expected, true); + } + + private static void escapeCsv(CharSequence value, CharSequence expected, boolean trimOws) { CharSequence escapedValue = value; for (int i = 0; i < 10; ++i) { - escapedValue = StringUtil.escapeCsv(escapedValue); + escapedValue = StringUtil.escapeCsv(escapedValue, trimOws); assertEquals(expected, escapedValue.toString()); } } + @Test + public void escapeCsvWithTrimming() { + assertSame("", StringUtil.escapeCsv("", true)); + assertSame("ab", StringUtil.escapeCsv("ab", true)); + + escapeCsvWithTrimming("", ""); + escapeCsvWithTrimming(" \t ", ""); + escapeCsvWithTrimming("ab", "ab"); + escapeCsvWithTrimming("a b", "a b"); + escapeCsvWithTrimming(" \ta \tb", "a \tb"); + escapeCsvWithTrimming("a \tb \t", "a \tb"); + escapeCsvWithTrimming("\t a \tb \t", "a \tb"); + escapeCsvWithTrimming("\"\t a b \"", "\"\t a b \""); + escapeCsvWithTrimming(" \"\t a b \"\t", "\"\t a b \""); + escapeCsvWithTrimming(" testing\t\n ", "\"testing\t\n\""); + escapeCsvWithTrimming("\ttest,ing ", "\"test,ing\""); + } + @Test public void testUnescapeCsv() { assertEquals("", unescapeCsv("")); @@ -465,4 +504,21 @@ public void testEndsWith() { assertFalse(StringUtil.endsWith("-", 'u')); assertFalse(StringUtil.endsWith("u-", 'u')); } + + @Test + public void trimOws() { + assertSame("", StringUtil.trimOws("")); + assertEquals("", StringUtil.trimOws(" \t ")); + assertSame("a", StringUtil.trimOws("a")); + assertEquals("a", StringUtil.trimOws(" a")); + assertEquals("a", StringUtil.trimOws("a ")); + assertEquals("a", StringUtil.trimOws(" a ")); + assertSame("abc", StringUtil.trimOws("abc")); + assertEquals("abc", StringUtil.trimOws("\tabc")); + assertEquals("abc", StringUtil.trimOws("abc\t")); + assertEquals("abc", StringUtil.trimOws("\tabc\t")); + assertSame("a\t b", StringUtil.trimOws("a\t b")); + assertEquals("", StringUtil.trimOws("\t ").toString()); + assertEquals("a b", StringUtil.trimOws("\ta b \t").toString()); + } }
train
train
2017-03-10T07:46:17
"2017-02-24T19:04:54Z"
Scottmitch
val
netty/netty/6516_6517
netty/netty
netty/netty/6516
netty/netty/6517
[ "timestamp(timedelta=107232.0, similarity=0.8969268022128343)" ]
3ad3356892e841b5e2cd7f6e87882fdf3fa2e68d
b6a7476a16b10a0a21d1056181351ab7f14537fd
[ "And confirmed, that adding the same names but with SSL works. I am going to open a PR, then we can discuss if this is the right thing to do :)", "Hmm it seems that the JSSE spec wants to treat the \"TLS\" and \"SSL\" prefix as interchangeable [1].\r\n\r\n> The names mentioned in the TLS RFCs prefixed with TLS_ are functionally equivalent to the JSSE cipher suites prefixed with SSL_.\r\n\r\nIf the `SupportedCipherSuiteFilter` is used then hopefully adding both prefixes will not have any impact ... but we can verify with unit tests.\r\n\r\n[1] http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html", "closed in 4.1.9" ]
[ "@CodingFabian nit: On OpenJDK and Oracle JDK ?", "Well i consider Oracle JDK to be OpenJDK, but I happily will adjust to your preferred wording.\r\nBut the TLS_ names work correctly on Azul Zulu, so we should add that, or say OpenJDK based JVMs?", "OpenJDK based sounds good", "I think its better to just say that the JSSE allows for the \"TLS\" and \"SSL\" prefix to be interchanged (and also link to the docs) [1]:\r\n\r\n> The names mentioned in the TLS RFCs prefixed with TLS_ are functionally equivalent to the JSSE cipher suites prefixed with SSL_.\r\n\r\n[1] http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html" ]
"2017-03-08T18:57:25Z"
[]
IBM J9 uses slightly different cipher names -> HTTP2.0 connection fails
While investigating https://github.com/netty/netty/issues/6437, i found an interesting problem: ``` SslHandshakeCompletionEvent(javax.net.ssl.SSLHandshakeException: No appropriate protocol, may be no appropriate cipher suite specified or protocols are deactivated) ``` It turns out that netty qants the following http2 ciphers: ``` io.netty.handler.codec.http2.Http2SecurityUtil.CIPHERS /* Java 8 */ "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", /* openssl = ECDHE-ECDSA-AES256-GCM-SHA384 */ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", /* openssl = ECDHE-ECDSA-AES128-GCM-SHA256 */ "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", /* openssl = ECDHE-RSA-AES256-GCM-SHA384 */ /* REQUIRED BY HTTP/2 SPEC */ "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", /* openssl = ECDHE-RSA-AES128-GCM-SHA256 */ /* REQUIRED BY HTTP/2 SPEC */ "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256", /* openssl = DHE-RSA-AES128-GCM-SHA256 */ "TLS_DHE_DSS_WITH_AES_128_GCM_SHA256" /* openssl = DHE-DSS-AES128-GCM-SHA256 */)); ``` none of those is provided by J9 However J9 has those: ``` Default Cipher SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA * SSL_DHE_DSS_WITH_AES_128_CBC_SHA * SSL_DHE_DSS_WITH_AES_128_CBC_SHA256 * SSL_DHE_DSS_WITH_AES_128_GCM_SHA256 SSL_DHE_DSS_WITH_DES_CBC_SHA SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA * SSL_DHE_RSA_WITH_AES_128_CBC_SHA * SSL_DHE_RSA_WITH_AES_128_CBC_SHA256 * SSL_DHE_RSA_WITH_AES_128_GCM_SHA256 SSL_DHE_RSA_WITH_DES_CBC_SHA SSL_DH_anon_EXPORT_WITH_DES40_CBC_SHA SSL_DH_anon_WITH_AES_128_CBC_SHA SSL_DH_anon_WITH_AES_128_CBC_SHA256 SSL_DH_anon_WITH_DES_CBC_SHA * SSL_ECDHE_ECDSA_WITH_AES_128_CBC_SHA * SSL_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 * SSL_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 SSL_ECDHE_ECDSA_WITH_NULL_SHA * SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA * SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA256 * SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256 SSL_ECDHE_RSA_WITH_NULL_SHA * SSL_ECDH_ECDSA_WITH_AES_128_CBC_SHA * SSL_ECDH_ECDSA_WITH_AES_128_CBC_SHA256 * SSL_ECDH_ECDSA_WITH_AES_128_GCM_SHA256 SSL_ECDH_ECDSA_WITH_NULL_SHA * SSL_ECDH_RSA_WITH_AES_128_CBC_SHA * SSL_ECDH_RSA_WITH_AES_128_CBC_SHA256 * SSL_ECDH_RSA_WITH_AES_128_GCM_SHA256 SSL_ECDH_RSA_WITH_NULL_SHA SSL_ECDH_anon_WITH_AES_128_CBC_SHA SSL_ECDH_anon_WITH_NULL_SHA SSL_KRB5_EXPORT_WITH_DES_CBC_40_MD5 SSL_KRB5_EXPORT_WITH_DES_CBC_40_SHA SSL_KRB5_WITH_DES_CBC_MD5 SSL_KRB5_WITH_DES_CBC_SHA SSL_RSA_EXPORT_WITH_DES40_CBC_SHA SSL_RSA_FIPS_WITH_DES_CBC_SHA * SSL_RSA_WITH_AES_128_CBC_SHA * SSL_RSA_WITH_AES_128_CBC_SHA256 * SSL_RSA_WITH_AES_128_GCM_SHA256 SSL_RSA_WITH_DES_CBC_SHA SSL_RSA_WITH_NULL_MD5 SSL_RSA_WITH_NULL_SHA SSL_RSA_WITH_NULL_SHA256 * TLS_EMPTY_RENEGOTIATION_INFO_SCSV ``` (Output from https://confluence.atlassian.com/stashkb/list-ciphers-used-by-jvm-679609085.html) So all ciphers required by the http2 spec are actually present, but their name is slightly different (SSL instead of TLS) Only the SHA384 ciphers are not available (on the the j9 i use, according to docs like https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_8.0.0/com.ibm.mq.dev.doc/q113210_.htm they can exist). So probably we just need to extend the list?
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java" ]
[]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java index 2ddd16f63ce..e0450fba58e 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java @@ -34,26 +34,51 @@ public final class Http2SecurityUtil { * href="https://wiki.mozilla.org/Security/Server_Side_TLS#Non-Backward_Compatible_Ciphersuite">Mozilla Cipher * Suites</a> in accordance with the <a * href="https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-9.2.2">HTTP/2 Specification</a>. + * + * According to the <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html"> + * JSSE documentation</a> "the names mentioned in the TLS RFCs prefixed with TLS_ are functionally equivalent + * to the JSSE cipher suites prefixed with SSL_". + * Both variants are used to support JVMs supporting the one or the other. */ public static final List<String> CIPHERS; private static final List<String> CIPHERS_JAVA_MOZILLA_INCREASED_SECURITY = Collections.unmodifiableList(Arrays .asList( /* Java 8 */ - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", /* openssl = ECDHE-ECDSA-AES256-GCM-SHA384 */ - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", /* openssl = ECDHE-ECDSA-AES128-GCM-SHA256 */ - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", /* openssl = ECDHE-RSA-AES256-GCM-SHA384 */ + /* openssl = ECDHE-ECDSA-AES256-GCM-SHA384 */ + "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", + "SSL_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", + /* openssl = ECDHE-ECDSA-AES128-GCM-SHA256 */ + "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", + "SSL_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", + /* openssl = ECDHE-RSA-AES256-GCM-SHA384 */ + "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", + "SSL_ECDHE_RSA_WITH_AES_256_GCM_SHA384", + /* REQUIRED BY HTTP/2 SPEC */ - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", /* openssl = ECDHE-RSA-AES128-GCM-SHA256 */ + /* openssl = ECDHE-RSA-AES128-GCM-SHA256 */ + "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", + "SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256", + /* REQUIRED BY HTTP/2 SPEC */ - "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256", /* openssl = DHE-RSA-AES128-GCM-SHA256 */ - "TLS_DHE_DSS_WITH_AES_128_GCM_SHA256" /* openssl = DHE-DSS-AES128-GCM-SHA256 */)); + /* openssl = DHE-RSA-AES128-GCM-SHA256 */ + "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256", + "SSL_DHE_RSA_WITH_AES_128_GCM_SHA256", + /* openssl = DHE-DSS-AES128-GCM-SHA256 */ + "TLS_DHE_DSS_WITH_AES_128_GCM_SHA256", + "SSL_DHE_DSS_WITH_AES_128_GCM_SHA256" + )); private static final List<String> CIPHERS_JAVA_NO_MOZILLA_INCREASED_SECURITY = Collections.unmodifiableList(Arrays .asList( /* Java 8 */ - "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384", /* openssl = DHE-RSA-AES256-GCM-SHA384 */ - "TLS_DHE_DSS_WITH_AES_256_GCM_SHA384" /* openssl = DHE-DSS-AES256-GCM-SHA384 */)); + /* openssl = DHE-RSA-AES256-GCM-SHA384 */ + "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384", + "SSL_DHE_RSA_WITH_AES_256_GCM_SHA384", + /* openssl = DHE-DSS-AES256-GCM-SHA384 */ + "TLS_DHE_DSS_WITH_AES_256_GCM_SHA384", + "SSL_DHE_DSS_WITH_AES_256_GCM_SHA384" + )); static { List<String> ciphers = new ArrayList<String>(CIPHERS_JAVA_MOZILLA_INCREASED_SECURITY.size()
null
train
train
2017-03-08T20:07:58
"2017-03-08T18:27:58Z"
CodingFabian
val
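The resolved comments on this record point at `SupportedCipherSuiteFilter` as the safety net that drops whichever prefix variant a given JVM does not advertise. Below is a minimal, self-contained sketch of that filtering idea using only the plain JSSE API; the class name and the two-entry candidate list are illustrative and not part of the Netty patch above.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public final class CipherNameProbe {
    public static void main(String[] args) throws Exception {
        // Both prefixes for the same suite; per the JSSE standard-names documentation the
        // TLS_ and SSL_ forms are functionally equivalent, but a given JVM (e.g. IBM J9)
        // may only advertise one of them.
        List<String> wanted = Arrays.asList(
                "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
                "SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256");

        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        Set<String> supported =
                new LinkedHashSet<String>(Arrays.asList(engine.getSupportedCipherSuites()));

        // Keep only the names this JVM actually knows about - the same kind of filtering
        // SupportedCipherSuiteFilter performs when an SslContext is built.
        List<String> usable = new ArrayList<String>();
        for (String name : wanted) {
            if (supported.contains(name)) {
                usable.add(name);
            }
        }
        System.out.println("Usable suites on this JVM: " + usable);
    }
}
```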
netty/netty/6507_6518
netty/netty
netty/netty/6507
netty/netty/6518
[ "timestamp(timedelta=13.0, similarity=1.0000000000000002)" ]
2993760e9261f046db88a0e8ccf9edf4e9b0acad
4c3ae9433ca17a6fedd7b6a4f4d813c5810c1f01
[ "@meshcow will have a look soon", "@meshcow PTAL https://github.com/netty/netty/pull/6518 ", "looks good now!", "Fixed by https://github.com/netty/netty/pull/6518" ]
[]
"2017-03-08T20:07:05Z"
[ "defect" ]
UnorderedThreadPoolEventExecutor consumes 100% CPU when idle
### Expected behavior 0% CPU consumption when idle ### Actual behavior almost 100% CPU consumption when idle ### Steps to reproduce After a submitted task has been successfully executed, all the available cores get busy with doing some stuff. ```java import io.netty.util.concurrent.UnorderedThreadPoolEventExecutor; public class TestUnorderedThreadPoolEventExecutor { public static void main(String[] args) { UnorderedThreadPoolEventExecutor executor = new UnorderedThreadPoolEventExecutor(1); // compare: ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1); executor.submit(() -> { return "completed!"; }); } } ``` ### Netty version ``` <groupId>io.netty</groupId> <artifactId>netty-all</artifactId> <version>4.1.8.Final</version> ``` ### OS and JVM versions #### Linux Linux 3.13.0-042stab120.3 x86_64 x86_64 x86_64 GNU/Linux java version "1.8.0_112" Java(TM) SE Runtime Environment (build 1.8.0_112-b15) Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode) #### OSX Darwin 15.6.0 Darwin Kernel Version 15.6.0: Mon Jan 9 23:07:29 PST 2017; root:xnu-3248.60.11.2.1~1/RELEASE_X86_64 x86_64 java version "1.8.0_121" Java(TM) SE Runtime Environment (build 1.8.0_121-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) ### `top -H` on Linux ``` top - 13:37:31 up 92 days, 47 min, 2 users, load average: 0.57, 0.13, 0.04 Threads: 127 total, 3 running, 124 sleeping, 0 stopped, 0 zombie %Cpu(s): 96.4 us, 0.5 sy, 0.0 ni, 3.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem: 2097152 total, 2082604 used, 14548 free, 0 buffers KiB Swap: 0 total, 0 used, 0 free. 1245400 cached Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28999 miha 20 0 2716608 599284 13700 R 96.6 28.6 0:13.76 java 28998 miha 20 0 2716608 599284 13700 R 96.3 28.6 0:13.78 java 29000 miha 20 0 2716608 599284 13700 S 0.7 28.6 0:00.07 java 29011 miha 20 0 21568 1640 1136 R 0.3 0.1 0:00.01 top ... ``` ### top and jstack on OSX ``` $ top Processes: 341 total, 3 running, 12 stuck, 326 sleeping, 1896 threads 13:42:07 Load Avg: 5.01, 3.82, 3.51 CPU usage: 91.67% user, 2.97% sys, 5.35% idle SharedLibs: 90M resident, 14M data, 4096B linkedit. MemRegions: 262828 total, 6832M resident, 79M private, 1222M shared. PhysMem: 15G used (2782M wired), 1431M unused. VM: 977G vsize, 533M framework vsize, 33508915(64) swapins, 35619367(0) swapouts. Networks: packets: 37262432/90G in, 23652973/58G out. Disks: 18020162/1162G read, 13501631/688G written. PID COMMAND %CPU TIME #TH #WQ #PORTS MEM PURG CMPRS PGRP PPID STATE BOOSTS %CPU_ME %CPU_OTHRS UID FAULTS COW MSGSENT MSGRECV SYSBSD SYSMACH 94811 top 4.7 00:00.96 1/1 0 21 6104K+ 0B 0B 94811 702 running *0[1] 0.00000 0.00000 0 4993+ 102 629591+ 314787+ 12608+ 341476+ 94809 java 727.4 01:24.36 23/8 0 87 1511M+ 0B 0B 94809 687 running *0[2] 0.00000 0.00000 501 392525+ 452 284 109 18048+ 599646+ .... 
$ jstack 94809 2017-03-07 13:42:53 Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.121-b13 mixed mode): "Attach Listener" #13 daemon prio=9 os_prio=31 tid=0x00007f8ebd000000 nid=0x3d0b waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "DestroyJavaVM" #12 prio=5 os_prio=31 tid=0x00007f8eb5802000 nid=0xc07 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "unorderedThreadPoolEventExecutor-1-1" #11 prio=5 os_prio=31 tid=0x00007f8eb592a800 nid=0x5b0f runnable [0x0000700001658000] java.lang.Thread.State: RUNNABLE at java.util.concurrent.ScheduledThreadPoolExecutor.triggerTime(ScheduledThreadPoolExecutor.java:493) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:532) at io.netty.util.concurrent.UnorderedThreadPoolEventExecutor.schedule(UnorderedThreadPoolEventExecutor.java:171) at io.netty.util.concurrent.UnorderedThreadPoolEventExecutor.schedule(UnorderedThreadPoolEventExecutor.java:40) at java.util.concurrent.ScheduledThreadPoolExecutor.execute(ScheduledThreadPoolExecutor.java:622) at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:760) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:428) at io.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:95) at io.netty.util.concurrent.PromiseTask.setSuccessInternal(PromiseTask.java:106) at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:74) at io.netty.util.concurrent.UnorderedThreadPoolEventExecutor$RunnableScheduledFutureTask.run(UnorderedThreadPoolEventExecutor.java:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) at java.lang.Thread.run(Thread.java:745) "Service Thread" #9 daemon prio=9 os_prio=31 tid=0x00007f8eb5812000 nid=0x5303 runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "C1 CompilerThread3" #8 daemon prio=9 os_prio=31 tid=0x00007f8eb5811000 nid=0x5103 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "C2 CompilerThread2" #7 daemon prio=9 os_prio=31 tid=0x00007f8eb4803800 nid=0x4f03 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "C2 CompilerThread1" #6 daemon prio=9 os_prio=31 tid=0x00007f8eb5801000 nid=0x4d03 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "C2 CompilerThread0" #5 daemon prio=9 os_prio=31 tid=0x00007f8eb3056000 nid=0x4b03 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Signal Dispatcher" #4 daemon prio=9 os_prio=31 tid=0x00007f8eb3055800 nid=0x3e0f runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Finalizer" #3 daemon prio=8 os_prio=31 tid=0x00007f8eb405b000 nid=0x3803 in Object.wait() [0x0000700000d3a000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00000006c679b6b0> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143) - locked <0x00000006c679b6b0> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) "Reference Handler" #2 daemon prio=10 os_prio=31 tid=0x00007f8eb4058000 nid=0x3603 in Object.wait() [0x0000700000c37000] java.lang.Thread.State: WAITING (on 
object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00000006c671fd30> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:502) at java.lang.ref.Reference.tryHandlePending(Reference.java:191) - locked <0x00000006c671fd30> (a java.lang.ref.Reference$Lock) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153) "VM Thread" os_prio=31 tid=0x00007f8eb4053800 nid=0x3403 runnable "GC task thread#0 (ParallelGC)" os_prio=31 tid=0x00007f8eb580e000 nid=0x1707 runnable "GC task thread#1 (ParallelGC)" os_prio=31 tid=0x00007f8eb580e800 nid=0x2603 runnable "GC task thread#2 (ParallelGC)" os_prio=31 tid=0x00007f8eb580f800 nid=0x2803 runnable "GC task thread#3 (ParallelGC)" os_prio=31 tid=0x00007f8eb5810000 nid=0x2a03 runnable "GC task thread#4 (ParallelGC)" os_prio=31 tid=0x00007f8eb5810800 nid=0x2c03 runnable "GC task thread#5 (ParallelGC)" os_prio=31 tid=0x00007f8eb3005800 nid=0x2e03 runnable "GC task thread#6 (ParallelGC)" os_prio=31 tid=0x00007f8eb3007000 nid=0x3003 runnable "GC task thread#7 (ParallelGC)" os_prio=31 tid=0x00007f8eb4002800 nid=0x3203 runnable "VM Periodic Task Thread" os_prio=31 tid=0x00007f8eb4028800 nid=0x5503 waiting on condition JNI global references: 249 ```
[ "common/src/main/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutor.java" ]
[ "common/src/main/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutor.java" ]
[ "common/src/test/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutorTest.java" ]
diff --git a/common/src/main/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutor.java b/common/src/main/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutor.java index 5d073debec3..4ed94da5375 100644 --- a/common/src/main/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutor.java +++ b/common/src/main/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutor.java @@ -30,6 +30,8 @@ import java.util.concurrent.ThreadFactory; import java.util.concurrent.TimeUnit; +import static java.util.concurrent.TimeUnit.NANOSECONDS; + /** * {@link EventExecutor} implementation which makes no guarantees about the ordering of task execution that * are submitted because there may be multiple threads executing these tasks. @@ -158,7 +160,8 @@ public Iterator<EventExecutor> iterator() { @Override protected <V> RunnableScheduledFuture<V> decorateTask(Runnable runnable, RunnableScheduledFuture<V> task) { - return new RunnableScheduledFutureTask<V>(this, runnable, task); + return runnable instanceof NonNotifyRunnable ? + task : new RunnableScheduledFutureTask<V>(this, runnable, task); } @Override @@ -201,6 +204,11 @@ public <T> Future<T> submit(Callable<T> task) { return (Future<T>) super.submit(task); } + @Override + public void execute(Runnable command) { + super.schedule(new NonNotifyRunnable(command), 0, NANOSECONDS); + } + private static final class RunnableScheduledFutureTask<V> extends PromiseTask<V> implements RunnableScheduledFuture<V>, ScheduledFuture<V> { private final RunnableScheduledFuture<V> future; @@ -248,4 +256,25 @@ public int compareTo(Delayed o) { return future.compareTo(o); } } + + // This is a special wrapper which we will be used in execute(...) to wrap the submitted Runnable. This is needed as + // ScheduledThreadPoolExecutor.execute(...) will delegate to submit(...) which will then use decorateTask(...). + // The problem with this is that decorateTask(...) needs to ensure we only do our own decoration if we not call + // from execute(...) as otherwise we may end up creating an endless loop because DefaultPromise will call + // EventExecutor.execute(...) when notify the listeners of the promise. + // + // See https://github.com/netty/netty/issues/6507 + private static final class NonNotifyRunnable implements Runnable { + + private final Runnable task; + + NonNotifyRunnable(Runnable task) { + this.task = task; + } + + @Override + public void run() { + task.run(); + } + } }
diff --git a/common/src/test/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutorTest.java b/common/src/test/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutorTest.java new file mode 100644 index 00000000000..d96db3fcb07 --- /dev/null +++ b/common/src/test/java/io/netty/util/concurrent/UnorderedThreadPoolEventExecutorTest.java @@ -0,0 +1,57 @@ +/* + * Copyright 2017 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.util.concurrent; + +import org.junit.Assert; +import org.junit.Test; + +import java.util.concurrent.CountDownLatch; + +public class UnorderedThreadPoolEventExecutorTest { + + // See https://github.com/netty/netty/issues/6507 + @Test + public void testNotEndlessExecute() throws Exception { + UnorderedThreadPoolEventExecutor executor = new UnorderedThreadPoolEventExecutor(1); + + try { + final CountDownLatch latch = new CountDownLatch(3); + Runnable task = new Runnable() { + @Override + public void run() { + latch.countDown(); + } + }; + executor.execute(task); + Future<?> future = executor.submit(task).addListener(new FutureListener<Object>() { + @Override + public void operationComplete(Future<Object> future) throws Exception { + latch.countDown(); + } + }); + latch.await(); + future.syncUninterruptibly(); + + // Now just check if the queue stays empty multiple times. This is needed as the submit to execute(...) + // by DefaultPromise may happen in an async fashion + for (int i = 0; i < 10000; i++) { + Assert.assertTrue(executor.getQueue().isEmpty()); + } + } finally { + executor.shutdownGracefully(); + } + } +}
train
train
2017-03-09T07:48:37
"2017-03-07T11:54:59Z"
meshcow
val
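A minimal sketch of how the misbehaviour (and the fix) can be observed outside the test suite, assuming a Netty 4.1.x dependency; the class name and the polling loop are illustrative only. On an affected build the promise-notification loop keeps re-scheduling tasks, so the queue is repeatedly non-empty and the worker thread spins; on a fixed build the queue should report 0 on every iteration.

```java
import io.netty.util.concurrent.UnorderedThreadPoolEventExecutor;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

public final class IdleCpuCheck {
    public static void main(String[] args) throws Exception {
        UnorderedThreadPoolEventExecutor executor = new UnorderedThreadPoolEventExecutor(1);
        try {
            // Submit a single trivial task and wait for it to complete.
            executor.submit(new Callable<String>() {
                @Override
                public String call() {
                    return "completed!";
                }
            }).sync();

            // After the task is done the internal queue should stay empty; poll it a few
            // times to see whether the executor has gone back to being idle.
            for (int i = 0; i < 10; i++) {
                System.out.println("queued tasks: " + executor.getQueue().size());
                TimeUnit.MILLISECONDS.sleep(100);
            }
        } finally {
            executor.shutdownGracefully();
        }
    }
}
```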
netty/netty/6520_6521
netty/netty
netty/netty/6520
netty/netty/6521
[ "timestamp(timedelta=7.0, similarity=0.905203241220413)" ]
f49bf4b201389bbeeb61991981945adcbe38db3c
98bd130e2dae0a71993cba76c655cfa9b903afa2
[ "Reproduction:\r\n\r\nI've used it to confirm (1) above resolves the problem (move `encoder.writeSettingsAck` earlier). In this reproduction, the client is sending a large message and misbehaves.\r\n\r\n```patch\r\ndiff --git a/example/src/main/java/io/netty/example/http2/helloworld/client/Http2Client.java b/example/src/main/java/io/netty/example/http2/helloworld/client/Http2Client.java\r\nindex bc477d52e..4845a9677 100644\r\n--- a/example/src/main/java/io/netty/example/http2/helloworld/client/Http2Client.java\r\n+++ b/example/src/main/java/io/netty/example/http2/helloworld/client/Http2Client.java\r\n@@ -58,8 +58,8 @@ public final class Http2Client {\r\n static final boolean SSL = System.getProperty(\"ssl\") != null;\r\n static final String HOST = System.getProperty(\"host\", \"127.0.0.1\");\r\n static final int PORT = Integer.parseInt(System.getProperty(\"port\", SSL? \"8443\" : \"8080\"));\r\n- static final String URL = System.getProperty(\"url\", \"/whatever\");\r\n- static final String URL2 = System.getProperty(\"url2\");\r\n+ static final String URL = System.getProperty(\"url\");\r\n+ static final String URL2 = System.getProperty(\"url2\", \"/whatever\");\r\n static final String URL2DATA = System.getProperty(\"url2data\", \"test data!\");\r\n \r\n public static void main(String[] args) throws Exception {\r\n@@ -124,7 +124,7 @@ public final class Http2Client {\r\n if (URL2 != null) {\r\n // Create a simple POST request with a body.\r\n FullHttpRequest request = new DefaultFullHttpRequest(HTTP_1_1, POST, URL2,\r\n- wrappedBuffer(URL2DATA.getBytes(CharsetUtil.UTF_8)));\r\n+ wrappedBuffer(new byte[10 * 1000 * 1000]));\r\n request.headers().add(HttpHeaderNames.HOST, hostName);\r\n request.headers().add(HttpConversionUtil.ExtensionHeaderNames.SCHEME.text(), scheme.name());\r\n request.headers().add(HttpHeaderNames.ACCEPT_ENCODING, HttpHeaderValues.GZIP);\r\ndiff --git a/example/src/main/java/io/netty/example/http2/helloworld/server/HelloWorldHttp2Handler.java b/example/src/main/java/io/netty/example/http2/helloworld/server/HelloWorldHttp2Handler.java\r\nindex a763a9de8..bc8fda032 100644\r\n--- a/example/src/main/java/io/netty/example/http2/helloworld/server/HelloWorldHttp2Handler.java\r\n+++ b/example/src/main/java/io/netty/example/http2/helloworld/server/HelloWorldHttp2Handler.java\r\n@@ -92,12 +92,20 @@ public final class HelloWorldHttp2Handler extends Http2ConnectionHandler impleme\r\n if (endOfStream) {\r\n sendResponse(ctx, streamId, data.retain());\r\n }\r\n- return processed;\r\n+ return 0;\r\n }\r\n \r\n @Override\r\n public void onHeadersRead(ChannelHandlerContext ctx, int streamId,\r\n Http2Headers headers, int padding, boolean endOfStream) {\r\n+ try {\r\n+ decoder().flowController().incrementWindowSize(connection().connectionStream(), 20 * 1000 * 1000);\r\n+ } catch (Exception ex) {\r\n+ throw new RuntimeException(ex);\r\n+ }\r\n+ io.netty.handler.codec.http2.Http2Settings settings = new io.netty.handler.codec.http2.Http2Settings();\r\n+ settings.initialWindowSize(20 * 1000 * 1000);\r\n+ encoder().writeSettings(ctx, settings, ctx.newPromise());\r\n if (endOfStream) {\r\n ByteBuf content = ctx.alloc().buffer();\r\n content.writeBytes(RESPONSE_BYTES.duplicate());\r\n```\r\n\r\nServer-side log (where client is misbehaving). 
You can see that it sends `RST_STREAM: streamId=3, errorCode=3` before the `SETTINGS: ack=true` is received.\r\n\r\n```\r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] HEADERS: streamId=3, headers=DefaultHttp2Headers[:path: /whatever, :method: POST, :scheme: http, :authority: 127.0.0.1:8080, accept-encoding: gzip, accept-encoding: deflate], streamDependency=0, weight=16, exclusive=false, padding=0, endStream=false\r\n------------------------------------\r\n16:43:12.255 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------OUTBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] WINDOW_UPDATE: streamId=0, windowSizeIncrement=20000000\r\n------------------------------------\r\n16:43:12.255 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------OUTBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] SETTINGS: ack=false, settings={INITIAL_WINDOW_SIZE=20000000}\r\n------------------------------------\r\n16:43:12.257 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] DATA: streamId=3, padding=0, endStream=false, length=16384, bytes=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...\r\n------------------------------------\r\n16:43:12.258 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] DATA: streamId=3, padding=0, endStream=false, length=16384, bytes=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...\r\n------------------------------------\r\n16:43:12.258 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] DATA: streamId=3, padding=0, endStream=false, length=16384, bytes=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...\r\n------------------------------------\r\n16:43:12.259 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] DATA: streamId=3, padding=0, endStream=false, length=16143, bytes=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...\r\n------------------------------------\r\n16:43:12.259 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] DATA: streamId=3, padding=0, endStream=false, length=240, bytes=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...\r\n------------------------------------\r\n16:43:12.260 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] DATA: streamId=3, padding=0, endStream=false, length=16384, 
bytes=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...\r\n------------------------------------\r\n16:43:12.260 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------OUTBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] WINDOW_UPDATE: streamId=3, windowSizeIncrement=32768\r\n------------------------------------\r\n16:43:12.261 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------OUTBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] RST_STREAM: streamId=3, errorCode=3\r\n------------------------------------\r\n16:43:12.262 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------OUTBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] WINDOW_UPDATE: streamId=3, windowSizeIncrement=49151\r\n------------------------------------\r\n16:43:12.262 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] DATA: streamId=3, padding=0, endStream=false, length=16384, bytes=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...\r\n------------------------------------\r\n16:43:12.262 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------OUTBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] RST_STREAM: streamId=3, errorCode=5\r\n------------------------------------\r\n16:43:12.264 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] DATA: streamId=3, padding=0, endStream=false, length=16384, bytes=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...\r\n------------------------------------\r\n16:43:12.264 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------OUTBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] RST_STREAM: streamId=3, errorCode=5\r\n------------------------------------\r\n16:43:12.264 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] DATA: streamId=3, padding=0, endStream=false, length=16384, bytes=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...\r\n------------------------------------\r\n16:43:12.265 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------OUTBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] RST_STREAM: streamId=3, errorCode=5\r\n------------------------------------\r\n16:43:12.265 [nioEventLoopGroup-2-2] INFO i.n.e.h.h.s.HelloWorldHttp2Handler - \r\n----------------INBOUND--------------------\r\n[id: 0x29fccc8d, L:/127.0.0.1:8080 - R:/127.0.0.1:59317] SETTINGS: ack=true\r\n------------------------------------\r\n```", "@ejona86 - Thanks for reporting! 
I have a unit test for this issue now and will submit a PR soon.", "@ejona86 - In terms of your 2nd suggestion I *think* we should be OK with the current setup because `Http2ConnectionHandler` forces a `flush()` in `channelReadComplete()` ... but lets handle that in a followup PR because I think the 1st suggestion is sufficient to fix this issue. any objections to this approach?", "@Scottmitch, yeah, just doing (1) now sounds fine and is lower risk." ]
[ "This doesn't actually check \"before settings are used\" in the general sense. It only tests `INITIAL_WINDOW_SIZE` handling. Maybe `testSettingsAckIsSentBeforeUsingFlowControl`?\r\n\r\nI say that because if the default flow controller avoids calling `writePendingBytes();` from `initialWindowSize(int)`, this test passes. But that wouldn't imply that other settings (e.g., `MAX_CONCURRENT_STREAMS`) would be handled correctly.", "sure sgtm", "fixed" ]
"2017-03-09T02:58:35Z"
[ "defect" ]
http2: Compatibility issue: SETTINGS ACK may be reordered
### Expected behavior When receiving a SETTINGS frame, the ACK would be sent before any frames that make use of the new settings. For example, say MAX_CONCURRENT_STREAMS=0 initially. When receiving a SETTINGS with MAX_CONCURRENT_STREAMS=1, the SETTINGS(ACK=true) would be sent before a HEADERS for a new stream. Similarly for INITIAL_WINDOW_SIZE increasing the window size; the SETTINGS(ACK=true) should be sent before any DATA makes use of the increased window. ### Actual behavior When INITIAL_WINDOW_SIZE is increased, we have seen Netty send DATA using the newly available window before the SETTINGS ACK. (Although we haven't actually confirmed the SETTINGS ACK is sent at all, because the receiver aborts as soon as the broken window is detected, while enabling logging on the java side or swapping to plaintext changes the timing so the bug doesn't manifest due to details in the server's behavior.) `encoder.remoteSettings()` [is called before](https://github.com/netty/netty/blob/netty-4.1.8.Final/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java#L452,L455) `writeSettingsAck()`. `encoder.remoteSettings()` [calls](https://github.com/netty/netty/blob/netty-4.1.8.Final/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java#L113) `flowController().initialWindowSize()` which results in [the flow controller writing pending bytes](https://github.com/netty/netty/blob/netty-4.1.8.Final/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L659). Two changes should probably be made: 1. Send the settings ack immediately before processing the settings frame in the decoder. Any failure in processing would probably result in the connection being closed, so this doesn't seem dangerous. 2. Stop triggering `writePendingBytes()` in the flow controller and instead [wait for the `flush()` like normal](https://github.com/netty/netty/blob/netty-4.1.8.Final/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L166). ### Steps to reproduce Nothing easy yet. We have seen the problem with INITIAL_WINDOW_SIZE. It is pretty racy in our environment and requires a C# gRPC server talking to our Netty-based gRPC client. You can see grpc/grpc-java#2801 and grpc/grpc#9956. With a specially-crafted server, it should be reliable to reproduce, but I didn't want to delay filing the issue. ### Versions Netty version: Netty 4.1.6, Netty 4.1.8 JVM version: 1.8.0_112 OS version: Ubuntu 14.04 ### Appendix This is the current log we can see on the C# server. It's a bit hard to read/comprehend, so I'm not sure you want to look at it much. `incoming_window_delta` does not include `INITIAL_WINDOW_SIZE`, so that's why it goes negative. You can see how one SETTINGS ACK is received for the connection handshake. 5 DATA frames are received successfully, but then it fails with "frame of size 11642 overflows incoming window of 0" while decoding the next frame. 
``` D0309 00:04:02.466914 139734348199680 src/core/lib/transport/connectivity_state.c:134: CONWATCH: 0x7f1664012118 server_transport: from IDLE [cur=READY] notify=0x7f16640074b0 D0309 00:04:02.467097 139734348199680 src/core/lib/transport/connectivity_state.c:134: CONWATCH: 0x7f1664012118 server_transport: from READY [cur=READY] notify=0x7f16640074b0 D0309 00:04:02.467170 139734348199680 src/core/ext/transport/chttp2/transport/frame_settings.c:130: SEND SETTINGS {INITIAL_WINDOW_SIZE = 65535MAX_HEADER_LIST_SIZE = 16384} D0309 00:04:02.467237 139734348199680 src/core/ext/transport/chttp2/transport/writing.c:334: FLOW write: server CREDIT TRANSPORT:incoming_window 65535 by announced 2147418112 giving 2147483647 D0309 00:04:02.474815 139734348199680 src/core/ext/transport/chttp2/transport/frame_settings.c:152: RECV SETTINGS ACK D0309 00:04:02.524103 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:397: FLOW parse: server DEBIT STREAM[3]:incoming_window_delta 0 by incoming_frame_size 16384 giving -16384 D0309 00:04:02.524163 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:413: FLOW parse: server DEBIT TRANSPORT:incoming_window 2147483647 by incoming_frame_size 16384 giving 2147467263 D0309 00:04:02.524238 139734348199680 src/core/ext/transport/chttp2/transport/frame_settings.c:130: SEND SETTINGS {INITIAL_WINDOW_SIZE = 77177} D0309 00:04:02.524863 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:397: FLOW parse: server DEBIT STREAM[3]:incoming_window_delta -16384 by incoming_frame_size 16384 giving -32768 D0309 00:04:02.524889 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:413: FLOW parse: server DEBIT TRANSPORT:incoming_window 2147467263 by incoming_frame_size 16384 giving 2147450879 D0309 00:04:02.525761 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:397: FLOW parse: server DEBIT STREAM[3]:incoming_window_delta -32768 by incoming_frame_size 16384 giving -49152 D0309 00:04:02.525834 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:413: FLOW parse: server DEBIT TRANSPORT:incoming_window 2147450879 by incoming_frame_size 16384 giving 2147434495 D0309 00:04:02.529791 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:397: FLOW parse: server DEBIT STREAM[3]:incoming_window_delta -49152 by incoming_frame_size 16239 giving -65391 D0309 00:04:02.529863 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:413: FLOW parse: server DEBIT TRANSPORT:incoming_window 2147434495 by incoming_frame_size 16239 giving 2147418256 D0309 00:04:02.529917 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:397: FLOW parse: server DEBIT STREAM[3]:incoming_window_delta -65391 by incoming_frame_size 144 giving -65535 D0309 00:04:02.529970 139734348199680 src/core/ext/transport/chttp2/transport/parsing.c:413: FLOW parse: server DEBIT TRANSPORT:incoming_window 2147418256 by incoming_frame_size 144 giving 2147418112 D0309 00:04:02.530253 139734348199680 src/core/lib/transport/connectivity_state.c:184: SET: 0x7f1664012118 server_transport: READY --> SHUTDOWN [close_transport] error=0x7f1664004560 {"created":"@1489017842.530171783","description":"Delayed close due to in-progress write","file":"src/core/ext/transport/chttp2/transport/chttp2_transport.c","file_line":517,"grpc_status":14,"referenced_errors":[{"created":"@1489017842.530170049","description":"Failed parsing 
HTTP/2","file":"src/core/ext/transport/chttp2/transport/chttp2_transport.c","file_line":1971,"referenced_errors":[{"created":"@1489017842.530012238","description":"frame of size 11642 overflows incoming window of 0","file":"src/core/ext/transport/chttp2/transport/parsing.c","file_line":391}]}]} ```
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java index 333937113e2..ef643fafadc 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java @@ -402,11 +402,13 @@ private void applyLocalSettings(Http2Settings settings) throws Http2Exception { @Override public void onSettingsRead(ChannelHandlerContext ctx, Http2Settings settings) throws Http2Exception { - encoder.remoteSettings(settings); - - // Acknowledge receipt of the settings. + // Acknowledge receipt of the settings. We should do this before we process the settings to ensure our + // remote peer applies these settings before any subsequent frames that we may send which depend upon these + // new settings. See https://github.com/netty/netty/issues/6520. encoder.writeSettingsAck(ctx, ctx.newPromise()); + encoder.remoteSettings(settings); + listener.onSettingsRead(ctx, settings); }
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java index cb70ec6225a..f5de427b4a8 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java @@ -347,6 +347,95 @@ public void operationComplete(ChannelFuture future) throws Exception { verify(clientListener, never()).onRstStreamRead(any(ChannelHandlerContext.class), anyInt(), anyLong()); } + @Test + public void testSettingsAckIsSentBeforeSettingsAreUsed() throws Exception { + bootstrapEnv(1, 1, 1, 1); + + final CountDownLatch serverSettingsAckLatch1 = new CountDownLatch(1); + final CountDownLatch serverSettingsAckLatch2 = new CountDownLatch(2); + final CountDownLatch serverDataLatch = new CountDownLatch(1); + final CountDownLatch clientWriteDataLatch = new CountDownLatch(1); + final byte[] data = new byte[] {1, 2, 3, 4, 5}; + final ByteArrayOutputStream out = new ByteArrayOutputStream(data.length); + + doAnswer(new Answer<Void>() { + @Override + public Void answer(InvocationOnMock invocationOnMock) throws Throwable { + serverSettingsAckLatch1.countDown(); + serverSettingsAckLatch2.countDown(); + return null; + } + }).when(serverListener).onSettingsAckRead(any(ChannelHandlerContext.class)); + doAnswer(new Answer<Integer>() { + @Override + public Integer answer(InvocationOnMock in) throws Throwable { + ByteBuf buf = (ByteBuf) in.getArguments()[2]; + int padding = (Integer) in.getArguments()[3]; + int processedBytes = buf.readableBytes() + padding; + + buf.readBytes(out, buf.readableBytes()); + serverDataLatch.countDown(); + return processedBytes; + } + }).when(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3), + any(ByteBuf.class), eq(0), anyBoolean()); + + final Http2Headers headers = dummyHeaders(); + + // The server initially reduces the connection flow control window to 0. + runInChannel(serverConnectedChannel, new Http2Runnable() { + @Override + public void run() throws Http2Exception { + http2Server.encoder().writeSettings(serverCtx(), + new Http2Settings().copyFrom(http2Server.decoder().localSettings()) + .initialWindowSize(0), + serverNewPromise()); + http2Server.flush(serverCtx()); + } + }); + + assertTrue(serverSettingsAckLatch1.await(DEFAULT_AWAIT_TIMEOUT_SECONDS, SECONDS)); + + // The client should now attempt to send data, but the window size is 0 so it will be queued in the flow + // controller. + runInChannel(clientChannel, new Http2Runnable() { + @Override + public void run() throws Http2Exception { + http2Client.encoder().writeHeaders(ctx(), 3, headers, 0, (short) 16, false, 0, false, + newPromise()); + http2Client.encoder().writeData(ctx(), 3, Unpooled.wrappedBuffer(data), 0, true, newPromise()); + http2Client.flush(ctx()); + clientWriteDataLatch.countDown(); + } + }); + + assertTrue(clientWriteDataLatch.await(DEFAULT_AWAIT_TIMEOUT_SECONDS, SECONDS)); + + // Now the server opens up the connection window to allow the client to send the pending data. 
+ runInChannel(serverConnectedChannel, new Http2Runnable() { + @Override + public void run() throws Http2Exception { + http2Server.encoder().writeSettings(serverCtx(), + new Http2Settings().copyFrom(http2Server.decoder().localSettings()) + .initialWindowSize(data.length), + serverNewPromise()); + http2Server.flush(serverCtx()); + } + }); + + assertTrue(serverSettingsAckLatch2.await(DEFAULT_AWAIT_TIMEOUT_SECONDS, SECONDS)); + assertTrue(serverDataLatch.await(DEFAULT_AWAIT_TIMEOUT_SECONDS, SECONDS)); + assertArrayEquals(data, out.toByteArray()); + + // Verify that no errors have been received. + verify(serverListener, never()).onGoAwayRead(any(ChannelHandlerContext.class), anyInt(), anyLong(), + any(ByteBuf.class)); + verify(serverListener, never()).onRstStreamRead(any(ChannelHandlerContext.class), anyInt(), anyLong()); + verify(clientListener, never()).onGoAwayRead(any(ChannelHandlerContext.class), anyInt(), anyLong(), + any(ByteBuf.class)); + verify(clientListener, never()).onRstStreamRead(any(ChannelHandlerContext.class), anyInt(), anyLong()); + } + @Test public void priorityUsingHigherValuedStreamIdDoesNotPreventUsingLowerStreamId() throws Exception { bootstrapEnv(1, 1, 2, 0);
train
train
2017-03-09T02:09:17
"2017-03-09T00:09:52Z"
ejona86
val
netty/netty/6551_6552
netty/netty
netty/netty/6551
netty/netty/6552
[ "timestamp(timedelta=1.0, similarity=0.8946069731133737)" ]
9c1a1916962354f1e69fe43107219c25a86e6cef
c8cf45faab998dd2a922826fad20e900f01c49a2
[ "@vkostyukov this is a bug... Please open a PR FTW :)", "Thanks @normanmaurer! I'm on it!", "Fixed by https://github.com/netty/netty/pull/6552" ]
[]
"2017-03-20T20:19:43Z"
[ "defect" ]
Http2StreamChannel doesn't inherit config of the parent channel
# Problem As you are probably aware, Finagle is a bit special when it comes to Netty 4 usage. For example, we neither use pooling nor prefer direct buffers at the moment. We're getting close to the pooled/direct utopia, but we're not there yet. That said, Finagle's HTTP/2 pipelines are unpooled and yet we've discovered that inbound messages on the server side are pooled in some cases. It turns out that `Http2StreamChannel` doesn't inherit the channel config from the parent channel (in our case, a channel with an unpooled allocator) and just [defaults to a standard setup effectively enabling pooling](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2StreamChannel.java#L58). A possible workaround is to override the default allocator (`-Dio.netty.allocator.type`). Is there any reason not to pass the parent's channel config to the `Http2StreamChannel` constructor to make sure it re-uses its settings? I'm happy to make a PR if that sounds like a reasonable solution. I'm not excluding the possibility that this is done intentionally for HTTP/2 and there is a good reason not to reuse the parent's channel config. I apologize if that's the case and I'm eager to hear your thoughts about this. # Env Details * Netty version: 4.1.9-Final * JVM version: 1.8.0_111-b14 * OS: Darwin 15.6.0
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java index e03abd2165d..83d9eb5ce7c 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java @@ -305,6 +305,12 @@ ChannelFuture createStreamChannel(Channel parentChannel, EventLoopGroup group, C } channel.pipeline().addLast(handler); + // We need to copy parent's channel options into a child's options to make + // sure they share same allocator, same receive buffer allocator, etc. + // + // See https://github.com/netty/netty/issues/6551 + channel.config().setOptions(parentChannel.config().getOptions()); + initOpts(channel, options); initAttrs(channel, attrs);
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java index 95e57a5e24a..c17c1970da0 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java @@ -39,6 +39,7 @@ import org.junit.Test; import static io.netty.util.ReferenceCountUtil.release; + import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNotNull; @@ -343,6 +344,28 @@ public void settingChannelOptsAndAttrsOnBootstrap() { assertEquals("bar", channel.attr(key).get()); } + @Test + public void childChannelShouldShareParentsChannelOptions() { + EmbeddedChannel parent = new EmbeddedChannel(); + parent.config().setAutoRead(false); + parent.config().setWriteSpinCount(42); + + Http2StreamChannelBootstrap b = new Http2StreamChannelBootstrap(); + parent.pipeline().addLast(new Http2MultiplexCodec(true, b)); + + Channel child = b + .parentChannel(parent) + .handler(new TestChannelInitializer()) + .connect() + .channel(); + + assertFalse(child.config().isAutoRead()); + assertEquals(child.config().getWriteSpinCount(), 42); + + child.close(); + assertTrue(!parent.finish()); + } + @Test public void outboundStreamShouldWriteGoAwayWithoutReset() { childChannelInitializer.handler = new ChannelInboundHandlerAdapter() {
test
train
2017-03-19T16:17:29
"2017-03-17T21:28:30Z"
vkostyukov
val
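Until the copied-options fix above is available, the only knobs are the global `-Dio.netty.allocator.type` property mentioned in the report or explicit allocator options on the bootstrap. Below is a sketch of the explicit wiring, assuming Netty 4.1.x; the class name is invented, and note that before this fix the `Http2StreamChannel` created by `Http2MultiplexCodec` still ignored the child option, which is exactly the behaviour being reported.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.UnpooledByteBufAllocator;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public final class UnpooledServerSetup {
    public static void main(String[] args) {
        NioEventLoopGroup group = new NioEventLoopGroup();
        ServerBootstrap b = new ServerBootstrap()
                .group(group)
                .channel(NioServerSocketChannel.class)
                // Pin the allocator on the parent and the accepted child channels instead
                // of relying on the global default, so handlers reading config() from
                // their own channel see the unpooled allocator.
                .option(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT)
                .childOption(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // HTTP/2 handlers (e.g. Http2MultiplexCodec) would be added here.
                    }
                });
        // b.bind(...) omitted; this sketch only shows the option wiring.
        group.shutdownGracefully();
    }
}
```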
netty/netty/6549_6587
netty/netty
netty/netty/6549
netty/netty/6587
[ "timestamp(timedelta=199.0, similarity=0.8867371246394287)" ]
ef21d5f4caf904e9035e6e8ffec503e554cb6d61
56a9f604b64396c0eb4127a7eb65d769f04c4072
[ "see https://github.com/netty/netty/pull/6587" ]
[]
"2017-03-30T23:09:49Z"
[ "defect" ]
HttpServerKeepAliveHandler closes connection on 204 response
### Expected behavior Netty Http Server responds 204 without a body, keeping the connection open. ### Actual behavior `HttpServerKeepAliveHandler` closes the connection unless the response is `Informational` or has `Content-Length` set, but 204 responses are not Informational and yet must not carry a `Content-Length` header [rfc](https://tools.ietf.org/html/rfc7230#section-3.3.2). ``` A server MUST NOT send a Content-Length header field in any response with a status code of 1xx (Informational) or 204 (No Content). A server MUST NOT send a Content-Length header field in any 2xx (Successful) response to a CONNECT request ``` A condition regarding 204 responses should be added [here](https://github.com/netty/netty/blob/149916d052180dd2b0e0e98598f349454ea200ba/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerKeepAliveHandler.java#L115). ### Steps to reproduce Create an HTTP server whose pipeline includes `https://github.com/netty/netty/blob/149916d052180dd2b0e0e98598f349454ea200ba/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerKeepAliveHandler.java#L115` and send a 204 response without a body and without `Content-Length`.
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpServerKeepAliveHandler.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpServerKeepAliveHandler.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpServerKeepAliveHandlerTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerKeepAliveHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerKeepAliveHandler.java index 1930c0ec804..c6305a5e2a2 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerKeepAliveHandler.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerKeepAliveHandler.java @@ -103,6 +103,7 @@ private boolean shouldKeepAlive() { * <p> * <ul> * <li>See <a href="https://tools.ietf.org/html/rfc7230#section-6.3"/></li> + * <li>See <a href="https://tools.ietf.org/html/rfc7230#section-3.3.2"/></li> * <li>See <a href="https://tools.ietf.org/html/rfc7230#section-3.3.3"/></li> * </ul> * @@ -112,7 +113,7 @@ private boolean shouldKeepAlive() { */ private static boolean isSelfDefinedMessageLength(HttpResponse response) { return isContentLengthSet(response) || isTransferEncodingChunked(response) || isMultipart(response) || - isInformational(response); + isInformational(response) || response.status().code() == HttpResponseStatus.NO_CONTENT.code(); } private static boolean isInformational(HttpResponse response) {
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerKeepAliveHandlerTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerKeepAliveHandlerTest.java index 29087891fe7..bcd64bebad0 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerKeepAliveHandlerTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerKeepAliveHandlerTest.java @@ -27,9 +27,19 @@ import java.util.Arrays; import java.util.Collection; -import static io.netty.handler.codec.http.HttpHeaderValues.*; -import static io.netty.handler.codec.http.HttpUtil.*; -import static org.junit.Assert.*; +import static io.netty.handler.codec.http.HttpHeaderValues.CLOSE; +import static io.netty.handler.codec.http.HttpHeaderValues.KEEP_ALIVE; +import static io.netty.handler.codec.http.HttpHeaderValues.MULTIPART_MIXED; +import static io.netty.handler.codec.http.HttpResponseStatus.NO_CONTENT; +import static io.netty.handler.codec.http.HttpResponseStatus.OK; +import static io.netty.handler.codec.http.HttpUtil.isContentLengthSet; +import static io.netty.handler.codec.http.HttpUtil.isKeepAlive; +import static io.netty.handler.codec.http.HttpUtil.setContentLength; +import static io.netty.handler.codec.http.HttpUtil.setKeepAlive; +import static io.netty.handler.codec.http.HttpUtil.setTransferEncodingChunked; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; @RunWith(Parameterized.class) public class HttpServerKeepAliveHandlerTest { @@ -41,6 +51,7 @@ public class HttpServerKeepAliveHandlerTest { private final boolean isKeepAliveResponseExpected; private final HttpVersion httpVersion; + private final HttpResponseStatus responseStatus; private final String sendKeepAlive; private final int setSelfDefinedMessageLength; private final String setResponseConnection; @@ -49,27 +60,30 @@ public class HttpServerKeepAliveHandlerTest { @Parameters public static Collection<Object[]> keepAliveProvider() { return Arrays.asList(new Object[][] { - { true, HttpVersion.HTTP_1_0, REQUEST_KEEP_ALIVE, SET_RESPONSE_LENGTH, KEEP_ALIVE }, // 0 - { true, HttpVersion.HTTP_1_0, REQUEST_KEEP_ALIVE, SET_MULTIPART, KEEP_ALIVE }, // 1 - { false, HttpVersion.HTTP_1_0, null, SET_RESPONSE_LENGTH, null }, // 2 - { true, HttpVersion.HTTP_1_1, REQUEST_KEEP_ALIVE, SET_RESPONSE_LENGTH, null }, // 3 - { false, HttpVersion.HTTP_1_1, REQUEST_KEEP_ALIVE, SET_RESPONSE_LENGTH, CLOSE }, // 4 - { true, HttpVersion.HTTP_1_1, REQUEST_KEEP_ALIVE, SET_MULTIPART, null }, // 5 - { true, HttpVersion.HTTP_1_1, REQUEST_KEEP_ALIVE, SET_CHUNKED, null }, // 6 - { false, HttpVersion.HTTP_1_1, null, SET_RESPONSE_LENGTH, null }, // 7 - { false, HttpVersion.HTTP_1_0, REQUEST_KEEP_ALIVE, NOT_SELF_DEFINED_MSG_LENGTH, null }, // 8 - { false, HttpVersion.HTTP_1_0, null, NOT_SELF_DEFINED_MSG_LENGTH, null }, // 9 - { false, HttpVersion.HTTP_1_1, REQUEST_KEEP_ALIVE, NOT_SELF_DEFINED_MSG_LENGTH, null }, // 10 - { false, HttpVersion.HTTP_1_1, null, NOT_SELF_DEFINED_MSG_LENGTH, null }, // 11 - { false, HttpVersion.HTTP_1_0, REQUEST_KEEP_ALIVE, SET_RESPONSE_LENGTH, null }, // 12 + { true, HttpVersion.HTTP_1_0, OK, REQUEST_KEEP_ALIVE, SET_RESPONSE_LENGTH, KEEP_ALIVE }, // 0 + { true, HttpVersion.HTTP_1_0, OK, REQUEST_KEEP_ALIVE, SET_MULTIPART, KEEP_ALIVE }, // 1 + { false, HttpVersion.HTTP_1_0, OK, null, SET_RESPONSE_LENGTH, null }, // 2 + { true, HttpVersion.HTTP_1_1, OK, REQUEST_KEEP_ALIVE, SET_RESPONSE_LENGTH, null }, // 3 + { false, 
HttpVersion.HTTP_1_1, OK, REQUEST_KEEP_ALIVE, SET_RESPONSE_LENGTH, CLOSE }, // 4 + { true, HttpVersion.HTTP_1_1, OK, REQUEST_KEEP_ALIVE, SET_MULTIPART, null }, // 5 + { true, HttpVersion.HTTP_1_1, OK, REQUEST_KEEP_ALIVE, SET_CHUNKED, null }, // 6 + { false, HttpVersion.HTTP_1_1, OK, null, SET_RESPONSE_LENGTH, null }, // 7 + { false, HttpVersion.HTTP_1_0, OK, REQUEST_KEEP_ALIVE, NOT_SELF_DEFINED_MSG_LENGTH, null }, // 8 + { false, HttpVersion.HTTP_1_0, OK, null, NOT_SELF_DEFINED_MSG_LENGTH, null }, // 9 + { false, HttpVersion.HTTP_1_1, OK, REQUEST_KEEP_ALIVE, NOT_SELF_DEFINED_MSG_LENGTH, null }, // 10 + { false, HttpVersion.HTTP_1_1, OK, null, NOT_SELF_DEFINED_MSG_LENGTH, null }, // 11 + { false, HttpVersion.HTTP_1_0, OK, REQUEST_KEEP_ALIVE, SET_RESPONSE_LENGTH, null }, // 12 + { true, HttpVersion.HTTP_1_1, NO_CONTENT, REQUEST_KEEP_ALIVE, NOT_SELF_DEFINED_MSG_LENGTH, null}, // 13 + { false, HttpVersion.HTTP_1_0, NO_CONTENT, null, NOT_SELF_DEFINED_MSG_LENGTH, null} // 14 }); } public HttpServerKeepAliveHandlerTest(boolean isKeepAliveResponseExpected, HttpVersion httpVersion, - String sendKeepAlive, + HttpResponseStatus responseStatus, String sendKeepAlive, int setSelfDefinedMessageLength, CharSequence setResponseConnection) { this.isKeepAliveResponseExpected = isKeepAliveResponseExpected; this.httpVersion = httpVersion; + this.responseStatus = responseStatus; this.sendKeepAlive = sendKeepAlive; this.setSelfDefinedMessageLength = setSelfDefinedMessageLength; this.setResponseConnection = setResponseConnection == null? null : setResponseConnection.toString(); @@ -84,7 +98,7 @@ public void setUp() { public void test_KeepAlive() throws Exception { FullHttpRequest request = new DefaultFullHttpRequest(httpVersion, HttpMethod.GET, "/v1/foo/bar"); setKeepAlive(request, REQUEST_KEEP_ALIVE.equals(sendKeepAlive)); - HttpResponse response = new DefaultFullHttpResponse(httpVersion, HttpResponseStatus.OK); + HttpResponse response = new DefaultFullHttpResponse(httpVersion, responseStatus); if (!StringUtil.isNullOrEmpty(setResponseConnection)) { response.headers().set(HttpHeaderNames.CONNECTION, setResponseConnection); } @@ -111,7 +125,7 @@ public void test_PipelineKeepAlive() { setKeepAlive(secondRequest, REQUEST_KEEP_ALIVE.equals(sendKeepAlive)); FullHttpRequest finalRequest = new DefaultFullHttpRequest(httpVersion, HttpMethod.GET, "/v1/foo/bar"); setKeepAlive(finalRequest, false); - FullHttpResponse response = new DefaultFullHttpResponse(httpVersion, HttpResponseStatus.OK); + FullHttpResponse response = new DefaultFullHttpResponse(httpVersion, responseStatus); FullHttpResponse informationalResp = new DefaultFullHttpResponse(httpVersion, HttpResponseStatus.PROCESSING); setKeepAlive(response, true); setContentLength(response, 0);
train
train
2017-03-30T20:52:03
"2017-03-16T17:47:30Z"
sschepens
val
netty/netty/6579_6589
netty/netty
netty/netty/6579
netty/netty/6589
[ "timestamp(timedelta=0.0, similarity=0.8901193503397146)", "keyword_pr_to_issue" ]
ef21d5f4caf904e9035e6e8ffec503e554cb6d61
f4c635d30b23be6ce51866704c178232e98015e7
[ "@nmittler @ejona86 can we get around this without exposing it again ? ", "I'll defer to @ejona86 and @carl-mastrangelo ", "I just chatted with @carl-mastrangelo offline and this is blocking them from migrating to 4.1.9.Final. Easiest thing would be to just expose it.\r\n\r\n@Scottmitch any objections?", "FYI: we (gRPC Team) don't mind too much working around breaking method signatures and moving code. We accept that the HTTP/2 API is unstable. ", "Thanks everyone for working on this. I'm really existed to use this when released.", "@guillaumecle - Just to clarify your end goal is just to use gRPC after it has consumed a new version of Netty, not necessarily directly using the HPACK classes, correct?\r\n\r\nI would like to avoid exposing HPACK if possible (see https://github.com/netty/netty/issues/6591)", "Correct, especially use cloud-bigtable-client after it has consumed new versions of gRPC and Netty.", "We are really eager to consume this fix, any idea of when Netty 4.1.10.Final will be released?", "@guillaumecle end of this week." ]
[ "if we make it public again can we at least make it final ?", "Yup - done. I wasn't sure at first whether gRPC needed to extend but it looks like it doesn't.", "having this be public can be dangerous as indicated in the comments. Can this stay package private?" ]
"2017-03-31T00:25:40Z"
[]
Expose the hpack decoder
Hello, grpc is using the hpack decoder in [GrpcHttp2HeadersDecoder](https://github.com/grpc/grpc-java/blob/master/netty/src/main/java/io/grpc/netty/GrpcHttp2HeadersDecoder.java) and in netty 4.1.9.Final its visibility has been [changed to private](https://github.com/zer0se7en/netty/pull/2/commits/f9001b9fc07a71a9d6eaf0462470416780302107). This is preventing grpc to be used with netty 4.1.9.Final and with netty-tcnative 2.0.0.Final. Would it be possible to expose back this feature to the public api? Thank you, Reference discussion: https://groups.google.com/forum/#!topic/grpc-io/jkgIbSC9PlI
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java" ]
[]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java index dad2c6fc1d8..e4d5c5f0d4d 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java @@ -49,7 +49,7 @@ import static io.netty.util.internal.ObjectUtil.checkPositive; import static io.netty.util.internal.ThrowableUtil.unknownStackTrace; -final class HpackDecoder { +public final class HpackDecoder { private static final Http2Exception DECODE_ULE_128_DECOMPRESSION_EXCEPTION = unknownStackTrace( connectionError(COMPRESSION_ERROR, "HPACK - decompression failure"), HpackDecoder.class, "decodeULE128(..)"); @@ -96,7 +96,7 @@ final class HpackDecoder { * (which is dangerous). * @param initialHuffmanDecodeCapacity Size of an intermediate buffer used during huffman decode. */ - HpackDecoder(long maxHeaderListSize, int initialHuffmanDecodeCapacity) { + public HpackDecoder(long maxHeaderListSize, int initialHuffmanDecodeCapacity) { this(maxHeaderListSize, initialHuffmanDecodeCapacity, DEFAULT_HEADER_TABLE_SIZE); } @@ -104,7 +104,7 @@ final class HpackDecoder { * Exposed Used for testing only! Default values used in the initial settings frame are overriden intentionally * for testing but violate the RFC if used outside the scope of testing. */ - HpackDecoder(long maxHeaderListSize, int initialHuffmanDecodeCapacity, int maxHeaderTableSize) { + public HpackDecoder(long maxHeaderListSize, int initialHuffmanDecodeCapacity, int maxHeaderTableSize) { this.maxHeaderListSize = checkPositive(maxHeaderListSize, "maxHeaderListSize"); this.maxHeaderListSizeGoAway = Http2CodecUtil.calculateMaxHeaderListSizeGoAway(maxHeaderListSize);
null
train
train
2017-03-30T20:52:03
"2017-03-28T23:10:17Z"
guillaumecle
val
netty/netty/6347_6590
netty/netty
netty/netty/6347
netty/netty/6590
[ "timestamp(timedelta=18.0, similarity=0.8480042062936383)" ]
38b054c65cc655bb4966517abeb32c1246c5c7e3
8dcf9257976eb59ff4b6a1e3392d61c460cb97aa
[ "Somewhat related with Java 9 without these options I get\r\n\r\n```\r\njava.lang.NullPointerException\r\n\tat io.netty.resolver.dns.DnsNameResolver.<init>(DnsNameResolver.java:250)\r\n\tat io.netty.resolver.dns.DnsNameResolverBuilder.build(DnsNameResolverBuilder.java:347)\r\n\tat com.baulsupp.oksocial.network.NettyDns.<init>(NettyDns.java:49)\r\n\tat com.baulsupp.oksocial.network.NettyDns.byName(NettyDns.java:92)\r\n\tat com.baulsupp.oksocial.Main.createClientBuilder(Main.java:589)\r\n\tat com.baulsupp.oksocial.Main.initialise(Main.java:425)\r\n\tat com.baulsupp.oksocial.Main.run(Main.java:277)\r\n\tat com.baulsupp.oksocial.Main.main(Main.java:106)\r\n\tat com.baulsupp.oksocial.TestMain.main(TestMain.java:5)\r\n```\r\n\r\nBecause b.register() returns a failed ChannelFuture with null channel\r\n\r\n```\r\n channelFuture = responseHandler.channelActivePromise;\r\n ch = (DatagramChannel) b.register().channel();\r\n ch.config().setRecvByteBufAllocator(new FixedRecvByteBufAllocator(maxPayloadSize));\r\n```", "@yschimke thanks for reporting... working on a fix", "@yschimke fixed by https://github.com/netty/netty/pull/6590 . Please check", "As commented on the other issue, this worked for me", "should have just commented...", "Fixed by https://github.com/netty/netty/pull/6590" ]
[ "How about: `Skipping a malformed nameserver URI: {}` ?", "done", "Not possible as the constructor expects a `Hashtable` 👎 ", "nope...", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Extract this nested try block into a separate method. [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1141)\n", "![MAJOR](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/severity-major.png) Replace the synchronized class \"Hashtable\" by an unsynchronized one such as \"HashMap\". [![rule](https://raw.githubusercontent.com/SonarCommunity/sonar-github/master/images/rule.png)](https://garage.netty.io/sonarqube/coding_rules#rule_key=squid%3AS1149)\n" ]
"2017-03-31T06:09:13Z"
[ "improvement" ]
DnsNameResolver uses public dns servers in java 9
I realise this is known, but it seems broken enough to deserve a tracking issue. ### Expected behavior Should use system dns servers ### Actual behavior Logs a warning "Default DNS servers: [/8.8.8.8:53, /8.8.4.4:53] (Google Public DNS as a fallback)" Then uses the google servers. n.b. It logs the warning even *if* you manually set the nameServerAddresses .nameServerAddresses( DnsServerAddresses.sequential(new InetSocketAddress("x.x.x.x", 53), new InetSocketAddress("x.x.x.x", 53))) ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) ### Netty version ### JVM version (e.g. `java -version`) java version "9-ea" Java(TM) SE Runtime Environment (build 9-ea+149) Java HotSpot(TM) 64-Bit Server VM (build 9-ea+149, mixed mode) ### OS version (e.g. `uname -a`) ### Workaround set on command line --add-opens=java.base/sun.net.dns=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java" ]
[]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java index 505ecfb4ae2..b7c5bd719fa 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java @@ -20,10 +20,17 @@ import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; +import javax.naming.Context; +import javax.naming.NamingException; +import javax.naming.directory.DirContext; +import javax.naming.directory.InitialDirContext; import java.lang.reflect.Method; import java.net.InetSocketAddress; +import java.net.URI; +import java.net.URISyntaxException; import java.util.ArrayList; import java.util.Collections; +import java.util.Hashtable; import java.util.List; import static io.netty.resolver.dns.DnsServerAddresses.sequential; @@ -47,22 +54,48 @@ public final class DefaultDnsServerAddressStreamProvider implements DnsServerAdd static { final List<InetSocketAddress> defaultNameServers = new ArrayList<InetSocketAddress>(2); + + // Using jndi-dns to obtain the default name servers. + // + // See: + // - http://docs.oracle.com/javase/8/docs/technotes/guides/jndi/jndi-dns.html + // - http://mail.openjdk.java.net/pipermail/net-dev/2017-March/010695.html + Hashtable<String, String> env = new Hashtable<String, String>(); + env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory"); + env.put("java.naming.provider.url", "dns://"); try { - Class<?> configClass = Class.forName("sun.net.dns.ResolverConfiguration"); - Method open = configClass.getMethod("open"); - Method nameservers = configClass.getMethod("nameservers"); - Object instance = open.invoke(null); - - @SuppressWarnings("unchecked") - final List<String> list = (List<String>) nameservers.invoke(instance); - for (String a: list) { - if (a != null) { - defaultNameServers.add(new InetSocketAddress(SocketUtils.addressByName(a), DNS_PORT)); + DirContext ctx = new InitialDirContext(env); + String dnsUrls = (String) ctx.getEnvironment().get("java.naming.provider.url"); + String[] servers = dnsUrls.split(" "); + for (String server : servers) { + try { + defaultNameServers.add(SocketUtils.socketAddress(new URI(server).getHost(), DNS_PORT)); + } catch (URISyntaxException e) { + logger.debug("Skipping a malformed nameserver URI: {}", server, e); + } + } + } catch (NamingException ignore) { + // Will try reflection if this fails. + } + + if (defaultNameServers.isEmpty()) { + try { + Class<?> configClass = Class.forName("sun.net.dns.ResolverConfiguration"); + Method open = configClass.getMethod("open"); + Method nameservers = configClass.getMethod("nameservers"); + Object instance = open.invoke(null); + + @SuppressWarnings("unchecked") + final List<String> list = (List<String>) nameservers.invoke(instance); + for (String a: list) { + if (a != null) { + defaultNameServers.add(new InetSocketAddress(SocketUtils.addressByName(a), DNS_PORT)); + } } + } catch (Exception ignore) { + // Failed to get the system name server list via reflection. + // Will add the default name servers afterwards. } - } catch (Exception ignore) { - // Failed to get the system name server list. - // Will add the default name servers afterwards. } if (!defaultNameServers.isEmpty()) {
null
test
train
2017-04-19T07:26:26
"2017-02-11T10:52:37Z"
yschimke
val
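The gold patch in the record above replaces the sun.net.dns reflection hack with a jndi-dns lookup of the platform's nameservers. Below is a rough standalone sketch of that lookup technique; it assumes the JDK ships the com.sun.jndi.dns provider, and the class name and printed output are illustrative rather than part of the record:

```java
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;

// Sketch: ask the jndi-dns provider for the nameservers configured on this host,
// the same technique the patched DefaultDnsServerAddressStreamProvider uses.
public final class JndiNameServersSketch {
    private static final int DNS_PORT = 53;

    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        env.put("java.naming.provider.url", "dns://");

        InitialDirContext ctx = new InitialDirContext(env);
        // After construction the provider URL holds a space-separated list such as
        // "dns://192.168.1.1 dns://192.168.1.2".
        String dnsUrls = (String) ctx.getEnvironment().get("java.naming.provider.url");

        List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
        for (String url : dnsUrls.split(" ")) {
            try {
                String host = new URI(url).getHost();
                if (host != null) {
                    servers.add(new InetSocketAddress(host, DNS_PORT));
                }
            } catch (URISyntaxException e) {
                // Skip malformed entries, as the patch does.
            }
        }
        System.out.println("System nameservers: " + servers);
    }
}
```

As the patch shows, when the jndi-dns lookup yields nothing the resolver still falls back to the old reflective path and, failing that, to the default public nameservers.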
netty/netty/6583_6604
netty/netty
netty/netty/6583
netty/netty/6604
[ "timestamp(timedelta=12.0, similarity=0.8510605653766261)" ]
155983f1a1384b2dc31a637da50bf7ecd3622927
9bff295cff30bb3fb8bd9a91f738ac705cbfac23
[ "@vkostyukov yes this is a bug. Can you please submit a pr ?", "@vkostyukov maybe let me also think about if we can fix this in a more generic way so it \"just works\" for all handlers.", "@vkostyukov so yes just do a PR for now. I may be able to come up with a more general solution but not sure yet and it may take some time. ", "Thanks @normanmaurer! I think fixing it in `MessageAggregator` will be generic enough since it will also fix `HttpObjectMessageAggretator`, `WebSocketFrameAggregator`, `AbstractMemcacheObjectAggregator`, `RedisBulkStringAggregator`, etc.\r\n\r\nWorking on a fix and a test.", "So I was playing with the implementation today and it turns out it's not that easy to do.\r\n\r\nBecause `MesasgeAggregator` is an inbound handler, there is no way to intercept a read request from one of the next handlers, which I planned to use as a predicate whether or not start managing the read-flow (think about it this way: there is no need to keep reading if nobody asked us to). We could probably just do it (no matter if the read request was issued or not) and it will be more-or-less fine. I'd personally prefer this \"implicit\" auto-read=true (which we could document) for the pipelines with installed message aggregator rather than stalled connections, but I can totally see why people may not like this magical behavior change.\r\n\r\n@normanmaurer, @Scottmitch What do you think about that?", "Another option would be just changing `MessageAggregator` to be a duplex channel handler but that's a public API change.", "@vkostyukov - Can we use a similar strategy as [ByteToMessageDecoder](https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java#L304)? `MessageAggregator` would essentially track if anything was added to `out` in `channelRead`, and check/reset that state in `channelReadComplete`.", "Oh I like it @Scottmitch! Thans a lot! Going to try this out.", "Fixed by https://github.com/netty/netty/pull/6604 .. @vkostyukov thanks!" ]
[ "please fix formatting ", "use `Unpooled.copiedBuffer(..., CharsetUtil.US_ASCII);`", "fix formatting", "use `embedded.readInbound()` and so train the read messages. Also assert its what you expect and ensure you release everything", "private final", "nit: `Unpooled.copiedBuffer(string, CharsetUtil.US_ASCII);`\r\n\r\nNo need to first convert to byte[] and then do the copy", "Sure! Just updated the PR." ]
"2017-04-05T00:26:24Z"
[ "defect" ]
MessageAggregator doesn't manage read flow (auto-read is disabled)
### Expected behavior This was discovered in Finagle's [HTTP proxy support handler](https://github.com/twitter/finagle/blob/develop/finagle-netty4/src/main/scala/com/twitter/finagle/netty4/proxy/HttpProxyConnectHandler.scala#L103-L106). Calling `ctx.read()` (when auto-read is disabled) in any handler placed right after the [`MessageAggregator`](https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java) (or `HttpObjectAggregator`) should guarantee at least one inbound message to be fired in a feasible future. ### Actual behavior `MessageAggregator` doesn't manage the read-flow when auto-read is disabled so the connection just stalls if there are several things that have to be aggregated. I searched for `read()` in `MessageAggregator` and didn't find anything. I think we should keep issuing read-requests from `MessageAggregator` if (1) a previous handler issued a read-request and (2) message aggregation is still in progress. Please, let me know what do you think about that. As usual, I'm happy to prepare a PR. ### Environment: Netty 4.1.9-Final
[ "codec/src/main/java/io/netty/handler/codec/MessageAggregator.java" ]
[ "codec/src/main/java/io/netty/handler/codec/MessageAggregator.java" ]
[ "codec/src/test/java/io/netty/handler/codec/MessageAggregatorTest.java" ]
diff --git a/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java b/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java index 38e5bd7575a..2bede688bd0 100644 --- a/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java +++ b/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java @@ -399,6 +399,17 @@ protected void handleOversizedMessage(ChannelHandlerContext ctx, S oversized) th new TooLongFrameException("content length exceeded " + maxContentLength() + " bytes.")); } + @Override + public void channelReadComplete(ChannelHandlerContext ctx) throws Exception { + // We might need keep reading the channel until the full message is aggregated. + // + // See https://github.com/netty/netty/issues/6583 + if (currentMessage != null && !ctx.channel().config().isAutoRead()) { + ctx.read(); + } + ctx.fireChannelReadComplete(); + } + @Override public void channelInactive(ChannelHandlerContext ctx) throws Exception { try {
diff --git a/codec/src/test/java/io/netty/handler/codec/MessageAggregatorTest.java b/codec/src/test/java/io/netty/handler/codec/MessageAggregatorTest.java new file mode 100644 index 00000000000..d3c4f5f6c97 --- /dev/null +++ b/codec/src/test/java/io/netty/handler/codec/MessageAggregatorTest.java @@ -0,0 +1,94 @@ +/* + * Copyright 2017 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ + +package io.netty.handler.codec; + +import org.junit.Test; + +import static org.junit.Assert.*; +import static org.mockito.Mockito.*; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufHolder; +import io.netty.buffer.DefaultByteBufHolder; +import io.netty.buffer.Unpooled; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelOutboundHandlerAdapter; +import io.netty.channel.embedded.EmbeddedChannel; +import io.netty.util.CharsetUtil; + +public class MessageAggregatorTest { + private static final class ReadCounter extends ChannelOutboundHandlerAdapter { + int value; + + @Override + public void read(ChannelHandlerContext ctx) throws Exception { + value++; + ctx.read(); + } + } + + abstract static class MockMessageAggregator + extends MessageAggregator<ByteBufHolder, ByteBufHolder, ByteBufHolder, ByteBufHolder> { + + protected MockMessageAggregator() { + super(1024); + } + + @Override + protected ByteBufHolder beginAggregation(ByteBufHolder start, ByteBuf content) throws Exception { + return start.replace(content); + } + } + + private static ByteBufHolder message(String string) { + return new DefaultByteBufHolder( + Unpooled.copiedBuffer(string, CharsetUtil.US_ASCII)); + } + + @SuppressWarnings("unchecked") + @Test + public void testReadFlowManagement() throws Exception { + ReadCounter counter = new ReadCounter(); + ByteBufHolder first = message("first"); + ByteBufHolder chunk = message("chunk"); + ByteBufHolder last = message("last"); + + MockMessageAggregator agg = spy(MockMessageAggregator.class); + when(agg.isStartMessage(first)).thenReturn(true); + when(agg.isContentMessage(chunk)).thenReturn(true); + when(agg.isContentMessage(last)).thenReturn(true); + when(agg.isLastContentMessage(last)).thenReturn(true); + + EmbeddedChannel embedded = new EmbeddedChannel(counter, agg); + embedded.config().setAutoRead(false); + + assertFalse(embedded.writeInbound(first)); + assertFalse(embedded.writeInbound(chunk)); + assertTrue(embedded.writeInbound(last)); + + assertEquals(3, counter.value); // 2 reads issued from MockMessageAggregator + // 1 read issued from EmbeddedChannel constructor + + ByteBufHolder all = new DefaultByteBufHolder(Unpooled.wrappedBuffer( + first.content().retain(), chunk.content().retain(), last.content().retain())); + ByteBufHolder out = embedded.readInbound(); + + assertEquals(all, out); + assertTrue(all.release() && out.release()); + assertFalse(embedded.finish()); + } +}
test
train
2017-04-07T03:09:58
"2017-03-30T00:06:03Z"
vkostyukov
val
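The fix in the record above makes MessageAggregator keep calling read() while a message is still being aggregated and auto-read is off. What follows is a minimal sketch of the usage scenario the issue describes, assuming an HTTP server pipeline; the initializer class, aggregation limit, and handler bodies are illustrative:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;

// Sketch: auto-read is disabled and the handler behind the aggregator drives reads.
// With the channelReadComplete override from the patch above, the aggregator keeps
// requesting data until the full request is aggregated, so channelRead0 eventually fires.
public class AggregatingServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.config().setAutoRead(false);
        ch.pipeline().addLast(new HttpServerCodec());
        ch.pipeline().addLast(new HttpObjectAggregator(1048576));
        ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpRequest>() {
            @Override
            public void channelActive(ChannelHandlerContext ctx) {
                ctx.read(); // ask for the first chunk; the aggregator handles the rest
            }

            @Override
            protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
                // Invoked once per fully aggregated request.
                ctx.read(); // request the next message once this one is handled
            }
        });
    }
}
```

Per the problem statement, before the fix nothing in this pipeline issued the follow-up reads, so a request spanning several chunks left the connection stalled.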
netty/netty/6622_6634
netty/netty
netty/netty/6622
netty/netty/6634
[ "timestamp(timedelta=20.0, similarity=0.9067023002115264)" ]
38b054c65cc655bb4966517abeb32c1246c5c7e3
4c4b55019b50d0866004807f4ade3cf15a81d107
[ "```patch\r\n\r\nFrom 7beda8e105e05eb3b3c7e0c6b2f99a069370137f Mon Sep 17 00:00:00 2001\r\nDate: Tue, 11 Apr 2017 17:34:36 +0200\r\nSubject: [PATCH] Fix encoder exceptions being discarded when writing with a VoidPromise.\r\n\r\nMotivation:\r\n\r\nAccording to documentation of Channel#voidPromise(), if encoding of the message throws an exception, exception should be passed to the channel pipeline.\r\nUnfortunately, that's not the case, which this commit fixes.\r\nI've been ripping my hair out for hours trying to find out a memory leak without any clue.\r\nThere was an exception thrown in my handler which made the reason very obvious, but it was being silently discarded.\r\nFixes #6622\r\n\r\nModifications:\r\n\r\nFix the logic of notifyOutboundHandler to handle a case when the promise is not void.\r\n\r\nResult:\r\n\r\nEncoder exceptions will be properly propagated to the pipeline when used with void promise, in accordance to javadocs of Channel#voidPromise()\r\n---\r\n transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java | 2 ++\r\n 1 file changed, 2 insertions(+), 0 deletions(-)\r\n\r\ndiff --git a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java\r\nindex 55e596f..d138307 100644\r\n--- a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java\r\n+++ b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java\r\n@@ -834,6 +834,8 @@ abstract class AbstractChannelHandlerContext extends DefaultAttributeMap\r\n private static void notifyOutboundHandlerException(Throwable cause, ChannelPromise promise) {\r\n if (!(promise instanceof VoidChannelPromise)) {\r\n PromiseNotificationUtil.tryFailure(promise, cause, logger);\r\n+ } else {\r\n+ invokeExceptionCaught(t);\r\n }\r\n }\r\n \r\n--\r\nlibgit2 0.24.0\r\n```", "@normanmaurer could you take a look at this patch :)", "@ninja- yes this is clearly a bug... Your fix is not 100 % correct as it not handles all the cases tho. Please check https://github.com/netty/netty/pull/6634 for a full fix. Thanks for reporting!", "Thanks! Would you mind explaining which case I would miss with my version of the fix :)?", "@ninja- the problem in the ChannelOutboundBuffer and also it would not start at the head of the pipeline", "I've been pretty confident that I chose the function that starts propagation at the head...But OK thanks :)", "Fixed by https://github.com/netty/netty/pull/6634" ]
[ "How about moving `promise instanceof VoidChannelPromise` into `PromiseNotificationUtil`?" ]
"2017-04-18T05:54:28Z"
[ "defect" ]
VoidChannelPromise eats exceptions...
### Expected behavior ``` voidPromise ChannelPromise voidPromise() Return a special ChannelPromise which can be reused for different operations. It's only supported to use it for write(Object, ChannelPromise). Be aware that the returned ChannelPromise will not support most operations and should only be used if you want to save an object allocation for every write operation. You will not be able to detect if the operation was complete, only if it failed as the implementation will call ChannelPipeline.fireExceptionCaught(Throwable) in this case. ``` when writing a packet with a void promise, an exception should be thrown on failure. ### Actual behavior the exception is eaten and it's not forwarded to the channel pipeline ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) This is CLEARLY a Bug. in this function https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java#L736 when exception is thrown the logic goes to here https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java#L834 and...the exception is ignored. If the promise is void, it should be forwarded to the pipeline as stated in the API... cc @Scottmitch @normanmaurer ### Netty version 4.1.9-FINAL ### JVM version (e.g. `java -version`) ### OS version (e.g. `uname -a`)
[ "transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java", "transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java" ]
[ "transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java", "transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java" ]
[ "transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java" ]
diff --git a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java index 55e596f7f6b..796bddf0fff 100644 --- a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java +++ b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java @@ -832,9 +832,9 @@ public ChannelFuture writeAndFlush(Object msg) { } private static void notifyOutboundHandlerException(Throwable cause, ChannelPromise promise) { - if (!(promise instanceof VoidChannelPromise)) { - PromiseNotificationUtil.tryFailure(promise, cause, logger); - } + // Only log if the given promise is not of type VoidChannelPromise as tryFailure(...) is expected to return + // false. + PromiseNotificationUtil.tryFailure(promise, cause, promise instanceof VoidChannelPromise ? null : logger); } private void notifyHandlerException(Throwable cause) { diff --git a/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java b/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java index 52bf57d8dac..34fbde3556d 100644 --- a/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java +++ b/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java @@ -664,15 +664,15 @@ public void run() { } private static void safeSuccess(ChannelPromise promise) { - if (!(promise instanceof VoidChannelPromise)) { - PromiseNotificationUtil.trySuccess(promise, null, logger); - } + // Only log if the given promise is not of type VoidChannelPromise as trySuccess(...) is expected to return + // false. + PromiseNotificationUtil.trySuccess(promise, null, promise instanceof VoidChannelPromise ? null : logger); } private static void safeFail(ChannelPromise promise, Throwable cause) { - if (!(promise instanceof VoidChannelPromise)) { - PromiseNotificationUtil.tryFailure(promise, cause, logger); - } + // Only log if the given promise is not of type VoidChannelPromise as tryFailure(...) is expected to return + // false. + PromiseNotificationUtil.tryFailure(promise, cause, promise instanceof VoidChannelPromise ? null : logger); } @Deprecated
diff --git a/transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java b/transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java index 4f745e439cf..39f6752b2da 100644 --- a/transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java +++ b/transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java @@ -1073,6 +1073,35 @@ public void testNotPinExecutor() { group.shutdownGracefully(0, 0, TimeUnit.SECONDS); } + @Test(timeout = 3000) + public void testVoidPromiseNotify() throws Throwable { + ChannelPipeline pipeline1 = new LocalChannel().pipeline(); + + EventLoopGroup defaultGroup = new DefaultEventLoopGroup(1); + EventLoop eventLoop1 = defaultGroup.next(); + final Promise<Throwable> promise = eventLoop1.newPromise(); + final Exception exception = new IllegalArgumentException(); + try { + eventLoop1.register(pipeline1.channel()).syncUninterruptibly(); + pipeline1.addLast(new ChannelDuplexHandler() { + @Override + public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception { + throw exception; + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { + promise.setSuccess(cause); + } + }); + pipeline1.write("test", pipeline1.voidPromise()); + assertSame(exception, promise.syncUninterruptibly().getNow()); + } finally { + pipeline1.channel().close().syncUninterruptibly(); + defaultGroup.shutdownGracefully(); + } + } + private static final class TestTask implements Runnable { private final ChannelPipeline pipeline;
test
train
2017-04-19T07:26:26
"2017-04-11T15:18:20Z"
ninja-
val
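The record above restores the documented Channel#voidPromise() contract: a failed write fires exceptionCaught instead of silently dropping the error. Here is a small sketch of that behavior on an EmbeddedChannel, with a stand-in handler for a broken encoder; the class and message names are illustrative:

```java
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.util.ReferenceCountUtil;

public final class VoidPromiseFailureSketch {
    public static void main(String[] args) {
        EmbeddedChannel ch = new EmbeddedChannel(
                // Stands in for a broken encoder: every write fails.
                new ChannelDuplexHandler() {
                    @Override
                    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
                        ReferenceCountUtil.release(msg);
                        throw new IllegalStateException("encode failed");
                    }
                },
                // Observes failures fired into the pipeline.
                new ChannelInboundHandlerAdapter() {
                    @Override
                    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                        // With the fix, the IllegalStateException lands here even though
                        // the write below carries no real promise to fail.
                        System.err.println("write failed: " + cause);
                    }
                });

        // Fire-and-forget write: the void promise avoids a per-write promise allocation.
        ch.writeAndFlush("payload", ch.voidPromise());
        ch.finish();
    }
}
```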