From agentzh at gmail.com Tue Oct 1 03:02:34 2013
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Mon, 30 Sep 2013 20:02:34 -0700
Subject: High response time at high concurrent connections
In-Reply-To: <4bbe928a919b3ee7755228bff5db9655.NginxMailingListEnglish@forum.nginx.org>
References: <CAB4Tn6Nwxady7J4BZzFEd+ZVqCv3No0_cz_jSQfBsUugJO8fPA@mail.gmail.com>
<4bbe928a919b3ee7755228bff5db9655.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAB4Tn6OXBwpm7-Ew8v+bM3t5_6PKMnP78eM=1cxXtTE8QrNMAg@mail.gmail.com>

Hello!

On Mon, Sep 30, 2013 at 7:25 AM, laltin wrote:
> But looking at tornado logs I expect around 2000 reqs/sec. Assuming that each
> request is handled in 2 ms, one instance can handle 500 reqs/sec, and with 4
> instances it should be 2000 reqs/sec. But it is stuck at 1200 reqs/sec, and I
> wonder why it is stuck at that point?
> Does increasing the number of instances change the result?
>

It depends on where the bottleneck really is :)

You can use the on-CPU flamegraph and off-CPU flamegraph tools to
check the on-CPU time and off-CPU time usage by sampling your related
processes under load. In particular, using the following tools on
Linux:

https://github.com/agentzh/nginx-systemtap-toolkit#ngx-sample-bt
https://github.com/agentzh/nginx-systemtap-toolkit#ngx-sample-bt-off-cpu

These tools are very general and not specific to Nginx processes, BTW.

Best regards,
-agentzh


From emailgrant at gmail.com Tue Oct 1 07:12:41 2013
From: emailgrant at gmail.com (Grant)
Date: Tue, 1 Oct 2013 00:12:41 -0700
Subject: root works, alias doesn't
In-Reply-To: <CALqce=2RXJLTMedOb6d7koQkKqmrq=3zqDR7U9ZR98WOPUJanQ@mail.gmail.com>
References: <CAN0CFw0x_EgdVe3u7KxQ21KxVzbXshdeZpLXP3hJAMSiuoyMHg@mail.gmail.com>
<CALqce=3QLCmbYUFV6kuZnpZOdaDKwt9WAmzr2iYeXv6VS_7fsw@mail.gmail.com>
<CAN0CFw2Gp6cM7mK+PGM8fNrhrWLuvzdVkQ6yDFY3pwupnwBO8Q@mail.gmail.com>
<CALqce=2RXJLTMedOb6d7koQkKqmrq=3zqDR7U9ZR98WOPUJanQ@mail.gmail.com>
Message-ID: <CAN0CFw2SjYHgKzabadtLr4AEHfq0voK6XwN=qyhcw611u1nr1Q@mail.gmail.com>

>> It works if I specify the full path for the alias. What is the
>> difference between alias and root? I have root specified outside of
>> the server block and I thought I could use alias to avoid specifying
>> the full path again.
>
> http://nginx.org/en/docs/http/ngx_http_core_module.html#alias
> http://nginx.org/en/docs/http/ngx_http_core_module.html#root
>
> The docs say that the requested filepath is constructed by concatenating
> root + URI.
> That's for root.
>
> The docs also say that alias replaces the content directory (so it must be
> absolutely defined through alias).
> By default, the last part of the URI (after the last slash, so the file
> name) is searched for in the directory specified by alias.
> alias doesn't construct itself based on root; it's totally independent, so
> by using that, you'll need to specify the directory absolutely, which is
> precisely what you wish to avoid.

I see. It seems like root and alias function identically within "location /".

>> I tried both of the following with the same result:
>>
>> location / {
>>     alias webalizer/;
>> }
>>
>> location ~ ^/$ {
>>     alias webalizer/$1;
>> }
>
>
> For what you wish to do, you might try the following:
>
> set $rootDir /var/www/localhost/htdocs;
> root $rootDir/;
> location / {
>     alias $rootDir/webalizer/;
> }
>
> alias is meant for exceptional overload of root in a location block, so I
> guess its use here is a good idea.

I'm not sure what you mean by that last sentence. When should alias
be used instead of root inside of "location /"?
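For context, the use case the nginx docs give for alias is a location whose URI prefix differs from the directory name on disk, which root cannot express; a minimal sketch (the paths here are hypothetical):

```nginx
# A request for /stats/report.html is served from /var/www/webalizer/report.html.
# With "root /var/www;" the same request would map to /var/www/stats/report.html.
location /stats/ {
    alias /var/www/webalizer/;
}
```

Inside "location /" itself, alias (with a trailing slash) and root behave the same, so the distinction only shows up in prefix locations like the one above.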

> However, there seems to be no environmental propagation of some $root
> variable (which may be wanted by developers to avoid confusion and unwanted
> concatenation of values in the variables tree).
> $document_root and $realpath_root must be computed last, based on the value
> of the 'root' directive (or its 'alias' overload), so they can't be used
> indeed.
>
> I'd be glad to know the developers' real reasons behind the absence of
> environmental propagation of some $root variable.

Me too.

- Grant


From nginx-forum at nginx.us Tue Oct 1 07:49:44 2013
From: nginx-forum at nginx.us (aschlosberg)
Date: Tue, 01 Oct 2013 03:49:44 -0400
Subject: [ANNOUNCE] ngx_shared_env_module
Message-ID: <82f2c98d07c23555ffc7fde65f2751a9.NginxMailingListEnglish@forum.nginx.org>

I have decided to release an in-house module that I have been using for my
hosting company. A full explanation of the goals of the module is available
in the README.

https://github.com/aschlosberg/ngx-shared-env

I am well aware that this module introduces (minor) additional per-request
overhead. However, experience has shown this approach to be far superior to
the alternative of Apache, whilst allowing the freedom to run a shared
environment without constantly restarting nginx. As my knowledge of the
inner workings of nginx is not as sophisticated as that of others, I welcome
feedback regarding optimisation.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243296,243296#msg-243296


From nginx-forum at nginx.us Tue Oct 1 12:15:35 2013
From: nginx-forum at nginx.us (nginxnewbie33)
Date: Tue, 01 Oct 2013 08:15:35 -0400
Subject: Peer closed connection in SSL handshake when using chrome
Message-ID: <8d309bfcdbf6678cc922d006768dc97f.NginxMailingListEnglish@forum.nginx.org>

I am receiving 'peer closed connection in SSL handshake (104: Connection
reset by peer) while SSL handshaking, client: 168.166.124.xxx, server:
54.225.xx.xx'
when I try to access an application through my nginx reverse proxy using
Chrome. IE seems to work, but Chrome receives this error every time.

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243298,243298#msg-243298


From nginx-forum at nginx.us Tue Oct 1 13:26:57 2013
From: nginx-forum at nginx.us (nginxnewbie33)
Date: Tue, 01 Oct 2013 09:26:57 -0400
Subject: Peer closed connection in SSL handshake when using chrome
In-Reply-To: <8d309bfcdbf6678cc922d006768dc97f.NginxMailingListEnglish@forum.nginx.org>
References: <8d309bfcdbf6678cc922d006768dc97f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <4d2029fa070374f3d725e0e78d228f1c.NginxMailingListEnglish@forum.nginx.org>

Seems I'm answering my own question, but it leads to another. This is not
really an issue with Chrome; the problem is that I had Fiddler running while
I was trying to bring up my app. So it is actually Fiddler that causes the
errors. I also have issues with IE when I'm running Fiddler, I just didn't
realize it. So is there anything I need to do to get nginx to run with
Fiddler, or is there another diagnostic tool I could use with nginx?
Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243298,243299#msg-243299


From vbart at nginx.com Tue Oct 1 13:55:03 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 1 Oct 2013 17:55:03 +0400
Subject: root works, alias doesn't
In-Reply-To: <CALqce=2RXJLTMedOb6d7koQkKqmrq=3zqDR7U9ZR98WOPUJanQ@mail.gmail.com>
References: <CAN0CFw0x_EgdVe3u7KxQ21KxVzbXshdeZpLXP3hJAMSiuoyMHg@mail.gmail.com>
<CAN0CFw2Gp6cM7mK+PGM8fNrhrWLuvzdVkQ6yDFY3pwupnwBO8Q@mail.gmail.com>
<CALqce=2RXJLTMedOb6d7koQkKqmrq=3zqDR7U9ZR98WOPUJanQ@mail.gmail.com>
Message-ID: <201310011755.03550.vbart@nginx.com>

On Sunday 29 September 2013 23:20:35 B.R. wrote:
[...]
> For what you wish to do, you might try the following:
>
> set $rootDir /var/www/localhost/htdocs;
> root $rootDir/;
> location / {
>     alias $rootDir/webalizer/;
> }
>
> alias is meant for exceptional overload of root in a location block, so I
> guess its use here is a good idea.
> However, there seems to be no environmental propagation of some $root
> variable (which may be wanted by developers to avoid confusion and unwanted
> concatenation of values in the variables tree).
[..]

nginx is not trying to be a template engine.
http://nginx.org/en/docs/faq/variables_in_config.html

wbr, Valentin V. Bartenev


From mdounin at mdounin.ru Tue Oct 1 13:59:50 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 1 Oct 2013 17:59:50 +0400
Subject: nginx-1.5.6
Message-ID: <20131001135950.GF62063@mdounin.ru>

Changes with nginx 1.5.6 01 Oct 2013

*) Feature: the "fastcgi_buffering" directive.

*) Feature: the "proxy_ssl_protocols" and "proxy_ssl_ciphers"
directives.
Thanks to Piotr Sikora.

*) Feature: optimization of SSL handshakes when using long certificate
chains.

*) Feature: the mail proxy supports SMTP pipelining.

*) Bugfix: in the ngx_http_auth_basic_module when using "$apr1$"
password encryption method.
Thanks to Markus Linnala.

*) Bugfix: in MacOSX, Cygwin, and nginx/Windows incorrect location might
be used to process a request if locations were given using characters
in different cases.

*) Bugfix: automatic redirect with appended trailing slash for proxied
locations might not work.

*) Bugfix: in the mail proxy server.

*) Bugfix: in the ngx_http_spdy_module.


--
Maxim Dounin
http://nginx.org/en/donation.html


From vbart at nginx.com Tue Oct 1 14:09:31 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 1 Oct 2013 18:09:31 +0400
Subject: root works, alias doesn't
In-Reply-To: <CAN0CFw2SjYHgKzabadtLr4AEHfq0voK6XwN=qyhcw611u1nr1Q@mail.gmail.com>
References: <CAN0CFw0x_EgdVe3u7KxQ21KxVzbXshdeZpLXP3hJAMSiuoyMHg@mail.gmail.com>
<CALqce=2RXJLTMedOb6d7koQkKqmrq=3zqDR7U9ZR98WOPUJanQ@mail.gmail.com>
<CAN0CFw2SjYHgKzabadtLr4AEHfq0voK6XwN=qyhcw611u1nr1Q@mail.gmail.com>
Message-ID: <201310011809.31061.vbart@nginx.com>

On Tuesday 01 October 2013 11:12:41 Grant wrote:
> >> It works if I specify the full path for the alias. What is the
> >> difference between alias and root? I have root specified outside of
> >> the server block and I thought I could use alias to avoid specifying
> >> the full path again.
> >
> > http://nginx.org/en/docs/http/ngx_http_core_module.html#alias
> > http://nginx.org/en/docs/http/ngx_http_core_module.html#root
> >
> > The docs say that the requested filepath is constructed by concatenating
> > root + URI.
> > That's for root.
> >
> > The docs also say that alias replaces the content directory (so it must
> > be absolutely defined through alias).
> > By default, the last part of the URI (after the last slash, so the file
> > name) is searched for in the directory specified by alias.
> > alias doesn't construct itself based on root; it's totally independent,
> > so by using that, you'll need to specify the directory absolutely, which
> > is precisely what you wish to avoid.
>
> I see. It seems like root and alias function identically within "location /".
>

Not exactly. For example, request "/favicon.ico":

location / {
    alias /data/www;
}

will result in opening "/data/wwwfavicon.ico", while:

location / {
    root /data/www;
}

will return "/data/www/favicon.ico".

But,

location / {
    alias /data/www/;
}

will work the same way as

location / {
    root /data/www;
}

or

location / {
    root /data/www/;
}


wbr, Valentin V. Bartenev


From lists at der-ingo.de Tue Oct 1 14:35:58 2013
From: lists at der-ingo.de (Ingo Schmidt)
Date: Tue, 01 Oct 2013 16:35:58 +0200
Subject: nginx and upstart
Message-ID: <524ADDCE.6030304@der-ingo.de>

Hi!

I have seen this question asked before on the list, but
unfortunately there haven't been any answers, so let's see if I have
more luck :)

I can successfully upgrade the nginx binary on the fly as documented here:
http://wiki.nginx.org/CommandLine#Upgrading_To_a_New_Binary_On_The_Fly

However, if nginx was started via Upstart and then upgraded like this,
Upstart ends up in a confused state:
"status nginx" shows that nginx is running, but no PID is shown.
"stop nginx" states that the Upstart job is stopped (but nginx is still running).
"start nginx" states that it works, but Upstart shows no PID as it normally does.
"reload nginx" says "unknown instance".

The only way to fix this is to actually manually stop and start nginx,
but then there is some downtime. Is it possible to work around this
problem? Should I avoid using Upstart? If yes, what are my options to
run nginx with the on-the-fly upgrade possibility?
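For reference, a typical Upstart job for nginx keeps the master process in the foreground so Upstart can track its PID; a minimal sketch follows (paths assumed), though note that a USR2 binary upgrade replaces the master process, so Upstart's tracked PID still goes stale afterwards:

```
# /etc/init/nginx.conf (sketch; adjust the binary path for your system)
description "nginx"
start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [016]
respawn
# "daemon off;" keeps the master in the foreground so Upstart tracks its PID
exec /usr/sbin/nginx -g "daemon off;"
```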

Cheers, Ingo =;->


From ian.hobson at ntlworld.com Tue Oct 1 16:52:32 2013
From: ian.hobson at ntlworld.com (Ian Hobson)
Date: Tue, 01 Oct 2013 17:52:32 +0100
Subject: Solving a 500 error
Message-ID: <524AFDD0.6070207@ntlworld.com>

Hi All,

I have an nginx install with the configuration below.

The server is a linux VM running under Virtual Box on my windows
machine. The website / directory is made available as a sharename using
Samba, which I connect to from Windows, so I can edit the files. I edit
in windows, using familiar tools and then test using a browser, usually
without restarting nginx or init-fastcgi.

This works fine for php files. When I edit one of two javascript files,
the next request for a javascript file fails with a 500 error - even if
the request is not for the changed file.

The version of nginx I am running is 1.2.6 compiled with the long
polling module included.

Does anyone know what is happening?

Thanks,

Ian

This is my server config.

server {
    server_name coachmaster3.anake.hcs;
    listen 80;
    fastcgi_read_timeout 300;
    index index.php;
    root /home/ian/websites/coachmaster3dev/htdocs;
    location = / {
        rewrite ^ /index.php last;
    }
    # serve php via fastcgi if it exists
    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param CENTRAL_ROOT $document_root;
        fastcgi_param RESELLER_ROOT $document_root;
        fastcgi_param HTTPS OFF;
    }
    # serve static files
    try_files $uri =404;
    # now to configure the long polling
    push_store_messages on;
    location /publish {
        push_publisher;
        set $push_channel_id $arg_id;
        push_message_timeout 30s;
        push_max_message_buffer_length 10;
    }
    # public long-polling endpoint
    location /activity {
        push_subscriber;
        push_subscriber_concurrency broadcast;
        set $push_channel_id $arg_id;
        default_type text/plain;
    }
}




--
Ian Hobson
31 Sheerwater, Northampton NN3 5HU,
Tel: 01604 513875
Preparing eBooks for Kindle and ePub formats to give the best reader experience.


From kworthington at gmail.com Tue Oct 1 17:57:58 2013
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 1 Oct 2013 13:57:58 -0400
Subject: nginx-1.5.6
In-Reply-To: <20131001135950.GF62063@mdounin.ru>
References: <20131001135950.GF62063@mdounin.ru>
Message-ID: <CAGo79UUb=A33rPBn4fgh0fES+cMBZMyBr9sVHJvzVaVPe6=KFQ@mail.gmail.com>

Hello Nginx users,

Now available: Nginx 1.5.6 for Windows http://goo.gl/Bffumh (32-bit and
64-bit versions)

These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.

Announcements are also available via my Twitter stream (
http://twitter.com/kworthington), if you prefer to receive updates that way.

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington

On Tue, Oct 1, 2013 at 9:59 AM, Maxim Dounin <mdounin at mdounin.ru> wrote:

> Changes with nginx 1.5.6 01 Oct
> 2013
>
> *) Feature: the "fastcgi_buffering" directive.
>
> *) Feature: the "proxy_ssl_protocols" and "proxy_ssl_ciphers"
> directives.
> Thanks to Piotr Sikora.
>
> *) Feature: optimization of SSL handshakes when using long certificate
> chains.
>
> *) Feature: the mail proxy supports SMTP pipelining.
>
> *) Bugfix: in the ngx_http_auth_basic_module when using "$apr1$"
> password encryption method.
> Thanks to Markus Linnala.
>
> *) Bugfix: in MacOSX, Cygwin, and nginx/Windows incorrect location
> might
> be used to process a request if locations were given using
> characters
> in different cases.
>
> *) Bugfix: automatic redirect with appended trailing slash for proxied
> locations might not work.
>
> *) Bugfix: in the mail proxy server.
>
> *) Bugfix: in the ngx_http_spdy_module.
>
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131001/d832e88b/attachment.html>

From contact at jpluscplusm.com Tue Oct 1 19:36:15 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Tue, 1 Oct 2013 20:36:15 +0100
Subject: Solving a 500 error
In-Reply-To: <524AFDD0.6070207@ntlworld.com>
References: <524AFDD0.6070207@ntlworld.com>
Message-ID: <CAKsTx7Bkc8ExRG7ZVYFj3=E9jvtwi2+_oxUhQKZcoPoRVYHheg@mail.gmail.com>

On 1 October 2013 17:52, Ian Hobson <ian.hobson at ntlworld.com> wrote:
> Hi All,
>
> I have an nginx install with the configuration below.
>
> The server is a linux VM running under Virtual Box on my windows machine.
> The website / directory is made available as a sharename using Samba, which
> I connect to from Windows, so I can edit the files. I edit in windows, using
> familiar tools and then test using a browser, usually without restarting
> nginx or init-fastcgi.

I used to admin some devs who had that sort of abortion of a workflow
and did the same as you.

Just don't do that.

> This works fine for php files. When I edit one of two javascript files, the
> next request for a javascript file fails with a 500 error - even if the
> request is not for the changed file.
>
> The version of nginx I am running is 1.2.6 compiled with the long polling
> module included.
>
> Does anyone know what is happening?

To be fair, chap, you're the one who has access to the log files! What
do *they* say?

J


From ian.hobson at ntlworld.com Tue Oct 1 21:57:16 2013
From: ian.hobson at ntlworld.com (Ian Hobson)
Date: Tue, 01 Oct 2013 22:57:16 +0100
Subject: Solving a 500 error
In-Reply-To: <CAKsTx7Bkc8ExRG7ZVYFj3=E9jvtwi2+_oxUhQKZcoPoRVYHheg@mail.gmail.com>
References: <524AFDD0.6070207@ntlworld.com>
<CAKsTx7Bkc8ExRG7ZVYFj3=E9jvtwi2+_oxUhQKZcoPoRVYHheg@mail.gmail.com>
Message-ID: <524B453C.6070201@ntlworld.com>

On 01/10/2013 20:36, Jonathan Matthews wrote:
> To be fair, chap, you're the one who has access to the log files! What
> do*they* say?
They tell me nothing new.

When I change a static file, I get a 500 error on the next static file I
request - even if it is not the file I have changed.

Sometimes I get two consecutive 500 errors, sometimes only one.

I'm still mystified as to why I should get any at all - and why on
unchanged files?

Regards

Ian

--
Ian Hobson
31 Sheerwater, Northampton NN3 5HU,
Tel: 01604 513875
Preparing eBooks for Kindle and ePub formats to give the best reader experience.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131001/bbc38572/attachment.html>

From steve at greengecko.co.nz Tue Oct 1 22:01:48 2013
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Wed, 02 Oct 2013 11:01:48 +1300
Subject: Solving a 500 error
In-Reply-To: <524B453C.6070201@ntlworld.com>
References: <524AFDD0.6070207@ntlworld.com>
<CAKsTx7Bkc8ExRG7ZVYFj3=E9jvtwi2+_oxUhQKZcoPoRVYHheg@mail.gmail.com>
<524B453C.6070201@ntlworld.com>
Message-ID: <1380664908.11141.395.camel@steve-new>

On Tue, 2013-10-01 at 22:57 +0100, Ian Hobson wrote:
> On 01/10/2013 20:36, Jonathan Matthews wrote:
>
> > To be fair, chap, you're the one who has access to the log files! What
> > do *they* say?
> They tell me nothing new.
>
> When I change a static file, I get a 500 error on the next static file
> I request - even if it is not the file I have changed.
>
> Sometimes I get two consecutive 500 errors, sometimes only one.
>
> I'm still mystified as to why I should get any at all - and why on
> unchanged files?
>
> Regards
>
> Ian

To reiterate what Jonathan said... you have nginx and system logs. Next
time you get the error, how about just tailing the last 50 lines from
all logs that have changed?

Also have you gone over the basics: disks full, out of inodes, enough
open files, etc.


Steve
--
Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa


From contact at jpluscplusm.com Tue Oct 1 22:05:35 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Tue, 1 Oct 2013 23:05:35 +0100
Subject: Solving a 500 error
In-Reply-To: <524B453C.6070201@ntlworld.com>
References: <524AFDD0.6070207@ntlworld.com>
<CAKsTx7Bkc8ExRG7ZVYFj3=E9jvtwi2+_oxUhQKZcoPoRVYHheg@mail.gmail.com>
<524B453C.6070201@ntlworld.com>
Message-ID: <CAKsTx7AyYvvqJ19CNz_etjz5O21hTUPM+2C_A=4k+ycVYf-wTA@mail.gmail.com>

On 1 October 2013 22:57, Ian Hobson <ian.hobson at ntlworld.com> wrote:
> On 01/10/2013 20:36, Jonathan Matthews wrote:
>
> To be fair, chap, you're the one who has access to the log files! What
> do *they* say?
>
> They tell me nothing new.

You get a 500 in your *access* log and a simultaneous entry in your
*error* log doesn't appear? At all?

> When I change a static file, I get a 500 error on the next static file I
> request - even if it is not the file I have changed.
>
> Sometimes I get two consecutive 500 errors, sometimes only one.
>
> I'm still mystified as to why I should get any at all - and why on unchanged
> files?

That "push" config stuff looks like a 3rd party module. Try running
without that enabled and see if the errors persist. If they do, try it
without that module *compiled* in and see if they persist.

NB I'm not suggesting you have to not use this module; just that if
you can isolate the problem to "when it's enabled/compiled", then you
can poke that module's authors about the problem.

Cheers,
J


From nginx-forum at nginx.us Wed Oct 2 01:18:22 2013
From: nginx-forum at nginx.us (dossi)
Date: Tue, 01 Oct 2013 21:18:22 -0400
Subject: ssl on different servers
Message-ID: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>

Hi,
My domain.com is on ip: x.x.x.x
where I have a configuration like:

server {
    server_name sub.domain.com;
    location / {
        proxy_pass http://y.y.y.y;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

On ip y.y.y.y my configuration is:

server {
    server_name sub.domain.com;
    location / {
        proxy_pass http://localhost:8080;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

That's fine!

But...

I'm trying to add ssl support, I bought a wildcard certificate but
unfortunately I'm struggling with the configuration.

I changed the config on x.x.x.x:

server {
    server_name sub.domain.com;
    location / {
        proxy_pass http://y.y.y.y;

        proxy_set_header X-Forwarded-Proto https;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    listen 443;
    ssl_certificate ssl.crt;
    ssl_certificate_key my.key;
}

and I changed the config on y.y.y.y:

server {
    server_name sub.domain.com;
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    listen 443 ssl;
    ssl_certificate ssl.crt;
    ssl_certificate_key my.key;
}

I also tried other configurations, but I cannot make it work.
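For comparison, a TLS-terminating server block generally needs the ssl parameter on its listen directive; without it, nginx serves plain HTTP on port 443 and browsers fail the handshake. A minimal sketch for the front server (certificate paths as above):

```nginx
server {
    server_name sub.domain.com;

    listen 443 ssl;              # the "ssl" parameter enables TLS on this port
    ssl_certificate     ssl.crt;
    ssl_certificate_key my.key;

    location / {
        proxy_pass http://y.y.y.y;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```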

Can you help me, please?

Thanks

Dossi

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243340,243340#msg-243340


From nginx-forum at nginx.us Wed Oct 2 04:13:20 2013
From: nginx-forum at nginx.us (justin)
Date: Wed, 02 Oct 2013 00:13:20 -0400
Subject: Getting forward secrecy enabled
Message-ID: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>

On ssllabs.com I am getting the following, even though I am using all the
recommend settings.

http://i.imgur.com/TlsKMzP.png

Here are my nginx settings:

ssl_prefer_server_ciphers on;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384
EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH
EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
ssl_dhparam /etc/nginx/ssl/dhparam_4096.pem;
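One way to see which concrete suites a cipher string like this expands to on the local machine (assuming the openssl command-line tool is available; the resulting list depends on the OpenSSL version) is:

```shell
# Expand a cipher string with the local OpenSSL library; the ECDHE/DHE
# ("EECDH"/"EDH") suites in the output are the ones offering forward secrecy.
openssl ciphers -v 'EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EDH+aRSA:!aNULL:!eNULL:!MD5'
```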

Any idea how I can get full forward secrecy enabled?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243341#msg-243341


From nginx-forum at nginx.us Wed Oct 2 04:25:42 2013
From: nginx-forum at nginx.us (mex)
Date: Wed, 02 Oct 2013 00:25:42 -0400
Subject: ssl on different servers
In-Reply-To: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>
References: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <dcf196d913a5a1013030f348edbeb056.NginxMailingListEnglish@forum.nginx.org>

what is your problem then?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243340,243342#msg-243342


From nginx-forum at nginx.us Wed Oct 2 04:52:49 2013
From: nginx-forum at nginx.us (mex)
Date: Wed, 02 Oct 2013 00:52:49 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>

Hi justin,

> even though I am using all the recommend settings.

which recommended settings? recommended by whom?

i learned from ssllabs that only the cipher-suites recommended by
ivan ristic seem to work:
http://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/#perfect-forward-secrecy
all the other cipher-suites i found "somewhere" that should enable PFS don't
seem to work, at least according to ssllabs.

problem is: there is no other way (that i know of) than ssllabs to check
your server settings and verify PFS.

but PFS also depends on your openssl version.


regards,

mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243343#msg-243343


From nginx-forum at nginx.us Wed Oct 2 05:00:14 2013
From: nginx-forum at nginx.us (mex)
Date: Wed, 02 Oct 2013 01:00:14 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>

btw, check the following for a reference for PFS-setup:
https://www.ssllabs.com/ssltest/analyze.html?d=makepw.com

ssl-settings are:

ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_ciphers
EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS;


regards,


mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243344#msg-243344


From nginx-forum at nginx.us Wed Oct 2 05:16:18 2013
From: nginx-forum at nginx.us (justin)
Date: Wed, 02 Oct 2013 01:16:18 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20efe914642845cf0a309143c5fe7910.NginxMailingListEnglish@forum.nginx.org>

I tried what was recommended by
http://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/#perfect-forward-secrecy

But these clients are still missing forward secrecy:
IE 11 / Win 8.1 (FAIL)
IE 8-10 / Win 7 (NO FS)
IE 7 / Vista (NO FS)

Here is my exact config:

ssl_prefer_server_ciphers on;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers
EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS;

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243345#msg-243345


From nginx-forum at nginx.us Wed Oct 2 05:18:05 2013
From: nginx-forum at nginx.us (justin)
Date: Wed, 02 Oct 2013 01:18:05 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <20efe914642845cf0a309143c5fe7910.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
<20efe914642845cf0a309143c5fe7910.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <f9ffcfcb064e4babfe5c14afc2307f94.NginxMailingListEnglish@forum.nginx.org>

Sucks that the forum software is cutting off the cipher list string; here is
what I am using in a gist:

https://gist.github.com/nodesocket/8d4cc41c91466ae17b80

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243346#msg-243346


From nginx-forum at nginx.us Wed Oct 2 05:32:46 2013
From: nginx-forum at nginx.us (justin)
Date: Wed, 02 Oct 2013 01:32:46 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <c1a22626ed3b8bcfbcb959bb8800af1c.NginxMailingListEnglish@forum.nginx.org>

Comparing the result from makepw.com and my site, I am missing the following
cipher suites:

TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) ECDH 256 bits (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) ECDH 256 bits (eq. 3072 bits RSA) FS 128
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028) ECDH 256 bits (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027) ECDH 256 bits (eq. 3072 bits RSA) FS 128
TLS_ECDHE_RSA_WITH_RC4_128_SHA (0xc011) ECDH 256 bits (eq. 3072 bits RSA) FS 128
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) ECDH 256 bits (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) ECDH 256 bits (eq. 3072 bits RSA) FS 128

I just confirmed that I am running the latest version of openssl (OpenSSL
1.0.1e 11 Feb 2013).

Any ideas?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243347#msg-243347


From nginx-forum at nginx.us Wed Oct 2 05:34:36 2013
From: nginx-forum at nginx.us (mex)
Date: Wed, 02 Oct 2013 01:34:36 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <20efe914642845cf0a309143c5fe7910.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
<20efe914642845cf0a309143c5fe7910.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <505facad6bb56a2a85fce2002918672f.NginxMailingListEnglish@forum.nginx.org>

hmm, looks like some mismatch: in your config you define ECDH, but in your
screenshot i see DH configured (please compare your screenshot with the
ssllabs link i provided, esp. the cipher-suites/handshake part).

should be:

TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) ECDH 256 bits (eq. 3072 bits RSA) FS

is:

TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) DH 4096 bits


your openssl version seems to be OK.

did you compile nginx with your own version of openssl?

if not, what does "openssl version" give?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243348#msg-243348


From nginx-forum at nginx.us Wed Oct 2 05:46:50 2013
From: nginx-forum at nginx.us (mex)
Date: Wed, 02 Oct 2013 01:46:50 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <c1a22626ed3b8bcfbcb959bb8800af1c.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
<c1a22626ed3b8bcfbcb959bb8800af1c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <7418a8170638fdf434d4d791ab5efa79.NginxMailingListEnglish@forum.nginx.org>

how did you compile nginx: with the openssl sources via
--with-openssl=/path/to/sources ?
if not, i could imagine that your (outdated) distro's openssl-dev might be
used.

i have this issue when compiling nginx on debian; i have to download openssl
and point nginx to where the sources are.

but since openssl reports openssl 1.0.1e ... this seems fishy somehow, as
if you are potentially capable of PFS but unable to deliver it, for
whatever reason.

all i did for makepw.com was:

./configure ... --with-http_spdy_module --with-http_ssl_module
--with-openssl=/path/to/openssl_source/ ...

then i configured the cipher-suites according to recommendations from Ivan
Ristic.
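
For what it's worth, a cipher configuration along those lines could look
like the sketch below (a hedged example only; the exact suite string is my
assumption, not necessarily what makepw.com uses):

```nginx
# Put ECDHE suites first so forward secrecy is negotiated
# whenever the client supports it.
ssl_protocols              TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers  on;
ssl_ciphers                ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:HIGH:!aNULL:!MD5:!RC4;
```

You can check which suites a given string expands to with
"openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256:...'" on the same machine.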

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243349#msg-243349


From nginx-forum at nginx.us Wed Oct 2 05:57:28 2013
From: nginx-forum at nginx.us (justin)
Date: Wed, 02 Oct 2013 01:57:28 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <7418a8170638fdf434d4d791ab5efa79.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
<c1a22626ed3b8bcfbcb959bb8800af1c.NginxMailingListEnglish@forum.nginx.org>
<7418a8170638fdf434d4d791ab5efa79.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <ecde6720f7f14a426c52556d50828a3b.NginxMailingListEnglish@forum.nginx.org>

I don't compile nginx, I get it from the official CentOS repo:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
enabled=1

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243350#msg-243350


From nginx-forum at nginx.us Wed Oct 2 06:29:06 2013
From: nginx-forum at nginx.us (mex)
Date: Wed, 02 Oct 2013 02:29:06 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <ecde6720f7f14a426c52556d50828a3b.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
<c1a22626ed3b8bcfbcb959bb8800af1c.NginxMailingListEnglish@forum.nginx.org>
<7418a8170638fdf434d4d791ab5efa79.NginxMailingListEnglish@forum.nginx.org>
<ecde6720f7f14a426c52556d50828a3b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <7b587af2c99d12153999961ae949cb87.NginxMailingListEnglish@forum.nginx.org>

maybe ask the person who builds the packages how nginx was built, which
openssl version applies, etc.

can you execute "openssl version" on the server nginx runs on?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243351#msg-243351


From nginx-forum at nginx.us Wed Oct 2 07:06:47 2013
From: nginx-forum at nginx.us (dossi)
Date: Wed, 02 Oct 2013 03:06:47 -0400
Subject: ssl on different servers
In-Reply-To: <dcf196d913a5a1013030f348edbeb056.NginxMailingListEnglish@forum.nginx.org>
References: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>
<dcf196d913a5a1013030f348edbeb056.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <e0cd81167265b37f19bbc69f1718d095.NginxMailingListEnglish@forum.nginx.org>

The problem is that if I point a browser to https://sub.domain.com it
doesn't work.

Cheers
Dossi

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243340,243352#msg-243352


From nginx-forum at nginx.us Wed Oct 2 07:11:13 2013
From: nginx-forum at nginx.us (mex)
Date: Wed, 02 Oct 2013 03:11:13 -0400
Subject: ssl on different servers
In-Reply-To: <e0cd81167265b37f19bbc69f1718d095.NginxMailingListEnglish@forum.nginx.org>
References: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>
<dcf196d913a5a1013030f348edbeb056.NginxMailingListEnglish@forum.nginx.org>
<e0cd81167265b37f19bbc69f1718d095.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <d7de1adf45d8834935acf7d8ab509c22.NginxMailingListEnglish@forum.nginx.org>

did you try to turn it off and on again?

sorry, but from your description no one would be able to help you.




regards,


mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243340,243353#msg-243353


From nginx-forum at nginx.us Wed Oct 2 07:32:06 2013
From: nginx-forum at nginx.us (dossi)
Date: Wed, 02 Oct 2013 03:32:06 -0400
Subject: ssl on different servers
In-Reply-To: <d7de1adf45d8834935acf7d8ab509c22.NginxMailingListEnglish@forum.nginx.org>
References: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>
<dcf196d913a5a1013030f348edbeb056.NginxMailingListEnglish@forum.nginx.org>
<e0cd81167265b37f19bbc69f1718d095.NginxMailingListEnglish@forum.nginx.org>
<d7de1adf45d8834935acf7d8ab509c22.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <78eb85b12b3e68f8c3be935297ce7d0c.NginxMailingListEnglish@forum.nginx.org>

What's not working, I suppose, is that the client browser goes to
https://sub.domain.com (on IP x.x.x.x) and is then forwarded to IP y.y.y.y.


Is there any link that explains how to configure the two nginx instances
(one on x.x.x.x and one on y.y.y.y) so that the https traffic "routes" from
x.x.x.x to y.y.y.y?

I found these two posts:
http://stackoverflow.com/questions/2958650/how-to-add-ssl-to-subdomain-that-points-to-a-different-server
http://danconnor.com/post/4f65ea41daac4ed031000004/https_ssl_proxying_nginx_to_nginx


but the first does not show the configuration, and the second is similar to
my issue but involves a load balancer, which is not my case.
My problem is: https://sub.domain.com -> x.x.x.x -> y.y.y.y

Dossi

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243340,243354#msg-243354


From ar at xlrs.de Wed Oct 2 08:05:01 2013
From: ar at xlrs.de (Axel)
Date: Wed, 02 Oct 2013 10:05:01 +0200
Subject: ssl on different servers
In-Reply-To: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>
References: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <6cd00f229160cb2588472975f3dd86c7@xlrs.de>

Hi,

it sounds as if you want to proxy your ssl request to another server and
terminate it there?! You cannot do this.

You need to establish an ssl connection first before you can use http/s.

regards,
Axel
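
In other words, each hop has to terminate TLS itself; the front server can
then open a new encrypted connection toward the backend. A hedged sketch of
what the x.x.x.x side could look like (using the poster's own placeholder
hostnames and certificate paths):

```nginx
# On x.x.x.x: terminate TLS here, then re-encrypt toward y.y.y.y.
server {
    listen 443 ssl;
    server_name sub.domain.com;
    ssl_certificate     ssl.crt;
    ssl_certificate_key my.key;

    location / {
        # y.y.y.y must also "listen 443 ssl" with a certificate valid
        # for sub.domain.com (the wildcard certificate covers this).
        proxy_pass https://y.y.y.y;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```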


Am 02.10.2013 03:18, schrieb dossi:
> Hi,
> My domain.com is on ip: x.x.x.x
> where I have a configuration like:
>
> server {
> server_name sub.domain.com;
> location / {
>
> proxy_pass http://y.y.y.y;
>
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header Host $host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> }
>
> On ip y.y.y.y my configuration is:
>
> server {
> server_name sub.domain.com;
> location / {
>
> proxy_pass http://localhost:8080;
>
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header Host $host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> }
>
> That's fine!
>
> But...
>
> I'm trying to add ssl support, I bought a wildcard certificate but
> unfortunately I'm struggling with the configuration.
>
> I changed the config on x.x.x.x:
>
> server {
> server_name sub.domain.com;
> location / {
> proxy_pass http://y.y.y.y;
>
> proxy_set_header X-Forwarded-Proto https;
>
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header Host $host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> }
> listen 443;
> ssl_certificate ssl.crt;
> ssl_certificate_key my.key;
> }
>
> and I changed the config on y.y.y.y:
>
> server {
> server_name sub.domain.com;
> location / {
> proxy_pass http://localhost:8080;
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header Host $host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> }
>
> listen 443 ssl;
> ssl_certificate ssl.crt;
> ssl_certificate_key my.key;
> }
>
> I also tried other configuration, but I cannot make it working.
>
> Can you help me, please?
>
> Thanks
>
> Dossi
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,243340,243340#msg-243340
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


From list_nginx at bluerosetech.com Wed Oct 2 08:25:16 2013
From: list_nginx at bluerosetech.com (Darren Pilgrim)
Date: Wed, 02 Oct 2013 01:25:16 -0700
Subject: Getting forward secrecy enabled
In-Reply-To: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <524BD86C.8060805@bluerosetech.com>

I have:

ssl_ciphers HIGH:!SSLv2:!MEDIUM:!LOW:!EXP:!RC4:!DSS:!aNULL:@STRENGTH;
ssl_prefer_server_ciphers on;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;

Yields:

https://www.ssllabs.com/ssltest/analyze.html?d=rush.bluerosetech.com

nginx 1.4.2 compiled against OpenSSL 1.0.1e 11 Feb 2013


From mdounin at mdounin.ru Wed Oct 2 10:55:01 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 2 Oct 2013 14:55:01 +0400
Subject: Solving a 500 error
In-Reply-To: <524AFDD0.6070207@ntlworld.com>
References: <524AFDD0.6070207@ntlworld.com>
Message-ID: <20131002105501.GW62063@mdounin.ru>

Hello!

On Tue, Oct 01, 2013 at 05:52:32PM +0100, Ian Hobson wrote:

> Hi All,
>
> I have an nginx install with the configuration below.
>
> The server is a linux VM running under Virtual Box on my windows
> machine. The website / directory is made available as a sharename
> using Samba, which I connect to from Windows, so I can edit the
> files. I edit in windows, using familiar tools and then test using a
> browser, usually without restarting nginx or init-fastcgi.
>
> This works fine for php files. When I edit one of two javascript
> files, the next request for a javascript file fails with a 500 error
> - even if the request is not for the changed file.
>
> The version of nginx I am running is 1.2.6 compiled with the long
> polling module included.
>
> Does anyone know what is happening?

Key points are "Samba" and "Linux". When you edit files via Samba
share, it tries to lock files with fcntl(F_GETLEASE) if running on
Linux. This in turn results in errors while opening such "locked"
files. For more information see Samba docs here:

http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/locking.html#id2616903

It's not clear why the problem happens with files you didn't
change, but I would suppose they are at least opened via Samba.

Anyway, disabling appropriate locking in Samba configuration will
likely help. See the link above for details.
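
As a hedged sketch (the option names are from the smb.conf manual; the
share name and path here are made up), the relevant share section might
look like:

```ini
[website]
   path = /var/www
   ; turn off opportunistic locking for this share so that editing a
   ; file over SMB does not leave it "locked" against nginx's open()
   oplocks = no
   level2 oplocks = no
   ; "kernel oplocks" controls the fcntl lease integration on Linux
   kernel oplocks = no
```

Restart or reload smbd after the change and re-test the 500s.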

--
Maxim Dounin
http://nginx.org/en/donation.html


From vahan at helix.am Wed Oct 2 11:08:37 2013
From: vahan at helix.am (Vahan Yerkanian)
Date: Wed, 2 Oct 2013 15:08:37 +0400
Subject: Getting forward secrecy enabled
In-Reply-To: <ecde6720f7f14a426c52556d50828a3b.NginxMailingListEnglish@forum.nginx.org>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
<c1a22626ed3b8bcfbcb959bb8800af1c.NginxMailingListEnglish@forum.nginx.org>
<7418a8170638fdf434d4d791ab5efa79.NginxMailingListEnglish@forum.nginx.org>
<ecde6720f7f14a426c52556d50828a3b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <BED6DCA0-9AD4-4E16-A5A4-7EA54776D9A2@helix.am>

On Oct 2, 2013, at 9:57 AM, justin <nginx-forum at nginx.us> wrote:

> I don't compile nginx, I get it from the official CentOS repo:
>
> [nginx]
> name=nginx repo
> baseurl=http://nginx.org/packages/centos/6/$basearch/
> gpgcheck=0
> enabled=1
>

That's your problem: that version doesn't support ECDHE.

You'll need to compile your own version; there are lots of guides on the net. One of the first results on Google:

https://xkyle.com/getting-started-with-spdy-on-nginx/

Best regards,
Vahan Yerkanian
Tech. Coordinator
Helix Consulting LLC
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131002/540ca320/attachment.html>

From nginx-forum at nginx.us Wed Oct 2 13:30:29 2013
From: nginx-forum at nginx.us (itpp2012)
Date: Wed, 02 Oct 2013 09:30:29 -0400
Subject: Solving a 500 error
In-Reply-To: <20131002105501.GW62063@mdounin.ru>
References: <20131002105501.GW62063@mdounin.ru>
Message-ID: <e057c29389f4460b0ef5e683692ec98a.NginxMailingListEnglish@forum.nginx.org>

Sounds familiar. Edit files elsewhere and copy them over to nginx's
destination; any basic editor has macro support, so just make a macro that
does a copy after save.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243333,243371#msg-243371


From nginx-forum at nginx.us Wed Oct 2 14:04:23 2013
From: nginx-forum at nginx.us (amehzenin)
Date: Wed, 02 Oct 2013 10:04:23 -0400
Subject: Does Nginx have separate queuing mechanism for requests?
Message-ID: <5b8f6cee65746633236bb79b5467a9da.NginxMailingListEnglish@forum.nginx.org>

Hello.

Consider the following situation: you are deploying an application that can
serve 1 req./sec. What would happen if I sent 10 requests in 1 second? I
wrote a simple app to test that:
https://github.com/amezhenin/nginx_slow_upstream .
This test shows that your requests will be served _in_exact_same_order_
they were sent.

For now, it looks like Nginx has some kind of queue for requests, but my
colleague (an administrator) said that there are no queues in Nginx. So I
asked another question about epoll here:
http://stackoverflow.com/questions/19114001/does-epoll-preserve-the-order-in-which-fds-was-registered
From that discussion I figured that epoll does preserve the order of
requests.

I have two questions:
1) Are there any mistakes in the reasoning/code above?
2) Does Nginx have some sort of queue for requests on top of epoll, or does
it use pure epoll functionality?

Thank you, and sorry for my English :)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243372,243372#msg-243372


From scott_ribe at elevated-dev.com Wed Oct 2 14:28:20 2013
From: scott_ribe at elevated-dev.com (Scott Ribe)
Date: Wed, 2 Oct 2013 08:28:20 -0600
Subject: We're sorry, but something went wrong
Message-ID: <3A41B882-F00B-4477-B6EE-794BFC82E5ED@elevated-dev.com>

So I'm in the early stages of rolling out a system to production, and a few app errors are cropping up that didn't get caught in testing, and it occurs to me: it would be nice if the default "we're sorry, but something went wrong" error page could include a timestamp.

--
Scott Ribe
scott_ribe at elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





From jabberuser at gmail.com Wed Oct 2 15:07:01 2013
From: jabberuser at gmail.com (Piotr Karbowski)
Date: Wed, 02 Oct 2013 17:07:01 +0200
Subject: gzip does not work if Content-Type contain semicolon?
Message-ID: <524C3695.4040106@gmail.com>

Hi,

I have some serious trouble getting gzip to compress text/html and
text/plain content.

I've put nginx as a reverse proxy in front of Jenkins. Jenkins returns
headers like "Content-Type: text/plain;charset=UTF-8" and "Content-Type:
text/html;charset=UTF-8". Even with all the fancy stuff enabled (gzip for
the 1.0 proto, gzip_proxied any, and so on), no output is compressed.

Tested version 1.5.4.

-- Piotr.


From laursen at oxygen.net Wed Oct 2 15:08:00 2013
From: laursen at oxygen.net (Lasse Laursen)
Date: Wed, 2 Oct 2013 17:08:00 +0200
Subject: We're sorry, but something went wrong
In-Reply-To: <3A41B882-F00B-4477-B6EE-794BFC82E5ED@elevated-dev.com>
References: <3A41B882-F00B-4477-B6EE-794BFC82E5ED@elevated-dev.com>
Message-ID: <D1F6CE3B-1449-47CF-9011-516F3794C513@oxygen.net>

That sounds like a RoR app acting up and not the nginx server itself.

L.


On Oct 2, 2013, at 4:28 PM, Scott Ribe <scott_ribe at elevated-dev.com> wrote:

> So I'm in the early stages of rolling out a system to production, and a few app errors are cropping up that didn't get caught in testing, and it occurs to me: it would be nice if the default "we're sorry, but something went wrong" error page could include a timestamp.
>
> --
> Scott Ribe
> scott_ribe at elevated-dev.com
> http://www.elevated-dev.com/
> (303) 722-0567 voice
>
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 841 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131002/67156f81/attachment.bin>

From mdounin at mdounin.ru Wed Oct 2 15:13:00 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 2 Oct 2013 19:13:00 +0400
Subject: gzip does not work if Content-Type contain semicolon?
In-Reply-To: <524C3695.4040106@gmail.com>
References: <524C3695.4040106@gmail.com>
Message-ID: <20131002151259.GA62063@mdounin.ru>

Hello!

On Wed, Oct 02, 2013 at 05:07:01PM +0200, Piotr Karbowski wrote:

> Hi,
>
> I have some serious troubles getting gzip to compress text/html and
> text/plain content.
>
> I've put a nginx as revproxy to Jenkins. Jenkins return headers like
> "Content-Type: text/plain;charset=UTF-8" and "Content-Type:
> text/html;charset=UTF-8". Even with all fancy stuff like enabling
> gzip for 1.0 proto, gzip_proxied any and so on, no output is
> compressed.

Do you have gzip_types properly set?

http://nginx.org/r/gzip_types
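
(Note that nginx matches the bare media type before any ';' parameters, so
"text/html;charset=UTF-8" is matched by "gzip_types text/html". A small
Python sketch of that matching rule, as an illustration only, not nginx's
actual code:)

```python
def media_type(content_type: str) -> str:
    """Return the bare media type, dropping any ';charset=...' parameters."""
    return content_type.split(";", 1)[0].strip().lower()

# Hypothetical set standing in for a gzip_types directive.
gzip_types = {"text/html", "text/plain", "application/json"}

# A Content-Type with parameters still matches its bare media type.
assert media_type("text/plain;charset=UTF-8") in gzip_types
assert media_type("text/html; charset=iso-8859-1") in gzip_types
assert media_type("image/png") not in gzip_types
```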

--
Maxim Dounin
http://nginx.org/en/donation.html


From scott_ribe at elevated-dev.com Wed Oct 2 15:20:08 2013
From: scott_ribe at elevated-dev.com (Scott Ribe)
Date: Wed, 2 Oct 2013 09:20:08 -0600
Subject: We're sorry, but something went wrong
In-Reply-To: <D1F6CE3B-1449-47CF-9011-516F3794C513@oxygen.net>
References: <3A41B882-F00B-4477-B6EE-794BFC82E5ED@elevated-dev.com>
<D1F6CE3B-1449-47CF-9011-516F3794C513@oxygen.net>
Message-ID: <FAB4EE51-5C73-4435-84C2-55EF127BB27B@elevated-dev.com>

On Oct 2, 2013, at 9:08 AM, Lasse Laursen <laursen at oxygen.net> wrote:

> That sounds like a RoR app acting up and not the nginx server itself.

Ah... Yes it is an error in RoR, but I had mistakenly thought that RoR was failing to return a result to nginx, and nginx was serving up that page. I see that in fact RoR's exception handling is returning that page to nginx.

So I know exactly what to do ;-)

--
Scott Ribe
scott_ribe at elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





From jabberuser at gmail.com Wed Oct 2 15:27:15 2013
From: jabberuser at gmail.com (Piotr Karbowski)
Date: Wed, 02 Oct 2013 17:27:15 +0200
Subject: gzip does not work if Content-Type contain semicolon?
In-Reply-To: <20131002151259.GA62063@mdounin.ru>
References: <524C3695.4040106@gmail.com> <20131002151259.GA62063@mdounin.ru>
Message-ID: <524C3B53.5030103@gmail.com>

Hi,

On 10/02/2013 05:13 PM, Maxim Dounin wrote:
> Do you have gzip_types properly set?
>
> http://nginx.org/r/gzip_types
>

I believe I do. I tried quite a lot of combinations. Also, text/html is
on by default and cannot be removed, if I understand it correctly, so it
should work just as-is. I even tried adding the full string
"text/html;charset=UTF-8"; still no luck.

-- Piotr.


From mdounin at mdounin.ru Wed Oct 2 15:38:41 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 2 Oct 2013 19:38:41 +0400
Subject: gzip does not work if Content-Type contain semicolon?
In-Reply-To: <524C3B53.5030103@gmail.com>
References: <524C3695.4040106@gmail.com> <20131002151259.GA62063@mdounin.ru>
<524C3B53.5030103@gmail.com>
Message-ID: <20131002153841.GC62063@mdounin.ru>

Hello!

On Wed, Oct 02, 2013 at 05:27:15PM +0200, Piotr Karbowski wrote:

> Hi,
>
> On 10/02/2013 05:13 PM, Maxim Dounin wrote:
> >Do you have gzip_types properly set?
> >
> >http://nginx.org/r/gzip_types
> >
>
> I believe I do. I tried quite a lot of combinations, also, text/html
> is default on and cannot be removed if I understand it corrently so
> it should work just as-is. I tried adding even full string like
> "text/html;charset=UTF-8" still no luck.

Could you please show the configuration you use, and
an example request which shows the response isn't gzipped?

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Wed Oct 2 15:41:17 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 2 Oct 2013 19:41:17 +0400
Subject: Does Nginx have separate queuing mechanism for requests?
In-Reply-To: <5b8f6cee65746633236bb79b5467a9da.NginxMailingListEnglish@forum.nginx.org>
References: <5b8f6cee65746633236bb79b5467a9da.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131002154117.GD62063@mdounin.ru>

Hello!

On Wed, Oct 02, 2013 at 10:04:23AM -0400, amehzenin wrote:

> Hello.
>
> Consider the following situation: you are deploying application that can
> serve 1 req./sec. What would happen if I send 10 request in 1 second? I
> wrote simple app to test that:
> https://github.com/amezhenin/nginx_slow_upstream .
> This test shows that your requests will be served _in_exact_same_order_
> they were sent.
>
> For now, this looks like Nginx have some kind of queue for requests, but my
> colleague(administrator) sayd that there is no any queues in Nginx. So I
> wrote another question about epoll here:
> http://stackoverflow.com/questions/19114001/does-epoll-preserve-the-order-in-which-fds-was-registered
> . From that discussion I figured that epoll does preserves the order of
> requests.
>
> I have two questions:
> 1) Is there any mistakes in reasoning/code above?
> 2) Does Nginx have some sort of queue for requests on top of epoll? Or Nginx
> uses pure epoll functionality?

There is no queue in nginx, but there is a queue in the listen socket
of your backend app. It's called the "listen queue" or "backlog", and
likely it's what preserves the request order for you.
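
The effect is easy to see with plain sockets: completed connections pile up
in the kernel's accept backlog and are handed to accept() in arrival order.
A minimal sketch (the FIFO behaviour shown here is what Linux does in
practice, not a formal POSIX guarantee):

```python
import socket

# A "slow" server: it listens but does not accept right away,
# so completed connections queue up in the kernel's backlog.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(10)                      # backlog of up to 10 pending connections
port = server.getsockname()[1]

clients = []
for i in range(5):                     # 5 clients connect before any accept()
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(str(i).encode())         # each client identifies itself
    clients.append(c)

order = []
for _ in range(5):                     # now drain the backlog
    conn, _addr = server.accept()
    order.append(int(conn.recv(16)))
    conn.close()

for c in clients:
    c.close()
server.close()

print(order)                           # arrival order is preserved on Linux
```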

--
Maxim Dounin
http://nginx.org/en/donation.html


From jabberuser at gmail.com Wed Oct 2 15:56:53 2013
From: jabberuser at gmail.com (Piotr Karbowski)
Date: Wed, 02 Oct 2013 17:56:53 +0200
Subject: gzip does not work if Content-Type contain semicolon?
In-Reply-To: <20131002153841.GC62063@mdounin.ru>
References: <524C3695.4040106@gmail.com> <20131002151259.GA62063@mdounin.ru>
<524C3B53.5030103@gmail.com> <20131002153841.GC62063@mdounin.ru>
Message-ID: <524C4245.2030909@gmail.com>

Maxim, thanks for your time and willing to help.

False bug report.

After doing some tests I found out that if I check localhost it does
gzip, but if I go through the full ip/domain it does not - I had the
$http_proxy variable set, and the proxy server was not serving me gzipped
content. Sorry about the noise.

-- Piotr.


From nginx-forum at nginx.us Wed Oct 2 16:20:15 2013
From: nginx-forum at nginx.us (itpp2012)
Date: Wed, 02 Oct 2013 12:20:15 -0400
Subject: [ANN] Windows nginx 1.5.6.4 Butterfly
Message-ID: <571ce0008a8186b718ac21e91cc8ed4e.NginxMailingListEnglish@forum.nginx.org>

12:38 2-10-2013: nginx 1.5.6.4 Butterfly

The Nginx 'Butterfly' release brings the stable and unleashed power of
Nginx to Windows: Lua, the streaming feature, reverse DNS, SPDY, and easy
c250k in a non-blocking, event-driven build which runs on Windows XP SP3
or higher, both 32 and 64 bit.

Based on nginx 1.5.6 (release) with;
+ RDNS (https://github.com/flant/nginx-http-rdns)
+ Array-var-nginx-module
(https://github.com/agentzh/array-var-nginx-module)
+ ngx_devel_kit v0.2.19
+ lua-nginx-module v0.9.0
* Additional specifications are the same as for the 13:46 25-9-2013 release: nginx 1.5.6.3 Alice

See also http://forum.nginx.org/read.php?2,242426

Builds can be found here:
http://nginx-win.ecsds.eu/

Coming next: - Porting Naxsi (WAF) v0.52
(https://github.com/nbs-system/naxsi)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243383,243383#msg-243383


From emailgrant at gmail.com Wed Oct 2 16:22:17 2013
From: emailgrant at gmail.com (Grant)
Date: Wed, 2 Oct 2013 09:22:17 -0700
Subject: root works, alias doesn't
In-Reply-To: <201310011809.31061.vbart@nginx.com>
References: <CAN0CFw0x_EgdVe3u7KxQ21KxVzbXshdeZpLXP3hJAMSiuoyMHg@mail.gmail.com>
<CALqce=2RXJLTMedOb6d7koQkKqmrq=3zqDR7U9ZR98WOPUJanQ@mail.gmail.com>
<CAN0CFw2SjYHgKzabadtLr4AEHfq0voK6XwN=qyhcw611u1nr1Q@mail.gmail.com>
<201310011809.31061.vbart@nginx.com>
Message-ID: <CAN0CFw17_NMmVU2ucrsFFZmbOWZG0AJf8AC6tD7vVVuLf6ZXvw@mail.gmail.com>

>> >> It works if I specify the full path for the alias. What is the
>> >> difference between alias and root? I have root specified outside of
>> >> the server block and I thought I could use alias to avoid specifying
>> >> the full path again.
>> >
>> > http://nginx.org/en/docs/http/ngx_http_core_module.html#alias
>> > http://nginx.org/en/docs/http/ngx_http_core_module.html#root
>> >
>> > The docs says that the requested filepath is constructed by concatenating
>> > root + URI
>> > That's for root.
>> >
>> > The docs also say that alias replaces the content directory (so it must
>> > be absolutely defined through alias).
>> > By default, the last part of the URI (after the last slash, so the file
>> > name) is searched into the directory specified by alias.
>> > alias doesn't construct itself based on root, it's totally independent,
>> > so by using that, you'll need to specify the directory absolutely, which
>> > is precisely what you wish to avoid.
>>
>> I see. It seems like root and alias function identically within "location /".
>>
>
> Not exactly. For example, request "/favicon.ico":
>
> location / {
> alias /data/www;
> }
>
> will result in opening "/data/wwwfavicon.ico", while:
>
> location / {
> root /data/www;
> }
>
> will return "/data/www/favicon.ico".
>
> But,
>
> location / {
> alias /data/www/;
> }
>
> will work the same way as
>
> location / {
> root /data/www;
> }
>
> or
>
> location / {
> root /data/www/;
> }

That's true. Is alias or root preferred in this situation for performance?

- Grant


From vbart at nginx.com Wed Oct 2 18:22:46 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Wed, 2 Oct 2013 22:22:46 +0400
Subject: root works, alias doesn't
In-Reply-To: <CAN0CFw17_NMmVU2ucrsFFZmbOWZG0AJf8AC6tD7vVVuLf6ZXvw@mail.gmail.com>
References: <CAN0CFw0x_EgdVe3u7KxQ21KxVzbXshdeZpLXP3hJAMSiuoyMHg@mail.gmail.com>
<201310011809.31061.vbart@nginx.com>
<CAN0CFw17_NMmVU2ucrsFFZmbOWZG0AJf8AC6tD7vVVuLf6ZXvw@mail.gmail.com>
Message-ID: <201310022222.46953.vbart@nginx.com>

On Wednesday 02 October 2013 20:22:17 Grant wrote:
[..]
>
> That's true. Is alias or root preferred in this situation for performance?
>

The "root" directive is better from any point of view. It is less complicated
and bug-free ("alias" has bugs, see https://trac.nginx.org/nginx/ticket/97).

You should always prefer "root" over "alias" when it is possible.

wbr, Valentin V. Bartenev


From vbart at nginx.com Wed Oct 2 18:44:09 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Wed, 2 Oct 2013 22:44:09 +0400
Subject: ssl on different servers
In-Reply-To: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>
References: <1e20a9590e1749fb003b5e5b72e7fe9b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <201310022244.09764.vbart@nginx.com>

On Wednesday 02 October 2013 05:18:22 you wrote:
> Hi,
> My domain.com is on ip: x.x.x.x
> where I have a configuration like:
>
> server {
> server_name sub.domain.com;
> location / {
>
> proxy_pass http://y.y.y.y;
>
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header Host $host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> }
>
> On ip y.y.y.y my configuration is:
>
> server {
> server_name sub.domain.com;
> location / {
>
> proxy_pass http://localhost:8080;
>
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header Host $host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> }
>
> That's fine!
>
> But...
>
> I'm trying to add ssl support, I bought a wildcard certificate but
> unfortunately I'm struggling with the configuration.
>
> I changed the config on x.x.x.x:
>
> server {
> server_name sub.domain.com;
> location / {
> proxy_pass http://y.y.y.y;
[..]

You have changed y.y.y.y to use HTTPS, but still trying to pass HTTP.

wbr, Valentin V. Bartenev


From emailgrant at gmail.com Wed Oct 2 19:02:30 2013
From: emailgrant at gmail.com (Grant)
Date: Wed, 2 Oct 2013 19:02:30 +0000
Subject: root works, alias doesn't
In-Reply-To: <201310022222.46953.vbart@nginx.com>
References: <CAN0CFw0x_EgdVe3u7KxQ21KxVzbXshdeZpLXP3hJAMSiuoyMHg@mail.gmail.com>
<201310011809.31061.vbart@nginx.com>
<CAN0CFw17_NMmVU2ucrsFFZmbOWZG0AJf8AC6tD7vVVuLf6ZXvw@mail.gmail.com>
<201310022222.46953.vbart@nginx.com>
Message-ID: <CAN0CFw1fa7iEOXSkXjQvZDDwa8yzH_oKAcAMb5+DGA6DG188dA@mail.gmail.com>

>> That's true. Is alias or root preferred in this situation for performance?
>
> The "root" directive is better from any point of view. It is less complicated
> and bugfree ("alias" has bugs, see https://trac.nginx.org/nginx/ticket/97 ).
>
> You should always prefer "root" over "alias" when it is possible.

Many thanks Valentin.

- Grant


From nginx-forum at nginx.us Wed Oct 2 21:01:40 2013
From: nginx-forum at nginx.us (bryndole)
Date: Wed, 02 Oct 2013 17:01:40 -0400
Subject: High response time at high concurrent connections
In-Reply-To: <e13f6b9528eb5e99585ac92c64625f32.NginxMailingListEnglish@forum.nginx.org>
References: <e13f6b9528eb5e99585ac92c64625f32.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <7e8587cb7d7339e1b315a277ed96d48f.NginxMailingListEnglish@forum.nginx.org>

At that level of concurrency you are only testing the throughput of ab and
not nginx. Apache bench is a single process and is too slow to test anything
other than a single Apache instance.

In summary, ab is a piece of crap.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243246,243389#msg-243389


From nginx-forum at nginx.us Wed Oct 2 21:13:04 2013
From: nginx-forum at nginx.us (laltin)
Date: Wed, 02 Oct 2013 17:13:04 -0400
Subject: High response time at high concurrent connections
In-Reply-To: <7e8587cb7d7339e1b315a277ed96d48f.NginxMailingListEnglish@forum.nginx.org>
References: <e13f6b9528eb5e99585ac92c64625f32.NginxMailingListEnglish@forum.nginx.org>
<7e8587cb7d7339e1b315a277ed96d48f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0f969e72719f1681ace49afecc78b4f8.NginxMailingListEnglish@forum.nginx.org>

While benchmarking with ab, I also test the site with a browser, and I
experience high response times there. What do you recommend for testing?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243246,243390#msg-243390


From nginx-forum at nginx.us Wed Oct 2 21:43:04 2013
From: nginx-forum at nginx.us (bryndole)
Date: Wed, 02 Oct 2013 17:43:04 -0400
Subject: High response time at high concurrent connections
In-Reply-To: <0f969e72719f1681ace49afecc78b4f8.NginxMailingListEnglish@forum.nginx.org>
References: <e13f6b9528eb5e99585ac92c64625f32.NginxMailingListEnglish@forum.nginx.org>
<7e8587cb7d7339e1b315a277ed96d48f.NginxMailingListEnglish@forum.nginx.org>
<0f969e72719f1681ace49afecc78b4f8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <d6a5e15c6146cc45d1feab88367585c3.NginxMailingListEnglish@forum.nginx.org>

It sounds like ab is already succeeding in generating a load high enough to
impact performance. You need to figure out where the bottlenecks are.

My message is intended as a helpful hint that you should not trust the stats
from ab. You need to be measuring perf on the nginx and backend servers.
Start by looking at the elapsed time in nginx and the response times of the
backends.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243246,243391#msg-243391


From oscaretu at gmail.com Wed Oct 2 22:24:48 2013
From: oscaretu at gmail.com (Oscar Fernandez Sierra)
Date: Thu, 3 Oct 2013 00:24:48 +0200
Subject: High response time at high concurrent connections
In-Reply-To: <0f969e72719f1681ace49afecc78b4f8.NginxMailingListEnglish@forum.nginx.org>
References: <e13f6b9528eb5e99585ac92c64625f32.NginxMailingListEnglish@forum.nginx.org>
<7e8587cb7d7339e1b315a277ed96d48f.NginxMailingListEnglish@forum.nginx.org>
<0f969e72719f1681ace49afecc78b4f8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAFztJwQHM0HnaW68t8g8vWWbFj6pkgWbnPp+Cwbuyie+idnXNg@mail.gmail.com>

Hello.

Perhaps this page can be useful for you to choose tools for testing:

http://www.softwareqatest.com/qatweb1.html#LOAD

Greetings,

Oscar


On Wed, Oct 2, 2013 at 11:13 PM, laltin <nginx-forum at nginx.us> wrote:

> But while benchmarking with ab I test the site with a browser and I
> experience high response times. What do you recommend for testing?
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,243246,243390#msg-243390
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



--
Oscar Fernandez Sierra
oscaretu at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131003/44c3ded4/attachment.html>

From nginx-forum at nginx.us Thu Oct 3 06:29:52 2013
From: nginx-forum at nginx.us (justin)
Date: Thu, 03 Oct 2013 02:29:52 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <BED6DCA0-9AD4-4E16-A5A4-7EA54776D9A2@helix.am>
References: <BED6DCA0-9AD4-4E16-A5A4-7EA54776D9A2@helix.am>
Message-ID: <1e2a5d2d9510334650c83f4be4075c0d.NginxMailingListEnglish@forum.nginx.org>

Yeah, is there any way to get the official yum repo to support ECDHE when
they compile? Seems like a basic thing they should do already.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243398#msg-243398


From nginx-forum at nginx.us Thu Oct 3 07:37:40 2013
From: nginx-forum at nginx.us (orestespap)
Date: Thu, 03 Oct 2013 03:37:40 -0400
Subject: Wordpress log in 404 Not Found nginx/1.4.2 issue
Message-ID: <f4d9129d28764c76d07541e6957e8920.NginxMailingListEnglish@forum.nginx.org>

Hey guys,

When I try to log in to the site I work for, I run into an issue. I type my
username and password correctly and then it redirects to a 404 Not Found
nginx/1.4.2 error. When visiting the home page of the site it shows that I
am logged in, but when I click "visit dashboard" it redirects to the 404 again.

Any help? Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243399,243399#msg-243399


From sb at nginx.com Thu Oct 3 12:36:41 2013
From: sb at nginx.com (Sergey Budnevitch)
Date: Thu, 3 Oct 2013 16:36:41 +0400
Subject: Getting forward secrecy enabled
In-Reply-To: <BED6DCA0-9AD4-4E16-A5A4-7EA54776D9A2@helix.am>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
<c1a22626ed3b8bcfbcb959bb8800af1c.NginxMailingListEnglish@forum.nginx.org>
<7418a8170638fdf434d4d791ab5efa79.NginxMailingListEnglish@forum.nginx.org>
<ecde6720f7f14a426c52556d50828a3b.NginxMailingListEnglish@forum.nginx.org>
<BED6DCA0-9AD4-4E16-A5A4-7EA54776D9A2@helix.am>
Message-ID: <AF7EC98A-EF16-4B79-ADE6-6CA55F05BB26@nginx.com>


On 2 Oct2013, at 15:08 , Vahan Yerkanian <vahan at helix.am> wrote:

> On Oct 2, 2013, at 9:57 AM, justin <nginx-forum at nginx.us> wrote:
>
>> I don't compile nginx, I get it from the official CentOS repo:
>>
>> [nginx]
>> name=nginx repo
>> baseurl=http://nginx.org/packages/centos/6/$basearch/
>> gpgcheck=0
>> enabled=1
>>
>
> That's your problem, that version doesn't support ECDHE.

nginx itself has no cipher support; it depends on openssl.
The RHEL/CentOS version of openssl lacks elliptic curve ciphers:
they are explicitly stripped from the rpm (https://bugzilla.redhat.com/show_bug.cgi?id=319901),
so ECDHE is unavailable on RHEL/CentOS with the default openssl.
So either change/rebuild the openssl rpm, rebuild nginx with
statically linked openssl, or use another linux distribution.

You can list and check the available ciphers with:
openssl ciphers -v
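
A quick cross-check (a sketch of mine, not part of the original message):
Python's ssl module links against the system OpenSSL, so it can report
whether any ECDHE suites are actually available on the box:

```python
import ssl

# List the cipher suites exposed by the OpenSSL build that Python links
# against, and check whether any ECDHE (elliptic curve) suites are present.
ctx = ssl.create_default_context()
names = [c["name"] for c in ctx.get_ciphers()]
print(sorted(n for n in names if n.startswith("ECDHE"))[:3])
print(any(n.startswith("ECDHE") for n in names))
```

On a stock RHEL/CentOS 6 openssl this should come back empty/False, which
is exactly the problem described above.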

From sb at nginx.com Thu Oct 3 13:17:13 2013
From: sb at nginx.com (Sergey Budnevitch)
Date: Thu, 3 Oct 2013 17:17:13 +0400
Subject: Getting forward secrecy enabled
In-Reply-To: <AF7EC98A-EF16-4B79-ADE6-6CA55F05BB26@nginx.com>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
<c1a22626ed3b8bcfbcb959bb8800af1c.NginxMailingListEnglish@forum.nginx.org>
<7418a8170638fdf434d4d791ab5efa79.NginxMailingListEnglish@forum.nginx.org>
<ecde6720f7f14a426c52556d50828a3b.NginxMailingListEnglish@forum.nginx.org>
<BED6DCA0-9AD4-4E16-A5A4-7EA54776D9A2@helix.am>
<AF7EC98A-EF16-4B79-ADE6-6CA55F05BB26@nginx.com>
Message-ID: <A6E93B5F-EBF8-4B36-899F-57FB08A4121A@nginx.com>


On 3 Oct2013, at 16:36 , Sergey Budnevitch <sb at nginx.com> wrote:

>
> On 2 Oct2013, at 15:08 , Vahan Yerkanian <vahan at helix.am> wrote:
>
>> On Oct 2, 2013, at 9:57 AM, justin <nginx-forum at nginx.us> wrote:
>>
>>> I don't compile nginx, I get it from the official CentOS repo:
>>>
>>> [nginx]
>>> name=nginx repo
>>> baseurl=http://nginx.org/packages/centos/6/$basearch/
>>> gpgcheck=0
>>> enabled=1
>>>
>>
>> That's your problem, that version doesn't support ECDHE.
>
> nginx itself has no cipher support; it depends on openssl.
> The RHEL/CentOS version of openssl lacks elliptic curve ciphers:
> they are explicitly stripped from the rpm (https://bugzilla.redhat.com/show_bug.cgi?id=319901),
> so ECDHE is unavailable on RHEL/CentOS with the default openssl.
> So either change/rebuild the openssl rpm,

It is necessary to rebuild nginx too; replacing openssl alone is not sufficient.

> rebuild nginx with
> statically linked openssl, or use another linux distribution.
>
> You can list and check the available ciphers with:
> openssl ciphers -v

BTW, DHE also provides forward secrecy, but it is slow.


From gmm at csdoc.com Thu Oct 3 13:29:13 2013
From: gmm at csdoc.com (Gena Makhomed)
Date: Thu, 03 Oct 2013 16:29:13 +0300
Subject: Getting forward secrecy enabled
In-Reply-To: <AF7EC98A-EF16-4B79-ADE6-6CA55F05BB26@nginx.com>
References: <06bb6e61c8ae0921716635e24ded9f28.NginxMailingListEnglish@forum.nginx.org>
<8fcf9904280a26d7bcc1aa3b97f98fcf.NginxMailingListEnglish@forum.nginx.org>
<b98c8fc17ef7d62606f7795eb4dac079.NginxMailingListEnglish@forum.nginx.org>
<c1a22626ed3b8bcfbcb959bb8800af1c.NginxMailingListEnglish@forum.nginx.org>
<7418a8170638fdf434d4d791ab5efa79.NginxMailingListEnglish@forum.nginx.org>
<ecde6720f7f14a426c52556d50828a3b.NginxMailingListEnglish@forum.nginx.org>
<BED6DCA0-9AD4-4E16-A5A4-7EA54776D9A2@helix.am>
<AF7EC98A-EF16-4B79-ADE6-6CA55F05BB26@nginx.com>
Message-ID: <524D7129.1040106@csdoc.com>

On 03.10.2013 15:36, Sergey Budnevitch wrote:

> nginx itself has no cipher support; it depends on openssl.
> The RHEL/CentOS version of openssl lacks elliptic curve ciphers:
> they are explicitly stripped from the rpm (https://bugzilla.redhat.com/show_bug.cgi?id=319901),
> so ECDHE is unavailable on RHEL/CentOS with the default openssl.
> So either change/rebuild the openssl rpm, rebuild nginx with
> statically linked openssl, or use another linux distribution.

To rebuild nginx with statically linked openssl, the spec changes are:

========================================================

...
%define openssl_version 1.0.1e
...
Source0: http://sysoev.ru/nginx/nginx-%{version}.tar.gz
...
Source4: http://www.openssl.org/source/openssl-%{openssl_version}.tar.gz
...
%prep
%setup -q
%setup -q -b4
...
./configure \
...
--with-openssl=../openssl-%{openssl_version} \
--with-openssl-opt="no-threads no-shared no-zlib no-dso no-asm" \
...
#make %{?_smp_mflags}
make
...

========================================================

P.S.

It would be better if the nginx rpm spec contained a build option
like "--with-statically-linked-openssl", to make it easy to switch
between statically and dynamically linked openssl during an nginx
srpm rebuild, or even changed the default to always use the
latest openssl for nginx from nginx.org.

If nginx is built with the latest openssl,
getting forward secrecy enabled is easy, as described in these articles:

https://community.qualys.com/blogs/securitylabs/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy

and

https://community.qualys.com/blogs/securitylabs/2013/09/17/updated-ssltls-deployment-best-practices-deprecate-rc4

for example:

ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM
EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384
EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA
RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS +RC4 RC4";

ssl_dhparam /etc/tls/dh2048/dh2048.pem;
ssl_session_cache shared:SSL:4M;
ssl_session_timeout 120m;

ssl_stapling on;
resolver 8.8.8.8 8.8.4.4;

With such a config, the https://www.ssllabs.com/ssltest/ test
for nginx on CentOS 6 says:

"This server supports Forward Secrecy with modern browsers."

--
Best regards,
Gena


From nginx-forum at nginx.us Thu Oct 3 14:26:33 2013
From: nginx-forum at nginx.us (amehzenin)
Date: Thu, 03 Oct 2013 10:26:33 -0400
Subject: Does Nginx have separate queuing mechanism for requests?
In-Reply-To: <20131002154117.GD62063@mdounin.ru>
References: <20131002154117.GD62063@mdounin.ru>
Message-ID: <f2b6223628a7c26c30bd751e0ee747c5.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------
>
> There is no queue in nginx, but there is queue in a listen socket
> of your backend app. It's called "listen queue" or "backlog" and
> likely it's what preserves a request order for you.
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


You mean the application server has a listen socket:

#include <sys/types.h>
#include <sys/socket.h>

int listen(int sockfd, int backlog);

(http://linux.die.net/man/2/listen)

And backlog defines the length of this queue. Nginx pushes requests to the
app server as they come, and the app server takes these requests from its own
queue when it is ready to serve the next request. Am I right?
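
The mechanism can be observed from userspace. A minimal Python sketch
(mine, not from this thread; exact queue limits are OS-specific):

```python
import socket

# Create a listening socket with a small backlog and never call accept():
# the kernel completes the TCP handshake and parks finished connections
# in the listen queue until the application accepts them.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(2)                # backlog of 2
port = srv.getsockname()[1]

# Both clients connect successfully even though accept() is never called,
# because the connections simply wait in the kernel's listen queue.
clients = [socket.create_connection(("127.0.0.1", port), timeout=1)
           for _ in range(2)]
print(len(clients))  # prints 2
```

This is the queue that holds requests from nginx while the backend is busy.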

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243372,243406#msg-243406


From nginx-forum at nginx.us Thu Oct 3 14:36:13 2013
From: nginx-forum at nginx.us (pothi)
Date: Thu, 03 Oct 2013 10:36:13 -0400
Subject: Wordpress log in 404 Not Found nginx/1.4.2 issue
In-Reply-To: <f4d9129d28764c76d07541e6957e8920.NginxMailingListEnglish@forum.nginx.org>
References: <f4d9129d28764c76d07541e6957e8920.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <16daa3dc68ed5d9a69aa898f6cfe15e3.NginxMailingListEnglish@forum.nginx.org>

Please post your Nginx conf. Without it, no one can guess what could have
gone wrong in your specific case.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243399,243407#msg-243407


From mdounin at mdounin.ru Thu Oct 3 15:02:30 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 3 Oct 2013 19:02:30 +0400
Subject: Does Nginx have separate queuing mechanism for requests?
In-Reply-To: <f2b6223628a7c26c30bd751e0ee747c5.NginxMailingListEnglish@forum.nginx.org>
References: <20131002154117.GD62063@mdounin.ru>
<f2b6223628a7c26c30bd751e0ee747c5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131003150230.GH62063@mdounin.ru>

Hello!

On Thu, Oct 03, 2013 at 10:26:33AM -0400, amehzenin wrote:

> Maxim Dounin Wrote:
> -------------------------------------------------------
> >
> > There is no queue in nginx, but there is queue in a listen socket
> > of your backend app. It's called "listen queue" or "backlog" and
> > likely it's what preserves a request order for you.

[...]

> You mean the application server has a listen socket:
>
> #include <sys/types.h>
> #include <sys/socket.h>
>
> int listen(int sockfd, int backlog);
>
> (http://linux.die.net/man/2/listen)
>
> And backlog defines the length of this queue. Nginx pushes requests to the
> app server as they come, and the app server takes these requests from its own
> queue when it is ready to serve the next request. Am I right?

Yes.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Thu Oct 3 16:34:20 2013
From: nginx-forum at nginx.us (ddutra)
Date: Thu, 03 Oct 2013 12:34:20 -0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
Message-ID: <9a3636a2bd1f96b2859589dcab929c98.NginxMailingListEnglish@forum.nginx.org>

Hello guys,

First of all, thanks for nginx. It is very good and easy to setup. And it is
kind of a joy to learn about it.

Two warnings: this performance thing is addictive. Every bit you squeeze,
you want more. And English is my second language so pardon me for any
mistakes.

Anyways I am comparing nginx performance for wordpress websites in different
scenarios and something seems weird. So I am here to share with you guys and
maybe adjust my expectations.

Software

# NGINX 1.4.2-1~dotdeb.1

# PHP5-CGI 5.4.20-1~dotdeb.1

# PHP-FPM 5.4.20-1~dotdeb.1

# MYSQL Server 5.5.31+dfsg-0+wheezy1

# MYSQL Tuner 1.2.0-1

# APC opcode 3.1.13-1

This is an ec2 small instance. All tests were done using SIEGE with 40
concurrent requests for 2 minutes. All tests were done from localhost > localhost.

Scenario one - A url cached via fastcgi_cache to TMPFS (MEMORY)
SIEGE -c 40 -b -t120s
'http://www.joaodedeus.com.br/quero-visitar/abadiania-go'

Transactions: 1403 hits
Availability: 100.00 %
Elapsed time: 119.46 secs
Data transferred: 14.80 MB
Response time: 3.36 secs
Transaction rate: 11.74 trans/sec
Throughput: 0.12 MB/sec
Concurrency: 39.42
Successful transactions: 1403
Failed transactions: 0
Longest transaction: 4.43
Shortest transaction: 1.38


Scenario two - Same url cached via fastcgi_cache to disk (ec2 on-instance
storage - ephemeral)

Transactions: 1407 hits
Availability: 100.00 %
Elapsed time: 119.13 secs
Data transferred: 14.84 MB
Response time: 3.33 secs
Transaction rate: 11.81 trans/sec
Throughput: 0.12 MB/sec
Concurrency: 39.34
Successful transactions: 1407
Failed transactions: 0
Longest transaction: 4.40
Shortest transaction: 0.88


Here is where the first question pops in. I don't see a huge difference
between RAM and disk. Is that normal? I mean, there is no huge benefit to
using the RAM cache.

Scenario three - The same page, saved as .html and served by nginx

Transactions: 1799 hits
Availability: 100.00 %
Elapsed time: 120.00 secs
Data transferred: 25.33 MB
Response time: 2.65 secs
Transaction rate: 14.99 trans/sec
Throughput: 0.21 MB/sec
Concurrency: 39.66
Successful transactions: 1799
Failed transactions: 0
Longest transaction: 5.21
Shortest transaction: 1.30


Here is the main question. This is a huge difference. I mean, AFAIK serving
from cache is supposed to be as fast as serving a static .html file, right?
I mean, nginx sees that there is a cache rule for the location, sees that
there is a cached version, and serves it. Why so much difference?

The cache is working fine:
35449 -
10835 HIT
1156 MISS
1074 BYPASS
100 EXPIRED

Any help is welcome.
Best regards.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243412,243412#msg-243412


From mdounin at mdounin.ru Thu Oct 3 17:12:07 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 3 Oct 2013 21:12:07 +0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
In-Reply-To: <9a3636a2bd1f96b2859589dcab929c98.NginxMailingListEnglish@forum.nginx.org>
References: <9a3636a2bd1f96b2859589dcab929c98.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131003171207.GL62063@mdounin.ru>

Hello!

On Thu, Oct 03, 2013 at 12:34:20PM -0400, ddutra wrote:

[...]

> Scenario three - The same page, saved as .html and served by nginx
>
> Transactions: 1799 hits
> Availability: 100.00 %
> Elapsed time: 120.00 secs
> Data transferred: 25.33 MB
> Response time: 2.65 secs
> Transaction rate: 14.99 trans/sec
> Throughput: 0.21 MB/sec
> Concurrency: 39.66
> Successful transactions: 1799
> Failed transactions: 0
> Longest transaction: 5.21
> Shortest transaction: 1.30
>
>
> Here is the main question. This is a huge difference. I mean, AFAIK serving
> from cache is supposed to be as fast as serving a static .html file, right?
> I mean, nginx sees that there is a cache rule for the location, sees that
> there is a cached version, and serves it. Why so much difference?

The 15 requests per second for a static file looks utterly slow,
and first of all you may want to find out what's a limiting factor
in this case. This will likely help to answer the question "why
the difference".

From what was previously reported here - communication with EC2
via external ip address may be very slow, and using 127.0.0.1
instead used to help.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Thu Oct 3 19:00:51 2013
From: nginx-forum at nginx.us (ddutra)
Date: Thu, 03 Oct 2013 15:00:51 -0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
In-Reply-To: <20131003171207.GL62063@mdounin.ru>
References: <20131003171207.GL62063@mdounin.ru>
Message-ID: <546f35c67f4ade68b4c85ed97dffcaf0.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------

> The 15 requests per second for a static file looks utterly slow,
> and first of all you may want to find out what's a limiting factor
> in this case. This will likely help to answer the question "why
> the difference".
>
> From what was previously reported here - communication with EC2
> via external ip address may be very slow, and using 127.0.0.1
> instead used to help.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


Maxim,

Thanks for your help.

Alright, so you are saying my static html serving stats are bad; does that
mean the gap between serving static html from disk and serving the cached
version (fastcgi_cache) from tmpfs is even bigger?

Anyways, I did the same siege on a very basic (few lines) .html static file.
Got better transaction rates.

siege -c 40 -b -t120s 'http://127.0.0.1/index.html'

Here are the results

Lifting the server siege... done.


Transactions: 35768 hits
Availability: 97.65 %
Elapsed time: 119.57 secs
Data transferred: 5.42 MB
Response time: 0.13 secs
Transaction rate: 299.14 trans/sec
Throughput: 0.05 MB/sec
Concurrency: 38.02
Successful transactions: 35768
Failed transactions: 859
Longest transaction: 1.41
Shortest transaction: 0.00

Obs: the small percentage of failures is because of "socket: 1464063744
address is unavailable.: Cannot assign requested address". I think it is a
problem with the debian / siege config.
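
For what it's worth (my own back-of-the-envelope, with assumed typical Linux
defaults rather than values read off this server): "Cannot assign requested
address" during a localhost benchmark is commonly ephemeral port exhaustion,
since each short-lived connection leaves the client's local port in TIME_WAIT:

```python
# Each short-lived benchmark connection holds a client-side ephemeral port
# in TIME_WAIT (~60s on Linux), so the sustainable rate of brand-new
# connections is roughly: ephemeral_port_range / TIME_WAIT_seconds.
low, high = 32768, 60999   # assumed net.ipv4.ip_local_port_range default
time_wait = 60             # assumed TIME_WAIT interval in seconds
ports = high - low + 1     # 28232 usable ephemeral ports
print(ports // time_wait)  # prints 470 -> ~470 new connections/sec
```

Using keep-alive connections in the benchmark tool sidesteps this limit,
since ports are reused instead of being burned per request.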


Same thing, using http://server-public-ip/index.html

Lifting the server siege... done.


Transactions: 32651 hits
Availability: 100.00 %
Elapsed time: 119.75 secs
Data transferred: 4.95 MB
Response time: 0.07 secs
Transaction rate: 272.66 trans/sec
Throughput: 0.04 MB/sec
Concurrency: 19.97
Successful transactions: 32651
Failed transactions: 0
Longest transaction: 0.56
Shortest transaction: 0.00

Note that this is a very basic html file, it just has a couple of lines.

Now the same thing with a more "complex" html which is an exact copy of
http://www.joaodedeus.com.br/quero-visitar/abadiania-go.

Using 127.0.0.1/test.html

Lifting the server siege... done.


Transactions: 2182 hits
Availability: 100.00 %
Elapsed time: 119.11 secs
Data transferred: 30.56 MB
Response time: 1.08 secs
Transaction rate: 18.32 trans/sec
Throughput: 0.26 MB/sec
Concurrency: 19.87
Successful transactions: 2182
Failed transactions: 0
Longest transaction: 2.68
Shortest transaction: 0.02


Using public ip

Lifting the server siege... done.


Transactions: 1913 hits
Availability: 100.00 %
Elapsed time: 119.80 secs
Data transferred: 26.79 MB
Response time: 1.25 secs
Transaction rate: 15.97 trans/sec
Throughput: 0.22 MB/sec
Concurrency: 19.94
Successful transactions: 1913
Failed transactions: 0
Longest transaction: 4.33
Shortest transaction: 0.19


Same slow transaction rate.


Please let me know what you think. It's my first nginx experience. So far it
is performing way better than my old setup, but I would like to get the most
out of it.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243412,243414#msg-243414


From iptablez at yahoo.com Fri Oct 4 11:04:49 2013
From: iptablez at yahoo.com (Indo Php)
Date: Fri, 4 Oct 2013 04:04:49 -0700 (PDT)
Subject: ngx_cache_purge not found
Message-ID: <1380884689.59683.YahooMailNeo@web142302.mail.bf1.yahoo.com>

Hi there,

I tried to use ngx_cache_purge with the configuration below:

        location ~ /purge(/.*) {
                allow   127.0.0.1;
                deny    all;
                proxy_cache_purge  one backend$request_uri;
        }

        location ~* ^.+\.(css|js)$ {
                proxy_pass              http://backend;
                proxy_cache             two;
                proxy_cache_key         backend$request_uri;
                proxy_cache_valid       200  1h;
                proxy_cache_valid       404  1m;
        }

Then when I tried to open the url http://myurl/purge/style.css in a browser,
it says Page not Found (404).

Is there something I've missed?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131004/7f2408bb/attachment.html>

From richard at kearsley.me Fri Oct 4 11:20:37 2013
From: richard at kearsley.me (Richard Kearsley)
Date: Fri, 04 Oct 2013 12:20:37 +0100
Subject: ngx_cache_purge not found
In-Reply-To: <1380884689.59683.YahooMailNeo@web142302.mail.bf1.yahoo.com>
References: <1380884689.59683.YahooMailNeo@web142302.mail.bf1.yahoo.com>
Message-ID: <524EA485.6030802@kearsley.me>

On 04/10/13 12:04, Indo Php wrote:
> allow 127.0.0.1;
> deny all;

the url will only work if requested from the server itself...
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131004/686ebb9a/attachment.html>

From iptablez at yahoo.com Fri Oct 4 11:31:03 2013
From: iptablez at yahoo.com (Indo Php)
Date: Fri, 4 Oct 2013 04:31:03 -0700 (PDT)
Subject: ngx_cache_purge not found
In-Reply-To: <524EA485.6030802@kearsley.me>
References: <1380884689.59683.YahooMailNeo@web142302.mail.bf1.yahoo.com>
<524EA485.6030802@kearsley.me>
Message-ID: <1380886263.9376.YahooMailNeo@web142301.mail.bf1.yahoo.com>

I tried to remove it already.



________________________________
From: Richard Kearsley <richard at kearsley.me>
To: nginx at nginx.org
Sent: Friday, October 4, 2013 6:20 PM
Subject: Re: ngx_cache_purge not found



On 04/10/13 12:04, Indo Php wrote:

>                allow   127.0.0.1;
>                deny    all;
the url will only work if requested from the server itself...

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131004/48cb891e/attachment.html>

From mdounin at mdounin.ru Fri Oct 4 12:05:09 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 4 Oct 2013 16:05:09 +0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
In-Reply-To: <546f35c67f4ade68b4c85ed97dffcaf0.NginxMailingListEnglish@forum.nginx.org>
References: <20131003171207.GL62063@mdounin.ru>
<546f35c67f4ade68b4c85ed97dffcaf0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131004120509.GM62063@mdounin.ru>

Hello!

On Thu, Oct 03, 2013 at 03:00:51PM -0400, ddutra wrote:

> Maxim Dounin Wrote:
> -------------------------------------------------------
>
> > The 15 requests per second for a static file looks utterly slow,
> > and first of all you may want to find out what's a limiting factor
> > in this case. This will likely help to answer the question "why
> > the difference".
> >
> > From what was previously reported here - communication with EC2
> > via external ip address may be very slow, and using 127.0.0.1
> > instead used to help.
>
> Alright, so you are saying my static html serving stats are bad, that means
> the gap between serving static html from disk and serving cached version
> (fastcgi_cache) from tmpfs is even bigger?

Yes. The numbers are _very_ low. In a virtual machine on my notebook,
numbers from siege with a 151-byte static file look like:

$ siege -c 40 -b -t120s http://127.0.0.1:8080/index.html
...
Lifting the server siege... done.
Transactions: 200685 hits
Availability: 100.00 %
Elapsed time: 119.82 secs
Data transferred: 28.90 MB
Response time: 0.02 secs
Transaction rate: 1674.88 trans/sec
Throughput: 0.24 MB/sec
Concurrency: 39.64
Successful transactions: 200685
Failed transactions: 0
Longest transaction: 0.08
Shortest transaction: 0.01
...

Which is still very low. Switching off verbose output in the siege
config (where it is on by default) results in:

$ siege -c 40 -b -t120s http://127.0.0.1:8080/index.html
** SIEGE 2.70
** Preparing 40 concurrent users for battle.
The server is now under siege...
Lifting the server siege... done.
Transactions: 523592 hits
Availability: 100.00 %
Elapsed time: 119.73 secs
Data transferred: 75.40 MB
Response time: 0.01 secs
Transaction rate: 4373.23 trans/sec
Throughput: 0.63 MB/sec
Concurrency: 39.80
Successful transactions: 523592
Failed transactions: 0
Longest transaction: 0.02
Shortest transaction: 0.01

That is, almost a 3x speedup. This suggests the limiting factor in the
first tests is siege itself. And top suggests the test is CPU-bound
(idle 0%), with nginx using about 4% of the CPU and about 60% accounted
to siege threads. The rest is unaccounted for, likely due to the number
of threads siege uses.

With http_load results look like:

$ echo http://127.0.0.1:8080/index.html > z
$ http_load -parallel 40 -seconds 120 z
696950 fetches, 19 max parallel, 1.05239e+08 bytes, in 120 seconds
151 mean bytes/connection
5807.91 fetches/sec, 876995 bytes/sec
msecs/connect: 0.070619 mean, 7.608 max, 0 min
msecs/first-response: 0.807419 mean, 14.526 max, 0 min
HTTP response codes:
code 200 -- 696950

That is, siege results certainly could be better. The test is again
CPU-bound, with nginx using about 40% and http_load using about 60%.

From my previous experience, siege requires multiple dedicated
servers to run on due to being CPU hungry.

[...]

> Please let me know what you think.

The numbers are still very low, but the difference between the public ip
and 127.0.0.1 seems minor. The limiting factor is something else.

> Its my first nginx experience. So far it
> is performing way better then my old setup, but I would like to get the most
> out of it.

First of all, I would recommend you make sure you are
benchmarking nginx, not your benchmarking tool.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Fri Oct 4 13:43:05 2013
From: nginx-forum at nginx.us (ddutra)
Date: Fri, 04 Oct 2013 09:43:05 -0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
In-Reply-To: <20131004120509.GM62063@mdounin.ru>
References: <20131004120509.GM62063@mdounin.ru>
Message-ID: <dcaa4e38dbb0be64ff74dbebe8c40622.NginxMailingListEnglish@forum.nginx.org>

Hello Maxim,
Thanks again for your considerations and help.

My first siege tests against the ec2 m1.small production server were done
using a Dell T410 with 4 CPUs x 2.4 GHz (Xeon E5620). It was after your
considerations about 127.0.0.1 that I ran siege from the same server that
is running nginx (production).

The debian machine I am using for the tests has 4 vcpus and runs nothing
else. Other virtual machines run on this server, but nothing too heavy. So I
am "sieging" from a server that has way more power than the one running
nginx. And I am sieging a static html file on the production server that
is 44.2 KB.

Lets run the tests again. This time I'll keep an eye on the siege cpu usage
and overall server load using htop and vmware vsphere client.

siege -c40 -b -t120s -i 'http://177.71.188.137/test.html' (against
production)

Transactions: 2010 hits
Availability: 100.00 %
Elapsed time: 119.95 secs
Data transferred: 28.12 MB
Response time: 2.36 secs
Transaction rate: 16.76 trans/sec
Throughput: 0.23 MB/sec
Concurrency: 39.59
Successful transactions: 2010
Failed transactions: 0
Longest transaction: 5.81
Shortest transaction: 0.01

Siege cpu usage was around 1~2% during the entire 120s.
On the other hand, the ec2 m1.small (production nginx) was at 100% the
entire time. All nginx.

Again, with more concurrent users
siege -c80 -b -t120s -i 'http://177.71.188.137/test.html'

Lifting the server siege... done.

Transactions: 2029 hits
Availability: 100.00 %
Elapsed time: 119.65 secs
Data transferred: 28.41 MB
Response time: 4.60 secs
Transaction rate: 16.96 trans/sec
Throughput: 0.24 MB/sec
Concurrency: 78.00
Successful transactions: 2029
Failed transactions: 0
Longest transaction: 9.63
Shortest transaction: 0.19

Can't get past 17 trans/sec per cpu.

This time siege cpu usage on my dell server was around 2~3% the entire time
(htop). The vsphere graphs don't even show a change from idle.

So I think we can rule out the possibility of siege cpu limitation.


----------------

Now for another test: running siege and nginx on the same machine, with
exactly the same nginx.conf as the production server, changing only one
thing:

worker_processes 1; changed to worker_processes 4; because m1.small (AWS EC2)
has only 1 vcpu, while my vmware dev server has 4.

siege -c40 -b -t120s -i 'http://127.0.0.1/test.html'

The results: siege used about 1% cpu, and all 4 vcpus jumped between 30 and
90% usage, I would say 50% on average.

I don't see a lot of improvement either. Results below:

Transactions: 13935 hits
Availability: 100.00 %
Elapsed time: 119.25 secs
Data transferred: 195.14 MB
Response time: 0.34 secs
Transaction rate: 116.86 trans/sec
Throughput: 1.64 MB/sec
Concurrency: 39.85
Successful transactions: 13935
Failed transactions: 0
Longest transaction: 1.06
Shortest transaction: 0.02


siege -c50 -b -t240s -i 'http://127.0.0.1/test.html'
Transactions: 27790 hits
Availability: 100.00 %
Elapsed time: 239.93 secs
Data transferred: 389.16 MB
Response time: 0.43 secs
Transaction rate: 115.83 trans/sec
Throughput: 1.62 MB/sec
Concurrency: 49.95
Successful transactions: 27790
Failed transactions: 0
Longest transaction: 1.78
Shortest transaction: 0.01


I believe the machine I just ran this test on is more powerful than our
notebooks. AVG CPU during the tests was 75%, with 99% of it consumed by
nginx. So it can only be something in the nginx config file.

Here is my nginx.conf
http://ddutra.s3.amazonaws.com/nginx/nginx.conf

And here is the virtual host file I am fetching this test.html page from; it
is the default virtual host and the same one I use for status consoles etc.
http://ddutra.s3.amazonaws.com/nginx/default


If you could please take a look. There is a huge difference between your
results and mine. I am sure I am doing something wrong here.

Best regards.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243412,243429#msg-243429


From nginx-forum at nginx.us Fri Oct 4 13:43:41 2013
From: nginx-forum at nginx.us (DevNginx)
Date: Fri, 04 Oct 2013 09:43:41 -0400
Subject: Why Nginx Doesn't Implement FastCGI Multiplexing?
In-Reply-To: <6535E158-7A98-4440-BF8F-BB4964E5606F@sysoev.ru>
References: <6535E158-7A98-4440-BF8F-BB4964E5606F@sysoev.ru>
Message-ID: <3fa131f30492b88a9e7f968b282f37e7.NginxMailingListEnglish@forum.nginx.org>

I would also like to add a vote for FCGI multiplexing.

There is no obligation for backends, since non-implementing backends can
indicate FCGI_CANT_MPX_CONN in response to an FCGI_GET_VALUES request from
nginx. The other poster has already mentioned FCGI_ABORT_REQUEST and
dropping response packets from dangling requests.

My scenario is that I have a variety of requests: some take a while, but
others are a quick URL rewrite culminating in an X-Accel-Redirect. This
rewrite involves complicated logic which is part of my overall backend
application, which I would rather not factor out and rewrite into an nginx
module. The actual computation for the URL rewrite is minuscule compared to
the overhead of opening/closing a TCP connection, so FCGI request
multiplexing would be of great help here.

If the overhead of a multiplexed FCGI request starts to approach that of doing
the work directly in an nginx module, it would give a valuable alternative to
writing modules. This would avoid the pitfalls of writing modules (code
refactoring, rewriting in C, jeopardizing the nginx worker process, etc.).

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,243430#msg-243430


From nginx-forum at nginx.us Fri Oct 4 13:52:49 2013
From: nginx-forum at nginx.us (ddutra)
Date: Fri, 04 Oct 2013 09:52:49 -0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
In-Reply-To: <dcaa4e38dbb0be64ff74dbebe8c40622.NginxMailingListEnglish@forum.nginx.org>
References: <20131004120509.GM62063@mdounin.ru>
<dcaa4e38dbb0be64ff74dbebe8c40622.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <c1712162a3b001cac2c5b3fae9376895.NginxMailingListEnglish@forum.nginx.org>

Well, I just looked at the results again, and it seems my throughput (MB per
second) is not very far from yours.

My bad.

So the results are not that bad, right? What do you think?

Best regards.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243412,243431#msg-243431


From a.marinov at ucdn.com Fri Oct 4 14:33:57 2013
From: a.marinov at ucdn.com (Anatoli Marinov)
Date: Fri, 4 Oct 2013 17:33:57 +0300
Subject: nginx usptream 302 redirect
Message-ID: <CA+HzRfQaL1gK18-RD=A_E=O=hQcFFZrY-hpL+bB4Gxzx6Z=2hA@mail.gmail.com>

Hello,
Is there an easy way to configure an nginx upstream to follow 302 redirects
instead of sending them to the browser?

I tried with this config:

http {
    proxy_intercept_errors on;

    proxy_cache_path /home/toli/nginx/run/cache keys_zone=zone_c1:256m
                     inactive=5d max_size=30g;

    upstream up_servers {
        server 127.0.1.1:8081;
    }

    server {
        listen 127.0.0.1:8080;

        location / {
            proxy_cache zone_c1;
            proxy_pass http://127.0.1.1:8081;
            proxy_temp_path tmp;

            error_page 301 302 307 @redir;
        }

        location @redir {
            proxy_cache zone_c1;
            proxy_pass $upstream_http_location;
            proxy_temp_path tmp;
        }
    }
}

Unfortunately it does not work. I receive "500: Internal Server Error."
and in the logs I have [invalid URL prefix in ""].

From the dev mailing list, Maxim advised me to save $upstream_http_location
in another variable, and I did, but the result was the same - a 500 internal
server error. The config after the fix is:


http {
    proxy_intercept_errors on;

    proxy_cache_path /home/toli/nginx/run/cache keys_zone=zone_c1:256m
                     inactive=5d max_size=30g;

    upstream up_servers {
        server 127.0.1.1:8081;
    }

    server {
        listen 127.0.0.1:8080;

        location / {
            proxy_cache zone_c1;
            proxy_pass http://127.0.1.1:8081;
            proxy_temp_path tmp;
            set $foo $upstream_http_location;
            error_page 301 302 307 @redir;
        }

        location @redir {
            proxy_cache zone_c1;
            proxy_pass $foo;
            proxy_temp_path tmp;
        }
    }
}

Do you have any idea how this could be achieved?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131004/e1331af9/attachment.html>

From nginx-forum at nginx.us Fri Oct 4 14:48:45 2013
From: nginx-forum at nginx.us (laltin)
Date: Fri, 04 Oct 2013 10:48:45 -0400
Subject: High response time at high concurrent connections
In-Reply-To: <d6a5e15c6146cc45d1feab88367585c3.NginxMailingListEnglish@forum.nginx.org>
References: <e13f6b9528eb5e99585ac92c64625f32.NginxMailingListEnglish@forum.nginx.org>
<7e8587cb7d7339e1b315a277ed96d48f.NginxMailingListEnglish@forum.nginx.org>
<0f969e72719f1681ace49afecc78b4f8.NginxMailingListEnglish@forum.nginx.org>
<d6a5e15c6146cc45d1feab88367585c3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <cf36b89731927b38308de66907a0b766.NginxMailingListEnglish@forum.nginx.org>

My backend logs show response times generally less than 2ms, and with 4
instances the backends can handle 2000 reqs/sec.
But when I look at the nginx logs I see response times around 200ms, so I
think nginx is the main problem in my situation.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243246,243405#msg-243405


From mdounin at mdounin.ru Fri Oct 4 15:11:06 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 4 Oct 2013 19:11:06 +0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
In-Reply-To: <dcaa4e38dbb0be64ff74dbebe8c40622.NginxMailingListEnglish@forum.nginx.org>
References: <20131004120509.GM62063@mdounin.ru>
<dcaa4e38dbb0be64ff74dbebe8c40622.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131004151106.GP62063@mdounin.ru>

Hello!

On Fri, Oct 04, 2013 at 09:43:05AM -0400, ddutra wrote:

> Hello Maxim,
> Thanks again for your considerations and help.
>
> My first siege tests against the ec2 m1.small production server was done
> using a Dell T410 with 4CPUS x 2.4 (Xeon E5620). It was after your
> considerations about 127.0.0.1 why I did the siege from the same server that
> is running nginx (production).
>
> The debian machine I am using for the tests has 4 vcpus and runs nothing
> else. Other virtual machines run on this server but nothing too heavy. So I
> am "sieging" from a server that has way more power than the one running
> nginx. And I am sieging for a static html file on the production server that
> is 44.2kb.
>
> Lets run the tests again. This time I'll keep an eye on the siege cpu usage
> and overall server load using htop and vmware vsphere client.
>
> siege -c40 -b -t120s -i 'http://177.71.188.137/test.html' (agaisnt
> production)
>
> Transactions: 2010 hits
> Availability: 100.00 %
> Elapsed time: 119.95 secs
> Data transferred: 28.12 MB
> Response time: 2.36 secs
> Transaction rate: 16.76 trans/sec
> Throughput: 0.23 MB/sec
> Concurrency: 39.59
> Successful transactions: 2010
> Failed transactions: 0
> Longest transaction: 5.81
> Shortest transaction: 0.01

If this was a 44k file, this likely means you have gzip filter
enabled, as 28.12M / 2010 hits == 14k.

Having gzip enabled might indeed result in relatively high CPU
usage, and may result in such numbers in CPU-constrained cases.

For static html files, consider using gzip_static, see
http://nginx.org/r/gzip_static. Also consider tuning
gzip_comp_level to a lower level if you've changed it from a
default (1).
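A minimal sketch of that setup (the document root and pre-compression step are
assumptions, not from the original posts):

```nginx
# Serve pre-compressed .gz files when the client accepts gzip, instead of
# compressing on every request.
# Requires nginx built --with-http_gzip_static_module.
location / {
    root /var/www/html;   # assumed document root
    gzip_static on;       # looks for test.html.gz next to test.html
}

# Keep on-the-fly compression cheap for anything not pre-compressed:
# gzip on;
# gzip_comp_level 1;
```

The .gz files would be produced during deployment, e.g. with
`gzip -9 -k test.html` (`-k` keeps the original, which nginx still needs for
clients without gzip support).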

And, BTW, I've also tried to grab your exact test file from the
above link, and it asks for a password. Please note that checking
passwords is an expensive operation, and can be very expensive
depending on the password hash algorithm you use. If you test
against a password-protected file - it may be another source of
slowness.

Just for reference, here are results from my virtual machine, a
45k file, with gzip enabled:

Transactions: 107105 hits
Availability: 100.00 %
Elapsed time: 119.30 secs
Data transferred: 1254.22 MB
Response time: 0.04 secs
Transaction rate: 897.80 trans/sec
Throughput: 10.51 MB/sec
Concurrency: 39.91
Successful transactions: 107105
Failed transactions: 0
Longest transaction: 0.08
Shortest transaction: 0.01

> Siege cpu usage was like 1~~2% during the entire 120s.

Please note that the CPU percentage printed by siege might be
incorrect and/or confusing for various reasons. Make sure to look
at the _idle_ time on the server.

> On the other hand, ec2 m1.small (production nginx) was 100% the entire time.
> All nginx.

Ok, so you are CPU-bound, which is good. And see above
for possible reasons.

[...]

> I believe the machine I just ran this test on is more powerful than our
> notebooks. Average CPU during the tests is 75%, with 99% consumed by nginx.
> So it can only be something in the nginx config file.
>
> Here is my nginx.conf
> http://ddutra.s3.amazonaws.com/nginx/nginx.conf
>
> And here is the virtual host file I am fetching this test.html page from;
> it is the default virtual host and the same one I use for status consoles etc.
> http://ddutra.s3.amazonaws.com/nginx/default
>
>
> If you could please take a look. There is a huge difference between your
> results and mine. I am sure I am doing something wrong here.

The "gzip_comp_level 6;" in your config mostly explains things.
With gzip_compl_level set to 6 I get something about 450 r/s on my
notebook, which is a bit closer to your results. There is no need to
compress pages that hard - there is almost no difference in the
resulting document size, but there is huge difference in CPU time
required for compression.

Pagespeed also likely to consume lots of CPU power, and switching
it off should be helpfull.
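Maxim's point about compression levels is easy to verify locally with gzip(1).
A quick sketch (the sample file below is synthetic - substitute your real
test.html):

```shell
# Generate a ~90 KB HTML-ish sample file, then compare compressed
# output sizes at gzip levels 1, 6 and 9.
seq 1 2000 | awk '{print "<p>row " $1 " lorem ipsum dolor sit amet</p>"}' > test.html
for level in 1 6 9; do
  printf 'level %s: %s bytes\n' "$level" "$(gzip -c -"$level" test.html | wc -c)"
done
```

On typical text content the size difference between levels 1 and 6 is usually
a few percent, while the CPU cost differs far more.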

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Fri Oct 4 15:13:40 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 4 Oct 2013 19:13:40 +0400
Subject: nginx usptream 302 redirect
In-Reply-To: <CA+HzRfQaL1gK18-RD=A_E=O=hQcFFZrY-hpL+bB4Gxzx6Z=2hA@mail.gmail.com>
References: <CA+HzRfQaL1gK18-RD=A_E=O=hQcFFZrY-hpL+bB4Gxzx6Z=2hA@mail.gmail.com>
Message-ID: <20131004151340.GQ62063@mdounin.ru>

Hello!

On Fri, Oct 04, 2013 at 05:33:57PM +0300, Anatoli Marinov wrote:

[...]

> From dev mail list Maxim advised me to backup $upstream_http_location in
> other variable and I did it but the result was the same - 500 internal
> server error. The config after the fix is:

[...]

> location / {
>     proxy_cache zone_c1;
>     proxy_pass http://127.0.1.1:8081;
>     proxy_temp_path tmp;
>     set $foo $upstream_http_location;
>     error_page 301 302 307 @redir;
> }
>
> location @redir {
>     proxy_cache zone_c1;
>     proxy_pass $foo;
>     proxy_temp_path tmp;
> }

You need "set ... $upstream_http_location" to be executed after a
request to an upstream was done, so you need in location @redir,
not in location /.
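In other words, something along these lines (a sketch based on the config
above; the variable name is arbitrary):

```nginx
location @redir {
    # $upstream_http_location still holds the Location header of the
    # response produced in "location /"; capture it here, after that
    # upstream request has completed.
    set $redir_target $upstream_http_location;

    proxy_cache zone_c1;
    proxy_pass $redir_target;
    proxy_temp_path tmp;
}
```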

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Fri Oct 4 15:17:09 2013
From: nginx-forum at nginx.us (IggyDolby)
Date: Fri, 04 Oct 2013 11:17:09 -0400
Subject: Forward Proxy config for iPads for soon to go Live site
Message-ID: <a5c347c396fba3722f5225a5e1a993e4.NginxMailingListEnglish@forum.nginx.org>

Hi, I'm an nginx newbie and I need to configure it as an externally
available forward proxy for manually configured iPads running an iOS app, to
test a soon-to-go-live website.
You cannot change the hosts file on an iPad, but we can ask our external
testers to configure the proxy settings manually.
The old website is still "live" and DNS still points at the old IP, but we
want to use the same domain name pointing to the new IP address.
I have installed nginx-1.4.2-1.el6.ngx.x86_64.rpm and it's up and running,
but I need some help with the configuration.
I would like to resolve the domain using /etc/hosts, or any other way I could
accomplish it, and forward the requests to the new site.
We cannot switch the DNS yet or assign a new DNS name to this IP, as it's all
ready to go with the SSL certs issued for the original name, etc.
Any help will be really appreciated.
Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243438,243438#msg-243438


From mdounin at mdounin.ru Fri Oct 4 15:26:04 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 4 Oct 2013 19:26:04 +0400
Subject: Forward Proxy config for iPads for soon to go Live site
In-Reply-To: <a5c347c396fba3722f5225a5e1a993e4.NginxMailingListEnglish@forum.nginx.org>
References: <a5c347c396fba3722f5225a5e1a993e4.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131004152604.GS62063@mdounin.ru>

Hello!

On Fri, Oct 04, 2013 at 11:17:09AM -0400, IggyDolby wrote:

> Hi, I'm an nginx newbie and I need to configure it as an externally
> available forward proxy for manually configured iPads running an iOS app to
> test a soon to go Live website.

Just a disclaimer:

Please note that nginx is a reverse proxy, not a forward proxy,
and it was never designed to be one. While it may be possible to
configure nginx as a forward proxy, it's unsupported and may have
problems, including security ones.

> You cannot change host files on an iPad but we can ask our external testers
> to configure the proxy settings manually.
> The old website is still "live" and has the old IP DNS's but we want to use
> the same domain name but pointing to the new IP address.
> I have installed nginx-1.4.2-1.el6.ngx.x86_64.rpm and it's up and running
> but I need some help in how to get the configuration setup.
> I will like to resolve to this domain using the /etc/hosts or any other way
> that I could accomplish it and forward the requests to the new site.
> We cannot switch the DNS yet or assign a new DNS name to this IP as it's all
> ready to go with the original named SSL certs etc

If you already have nginx acting as a proxy for you (presumably
using proxy_pass with variables and a resolver, if /etc/hosts
doesn't work for you out of the box), an override for a particular
hostname can easily be done using an upstream{} block with that
name.

http://nginx.org/r/upstream
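As a sketch, assuming the app requests www.example.com and the new site lives
at 203.0.113.10 (both placeholders):

```nginx
# An upstream{} block whose name matches the hostname overrides where
# requests for that host are sent, without touching DNS or /etc/hosts:
# when proxy_pass contains variables, nginx searches the configured
# server groups by name before falling back to the resolver.
upstream www.example.com {
    server 203.0.113.10:80;
}

server {
    listen 8080;

    location / {
        proxy_pass http://$http_host$request_uri;
        resolver 8.8.8.8;   # used only for hosts without an upstream block
    }
}
```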

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Fri Oct 4 15:44:41 2013
From: nginx-forum at nginx.us (IggyDolby)
Date: Fri, 04 Oct 2013 11:44:41 -0400
Subject: Forward Proxy config for iPads for soon to go Live site
In-Reply-To: <20131004152604.GS62063@mdounin.ru>
References: <20131004152604.GS62063@mdounin.ru>
Message-ID: <b0f182991884611fc6fe7a9c54b30758.NginxMailingListEnglish@forum.nginx.org>

Thanks for your reply. Would you suggest using some other open source proxy,
like Squid or Varnish, as a forward proxy?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243438,243441#msg-243441


From mdounin at mdounin.ru Fri Oct 4 15:58:42 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 4 Oct 2013 19:58:42 +0400
Subject: Forward Proxy config for iPads for soon to go Live site
In-Reply-To: <b0f182991884611fc6fe7a9c54b30758.NginxMailingListEnglish@forum.nginx.org>
References: <20131004152604.GS62063@mdounin.ru>
<b0f182991884611fc6fe7a9c54b30758.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131004155841.GT62063@mdounin.ru>

Hello!

On Fri, Oct 04, 2013 at 11:44:41AM -0400, IggyDolby wrote:

> Thanks for your reply, would you suggest to use some other open source proxy
> like Squid or Varnish to use as forward proxy?

Squid is known to work.

Varnish, as far as I can tell, isn't a forward proxy either, much
like nginx.

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Fri Oct 4 16:48:40 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 4 Oct 2013 20:48:40 +0400
Subject: Why Nginx Doesn't Implement FastCGI Multiplexing?
In-Reply-To: <3fa131f30492b88a9e7f968b282f37e7.NginxMailingListEnglish@forum.nginx.org>
References: <6535E158-7A98-4440-BF8F-BB4964E5606F@sysoev.ru>
<3fa131f30492b88a9e7f968b282f37e7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131004164840.GV62063@mdounin.ru>

Hello!

On Fri, Oct 04, 2013 at 09:43:41AM -0400, DevNginx wrote:

> I would also like to add a vote for FCGI multiplexing.
>
> There is no obligation for backends, since non-implementing backends can
> indicate FCGI_CANT_MPX_CONN in response to a FCGI_GET_VALUES request by
> nginx. The other poster has already mentioned FCGI_ABORT_REQUEST and
> dropping response packets from dangling requests.
>
> My scenario is that I have a variety of requests: some take a while, but
> others are a quick URL rewrite culminating in a X-Accel-Redirect. This
> rewrite involves complicated logic which is part of my overall backend
> application, which I would rather not factor out and rewrite into an nginx
> module. The actual computation for the URL rewrite is minuscule compared to
> the overhead of opening/closing a TCP connection, so FCGI request
> multiplexing would be of great help here.
>
> If the overhead of a multiplexed FCGI request starts to approach doing the
> work directly in an nginx module, it would give a valuable alternative to
> writing modules. This would avoid the pitfalls of writing modules (code
> refactoring, rewriting in C, jeopardizing nginx worker process, etc.).

Your use case seems to be perfectly covered by keepalive connection
support, which is already here. See http://nginx.org/r/keepalive.
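A minimal sketch of what that looks like for FastCGI (the port and zone size
are assumptions):

```nginx
upstream fcgi_backend {
    server 127.0.0.1:9000;
    keepalive 16;               # cache up to 16 idle connections per worker
}

server {
    location / {
        fastcgi_pass fcgi_backend;
        fastcgi_keep_conn on;   # don't close the connection after each request
        include fastcgi_params;
    }
}
```

With this in place, the TCP connect/close overhead the previous poster was
worried about is paid only once per cached connection, not once per request.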

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Fri Oct 4 16:52:28 2013
From: nginx-forum at nginx.us (ddutra)
Date: Fri, 04 Oct 2013 12:52:28 -0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
In-Reply-To: <20131004151106.GP62063@mdounin.ru>
References: <20131004151106.GP62063@mdounin.ru>
Message-ID: <28aa189fac4b4921153bac1d89ee4f9a.NginxMailingListEnglish@forum.nginx.org>

Maxim,
Thank you again.

About my tests: FYI, I had HTTP auth turned off.

I think you nailed the problem.

This is new information for me.

So for production I have a standard website, which is PHP cached by
fastcgi_cache. All static assets are served by nginx, so gzip_static will do
the trick if I pre-compress them, and it will save a bunch of CPU.
What about the cached .php page? Is there any way of saving the gzipped
version to the cache?

Another question - most static assets are processed in some way by
ngx_pagespeed, and the optimized assets are cached. That means .js, .css and
images too. How does gzip work in this case? Does nginx gzip it every time it
gets hit? Does ngx_pagespeed cache gzipped content? I am confused.

Maybe it would be better to drop ngx_pagespeed, bulk-optimize every image at
the source, minify all .js and .css, and let it all run on nginx without the
ngx_pagespeed cache. Can you share your experience on that?

And one last question: is there any way to output $gzip_ratio in the
response headers for easy debugging?


Later I'll do some more sieging with gzip comp level at 1 and off, and I'll
post the results here.

Best regards.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243412,243444#msg-243444


From nginx-forum at nginx.us Fri Oct 4 17:43:52 2013
From: nginx-forum at nginx.us (ddutra)
Date: Fri, 04 Oct 2013 13:43:52 -0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
In-Reply-To: <9a3636a2bd1f96b2859589dcab929c98.NginxMailingListEnglish@forum.nginx.org>
References: <9a3636a2bd1f96b2859589dcab929c98.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <23c49e724b3e54e6621726fe6a173f06.NginxMailingListEnglish@forum.nginx.org>

As promised, here are my stats on VMware with 4 vCPUs: siege -c50 -b -t240s -i
'http://127.0.0.1/test.html'
gzip off, pagespeed off.

Transactions: 898633 hits
Availability: 100.00 %
Elapsed time: 239.55 secs
Data transferred: 39087.92 MB
Response time: 0.01 secs
Transaction rate: 3751.34 trans/sec
Throughput: 163.17 MB/sec
Concurrency: 49.83
Successful transactions: 898633
Failed transactions: 0
Longest transaction: 0.03
Shortest transaction: 0.00

If you want to paste your considerations on my Stack Overflow question
http://stackoverflow.com/questions/19160737/nginx-fastcgi-cache-performance-disk-cached-vs-tmpfs-cached-vs-static-file

I'll pick it as the correct answer.

Now I'll set up static gzip serving and use comp level 1 for dynamic
content.

Thanks a lot. Best regards

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243412,243445#msg-243445


From mdounin at mdounin.ru Fri Oct 4 17:43:53 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 4 Oct 2013 21:43:53 +0400
Subject: Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS
serving static file
In-Reply-To: <28aa189fac4b4921153bac1d89ee4f9a.NginxMailingListEnglish@forum.nginx.org>
References: <20131004151106.GP62063@mdounin.ru>
<28aa189fac4b4921153bac1d89ee4f9a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131004174353.GW62063@mdounin.ru>

Hello!

On Fri, Oct 04, 2013 at 12:52:28PM -0400, ddutra wrote:

> Maxim,
> Thank you again.
>
> About my tests, FYI I had httpauth turned off for my tests.
>
> I think you nailed the problem.
>
> This is some new information for me.
>
> So for production I have a standard website which is php being cached by
> fastcgi cache. All static assets are served by nginx, so gzip_static will do
> the trick if I pre-compress them and it will save a bunch of cpu.
> What about the cached .php page? Is there any way of saving the gziped
> version to cache?

Yes, but it's not trivial to configure. The best approach
would likely be to unconditionally store the gzipped version in the
cache, and use gunzip to uncompress it when needed, see
http://nginx.org/r/gunzip.
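A rough sketch of that approach (untested; the upstream address and cache
names are assumptions):

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;

    # Always ask the backend for a gzipped response, so only the
    # compressed variant ends up in the cache.
    fastcgi_param HTTP_ACCEPT_ENCODING gzip;

    fastcgi_cache zone_c1;
    fastcgi_cache_valid 200 10m;

    # Decompress on the fly for the few clients without gzip support
    # (requires nginx built --with-http_gunzip_module).
    gunzip on;
}
```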

> Another question - most static assets are being worked in some way by
> ngx_pagespeed and the optimized assets are cached. That means .js, .css and
> images too. How does gzip works in this case? nginx gzips it everytime it
> gets hit? ngx_pagespeed caches gzipped content? I am confused.

I haven't looked at what the pagespeed folks did in their module, but
likely they don't cache anything gzip-related, and the response is
gzipped every time (much like with normal files). It might also
conflict with gzip_static, as pagespeed likely won't be able
to dig into a gzipped response.

> Maybe it would be better to drop ngx_pagespeed, bulk optimize every image on
> source, minify all .js and .css, and let it all run on nginx without
> ngx_pagespeed cache. Can you share you experience on that?

In my experience, any dynamic processing should be avoided to
maximize performance. Static files should be optimized (minified,
pre-gzipped) somewhere during the deployment process; this achieves
the smallest resource sizes while maintaining the best
performance.

> And one last question, is there any way to output $gzip_ratio on the
> response headers in order to do a easy debbuging?

No, as $gzip_ratio isn't yet known when the response headers are sent.
Use logs instead. Or, if you just want to see how files compress
at different compression levels, just use gzip(1) for
tests.

--
Maxim Dounin
http://nginx.org/en/donation.html


From agentzh at gmail.com Fri Oct 4 19:01:00 2013
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Fri, 4 Oct 2013 12:01:00 -0700
Subject: High response time at high concurrent connections
In-Reply-To: <cf36b89731927b38308de66907a0b766.NginxMailingListEnglish@forum.nginx.org>
References: <e13f6b9528eb5e99585ac92c64625f32.NginxMailingListEnglish@forum.nginx.org>
<7e8587cb7d7339e1b315a277ed96d48f.NginxMailingListEnglish@forum.nginx.org>
<0f969e72719f1681ace49afecc78b4f8.NginxMailingListEnglish@forum.nginx.org>
<d6a5e15c6146cc45d1feab88367585c3.NginxMailingListEnglish@forum.nginx.org>
<cf36b89731927b38308de66907a0b766.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAB4Tn6OycLwOtFp+kRnHrcP+XdDAQXMWexLWte4S8HoVFcrN1A@mail.gmail.com>

Hello!

On Fri, Oct 4, 2013 at 7:48 AM, laltin wrote:
> My backend logs show response times generally less than 2ms, and with 4
> instances the backends can handle 2000 reqs/sec.
> But when I look at the nginx logs I see response times around 200ms, so I
> think nginx is the main problem in my situation.
>

As I've suggested, use the off-CPU and on-CPU flame graph tools to check
your loaded Nginx worker processes. It could easily be a configuration
issue or something else. Please do that.

See

https://github.com/agentzh/nginx-systemtap-toolkit#sample-bt

and

https://github.com/agentzh/nginx-systemtap-toolkit#sample-bt-off-cpu

Also, ensure Nginx is not flooding its error log file,
which is very expensive. If it is, you will see it clearly in the
flame graphs anyway.

Stop guessing, start tracing! :)

Best regards,
-agentzh


From nginx-forum at nginx.us Sat Oct 5 09:16:47 2013
From: nginx-forum at nginx.us (mex)
Date: Sat, 05 Oct 2013 05:16:47 -0400
Subject: Overhead when enabling debug?
In-Reply-To: <20130927141926.GM2271@mdounin.ru>
References: <20130927141926.GM2271@mdounin.ru>
Message-ID: <65eeac937002b611a7f13b912c3614ad.NginxMailingListEnglish@forum.nginx.org>

hi maxim,


thanks, that's what I expected.

I installed the --with-debug-enabled version on a site with ~2000 req/s
during the daytime; nothing to see so far.

thanks for checking!



regards,

mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243201,243451#msg-243451


From nginx-forum at nginx.us Sat Oct 5 09:31:59 2013
From: nginx-forum at nginx.us (mex)
Date: Sat, 05 Oct 2013 05:31:59 -0400
Subject: Overhead when enabling debug?
In-Reply-To: <CAB4Tn6ND2G6uy4mLBqwsOYPOm6SagAtsWiCnHpE_=r4_RvYGuQ@mail.gmail.com>
References: <CAB4Tn6ND2G6uy4mLBqwsOYPOm6SagAtsWiCnHpE_=r4_RvYGuQ@mail.gmail.com>
Message-ID: <11255f9b72c96470039c84af05205901.NginxMailingListEnglish@forum.nginx.org>

hi agentzh,


your points are valid, but I'm talking about heisenbugs and the ability
to monitor a certain IP; you know, there are WTF??? errors :)

please note, on the infrastructure I'm talking about we usually have
debug logs disabled, and the bottleneck is usually the app servers.


but thanks for your answer, I'll invest some time and check your toolchains,
especially systemtap. Is systemtap included in openresty? It looks like the
perfect tool to build some nagios plugins upon.




regards,


mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243201,243452#msg-243452


From arkaitzj at gmail.com Sat Oct 5 12:45:49 2013
From: arkaitzj at gmail.com (arkaitzj at gmail.com)
Date: Sat, 5 Oct 2013 14:45:49 +0200
Subject: Fwd: Fastcgi returning 404 and nginx overwriting it to 200
In-Reply-To: <CAAPxSdz9RZii=WoymU9VjQE-9Ad=gQSGpHbjEnY_P69BLLApWQ@mail.gmail.com>
References: <CAAPxSdz9RZii=WoymU9VjQE-9Ad=gQSGpHbjEnY_P69BLLApWQ@mail.gmail.com>
Message-ID: <CAAPxSdwSaGy+RM16ENRFs2=t0aB2jb0OH8ePJw3LVmTGUOyN3g@mail.gmail.com>

Hi all,


My version is 0.7.67-3, coming from Debian Squeeze.
I have tried squeeze-backports, which is version 1.2.1-2.2~bpo60+2, but
I got the same results.

I have reduced this to the smallest fastcgi config possible and it
still does the same.
There are no intercept_redirects directives or error_pages either.

I have tried enabling the debug log, and I see messages like "upstream split
a header line in FastCGI records".
Strace-ing it later, I see that the first few lines sent, such as the status
line, are split across 2 recvfrom calls, but I assume this shouldn't be a problem.
I see clearly that nginx is receiving a 404 but sending the output back to
the client intact with a 200 OK.

Am I doing/assuming something wrong here, or could this be a bug?


--
Arkaitz

From nginx-forum at nginx.us Sat Oct 5 15:12:01 2013
From: nginx-forum at nginx.us (DevNginx)
Date: Sat, 05 Oct 2013 11:12:01 -0400
Subject: Why Nginx Doesn't Implement FastCGI Multiplexing?
In-Reply-To: <20131004164840.GV62063@mdounin.ru>
References: <20131004164840.GV62063@mdounin.ru>
Message-ID: <7a3648f09ea983667da1abdba108db4f.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------
> Your use case seems to be perfectly covered by a keepalive connections
>
> support, which is already here. See http://nginx.org/r/keepalive.

OK, yeah that would work for me. Thanks.

There is still the possibility that long-running requests could clog the
connections, but I can work around that by listening on two different ports
and having nginx route the quickies to their dedicated port.
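That routing could be sketched roughly like this (the ports, the /quick/
prefix, and the keepalive sizes are made up for illustration):

```nginx
upstream fcgi_slow  { server 127.0.0.1:9000; keepalive 4;  }
upstream fcgi_quick { server 127.0.0.1:9001; keepalive 16; }

# Quick URL-rewrite requests get their own backend port, so a
# long-running request can never hold up a quickie's connection.
location /quick/ {
    fastcgi_pass fcgi_quick;
    fastcgi_keep_conn on;
    include fastcgi_params;
}

location / {
    fastcgi_pass fcgi_slow;
    fastcgi_keep_conn on;
    include fastcgi_params;
}
```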

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,243454#msg-243454


From arkaitzj at gmail.com Sat Oct 5 16:40:29 2013
From: arkaitzj at gmail.com (arkaitzj at gmail.com)
Date: Sat, 5 Oct 2013 18:40:29 +0200
Subject: Fastcgi returning 404 and nginx overwriting it to 200
In-Reply-To: <CAAPxSdwSaGy+RM16ENRFs2=t0aB2jb0OH8ePJw3LVmTGUOyN3g@mail.gmail.com>
References: <CAAPxSdz9RZii=WoymU9VjQE-9Ad=gQSGpHbjEnY_P69BLLApWQ@mail.gmail.com>
<CAAPxSdwSaGy+RM16ENRFs2=t0aB2jb0OH8ePJw3LVmTGUOyN3g@mail.gmail.com>
Message-ID: <CAAPxSdxRbtCHDRSj=8j+3KyjtEsDBLL2_P1MTyy7CqQxnAvK=Q@mail.gmail.com>

Hi,

I managed to fix it by upgrading uwsgi (the fcgi runner) from version 1.4.6
to version 1.9.17.1; this fixed the issue, though I'm not sure which side was
at fault.


--
Arkaitz


On Sat, Oct 5, 2013 at 2:45 PM, arkaitzj at gmail.com <arkaitzj at gmail.com>wrote:

> Hi all,
>
>
> My version is 0.7.67-3, coming from Debian Squeeze.
> I have tried squeeze-backports, which is version 1.2.1-2.2~bpo60+2,
> but I got the same results.
>
> I have reduced this to the smallest fastcgi config possible and it
> still does the same.
> There are no intercept_redirects directives or error_pages either.
>
> I have tried enabling the debug log, and I see messages like "upstream
> split a header line in FastCGI records".
> Strace-ing it later, I see that the first few lines sent, such as the
> status line, are split across 2 recvfrom calls, but I assume this
> shouldn't be a problem.
> I see clearly that nginx is receiving a 404 but sending the output back to
> the client intact with a 200 OK.
>
> Am I doing/assuming something wrong here, or could this be a bug?
>
>
> --
> Arkaitz
>
>

From agentzh at gmail.com Sat Oct 5 18:25:39 2013
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Sat, 5 Oct 2013 11:25:39 -0700
Subject: Overhead when enabling debug?
In-Reply-To: <11255f9b72c96470039c84af05205901.NginxMailingListEnglish@forum.nginx.org>
References: <CAB4Tn6ND2G6uy4mLBqwsOYPOm6SagAtsWiCnHpE_=r4_RvYGuQ@mail.gmail.com>
<11255f9b72c96470039c84af05205901.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAB4Tn6PMsP-U7h2zDOcr_q=RA5fD3RsRXfVM10xPn6g0BMT7aw@mail.gmail.com>

Hello!

On Sat, Oct 5, 2013 at 2:31 AM, mex wrote:
> your points are valid, but i talk about heisenbugs and the ability
> to monitor a certain ip; you know, theres WTF??? - errors :)
>

It's trivial to do with dynamic tracing tools like systemtap and
dtrace. You don't even need to reload Nginx for this or parse any log
files.

We have also had great success tracing down the real cause of those
rare timeout requests in our production nginx servers ;) Again, we
didn't change the Nginx configuration file or reload the server.

> please note, on the infrastructure i talk about we have usually
> debug-logs disabled, and the bottleneck is usually the app-servers.
>

For our online systems, Nginx *is* the application server :)

> but thanx for your answer, i'll invest some time and check your toolchains,
> especially systemtap. is systemtap included in openresty? looks like the
> perfect tool to create some nagios-plugins upon.
>

systemtap is a tool framework that can answer almost *any* question
that can be formulated in its scripting language :) Real-world
questions may involve many software layers at once, like
Nginx and the Linux kernel's TCP/IP stack (and even the LuaJIT VM) at the
same time. And systemtap can associate events happening at different
layers of the software stack easily and efficiently.

The biggest selling point is that you don't have to modify the
software or its configuration to make things work,
and you can aggregate data right at the source, saving a lot of
resources in dumping, storing, and parsing raw log data.

Best regards,
-agentzh


From nginx-forum at nginx.us Sat Oct 5 22:16:24 2013
From: nginx-forum at nginx.us (izghitu)
Date: Sat, 05 Oct 2013 18:16:24 -0400
Subject: nginx reverse proxy odd 301 redirect
Message-ID: <bbba19c86c98d4c2695c4beab0a816f2.NginxMailingListEnglish@forum.nginx.org>

Hi,

I have the latest nginx, which I am using as a reverse proxy for an
application of mine running on apache on another server. The nginx config is
like this:

upstream upstreamname {
    server ip:port;
}

....

location / {
    proxy_pass http://upstreamname;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

...

So my problem appears with a certain change-password script that new users
must use when they log in for the first time after their username is created.

If I use apache directly, the change-password script works fine. If I go
through nginx as a reverse proxy, then at the point where I press the submit
button and the POST should happen, I can't see the POST hitting the apache
backend (no entry in the access log), but I can see an odd 301 redirect for
the POST request to the script in the nginx access log.

Any idea of what could be the cause? Anything I am doing wrong?

Please help
Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243458,243458#msg-243458


From mdounin at mdounin.ru Sun Oct 6 09:31:54 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 6 Oct 2013 13:31:54 +0400
Subject: nginx reverse proxy odd 301 redirect
In-Reply-To: <bbba19c86c98d4c2695c4beab0a816f2.NginxMailingListEnglish@forum.nginx.org>
References: <bbba19c86c98d4c2695c4beab0a816f2.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131006093154.GX62063@mdounin.ru>

Hello!

On Sat, Oct 05, 2013 at 06:16:24PM -0400, izghitu wrote:

> Hi,
>
> I have latest nginx which I am using as a reverse proxy for an application
> of mine running on apache on another server. The nginx config is like this:
> upstream upstreamname {
> server ip:port;
> }
>
> ....
>
> location / {
> proxy_pass http://upstreamname;
> proxy_redirect off;
> proxy_set_header Host $host;
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> }
>
> ...
>
> So my problem appears with a certain change password script new users must
> use when they login first time after their username was created.
>
> If I use apache directly then the change password script works fine. If I do
> it using nginx as reverse proxy then at the point where I press the submit
> button and the POST should happen, I can't see the POST hitting the apache
> backend(no entry in access log) but I can see an odd 301 redirect with the
> POST request to the script in the nginx access log.
>
> Any idea of what could be the cause? Anything I am doing wrong?

As long as the above configuration is a _full_ configuration of a
server{} block, there shouldn't be any 301 redirects returned
directly by nginx. Most likely what you observe is the result of an
incorrect configuration of server{} blocks and/or a wrong hostname
used in the request.

If it's not a full configuration, there isn't enough information
to provide any help. See here for some tips on what can be useful:

http://wiki.nginx.org/Debugging#Asking_for_help

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Sun Oct 6 10:59:28 2013
From: nginx-forum at nginx.us (lahiru)
Date: Sun, 06 Oct 2013 06:59:28 -0400
Subject: remove some parameters from $args
Message-ID: <1aa4b826e53f8e82cb0464b63839eded.NginxMailingListEnglish@forum.nginx.org>

Hello,
I'm using memcached + srcache. I need to modify $args and
use it as the cache key.

For example;

RT=62&SID=BC3781C3-2E02-4A11-89CF-34E5CFE8B0EF&UID=44332&L=EN&M=1&H=1&UNC=0&SRC=LK

I need to convert the above $args string to the one below by removing the
parameters SID and UID:

RT=62&L=EN&M=1&H=1&UNC=0&SRC=LK

How can I do this?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243462,243462#msg-243462


From agentzh at gmail.com Sun Oct 6 17:42:04 2013
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Sun, 6 Oct 2013 10:42:04 -0700
Subject: remove some parameters from $args
In-Reply-To: <1aa4b826e53f8e82cb0464b63839eded.NginxMailingListEnglish@forum.nginx.org>
References: <1aa4b826e53f8e82cb0464b63839eded.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAB4Tn6OgYR6tKVr1yAG1kPrb61==Qfr3P1E=CKucLrSg8KXYFA@mail.gmail.com>

Hello!

On Sun, Oct 6, 2013 at 3:59 AM, lahiru wrote:
> Hello,
> I'm using memcached + srcache. I need to modify the $args and
> take it as the cache key.
>

This is easy if you're using the ngx_lua module at the same time. Below is
a minimal example that demonstrates it:

location = /t {
    rewrite_by_lua '
        local args = ngx.req.get_uri_args()
        args.SID = nil
        args.UID = nil
        ngx.req.set_uri_args(args)
    ';

    echo $args;
}

Here we use the "echo" directive from the ngx_echo module to dump out
the final value of $args. For your case, you can replace it with your
ngx_srcache and upstream configurations. Let's test this /t interface
with curl:

$ curl 'localhost:8081/t?RT=62&SID=BC3781C3-2E02-4A11-89CF-34E5CFE8B0EF&UID=44332&L=EN&M=1&H=1&UNC=0&SRC=LK'
M=1&UNC=0&RT=62&H=1&L=EN&SRC=LK

Please see the related parts of the ngx_lua documentation for more details:

http://wiki.nginx.org/HttpLuaModule#ngx.req.get_uri_args
http://wiki.nginx.org/HttpLuaModule#ngx.req.set_uri_args

It's worth mentioning that if you want to retain the order of the URI
arguments, you can do string substitutions on the value of $args
directly, for example:

location = /t {
    rewrite_by_lua '
        local args = ngx.var.args
        local newargs, n, err = ngx.re.gsub(args, [[\b[SU]ID=[^&]*&?]], "", "jo")
        if n and n > 0 then
            ngx.var.args = newargs
        end
    ';

    echo $args;
}

Now test it with the original curl command again, and we get exactly what
you expected:

RT=62&L=EN&M=1&H=1&UNC=0&SRC=LK

But for caching purposes, it's good to normalize the URI argument
order so that you can increase the cache hit rate. The hash table
entry order used by LuaJIT or Lua normalizes the order as a nice side
effect :)
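
For reference, here is a sketch of how the rewritten $args might be wired
into srcache (an assumption-laden example: it presumes the ngx_srcache and
ngx_memc modules are compiled in, and the /memc location, memcached
address, and "backend" upstream name are all hypothetical):

```nginx
# hypothetical srcache + memcached wiring; module availability,
# addresses, and names here are assumptions, not from this thread
location = /memc {
    internal;
    set $memc_key $args;        # the cache key passed in via the subrequest args
    memc_pass 127.0.0.1:11211;
}

location = /t {
    rewrite_by_lua '
        local args = ngx.req.get_uri_args()
        args.SID = nil
        args.UID = nil
        ngx.req.set_uri_args(args)
    ';

    srcache_fetch GET /memc $args;   # try the cache with the stripped $args
    srcache_store PUT /memc $args;   # store the response under the same key

    proxy_pass http://backend;       # "backend" is a placeholder upstream
}
```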

Best regards,
-agentzh


From nginx-forum at nginx.us Sun Oct 6 20:18:44 2013
From: nginx-forum at nginx.us (sandrex)
Date: Sun, 06 Oct 2013 16:18:44 -0400
Subject: Doubt regarding Windows 7, MP4 module and HTML5.
Message-ID: <bbf0e0e0826330ad0b49285ccf5e9a56.NginxMailingListEnglish@forum.nginx.org>

Hello,

I've run nginx under Win7 with MP4 videos embedded via HTML5 and BubblesJS,
due to compatibility issues with SRT subtitles.
I'd like to know how to enable ngx_http_mp4_module on Windows (I've only
found out how on Linux).

Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243465,243465#msg-243465


From nginx-forum at nginx.us Sun Oct 6 22:04:16 2013
From: nginx-forum at nginx.us (izghitu)
Date: Sun, 06 Oct 2013 18:04:16 -0400
Subject: nginx reverse proxy odd 301 redirect
In-Reply-To: <20131006093154.GX62063@mdounin.ru>
References: <20131006093154.GX62063@mdounin.ru>
Message-ID: <30c80d68e0bdc2d1b709d1f1645255ac.NginxMailingListEnglish@forum.nginx.org>

Hi,

The problem was hostname related.

Thanks for your help

> incorrect configuration of server{} blocks and/or wrong hostname
> used in request.
>

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243458,243466#msg-243466


From cubicdaiya at gmail.com Mon Oct 7 01:36:47 2013
From: cubicdaiya at gmail.com (cubicdaiya)
Date: Mon, 7 Oct 2013 10:36:47 +0900
Subject: [ANNOUNCE]ngx_small_light
Message-ID: <CABWmZaPN0GfvF=m_SapF2H76hdfYemdtV=xYM1ehRLKcdq9dbw@mail.gmail.com>

Hello, folks!

I have decided to announce ngx_small_light here.

ngx_small_light is an extension module for converting images dynamically,
like NginxImageFilter.

ngx_small_light supports the following features:

* Supported image processing:
  * Resize
  * Rotate
  * Sharpen, Unsharpen
  * Blur
  * Border
  * Canvas
  * Crop
  * Composition
  * JPEG down-scaling (except GD)
* Supported image-processing libraries:
  * ImageMagick
  * Imlib2
  * GD
* Supported formats:
  * JPEG
  * GIF (except Imlib2)
  * PNG

I developed and released the first version of ngx_small_light last year.

I'm still developing and maintaining it, and as it has grown to include
quite a lot of features, I'm announcing it anew.

If anyone is interested in ngx_small_light, I'd be glad.


Source code:

https://github.com/cubicdaiya/ngx_small_light

Documents:

* https://github.com/cubicdaiya/ngx_small_light/wiki
* https://github.com/cubicdaiya/ngx_small_light/wiki/Configuration


--
Tatsuhiko Kubo

E-Mail : cubicdaiya at gmail.com
HP : http://cccis.jp/index_en.html
Blog : http://cubicdaiya.github.com/blog/en/
Twitter : http://twitter.com/cubicdaiya
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/283a86a1/attachment.html>

From jabba.laci at gmail.com Mon Oct 7 05:27:37 2013
From: jabba.laci at gmail.com (Jabba Laci)
Date: Mon, 7 Oct 2013 07:27:37 +0200
Subject: nginx config: different projects in different directories
Message-ID: <CAOuJsM=VCAmE1_ZhCedXht8ANUkoNQWgUAbbq=0ZUk7Tom9AGg@mail.gmail.com>

Hi,

I'm new to the list. I started learning the Python web framework Flask
and I would like to try it in a production environment too. I managed to
bring Flask together with uwsgi and nginx. My Flask application is
available at the address localhost:81.

I would like to add several applications and I want them to be
available under different URLs. For instance, if I have two projects
called "hello" and "world", I want to access them as
localhost:81/hello/ and localhost:81/world/. The problem is I can't
figure out how to configure nginx for this.

Here is my current setup:

* The project "hello" is in this directory: /home/jabba/public_pyapps/hello/
* Its nginx entry:

server {
    listen 81;
    server_name localhost;
    charset utf-8;
    client_max_body_size 75M;

    location / { try_files $uri @yourapplication; }
    location @yourapplication {
        include uwsgi_params;
        uwsgi_pass unix:/home/jabba/public_pyapps/hello/hello_uwsgi.sock;
    }
}

It's available at localhost:81.

Questions:

(1) How to make it available under localhost:81/hello/ instead?

(2) If I add a new application (e.g. "world") next to previous ones,
how to add it to nginx?

Thanks,

Laszlo


From soggie at gmail.com Mon Oct 7 05:39:33 2013
From: soggie at gmail.com (Ruben LZ Tan)
Date: Mon, 7 Oct 2013 13:39:33 +0800
Subject: nginx config: different projects in different directories
In-Reply-To: <CAOuJsM=VCAmE1_ZhCedXht8ANUkoNQWgUAbbq=0ZUk7Tom9AGg@mail.gmail.com>
References: <CAOuJsM=VCAmE1_ZhCedXht8ANUkoNQWgUAbbq=0ZUk7Tom9AGg@mail.gmail.com>
Message-ID: <286196600229489898235C23CB9F926B@gmail.com>

Maybe setting up location /hello and location /world blocks would help?
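
A minimal sketch of that idea (with caveats: the "world" socket path and
the per-app mounting are assumptions; depending on how the Flask apps
handle SCRIPT_NAME, some extra uwsgi_param tuning may be needed):

```nginx
server {
    listen 81;
    server_name localhost;

    # the hello socket path is taken from the original mail;
    # the world one is a hypothetical mirror of it
    location /hello/ {
        include uwsgi_params;
        uwsgi_pass unix:/home/jabba/public_pyapps/hello/hello_uwsgi.sock;
    }

    location /world/ {
        include uwsgi_params;
        uwsgi_pass unix:/home/jabba/public_pyapps/world/world_uwsgi.sock;
    }
}
```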

Thanks,
Ruben Tan


On Monday, October 7, 2013 at 1:27 PM, Jabba Laci wrote:

> Hi,
>
> I'm new to the list. I started to learn the Python web framework Flask
> and I would like to try it in production environment too. I managed to
> bring Flask together with uwsgi and nginx. My Flask application is
> available at the address localhost:81 .
>
> I would like to add several applications and I want them to be
> available under different URLs. For instance, if I have two projects
> called "hello" and "world", I want to access them as
> localhost:81/hello/ and localhost:81/world/ . The problem is I can't
> figure out how to configure nginx for this.
>
> Here is my current setup:
>
> * The project "hello" is in this directory: /home/jabba/public_pyapps/hello/
> * Its nginx entry:
>
> server {
> listen 81;
> server_name localhost;
> charset utf-8;
> client_max_body_size 75M;
>
> location / { try_files $uri @yourapplication; }
> location @yourapplication {
> include uwsgi_params;
> uwsgi_pass unix:/home/jabba/public_pyapps/hello/hello_uwsgi.sock;
> }
> }
>
> It's available at localhost:81 .
>
> Questions:
>
> (1) How to make it available under localhost:81/hello/ instead?
>
> (2) If I add a new application (e.g. "world") next to previous ones,
> how to add it to nginx?
>
> Thanks,
>
> Laszlo
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org (mailto:nginx at nginx.org)
> http://mailman.nginx.org/mailman/listinfo/nginx
>
>


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/7bd14199/attachment.html>

From jabba.laci at gmail.com Mon Oct 7 05:51:10 2013
From: jabba.laci at gmail.com (Jabba Laci)
Date: Mon, 7 Oct 2013 07:51:10 +0200
Subject: nginx config: different projects in different directories
In-Reply-To: <286196600229489898235C23CB9F926B@gmail.com>
References: <CAOuJsM=VCAmE1_ZhCedXht8ANUkoNQWgUAbbq=0ZUk7Tom9AGg@mail.gmail.com>
<286196600229489898235C23CB9F926B@gmail.com>
Message-ID: <CAOuJsMm+7+=ehixb3JSPjaAXY4QdMqUWapCkrDLmgSVkw2gASQ@mail.gmail.com>

> Maybe try setting location /hello and /world would help?

I tried that but it didn't work.

Laszlo


From nginx-forum at nginx.us Mon Oct 7 06:39:14 2013
From: nginx-forum at nginx.us (lahiru)
Date: Mon, 07 Oct 2013 02:39:14 -0400
Subject: remove some parameters from $args
In-Reply-To: <CAB4Tn6OgYR6tKVr1yAG1kPrb61==Qfr3P1E=CKucLrSg8KXYFA@mail.gmail.com>
References: <CAB4Tn6OgYR6tKVr1yAG1kPrb61==Qfr3P1E=CKucLrSg8KXYFA@mail.gmail.com>
Message-ID: <24559ad9129d8c42786c4e6d5e1477db.NginxMailingListEnglish@forum.nginx.org>

Works like a charm. Thank you very much Yichun.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243462,243472#msg-243472


From lists at ruby-forum.com Mon Oct 7 08:08:48 2013
From: lists at ruby-forum.com (Aivaras La)
Date: Mon, 07 Oct 2013 10:08:48 +0200
Subject: Graceful backend shutdown
Message-ID: <eb47f388ce60b96ed31a6fa358bef9e7@ruby-forum.com>

Hi all!

I'm using Nginx as a reverse proxy and load balancer with 2 backends.
Sometimes I need to turn off one of the app servers, and I need to do it
gracefully: when I comment out one server in the Nginx config, the Nginx
master process should start sending new requests to the remaining server,
while old requests and sessions stay on the old server. I tried using
"down", but it loses sessions. Then I tried kill -HUP, but Nginx
immediately loads the new config, closes the old sessions, and redirects
them to the new server. Thanks for your help.

--
Posted via http://www.ruby-forum.com/.


From a.marinov at ucdn.com Mon Oct 7 08:56:45 2013
From: a.marinov at ucdn.com (Anatoli Marinov)
Date: Mon, 7 Oct 2013 11:56:45 +0300
Subject: How to setup nginx to follow 302
Message-ID: <CA+HzRfS_k+xd0kMTEZfrcKw7cEd5JnBFkaNY2-5BpFfwB-UVuQ@mail.gmail.com>

Hello colleagues,
Last week I tried to configure nginx to follow 302 through upstream instead
of relay it to the client.
My config now is like the next one:


http {
    proxy_cache_path /home/toli/nginx/run/cache keys_zone=zone_c1:256m
                     inactive=5d max_size=30g;

    upstream up_cdn_cache_l2 {
        server 127.0.1.1:8081 weight=100;
    }

    server {
        listen 127.0.0.1:8080;
        resolver 8.8.8.8;

        location / {
            proxy_cache zone_c1;
            proxy_pass http://127.0.1.1:8081;
            proxy_temp_path tmp;
            error_page 301 302 307 @redir;
        }

        location @redir {
            set $foo $upstream_http_location;
            proxy_pass $foo;
            proxy_cache zone_c1;
            proxy_temp_path tmp;
        }
    }
}

With this config, nginx tries to connect to the server taken from the
Location header. In wireshark I can see a new TCP connection to it, but
there is no HTTP request over this connection, and after some time a
"504 Gateway Time-out" is received.
So on the client side I receive "504 Gateway Time-out".

Do you have any idea why nginx doesn't send an HTTP request to the second
server?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/54ec0567/attachment.html>

From mdounin at mdounin.ru Mon Oct 7 11:06:29 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 7 Oct 2013 15:06:29 +0400
Subject: Doubt regarding Windows 7, MP4 module and HTML5.
In-Reply-To: <bbf0e0e0826330ad0b49285ccf5e9a56.NginxMailingListEnglish@forum.nginx.org>
References: <bbf0e0e0826330ad0b49285ccf5e9a56.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131007110629.GA76294@mdounin.ru>

Hello!

On Sun, Oct 06, 2013 at 04:18:44PM -0400, sandrex wrote:

> Hello,
>
> I've runned nginx under Win7 and MP4 videos embed with HTML5 and BubblesJS
> due to compatibility issues with SRT subtitles.
> I'd like to know how to enable ngx_http_mp4_module using Windows (I've just
> found out in Linux).

The ngx_http_mp4_module module is compiled into the nginx/Windows
binary as available from http://nginx.org/en/download.html.

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Mon Oct 7 11:12:44 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 7 Oct 2013 15:12:44 +0400
Subject: Graceful backend shutdown
In-Reply-To: <eb47f388ce60b96ed31a6fa358bef9e7@ruby-forum.com>
References: <eb47f388ce60b96ed31a6fa358bef9e7@ruby-forum.com>
Message-ID: <20131007111244.GB76294@mdounin.ru>

Hello!

On Mon, Oct 07, 2013 at 10:08:48AM +0200, Aivaras La wrote:

> Hi all!
>
> I'm using Nginx as a reverse proxy and loadbalancer with 2 backends.
> Sometimes I need to turn off one of the apps server. And I need to do it
> gracefully, that when I comment one server in Nginx config, Nginx master
> process starts to send new requests to new server, but old requests and
> sessions stay in old server. I tried to use down, but it loses sessions.
> Then tried use kill -HUP, but Nginx immediately loads new config and
> closes old sessions and redirects them to new server. Thanks for help.

On kill -HUP nginx does a graceful shutdown of old worker
processes. That is, all requests being handled by old worker
processes continue to be served until they are complete. No
requests are lost and/or unexpectedly terminated. Details on the
reconfiguration process can be found here:

http://nginx.org/en/docs/control.html#reconfiguration
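
A sketch of that workflow (the upstream name and addresses here are
hypothetical): mark the server "down" instead of commenting it out, then
reload with HUP; new requests go to the remaining server, while requests
already in flight on the old worker processes run to completion.

```nginx
# hypothetical upstream; after editing, send HUP to the master process
# (e.g. nginx -s reload) and the old workers shut down gracefully
upstream backends {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080 down;   # being taken out of service
}
```

Note that this drains in-flight requests only; it does not by itself
preserve session affinity for clients that were pinned to the removed
server.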

--
Maxim Dounin
http://nginx.org/en/donation.html


From thijskoerselman at gmail.com Mon Oct 7 14:03:01 2013
From: thijskoerselman at gmail.com (Thijs Koerselman)
Date: Mon, 7 Oct 2013 16:03:01 +0200
Subject: Using add_header at server level context
In-Reply-To: <20130930143006.GK19345@craic.sysops.org>
References: <CAMCJZorqZAfjM94n2K8hmFpGiBcx7aA4b3rj3833HpuV0eRQqA@mail.gmail.com>
<20130930143006.GK19345@craic.sysops.org>
Message-ID: <CAMCJZoor9_fFgEs9M3cQkJMc1UJuV6oRP5faQDrvCoFqBMFEkg@mail.gmail.com>

Thanks. So using add_header in the location scope discards any
add_header statements from the parent scope. I am surprised that it
works like that, but it's definitely good to know.
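
A sketch of the resulting pattern (the header names and values are just
examples): because add_header inheritance is by replacement rather than
addition, a location that defines its own header must repeat any
server-level headers it still wants.

```nginx
server {
    listen 8080;
    add_header X-Server server-level;

    location /one {
        # no add_header here, so X-Server is inherited from the server level
        return 200 "location one";
    }

    location /two {
        # this location has its own add_header, which replaces the whole
        # inherited set, so the server-level header must be repeated
        add_header X-Server server-level;
        add_header X-Location two;
        return 200 "location two";
    }
}
```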



On Mon, Sep 30, 2013 at 4:30 PM, Francis Daly <francis at daoine.org> wrote:

> On Mon, Sep 30, 2013 at 03:42:50PM +0200, Thijs Koerselman wrote:
>
> Hi there,
>
> > From the add_header docs I understand that it works at location, http and
> > server context. But when I use add_header at the server level I don't see
> > the headers being added to the response.
>
> > Am I missing something or is this just not working at the server level
> for
> > some reason?
>
> You're missing something.
>
> You're either missing that if the second argument to add_header expands
> to empty, then the header is not added; or that configuration directive
> inheritance is by replacement, not addition.
>
> ==
> server {
> listen 8080;
> add_header X-Server server-level;
> add_header X-Surprise $http_surprise;
>
> location /one {
> return 200 "location one";
> }
> location /two {
> return 200 "location two";
> add_header X-Location two;
> }
> }
> ==
>
> Compare the outputs you actually get from
>
> curl -i http://127.0.0.1:8080/one
>
> curl -i http://127.0.0.1:8080/two
>
> curl -i -H Surprise:value http://127.0.0.1:8080/one
>
> with what you expect to get.
>
> f
> --
> Francis Daly francis at daoine.org
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/64581fb7/attachment.html>

From ben at indietorrent.org Mon Oct 7 19:22:15 2013
From: ben at indietorrent.org (Ben Johnson)
Date: Mon, 07 Oct 2013 15:22:15 -0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <52373D89.2000906@indietorrent.org>
References: <52373D89.2000906@indietorrent.org>
Message-ID: <525309E7.6030209@indietorrent.org>



On 9/16/2013 1:19 PM, Ben Johnson wrote:
> Hello,
>
> In an effort to resolve a different issue, I am trying to confirm that
> my stack is capable of servicing at least two simultaneous requests for
> a given PHP script.
>
> In an effort to confirm this, I have written a simple PHP script that
> runs for a specified period of time and outputs the number of seconds
> elapsed since the script was started.
>
> -----------------------------------------------
> <?php
>
> $start = time();
>
> echo 'Starting concurrency test. Seconds elapsed:' . PHP_EOL;
> flush();
>
> $elapsed = time() - $start;
>
> echo $elapsed . PHP_EOL;
> flush();
>
> while ($elapsed < 60) {
> echo time() - $start . PHP_EOL;
> flush();
>
> sleep(5);
>
> $elapsed = time() - $start;
> }
>
> echo time() - $start . PHP_EOL;
> flush();
>
> -----------------------------------------------
>
> For whatever reason, nginx *always* buffers the output, even when I set
>
> output_buffering = off
>
> in the effective php.ini, *and* I set
>
> fastcgi_keep_conn on;
>
> in my nginx.conf.
>
> Of course, when I request the script via the command-line (php -f), the
> output is not buffered.
>
> Is it possible to disable PHP output buffering completely in nginx?
>
> Thanks for any help!
>
> -Ben
>

Sorry to bump this topic, but I feel as though I have exhausted the
available information on this subject.

I'm pretty much in the same boat as Roger from
http://stackoverflow.com/questions/4870697/php-flush-that-works-even-in-nginx
. I have tried all of the suggestions mentioned and still cannot disable
output buffering in PHP scripts that are called via nginx.

I have ensured that:

1.) output_buffering = "Off" in effective php.ini.

2.) zlib.output_compression = "Off" in effective php.ini.

3.) implicit_flush = "On" in effective php.ini.

4.) "gzip off" in nginx.conf.

5.) "fastcgi_keep_conn on" in nginx.conf.

6.) "proxy_buffering off" in nginx.conf.

nginx 1.5.2 (Windows)
PHP 5.4.8 Thread-Safe
Server API: CGI/FastCGI

Is there something else that I've overlooked?

Perhaps someone with a few moments of free time would be willing to
give this a shot on his own system. This seems "pretty basic", but is
proving to be a real challenge.

Thanks for any help with this!

-Ben


From nginx-forum at nginx.us Mon Oct 7 20:25:01 2013
From: nginx-forum at nginx.us (itpp2012)
Date: Mon, 07 Oct 2013 16:25:01 -0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <525309E7.6030209@indietorrent.org>
References: <525309E7.6030209@indietorrent.org>
Message-ID: <6cafde71ab48d0ea7badc272d21ec06b.NginxMailingListEnglish@forum.nginx.org>

Have you seen this one?
http://stackoverflow.com/questions/8882383/how-to-disable-output-buffering-in-php

Also try the NTS build of PHP; it might also be that a flush only works
outside of FastCGI.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242895,243487#msg-243487


From wogri at wogri.com Mon Oct 7 20:31:15 2013
From: wogri at wogri.com (Wolfgang Hennerbichler)
Date: Mon, 7 Oct 2013 22:31:15 +0200
Subject: reverse proxy and nested locations
Message-ID: <24A832F0-6483-4D4F-BF24-F2B6F2947C3E@wogri.com>

Hi list,

I'd like to have an elegant reverse proxy configuration, where I allow specific sub-URIs behind the reverse-proxy URL for specific IP addresses. My intended configuration looks like this:

# TRAC
location /trac {
    proxy_pass https://my.web.server:443/trac/;

    location /trac/project {
        allow 10.32.1.146;
        allow 10.64.0.6;
        deny all;
    }
}

However, the location /trac/project does not inherit the proxy_pass directive. It works if I add 'proxy_pass https://my.web.server:443/trac/;' in location /trac/project, but this is redundant and I don't like it.

I can't put the proxy_pass into the server context, as this proxy server does different proxy passes for different locations.
Any help solving this in an elegant way?

Wolfgang

--
http://www.wogri.at


From francis at daoine.org Mon Oct 7 21:35:07 2013
From: francis at daoine.org (Francis Daly)
Date: Mon, 7 Oct 2013 22:35:07 +0100
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <525309E7.6030209@indietorrent.org>
References: <52373D89.2000906@indietorrent.org>
<525309E7.6030209@indietorrent.org>
Message-ID: <20131007213507.GN19345@craic.sysops.org>

On Mon, Oct 07, 2013 at 03:22:15PM -0400, Ben Johnson wrote:
> On 9/16/2013 1:19 PM, Ben Johnson wrote:

Hi there,

> > For whatever reason, nginx *always* buffers the output, even when I set

> > Is it possible to disable PHP output buffering completely in nginx?

Have you shown that the initial problem is on the nginx side?

I suspect it will be more interesting to people on this list if you
have a simple test case which demonstrates that it is nginx which is
buffering when you don't want it to.

Use a php script like this:

==
<?
echo "The first bit";
sleep(5);
echo "The second bit";
?>
==

Run the fastcgi server like this:

env -i php-cgi -d cgi.fix_pathinfo=0 -q -b 9009

Use an nginx config which includes something like this:

==
location = /php {
    fastcgi_param SCRIPT_FILENAME /usr/local/nginx/test.php;
    fastcgi_pass 127.0.0.1:9009;
}
==

Then do something like

tcpdump -nn -i any -A -s 0 port 9009

while also doing a

curl -i http://127.0.0.1:8080/php

and look at the network traffic from the fastcgi server.

If you don't see a five-second gap between the two different response
packets, it is being buffered before it gets to nginx.

Now make whichever please-don't-buffer changes seem useful in the php code
and in the fastcgi server configuration. When you can see non-buffered
output getting to nginx, then you know the non-nginx side is doing what
you want. So now you can start testing nginx configuration changes;
and you can share the exact non-nginx configuration you use, so that
someone else can copy-paste it and see the same problem that you see.

(Change 127.0.0.1:9009 to be whatever remote server runs your fastcgi
server, if that makes it easier to run tcpdump.)

Good luck with it,

f
--
Francis Daly francis at daoine.org


From reallfqq-nginx at yahoo.fr Mon Oct 7 21:57:49 2013
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 7 Oct 2013 17:57:49 -0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <20131007213507.GN19345@craic.sysops.org>
References: <52373D89.2000906@indietorrent.org>
<525309E7.6030209@indietorrent.org>
<20131007213507.GN19345@craic.sysops.org>
Message-ID: <CALqce=3Kk7UBT7fxG8ukvg3PyeYQOKUxq8LqkpRddR3T4s31WQ@mail.gmail.com>

Hello,


On Mon, Oct 7, 2013 at 5:35 PM, Francis Daly <francis at daoine.org> wrote:

> Run the fastcgi server like this:
>
> env -i php-cgi -d cgi.fix_pathinfo=0 -q -b 9009
>
> Use an nginx config which includes something like this:
>

I would recommend being careful with that experiment, since there is a
high probability that Ben uses php-fpm (it's actually the recommended way
compared to the old FastCGI + php-cgi and its related issues).
First, Ben should ensure that php-cgi and php-fpm share the exact same
ini configuration. That's a common caveat... :o)

> ==
> location = /php {
> fastcgi_param SCRIPT_FILENAME /usr/local/nginx/test.php;
> fastcgi_pass 127.0.0.1:9009;
> }
> ==
>
> Then do something like
>
> tcpdump -nn -i any -A -s 0 port 9009
>
> while also doing a
>
> curl -i http://127.0.0.1:8080/php
>
> and look at the network traffic from the fastcgi server.
>
> If you don't see a five-second gap between the two different response
> packets, it is being buffered before it gets to nginx.
>

That's the best way of proceeding, since it uses the exact environment PHP
will be using for production code. Wireshark may be used to read pcap
dumps with a nice graphical presentation.

Now make whichever please-don't-buffer changes seem useful in the php code
> and in the fastcgi server configuration. When you can see non-buffered
> output getting to nginx, then you know the non-nginx side is doing what
> you want. So now you can start testing nginx configuration changes;
> and you can share the exact non-nginx configuration you use, so that
> someone else can copy-paste it and see the same problem that you see.
>
> (Change 127.0.0.1:9009 to be whatever remote server runs your fastcgi
> server, if that makes it easier to run tcpdump.)
>
> Good luck with it,
>

I share the wish. :o)
Please share the results of every step with us so we can help you
further.
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/4177a7c8/attachment.html>

From mdounin at mdounin.ru Mon Oct 7 23:02:40 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 Oct 2013 03:02:40 +0400
Subject: reverse proxy and nested locations
In-Reply-To: <24A832F0-6483-4D4F-BF24-F2B6F2947C3E@wogri.com>
References: <24A832F0-6483-4D4F-BF24-F2B6F2947C3E@wogri.com>
Message-ID: <20131007230239.GF76294@mdounin.ru>

Hello!

On Mon, Oct 07, 2013 at 10:31:15PM +0200, Wolfgang Hennerbichler wrote:

> Hi list,
>
> I'd like to have an elegant reverse proxy configuration, where I
> allow specific sub-URIs behind the reverse-proxy-URL for
> specific IP Adresses. My intended configuration looks like this:
>
> # TRAC
> location /trac {
> proxy_pass https://my.web.server:443/trac/;
> location /trac/project {
> allow 10.32.1.146;
> allow 10.64.0.6;
> deny all;
> }
> }
>
> However, the location /trac/project does not inherit the
> proxy-pass directive. It works if I add 'proxy_pass
> https://my.web.server:443/trac/;' in the location /trac/project.
> This is redundant and I don't like that.

The idea is that a request handler (proxy_pass in your case) is
always explicitly set for a location. Hence handlers are not
inherited.

If you want to drop something redundant, then I would recommend
dropping the URI part in proxy_pass instead. Something like this
should do what you need:

location /trac/ {
    proxy_pass https://my.web.server;
}

location /trac/project/ {
    proxy_pass https://my.web.server;
    allow ...
    deny all;
}

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Mon Oct 7 23:18:45 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 Oct 2013 03:18:45 +0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <525309E7.6030209@indietorrent.org>
References: <52373D89.2000906@indietorrent.org>
<525309E7.6030209@indietorrent.org>
Message-ID: <20131007231845.GG76294@mdounin.ru>

Hello!

On Mon, Oct 07, 2013 at 03:22:15PM -0400, Ben Johnson wrote:

[...]

> Sorry to bump this topic, but I feel as though I have exhausted the
> available information on this subject.
>
> I'm pretty much in the same boat as Roger from
> http://stackoverflow.com/questions/4870697/php-flush-that-works-even-in-nginx
> . I have tried all of the suggestions mentioned and still cannot disable
> output buffering in PHP scripts that are called via nginx.
>
> I have ensured that:
>
> 1.) output_buffering = "Off" in effective php.ini.
>
> 2.) zlib.output_compression = "Off" in effective php.ini.
>
> 3.) implicit_flush = "On" in effective php.ini.
>
> 4.) "gzip off" in nginx.conf.
>
> 5.) "fastcgi_keep_conn on" in nginx.conf.
>
> 6.) "proxy_buffering off" in nginx.conf.

Just a side note: proxy_buffering is unrelated to fastcgi;
switching it off does nothing as long as you use fastcgi_pass.

> nginx 1.5.2 (Windows)
> PHP 5.4.8 Thread-Safe
> Server API: CGI/FastCGI
>
> Is there something else that I've overlooked?
>
> Perhaps there is someone with a few moments free time who would be
> willing to give this a shot on his own system. This seems "pretty
> basic", but is proving to be a real challenge.

There are lots of possible places where data can be buffered for
various reasons, e.g. postpone_output (see
http://nginx.org/r/postpone_output). In your configuration you
seem to disable the gzip filter - but there are other filters which
may buffer data as well, such as the SSI and sub filters, and likely
many 3rd party modules.

While it should be possible to carefully configure nginx to avoid
all places where buffering can happen, it should be much easier to
use

fastcgi_buffering off;

as available in nginx 1.5.6, see
http://nginx.org/r/fastcgi_buffering.
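A minimal sketch of what that might look like in a PHP location (the
backend address and paths are illustrative, not from this thread);
fastcgi_buffering requires nginx 1.5.6 or later:

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    # pass upstream data to the client as it arrives (nginx >= 1.5.6)
    fastcgi_buffering off;

    # keep the gzip filter from re-buffering the output
    gzip off;
}
```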

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Tue Oct 8 00:12:43 2013
From: nginx-forum at nginx.us (sandrex)
Date: Mon, 07 Oct 2013 20:12:43 -0400
Subject: Doubt regarding Windows 7, MP4 module and HTML5.
In-Reply-To: <20131007110629.GA76294@mdounin.ru>
References: <20131007110629.GA76294@mdounin.ru>
Message-ID: <221e6b788baa6056f900e8f23025e2b7.NginxMailingListEnglish@forum.nginx.org>

Thank you.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243465,243496#msg-243496


From reallfqq-nginx at yahoo.fr Tue Oct 8 02:57:14 2013
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 7 Oct 2013 22:57:14 -0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <20131007231845.GG76294@mdounin.ru>
References: <52373D89.2000906@indietorrent.org>
<525309E7.6030209@indietorrent.org>
<20131007231845.GG76294@mdounin.ru>
Message-ID: <CALqce=2AjjP5f9jq0WsNyPkdTFe=swJvxCWdceZCkbwnqBPLVQ@mail.gmail.com>

I took a bit of time to do that... TBH I lost a lot of time finding a way
to record traffic to a locally hosted Web server in Windows... :o\
Why would people host stuff with Windows? oO

Anyway. Here are the details:

Configuration:
nginx 1.5.6
PHP 5.4.20 Thread-Safe
Wireshark 1.10.2
I took the liberty of upgrading test components to the latest release in
the same branch, since some bugs of interest might have been corrected.

Synthesis:
I didn't go far on the PHP side, but I noticed on early captures that PHP
was still sending everything after 5 seconds.

I cheated a little bit by modifying the test file to use the PHP flush()
procedure which forces buffers to be emptied and content sent to the client.

I then noticed on the capture that PHP was rightfully sending the content
in 2 parts as expected, but somehow nginx was still waiting for the last
part to arrive before sending content to the client.

There is still work to be done on the nginx side. Since we are on the nginx
mailing list, you may prioritize and see to the PHP part later on. :o)

Every modification I made to the original nginx.conf file is self-contained
in the location serving '.php' files.
How to reproduce:
The main concern here was to record traffic between nginx and PHP. Here are
the steps for a successful operation.


1. Use the nginx configuration provided as attachment (nginx.conf to put
in <nginx dir>\conf\, overwriting the default one)
2. Place the test script in <nginx dir>\html\
3. Use the PHP configuration provided as attachment (php.ini to put in
<PHP dir>)
4. Modify Windows' routing table to force local traffic to make a round
trip to the nearest router/switch (local traffic can't be recorded on
modern Windows):
5. In cmd.exe, type 'route add <computer external IP address> <gateway
IP address>' (you'll find required information with a quick 'ipconfig')
6. Start PHP with following arguments (either command-line or through a
shortcut): 'php-cgi.exe -b <computer external IP address>:9000'
7. Start nginx (simply double click on it)
8. Check that 2 nginx processes and 1 php-cgi.exe process exist in the
task manager.
9. Check (through 'netstat -abn') that php-cgi.exe is listening on
<computer external IP address>:9000
10. Start Wireshark recording on the interface related to the IP address
used before (or all interfaces) with capture filter 'port 9000'
11. Browse to http://localhost/test.php
12. Stop Wireshark recording

You'll find my recording of the backend traffic as attachment.
Please ignore the duplicated traffic (as traffic going forth and back on
the network interface is recorded 2 times total: that's a drawback of the
'hack' setup you need on Windows to record local traffic...).

Hope that'll help
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/175c177d/attachment-0001.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx.conf
Type: application/octet-stream
Size: 2806 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/175c177d/attachment-0003.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test.php
Type: application/x-httpd-php
Size: 77 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/175c177d/attachment-0001.bin>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: php.ini
Type: application/octet-stream
Size: 68879 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/175c177d/attachment-0004.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx_php_backend.pcapng
Type: application/octet-stream
Size: 5592 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131007/175c177d/attachment-0005.obj>

From wogri at wogri.com Tue Oct 8 04:17:33 2013
From: wogri at wogri.com (Wolfgang Hennerbichler)
Date: Tue, 8 Oct 2013 06:17:33 +0200
Subject: reverse proxy and nested locations
In-Reply-To: <20131007230239.GF76294@mdounin.ru>
References: <24A832F0-6483-4D4F-BF24-F2B6F2947C3E@wogri.com>
<20131007230239.GF76294@mdounin.ru>
Message-ID: <5E26F7AF-9AEE-4D07-A0F6-09042755F51F@wogri.com>

On Oct 8, 2013, at 01:02 , Maxim Dounin <mdounin at mdounin.ru> wrote:

> Hello!

Hi and thanks for your quick response,

> On Mon, Oct 07, 2013 at 10:31:15PM +0200, Wolfgang Hennerbichler wrote:
>
>> Hi list,
>>
>> I'd like to have an elegant reverse proxy configuration, where I
>> allow specific sub-URIs behind the reverse-proxy-URL for
>> specific IP Adresses. My intended configuration looks like this:
>>
>> # TRAC
>> location /trac {
>> proxy_pass https://my.web.server:443/trac/;
>> location /trac/project {
>> allow 10.32.1.146;
>> allow 10.64.0.6;
>> deny all;
>> }
>> }
>>
>> However, the location /trac/project does not inherit the
>> proxy-pass directive. It works if I add 'proxy_pass
>> https://my.web.server:443/trac/;' in the location /trac/project.
>> This is redundant and I don't like that.
>
> The idea is that a request handler (proxy_pass in your case) is
> always explicitly set for a location. Hence handlers are not
> inherited.

I already feared so.

> If you want to drop something redundant, then I would recommend to
> drop a URI part in proxy_pass instead. Something like this should
> do what you need:
>
> location /trac/ {
> proxy_pass https://my.web.server;
> }
>
> location /trac/project/ {
> proxy_pass https://my.web.server;
> allow ...
> deny all;
> }

thanks. Will have to cope with it.

Wolfgang

> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


From lists at ruby-forum.com Tue Oct 8 08:33:38 2013
From: lists at ruby-forum.com (Aivaras La)
Date: Tue, 08 Oct 2013 10:33:38 +0200
Subject: Graceful backend shutdown
In-Reply-To: <20131007111244.GB76294@mdounin.ru>
References: <eb47f388ce60b96ed31a6fa358bef9e7@ruby-forum.com>
<20131007111244.GB76294@mdounin.ru>
Message-ID: <55d84a9cb457200517796c76531ad939@ruby-forum.com>

Maxim Dounin wrote in post #1123727:
> Hello!
>
> On Mon, Oct 07, 2013 at 10:08:48AM +0200, Aivaras La wrote:
>
>> Hi all!
>>
>> I'm using Nginx as a reverse proxy and loadbalancer with 2 backends.
>> Sometimes I need to turn off one of the apps server. And I need to do it
>> gracefully, that when I comment one server in Nginx config, Nginx master
>> process starts to send new requests to new server, but old requests and
>> sessions stay in old server. I tried to use down, but it loses sessions.
>> Then tried use kill -HUP, but Nginx immediately loads new config and
>> closes old sessions and redirects them to new server. Thanks for help.
>
> On kill -HUP nginx does a gracefull shutdown of old worker
> processes. That is, all requests being handled by old worker
> processes are continue to work till they are complete. No
> requests are lost and/or unexpectedly terminated. Details on
> reconfiguration process can be found here:
>
> http://nginx.org/en/docs/control.html#reconfiguration
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html

I'll try to explain my example:
In my config I have an upstream with 1 backend; then I change that
backend server's IP address (I point it at something I don't need,
just for example, like a local news page). Then I try to access Nginx
(with the old config), which starts to load my big page. While the page
was loading I sent a HUP signal, and then my page wasn't completed. New
workers spawned and old workers quit within the same second. Nobody was
waiting. Is there a possibility that old workers wait much longer? Or
somehow to change backend servers while still serving old sessions?
Thanks for help!

--
Posted via http://www.ruby-forum.com/.


From andrew.galdes at agix.com.au Tue Oct 8 08:35:52 2013
From: andrew.galdes at agix.com.au (Andrew Galdes)
Date: Tue, 8 Oct 2013 19:05:52 +1030
Subject: How to ensure good performance for the first X number of visitors
Message-ID: <CALm+qqDb5cRDpsA=5U9FcSL3dwj757-9ygDj0U8Tg8mQEa6uuQ@mail.gmail.com>

Hi all,

I'm looking for some guidance with Nginx. We have inherited a web-server
that experiences heavy load. The server is running: Ubuntu Server 12.04,
24 cores, ~50 GB RAM.

The CPU usage during peak time is 100%. Memory is about 50%.

The client has asked that we configure Nginx to give the best user
experience for the first X number of visitors while the others are 503'd.
So a visitor to the website can experience good usage from start to
purchase (it's a commerce site). While others who visit (over a threshold)
would get 503 errors. This would ensure that, if you can get in, your
experience would be nice. If you can't, 503 and try again later.

I see there are some modules that can do this with a recompile, but we'd
rather not go down that path at this stage. Given that HTTP is
session-less, I suppose cookies have been used to achieve this, or
IP addresses.

Any thoughts?


--
-Andrew Galdes
Managing Director

RHCSA, LPI, CCENT

AGIX Linux

Ph: 08 7324 4429
Mb: 0422 927 598

Site: http://www.agix.com.au
Twitter: http://twitter.com/agixlinux
LinkedIn: http://au.linkedin.com/in/andrewgaldes
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131008/3750aba4/attachment.html>

From steve at greengecko.co.nz Tue Oct 8 08:43:11 2013
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Tue, 08 Oct 2013 21:43:11 +1300
Subject: How to ensure good performance for the first X number of visitors
In-Reply-To: <CALm+qqDb5cRDpsA=5U9FcSL3dwj757-9ygDj0U8Tg8mQEa6uuQ@mail.gmail.com>
References: <CALm+qqDb5cRDpsA=5U9FcSL3dwj757-9ygDj0U8Tg8mQEa6uuQ@mail.gmail.com>
Message-ID: <5253C59F.9000803@greengecko.co.nz>

On 08/10/13 21:35, Andrew Galdes wrote:
> Hi all,
>
> I'm looking for some guidance with Nginx. We have inherited a
> web-server that experiences heavy load. The server is running: Ubuntu
> Server 12.4, 24 Cores, 50~GB RAM.
>
> The CPU usage during peak time is 100%. Memory is about 50%.
>
> The client has asked that we configure Nginx to give the best user
> experience for the first X number of visitors while the others are
> 503'd. So a visitor to the website can experience good usage from
> start to purchase (it's a commerce site). While others who visit (over
> a threshold) would get 503 errors. This would ensure that, if you can
> get in, your experience would be nice. If you can't, 503 and try again
> later.
>
> I see there are some modules that can do this with a recompile but
> we'd rather not go down that path at this stage. Given that http is
> session-less, i suppose cookies have been used to achieve this. Or
> based on IP address.
>
> Any thoughts?
>
>
> --
> -Andrew Galdes
> Managing Director
>
> RHCSA, LPI, CCENT
>
> AGIX Linux
>
> Ph: 08 7324 4429
> Mb: 0422 927 598
>
> Site: http://www.agix.com.au
> Twitter: http://twitter.com/agixlinux
> LinkedIn: http://au.linkedin.com/in/andrewgaldes
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
Hi Andrew,

I seem to be doing exactly this for a living at the moment ( especially
Magento installs ), and not that many timezones away. If you want to
drop me a line offlist, please do!

Steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131008/90e33692/attachment.html>

From nginx-forum at nginx.us Tue Oct 8 11:09:03 2013
From: nginx-forum at nginx.us (lmwood)
Date: Tue, 08 Oct 2013 07:09:03 -0400
Subject: 404 on Prestashop 1.5 under nginx
In-Reply-To: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org>
References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <48e8a1d8a207ff67a5c5c7a38f5e0333.NginxMailingListEnglish@forum.nginx.org>

What was the solution to this problem? I am also searching for the answer to
this.

Thanks in advance.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,243508#msg-243508


From mdounin at mdounin.ru Tue Oct 8 12:38:05 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 Oct 2013 16:38:05 +0400
Subject: Graceful backend shutdown
In-Reply-To: <55d84a9cb457200517796c76531ad939@ruby-forum.com>
References: <eb47f388ce60b96ed31a6fa358bef9e7@ruby-forum.com>
<20131007111244.GB76294@mdounin.ru>
<55d84a9cb457200517796c76531ad939@ruby-forum.com>
Message-ID: <20131008123805.GK76294@mdounin.ru>

Hello!

On Tue, Oct 08, 2013 at 10:33:38AM +0200, Aivaras La wrote:

> Maxim Dounin wrote in post #1123727:
> > Hello!
> >
> > On Mon, Oct 07, 2013 at 10:08:48AM +0200, Aivaras La wrote:
> >
> >> Hi all!
> >>
> >> I'm using Nginx as a reverse proxy and loadbalancer with 2 backends.
> >> Sometimes I need to turn off one of the apps server. And I need to do it
> >> gracefully, that when I comment one server in Nginx config, Nginx master
> >> process starts to send new requests to new server, but old requests and
> >> sessions stay in old server. I tried to use down, but it loses sessions.
> >> Then tried use kill -HUP, but Nginx immediately loads new config and
> >> closes old sessions and redirects them to new server. Thanks for help.
> >
> > On kill -HUP nginx does a gracefull shutdown of old worker
> > processes. That is, all requests being handled by old worker
> > processes are continue to work till they are complete. No
> > requests are lost and/or unexpectedly terminated. Details on
> > reconfiguration process can be found here:
> >
> > http://nginx.org/en/docs/control.html#reconfiguration
>
> I'll try to explain my example:
> In my config I have upstream with 1 backend, then I change that 1
> backend server IP address ( I put something I don't need (just for
> example) like local news page). Then I try to access Nginx (with old
> config) which starts to load my big page. When page is loading I did HUP
> signal and then my page wasn't completed. New workers spawned and old
> workers quitted at the same second. Nobody was waiting. Is there a
> possibility that old workers wait much longer? Or somehow to change
> backend servers with serving old sessions? Thanks for help!

From your description of the problem I tend to think that by
"loading a big page" you mean a page with many external resources
(like images, etc.), and your new nginx configuration isn't able
to handle requests to these resources.

This is not going to work. As you probably heard before, HTTP is
a stateless protocol. This means that in terms of HTTP there is
no such thing as a "session" or "page loading". The only thing
HTTP knows about is a _request_. That is, nginx guarantees that
_requests_ are handled correctly according to the configuration it
has. But nginx doesn't know that your browser hasn't yet loaded
some of the external resources it needs, and that your new
configuration can't handle requests to these resources.

The correct approach to your problem is to use a configuration that
can handle requests all the time, instead of breaking it at some
point.

--
Maxim Dounin
http://nginx.org/en/donation.html


From reallfqq-nginx at yahoo.fr Tue Oct 8 12:39:14 2013
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 8 Oct 2013 08:39:14 -0400
Subject: How to ensure good performance for the first X number of visitors
In-Reply-To: <CALm+qqDb5cRDpsA=5U9FcSL3dwj757-9ygDj0U8Tg8mQEa6uuQ@mail.gmail.com>
References: <CALm+qqDb5cRDpsA=5U9FcSL3dwj757-9ygDj0U8Tg8mQEa6uuQ@mail.gmail.com>
Message-ID: <CALqce=2crx5MREu9jUzoPmRQXV979pJng4zPkL42F00r5UMfqg@mail.gmail.com>

Hello,

I would take a look at this module:
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html

Official modules are available through the pre-compiled official
binaries, so you won't need to compile anything by hand.
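A minimal sketch of how that module could be wired up (the zone name,
size, and limit are illustrative, not from this thread):

```nginx
http {
    # one shared zone keyed by client address ("perip" is a made-up name)
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        location / {
            # allow at most 10 concurrent connections per client IP;
            # connections over the limit are rejected with a 503
            limit_conn perip 10;
        }
    }
}
```

Note that limit_conn caps concurrent connections per key rather than
counting distinct visitors, so it approximates the "first X users"
behaviour rather than implementing it exactly.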
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131008/ea3ae627/attachment.html>

From nginx-forum at nginx.us Tue Oct 8 13:39:18 2013
From: nginx-forum at nginx.us (tonimarmol)
Date: Tue, 08 Oct 2013 09:39:18 -0400
Subject: 404 on Prestashop 1.5 under nginx
In-Reply-To: <48e8a1d8a207ff67a5c5c7a38f5e0333.NginxMailingListEnglish@forum.nginx.org>
References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org>
<48e8a1d8a207ff67a5c5c7a38f5e0333.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <25f454927e15f53130b9960526f4fab2.NginxMailingListEnglish@forum.nginx.org>

The problem is on prestashop configuration. Not nginx.

You must define (add if not exists) the custom name of the url on the "SEO &
URL" tab.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,243522#msg-243522


From nginx-forum at nginx.us Tue Oct 8 13:39:32 2013
From: nginx-forum at nginx.us (tonimarmol)
Date: Tue, 08 Oct 2013 09:39:32 -0400
Subject: 404 on Prestashop 1.5 under nginx
In-Reply-To: <8e2815f8643f4e8d18847b06d2c8b809.NginxMailingListEnglish@forum.nginx.org>
References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org>
<aa930bf5966d13d864f743e933bd1faa.NginxMailingListEnglish@forum.nginx.org>
<8e2815f8643f4e8d18847b06d2c8b809.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <6b0a84278b782f269feb995650e60fb9.NginxMailingListEnglish@forum.nginx.org>

The problem is on prestashop configuration. Not nginx.

You must define (add if not exists) the custom name of the url on the "SEO &
URL" tab.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,243523#msg-243523


From mdounin at mdounin.ru Tue Oct 8 13:42:12 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 Oct 2013 17:42:12 +0400
Subject: nginx-1.4.3
Message-ID: <20131008134212.GM76294@mdounin.ru>

Changes with nginx 1.4.3 08 Oct 2013

*) Bugfix: a segmentation fault might occur in a worker process if the
ngx_http_spdy_module was used with the "client_body_in_file_only"
directive.

*) Bugfix: a segmentation fault might occur on start or during
reconfiguration if the "try_files" directive was used with an empty
parameter.

*) Bugfix: the $request_time variable did not work in nginx/Windows.

*) Bugfix: in the ngx_http_auth_basic_module when using "$apr1$"
password encryption method.
Thanks to Markus Linnala.

*) Bugfix: in the ngx_http_autoindex_module.

*) Bugfix: in the mail proxy server.


--
Maxim Dounin
http://nginx.org/en/donation.html


From reallfqq-nginx at yahoo.fr Tue Oct 8 14:45:08 2013
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 8 Oct 2013 10:45:08 -0400
Subject: nginx-1.4.3
In-Reply-To: <20131008134212.GM76294@mdounin.ru>
References: <20131008134212.GM76294@mdounin.ru>
Message-ID: <CALqce=2GhswsAfa9Ci8D+0-to4tLw4kSh+R4Q=8MPRXwvdW1yw@mail.gmail.com>

Wow, that's maintenance ^^

Thanks to the dev team.

I am getting lost in the trac Web interface: where could I get details
on the defect affecting autoindex?

I'll wait for the Debian package to be available in the repo, then... :o)
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131008/fb036b4d/attachment.html>

From mdounin at mdounin.ru Tue Oct 8 15:48:51 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 Oct 2013 19:48:51 +0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <CALqce=2AjjP5f9jq0WsNyPkdTFe=swJvxCWdceZCkbwnqBPLVQ@mail.gmail.com>
References: <52373D89.2000906@indietorrent.org>
<525309E7.6030209@indietorrent.org>
<20131007231845.GG76294@mdounin.ru>
<CALqce=2AjjP5f9jq0WsNyPkdTFe=swJvxCWdceZCkbwnqBPLVQ@mail.gmail.com>
Message-ID: <20131008154851.GQ76294@mdounin.ru>

Hello!

On Mon, Oct 07, 2013 at 10:57:14PM -0400, B.R. wrote:

[...]

> I then noticed on the capture that PHP was rightfully sending the content
> in 2 parts as expected but somehow nginx was still waiting for the last
> part to arrive before sending content to the client.

What makes you think that nginx was waiting for the last part
without sending data to the client?

Please note that checking with a browser as in your check list isn't
meaningful, as browsers may (and likely will) wait for a
complete response from a server. In my limited testing on
Windows, IE needs a complete response, while Chrome shows data on
arrival.

Just in case, it works fine here with the following minimal
config:

events {}

http {
    server {
        listen 8080;

        location / {
            fastcgi_pass backend:9000;
            fastcgi_param SCRIPT_FILENAME /path/to/flush.php;
            fastcgi_keep_conn on;
        }
    }
}

But, again, testing with fastcgi_keep_conn is mostly useless now,
it's an abuse of the unrelated directive. The fastcgi_buffering
directive is already here in 1.5.6, use

fastcgi_buffering off;

instead if you need to turn off buffering for fastcgi responses.
Just in case, documentation can be found here:

http://nginx.org/r/fastcgi_buffering

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Tue Oct 8 15:54:22 2013
From: nginx-forum at nginx.us (jlintz)
Date: Tue, 08 Oct 2013 11:54:22 -0400
Subject: Capture 2xx response from proxy and rewrite
Message-ID: <0019af65c984a5912fa0db27ffa65341.NginxMailingListEnglish@forum.nginx.org>

Hi,

Is it possible to catch a 2xx response from an upstream proxy and rewrite
the response? We're able to do this with 4xx and 5xx responses using
proxy_intercept_errors and error_page, but I haven't found an equivalent
way of doing this for 2xx responses.
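For reference, a sketch of the 4xx/5xx interception described above (the
backend address, status code, and response text are illustrative):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;

    # let nginx handle upstream error codes via error_page
    proxy_intercept_errors on;
    error_page 404 = /custom_404;
}

location /custom_404 {
    # only reachable via internal redirects such as error_page
    internal;
    return 404 "rewritten error body\n";
}
```

proxy_intercept_errors only triggers for responses with code 300 and
above, which is why the same mechanism doesn't apply to 2xx.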

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243530,243530#msg-243530


From 2013 at uxp.de Tue Oct 8 19:42:56 2013
From: 2013 at uxp.de (Markus Gerstel)
Date: Tue, 08 Oct 2013 20:42:56 +0100
Subject: intermittent connectivity issues ngx_mail_pop3_module
Message-ID: <52546040.8030200@uxp.de>

Hi everyone,

I've recently installed nginx as a POP3/IMAP proxy, fronting for a
single server. Everything works most of the time. But every once in a
while, nginx fails to forward incoming connections with "-ERR internal
server error".
I've narrowed it down to the stage *after* the authorization, but need
some help interpreting the nginx debug output.

Here's a failing session: http://pastebin.com/VtQKsb92
and a successful session: http://pastebin.com/vae5YwAy

They start to diverge at line 107:

success:
*3181 recv: fd:18 18 of 4096
*3181 mail proxy send user
*3181 send: fd:18 13 of 13
*3181 post event 0000000001CE83D0
*3181 post event 0000000001D1C3E0
*3181 delete posted event 0000000001D1C3E0
*3181 mail proxy dummy handler

failure:
*3028 recv: fd:26 0 of 4096
*3028 close mail proxy connection: 26
*3028 event timer del: 26: 1381257081145
*3028 reusable connection: 0
*3028 SSL to write: 28
*3028 SSL_write: 28
*3028 close mail connection: 23

What does 'recv: 0' mean? Does this mean that nginx has a problem
opening the connection to the actual pop3 server? (If I open connections
from the nginx computer to the pop3 server directly they always work.)

Anyone have an idea how to get more information from nginx at this stage?

-Markus



tried with 1.2.1 and
nginx version: nginx/1.4.1
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-log-path=/var/log/nginx/access.log
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi
--lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid
--with-pcre-jit --with-debug --with-http_addition_module
--with-http_dav_module --with-http_geoip_module
--with-http_gzip_static_module --with-http_image_filter_module
--with-http_realip_module --with-http_stub_status_module
--with-http_ssl_module --with-http_sub_module --with-http_xslt_module
--with-ipv6 --with-mail --with-mail_ssl_module
--add-module=/build/nginx-WYjrxz/nginx-1.4.1/debian/modules/nginx-auth-pam
--add-module=/build/nginx-WYjrxz/nginx-1.4.1/debian/modules/nginx-dav-ext-module
--add-module=/build/nginx-WYjrxz/nginx-1.4.1/debian/modules/nginx-echo
--add-module=/build/nginx-WYjrxz/nginx-1.4.1/debian/modules/nginx-upstream-fair
--add-module=/build/nginx-WYjrxz/nginx-1.4.1/debian/modules/ngx_http_substitutions_filter_module


POP3 communication - success case:
fetchmail: POP3< +OK POP3 ready
fetchmail: POP3> CAPA
fetchmail: POP3< +OK Capability list follows
fetchmail: POP3< TOP
fetchmail: POP3< USER
fetchmail: POP3< SASL LOGIN PLAIN
fetchmail: POP3< STLS
fetchmail: POP3< .
fetchmail: POP3> STLS
fetchmail: POP3< +OK
fetchmail: Server certificate:
(..)
fetchmail: POP3> CAPA
fetchmail: POP3< +OK Capability list follows
fetchmail: POP3< TOP
fetchmail: POP3< USER
fetchmail: POP3< SASL LOGIN PLAIN
fetchmail: POP3< .
fetchmail: upgrade to TLS succeeded.
fetchmail: POP3> USER web1p1
fetchmail: POP3< +OK
fetchmail: POP3> PASS *
fetchmail: POP3< +OK logged in.
fetchmail: POP3> STAT
fetchmail: POP3< +OK 0 0
fetchmail: No mail for web1p1
fetchmail: POP3> QUIT
fetchmail: POP3< +OK Bye-bye.


POP3 communication - failure case:
fetchmail: POP3< +OK POP3 ready
fetchmail: POP3> CAPA
fetchmail: POP3< +OK Capability list follows
fetchmail: POP3< TOP
fetchmail: POP3< USER
fetchmail: POP3< SASL LOGIN PLAIN
fetchmail: POP3< STLS
fetchmail: POP3< .
fetchmail: POP3> STLS
fetchmail: POP3< +OK
fetchmail: Server certificate:
(..)
fetchmail: POP3> CAPA
fetchmail: POP3< +OK Capability list follows
fetchmail: POP3< TOP
fetchmail: POP3< USER
fetchmail: POP3< SASL LOGIN PLAIN
fetchmail: POP3< .
fetchmail: upgrade to TLS succeeded.
fetchmail: POP3> USER web1p1
fetchmail: POP3< +OK
fetchmail: POP3> PASS *
fetchmail: POP3< -ERR internal server error
fetchmail: POP3> QUIT





From kworthington at gmail.com Tue Oct 8 22:54:09 2013
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 8 Oct 2013 18:54:09 -0400
Subject: nginx-1.4.3
In-Reply-To: <20131008134212.GM76294@mdounin.ru>
References: <20131008134212.GM76294@mdounin.ru>
Message-ID: <CAGo79UUHa8Qa1VFGwftgqOHVx_aQUCv=ik8O+Jx65-3t7-yMBg@mail.gmail.com>

Hello Nginx users,

Now available: Nginx 1.4.3 for Windows http://goo.gl/vjluLA (32-bit and
64-bit versions)

These versions are to support legacy users who are already using
Cygwin-based builds of Nginx. Officially supported native Windows
binaries are at nginx.org.

Announcements are also available via Twitter (
http://twitter.com/kworthington), if you prefer to receive updates that way.

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington

On Tue, Oct 8, 2013 at 9:42 AM, Maxim Dounin <mdounin at mdounin.ru> wrote:

> Changes with nginx 1.4.3 08 Oct
> 2013
>
> *) Bugfix: a segmentation fault might occur in a worker process if the
> ngx_http_spdy_module was used with the "client_body_in_file_only"
> directive.
>
> *) Bugfix: a segmentation fault might occur on start or during
> reconfiguration if the "try_files" directive was used with an empty
> parameter.
>
> *) Bugfix: the $request_time variable did not work in nginx/Windows.
>
> *) Bugfix: in the ngx_http_auth_basic_module when using "$apr1$"
> password encryption method.
> Thanks to Markus Linnala.
>
> *) Bugfix: in the ngx_http_autoindex_module.
>
> *) Bugfix: in the mail proxy server.
>
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131008/8bcbc4fe/attachment.html>

From nginx-forum at nginx.us Wed Oct 9 06:19:43 2013
From: nginx-forum at nginx.us (mex)
Date: Wed, 09 Oct 2013 02:19:43 -0400
Subject: [DOC] OpenSSL Cookbook v1.1 released (by Ivan Ristic)
Message-ID: <5e2af0e414c98a84f6dad139fa672b2c.NginxMailingListEnglish@forum.nginx.org>

Hi List,

for those of you who have to deal with SSL there is a goodie,
released by Ivan Ristic; see
http://blog.ivanristic.com/2013/10/openssl-cookbook-v1.1-released.html


from the Blog:

OpenSSL Cookbook is a free ebook based around one chapter
of my in-progress book Bulletproof SSL/TLS and PKI.
The appendix contains the SSL/TLS Deployment Best Practices document
(re-published with permission from Qualys). In total, there's about
50 pages of text that covers the OpenSSL essentials, starting with
installation, then key and certificate management, and finally
cipher suite configuration.

Download: https://www.feistyduck.com/books/openssl-cookbook/



regards,


mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243541,243541#msg-243541


From appa at perusio.net Wed Oct 9 11:04:46 2013
From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=)
Date: Wed, 9 Oct 2013 13:04:46 +0200
Subject: Problem with {} in map regex matching
Message-ID: <CA+VA=FYgmnmcDXHbD04naJE31b5QOqrLbKbrmLO_SKP-rqztnw@mail.gmail.com>

To my surprise, apparently doing a match with {} like in:

map $args $has_tr0_arg {
default 0;

~"tr%5B0%5D%3D=[1-9][[:digit:]]{3}-[[:digit:]]{2}-[[:digit:]]{2}T[[:digit:]]{2}%2F[1-9][[:digit:]]{3}-[[:digit:]]{2}-[[:digit:]]{2}T[[:digit:]]{2}%26"
1;

~"%26tr%5B0%5D%3D=[1-9][[:digit:]]{3}-[[:digit:]]{2}-[[:digit:]]{2}T[[:digit:]]{2}%2F[1-9][[:digit:]]{3}-[[:digit:]]{2}-[[:digit:]]{2}T[[:digit:]]{2}"
1;
}

Doesn't work.

Which is a bit surprising, knowing that location regex matching works with
{}; of course you have to quote it, like I do above.

This is what I get from nginx -t:

nginx: [emerg] unexpected "{" in /etc/nginx/nginx.conf

Is this expected behavior, or am I doing something wrong here?


Thanks,
----appa
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131009/1894c70c/attachment.html>

From nginx-forum at nginx.us Wed Oct 9 11:08:33 2013
From: nginx-forum at nginx.us (moorthi)
Date: Wed, 09 Oct 2013 07:08:33 -0400
Subject: how to get remote_addr in ngix for POP/IMAP with perl module
In-Reply-To: <7d44bae7dff645a60e7b170812bab137.NginxMailingListEnglish@forum.nginx.org>
References: <20100825181658.GX48332@rambler-co.ru>
<c05be27d1d9c207e1d082fa1840791a3.NginxMailingListEnglish@forum.nginx.org>
<7d44bae7dff645a60e7b170812bab137.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8dd62fa865b70f8ac3ced2fc7c0e4689.NginxMailingListEnglish@forum.nginx.org>

One more problem: how do I get the original client IP in the authentication
details of the imap-server log instead of the nginx server IP?
When I look at the Cyrus authentication log (/var/log/maillog) it shows the
nginx IP as the client IP instead of the original desktop IP.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,123661,243550#msg-243550


From mdounin at mdounin.ru Wed Oct 9 11:09:20 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Oct 2013 15:09:20 +0400
Subject: intermittent connectivity issues ngx_mail_pop3_module
In-Reply-To: <52546040.8030200@uxp.de>
References: <52546040.8030200@uxp.de>
Message-ID: <20131009110919.GU76294@mdounin.ru>

Hello!

On Tue, Oct 08, 2013 at 08:42:56PM +0100, Markus Gerstel wrote:

> Hi everyone,
>
> I've recently installed nginx as a POP3/IMAP proxy, fronting for a
> single server. Everything works most of the time. But every once in
> a while, nginx fails to forward incoming connections with "-ERR
> internal server error".
> I've narrowed it down to the stage *after* the authorization, but
> need some help interpreting the nginx debug output.

[...]

> failure:
> *3028 recv: fd:26 0 of 4096
> *3028 close mail proxy connection: 26
> *3028 event timer del: 26: 1381257081145
> *3028 reusable connection: 0
> *3028 SSL to write: 28
> *3028 SSL_write: 28
> *3028 close mail connection: 23
>
> What does 'recv 0' mean? Does this mean that nginx has a problem in
> opening the connection to the actual pop3 server? (If I open
> connections from the nginx computer to the pop3 server directly they
> always work.)

It means the recv() syscall returned 0, which in turn means
connection was closed by other side, i.e. by your backend server.
Try looking into your backend's logs to find out why - there is no
additional information available on nginx side.
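
Maxim's point can be illustrated with a minimal, self-contained sketch (not
part of the original reply; Python is used here only for illustration):

```python
import socket

# Minimal demonstration of what "recv: fd:26 0 of 4096" means in the
# debug log: recv() returning 0 bytes signals an orderly close by the
# peer, not an error on the local side.
a, b = socket.socketpair()
b.close()                    # the "backend" closes its end of the connection
data = a.recv(4096)          # returns b'' -> peer performed an orderly shutdown
print(len(data))             # prints 0, just like "0 of 4096" in the log
a.close()
```

The same convention holds for the underlying recv() syscall in C: a return
value of 0 always means end-of-stream, which is why the answer points at the
backend's logs rather than at nginx.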

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Wed Oct 9 11:17:33 2013
From: nginx-forum at nginx.us (moorthi)
Date: Wed, 09 Oct 2013 07:17:33 -0400
Subject: how to get remote_addr in ngix for POP/IMAP with perl module
In-Reply-To: <8dd62fa865b70f8ac3ced2fc7c0e4689.NginxMailingListEnglish@forum.nginx.org>
References: <20100825181658.GX48332@rambler-co.ru>
<c05be27d1d9c207e1d082fa1840791a3.NginxMailingListEnglish@forum.nginx.org>
<7d44bae7dff645a60e7b170812bab137.NginxMailingListEnglish@forum.nginx.org>
<8dd62fa865b70f8ac3ced2fc7c0e4689.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <dccd7547aec79822ebb11b65c4552929.NginxMailingListEnglish@forum.nginx.org>

Another issue I'm having: when I log in through a PHP webmail (which connects
to nginx as an IMAP proxy), I do not get the original remote_addr on the nginx
server; instead I get 127.0.0.1. I've tried sending the header below from the
PHP webmail where the IMAP login happens:
header('X-Forwarded-For: '.$_SERVER['REMOTE_ADDR']);

How should I get the original remote address in nginx?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,123661,243552#msg-243552


From mdounin at mdounin.ru Wed Oct 9 11:44:39 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Oct 2013 15:44:39 +0400
Subject: Problem with {} in map regex matching
In-Reply-To: <CA+VA=FYgmnmcDXHbD04naJE31b5QOqrLbKbrmLO_SKP-rqztnw@mail.gmail.com>
References: <CA+VA=FYgmnmcDXHbD04naJE31b5QOqrLbKbrmLO_SKP-rqztnw@mail.gmail.com>
Message-ID: <20131009114439.GX76294@mdounin.ru>

Hello!

On Wed, Oct 09, 2013 at 01:04:46PM +0200, Ant?nio P. P. Almeida wrote:

> To my surprise, apparently doing a match with {} like in:
>
> map $args $has_tr0_arg {
> default 0;
>
> ~"tr%5B0%5D%3D=[1-9][[:digit:]]{3}-[[:digit:]]{2}-[[:digit:]]{2}T[[:digit:]]{2}%2F[1-9][[:digit:]]{3}-[[:digit:]]{2}-[[:digit:]]{2}T[[:digit:]]{2}%26"
> 1;
>
> ~"%26tr%5B0%5D%3D=[1-9][[:digit:]]{3}-[[:digit:]]{2}-[[:digit:]]{2}T[[:digit:]]{2}%2F[1-9][[:digit:]]{3}-[[:digit:]]{2}-[[:digit:]]{2}T[[:digit:]]{2}"
> 1;
> }
>
> Doesn't work.
>
> Which is bit surprising knowing that location regex matching works with {},
> of course you have to quote it, like I do above.
>
> This is what I get from nginx -t:
>
> nginx: [emerg] unexpected "{" in /etc/nginx/nginx.conf
>
> Is this expected behavior or I'm doing something wrong here?

Leading '~' is outside quotes, which results in your regular
expression being interpreted as non-quoted string. Use something
like this instead:

map ... {
"~(foo){3}" 1;
}
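
Applied to the original map, the same rule holds: the entire pattern,
including the leading '~', must go inside the quotes. A minimal sketch
using a shortened, hypothetical version of the pattern:

```nginx
map $args $has_tr0_arg {
    default 0;

    # Quoting the whole entry (leading '~' included) keeps the '{3}'
    # quantifier from being parsed as the start of a config block.
    "~tr%5B0%5D%3D=[1-9][[:digit:]]{3}-[[:digit:]]{2}" 1;
}
```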

--
Maxim Dounin
http://nginx.org/en/donation.html


From quintessence at bulinfo.net Wed Oct 9 14:21:51 2013
From: quintessence at bulinfo.net (Bozhidara Marinchovska)
Date: Wed, 09 Oct 2013 17:21:51 +0300
Subject: nginx limit_rate if in location - strange behaviour - possible bug ?
Message-ID: <5255667F.2060803@bulinfo.net>

Hi,

nginx -V
nginx version: nginx/1.4.2
configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I
/usr/local/include' --with-ld-opt='-L /usr/local/lib'
--conf-path=/usr/local/etc/nginx/nginx.conf
--sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid
--error-log-path=/var/log/nginx-error.log --user=www --group=www
--http-client-body-temp-path=/var/tmp/nginx/client_body_temp
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp
--http-proxy-temp-path=/var/tmp/nginx/proxy_temp
--http-scgi-temp-path=/var/tmp/nginx/scgi_temp
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp
--http-log-path=/var/log/nginx-access.log
--with-http_image_filter_module --with-http_stub_status_module
--add-module=/usr/ports/www/nginx/work/naxsi-core-0.50/naxsi_src --with-pcre

FreeBSD 9.1-STABLE #0: Sat May 18 00:32:18 EEST 2013 amd64

I'm using limit_rate inside an "if in location" block. According to the
documentation, the "if in location" context is available.

My configuration is as follows:

location some_extension {

# case 1
if match_something {
root ...
break;
}

# case 2
if match_another {
root ...
break;
}

# else (case3)
root ...
something other ...
here the limit_rate / limit_rate_after directives are also placed
}

There is a root inside each location for a (strong) reason :) (see the nginx
pitfalls entry "root inside location block - BAD").

When I open http://my.website.com/myfile.ext in my browser, it matches
case 3 from the configuration. limit_rate / limit_rate_after work
correctly, as expected.
I want case 1 not to have limit_rate / limit_rate_after.

Test 1:
In case 1 I place limit_rate 0; case 3 stays the same (limit_rate_after XXm;
limit_rate some_rate). When I open a URL matching case 1 in my browser,
limit_rate 0 is ignored. After hitting XXm of the file I get the
limit_rate from case 3.

Test 2:
In case 1 I place limit_rate_after 0; limit_rate 0; case 3 stays the same
(limit_rate_after XXm; limit_rate some_rate). When I open a URL matching
case 1 in my browser, limit_rate_after 0 and limit_rate 0 are ignored.
Worse, when I try to download the file I don't even match case 3: my
download starts rate-limited from the first MB with the limit_rate bandwidth from case 3.

Both tests were made at 20-minute intervals, with 1 connection from my
IP, etc.

I don't post my whole configuration because it is probably unnecessary.
The if cases match on $http_referer.

Case 1 - if I see referer some_referer, I do something (here I don't
want to place any limits).
Case 2 - if I see another referer, I do something else.
Case 3 - else ... something other (here I have some limits).

I'm sure I match case 1 when I test (nginx-error.log with debug says so);
my configuration with the cases works as expected. The new thing is
limit_rate and limit_rate_after, which are not working as expected.

Any thoughts ? Meanwhile I will test on another version.

Thanks




From quintessence at bulinfo.net Wed Oct 9 14:41:30 2013
From: quintessence at bulinfo.net (Bozhidara Marinchovska)
Date: Wed, 09 Oct 2013 17:41:30 +0300
Subject: nginx limit_rate if in location - strange behaviour - possible
bug ?
In-Reply-To: <5255667F.2060803@bulinfo.net>
References: <5255667F.2060803@bulinfo.net>
Message-ID: <52556B1A.4060701@bulinfo.net>

I'm sorry, I misread the documentation.

I placed set $limit_rate 0; in my case 1 instead of limit_rate 0; and now it
works as expected.


On 09.10.2013 17:21 ?., Bozhidara Marinchovska wrote:
> Hi,
>
> nginx -V
> nginx version: nginx/1.4.2
> configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I
> /usr/local/include' --with-ld-opt='-L /usr/local/lib'
> --conf-path=/usr/local/etc/nginx/nginx.conf
> --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid
> --error-log-path=/var/log/nginx-error.log --user=www --group=www
> --http-client-body-temp-path=/var/tmp/nginx/client_body_temp
> --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp
> --http-proxy-temp-path=/var/tmp/nginx/proxy_temp
> --http-scgi-temp-path=/var/tmp/nginx/scgi_temp
> --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp
> --http-log-path=/var/log/nginx-access.log
> --with-http_image_filter_module --with-http_stub_status_module
> --add-module=/usr/ports/www/nginx/work/naxsi-core-0.50/naxsi_src
> --with-pcre
>
> FreeBSD 9.1-STABLE #0: Sat May 18 00:32:18 EEST 2013 amd64
>
> I'm using limit_rate case if in location. Regarding documentation "if
> in location" context is avaiable.
>
> My configuration is as follows:
>
> location some_extension {
>
> # case 1
> if match_something {
> root ...
> break;
> }
>
> # case 2
> if match_another {
> root ...
> break;
> }
>
> # else (case3)
> root ...
> something other ...
> here it is placed also limit_rate / limit_after directives
> }
>
> There is a root inside location with a (strong) reason :) (nginx
> pitfails case "root inside location block - BAD").
>
> When I open in my browser http://my.website.com/myfile.ext it matches
> case 3 from the cofiguration. Limit_rate/limit_after works correct as
> expected.
> I want case1 not to have limit_rate / limit_after.
>
> Test one:
> In case1 I place limit_rate 0, case3 is the same limit_rate_after XXm;
> limit_rate some_rate. When I open in my browser URL matching case1 -
> limit_rate 0 is ignored. After hitting XXm from the file I get
> limit_rate from case 3.
>
> Test 2:
> In case 1 I place limit_rate_after 0; limit_rate 0, case3 is the same
> limit_rate_after XXm; limit_rate some rate. When I open in my browser
> URL matching case 1 - limit_rate_after 0 and limit_rate 0 are ignored.
> Worst is that when I try to download the file, I even didn't match
> case3 - my download starts from the first MB with limit_rate bandwidth
> from case3.
>
> Both tests are made in interval from 20 minutes, 1 connection from my
> IP, etc.
>
> I don't post my whole configuration, because may be it is unnessesary.
> If cases are with http_referer.
>
> Case 1 - if I see referer some_referer, I do something. (here I don't
> want to place any limits)
> Case 2 - If I see another referer , I do something else.
> Case 3 - Else ... something other (here I have some limits)
>
> I'm sure I match case1 when I test (nginx-error.log with debug say
> it), I mean my configuration with cases is working as expected, the
> new thing is limit_rate and limit_rate_after which is not working as
> expected.
>
> Any thoughts ? Meanwhile I will test on another version.
>
> Thanks
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


From mdounin at mdounin.ru Wed Oct 9 15:46:44 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Oct 2013 19:46:44 +0400
Subject: nginx limit_rate if in location - strange behaviour - possible
bug ?
In-Reply-To: <5255667F.2060803@bulinfo.net>
References: <5255667F.2060803@bulinfo.net>
Message-ID: <20131009154644.GY76294@mdounin.ru>

Hello!

On Wed, Oct 09, 2013 at 05:21:51PM +0300, Bozhidara Marinchovska wrote:

[...]

> I'm using limit_rate case if in location. Regarding documentation
> "if in location" context is avaiable.
>
> My configuration is as follows:
>
> location some_extension {
>
> # case 1
> if match_something {
> root ...
> break;
> }
>
> # case 2
> if match_another {
> root ...
> break;
> }
>
> # else (case3)
> root ...
> something other ...
> here it is placed also limit_rate / limit_after directives
> }
>
> There is a root inside location with a (strong) reason :) (nginx
> pitfails case "root inside location block - BAD").
>
> When I open in my browser http://my.website.com/myfile.ext it
> matches case 3 from the cofiguration. Limit_rate/limit_after works
> correct as expected.
> I want case1 not to have limit_rate / limit_after.

The limit_rate directive is somewhat special (== weird): a limit, once
applied on a matched location, is preserved for future use. Further
matches with a different limit_rate set don't influence the limit
actually used.

If you want to override limit_rate on a specific condition, use

set $limit_rate <rate>;

instead, see http://nginx.org/r/limit_rate.
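
A minimal sketch of that approach (paths, rates, and the referer condition
are hypothetical, modeled on the original poster's case 1 / case 3 setup):

```nginx
location /files/ {
    # case 3 defaults: limited download
    limit_rate_after 10m;
    limit_rate       512k;

    # case 1: lift the limit for one referer only. Unlike the
    # limit_rate directive, the $limit_rate variable takes effect
    # for the current request even when set inside if().
    if ($http_referer ~ "some_referer") {
        set $limit_rate 0;
    }
}
```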

--
Maxim Dounin
http://nginx.org/en/donation.html


From damonswirled at hotmail.com Thu Oct 10 09:47:46 2013
From: damonswirled at hotmail.com (nomad Bellcam)
Date: Thu, 10 Oct 2013 03:47:46 -0600
Subject: unicorn as simple cgi without rails
Message-ID: <BAY168-W652912C0166BFBFF9FFFDAD41E0@phx.gbl>

hello,
i recently set up a new server upon which i installed nginx to try it out (and which i have been quite happy with since).
my website is mostly static with some small cgi areas, and i like to use ruby for the cgi. when i did my research for the best ruby cgi handler for nginx, unicorn figured prominently in my results, and so i became interested in trying it. i spent some time reading up on how to configure and use it but have been unsuccessful implementing it, mostly i believe due to the fact that i do not have a rails framework installed nor a legitimate rackup config.ru
my question is this: does it make any sense at all to use unicorn as a ruby cgi handler if i am not also using rails? and if there is indeed some sense in this idea, how might i go about it? is there a simple rackup file that would work for a configuration such as this? i couldn't find any information on rackup configs of this sort, and not being familiar with rails, the terrain simply became too steep at this point to continue without some guidance or assurances.
thank you for your consideration.
sincerely,
nomad
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131010/5cdaca09/attachment.html>

From contact at jpluscplusm.com Thu Oct 10 11:04:03 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Thu, 10 Oct 2013 12:04:03 +0100
Subject: unicorn as simple cgi without rails
In-Reply-To: <BAY168-W652912C0166BFBFF9FFFDAD41E0@phx.gbl>
References: <BAY168-W652912C0166BFBFF9FFFDAD41E0@phx.gbl>
Message-ID: <CAKsTx7AFXv4xg1QkQjnvj5UqneMbmq+P4HN5=ORmdQ-140bb3A@mail.gmail.com>

On 10 Oct 2013 10:48, "nomad Bellcam" <damonswirled at hotmail.com> wrote:
> my question is this: does it make any sense at all to use unicorn as a
ruby cgi handler if i am not also using rails?

This is a perfectly sensible thing to consider doing; I do that myself for
some sites.

However, this may not be the best forum on which to discuss it, as it is
not particularly related to nginx.

Jonathan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131010/6da3d594/attachment.html>

From damonswirled at hotmail.com Thu Oct 10 11:42:45 2013
From: damonswirled at hotmail.com (nomad Bellcam)
Date: Thu, 10 Oct 2013 05:42:45 -0600
Subject: unicorn as simple cgi without rails
In-Reply-To: <CAKsTx7AFXv4xg1QkQjnvj5UqneMbmq+P4HN5=ORmdQ-140bb3A@mail.gmail.com>
References: <BAY168-W652912C0166BFBFF9FFFDAD41E0@phx.gbl>,
<CAKsTx7AFXv4xg1QkQjnvj5UqneMbmq+P4HN5=ORmdQ-140bb3A@mail.gmail.com>
Message-ID: <BAY168-W692E45D61BE7278423668FD41E0@phx.gbl>





On 10 Oct 2013 10:48, "nomad Bellcam" <damonswirled at hotmail.com> wrote:

> my question is this: does it make any sense at all to use unicorn as a ruby cgi handler if i am not also using rails?
This is a perfectly sensible thing to consider doing; I do that myself for some sites.
However, this may not be the best forum on which to discuss it, as it is not particularly related to nginx.
Jonathan
thanks jonathan,
i will try my luck on the unicorn list.
nomad.

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131010/be56a33a/attachment.html>

From ben at indietorrent.org Thu Oct 10 15:13:40 2013
From: ben at indietorrent.org (Ben Johnson)
Date: Thu, 10 Oct 2013 11:13:40 -0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <20131008154851.GQ76294@mdounin.ru>
References: <52373D89.2000906@indietorrent.org>
<525309E7.6030209@indietorrent.org> <20131007231845.GG76294@mdounin.ru>
<CALqce=2AjjP5f9jq0WsNyPkdTFe=swJvxCWdceZCkbwnqBPLVQ@mail.gmail.com>
<20131008154851.GQ76294@mdounin.ru>
Message-ID: <5256C424.9080708@indietorrent.org>



On 10/8/2013 11:48 AM, Maxim Dounin wrote:
> Hello!
>
> On Mon, Oct 07, 2013 at 10:57:14PM -0400, B.R. wrote:
>
> [...]
>
>> I then noticed on the capture that PHP was rightfully sending the content
>> in 2 parts as expected but somehow nginx was still waiting for the last
>> parto to arrive before sending content to the client.
>
> What makes you think that nginx was waiting for the last part
> without sending data to the client?
>
> Please note that checking by a browser as in your check list isn't
> something meaningful as browsers may (and likely will) wait for a
> complete response from a server. In my limited testing on
> Windows, IE needs a complete response, while Chrome shows data on
> arrival.
>
> Just in case, it works fine here with the following minimal
> config:
>
> events {}
> http {
> server {
> listen 8080;
> location / {
> fastcgi_pass backend:9000;
> fastcgi_param SCRIPT_FILENAME /path/to/flush.php;
> fastcgi_keep_conn on;
> }
> }
> }
>
> But, again, testing with fastcgi_keep_conn is mostly useless now,
> it's an abuse of the unrelated directive. The fastcgi_buffering
> directive is already here in 1.5.6, use
>
> fastcgi_buffering off;
>
> instead if you need to turn off buffering for fastcgi responses.
> Just in case, documentation can be found here:
>
> http://nginx.org/r/fastcgi_buffering
>

Hi, everyone, so sorry for the delayed reply.

Thank you to ittp2012, Francis, Maxim, and B.R.

Well, after all of the configuration changes, both to nginx and PHP, the
solution was to add the following header to the response:

header('Content-Encoding: none;');

With this header in-place (sent as the first output in the PHP test
script), I see the timing intervals from the test script printed to the
browser in real-time. This works even in nginx-1.5.2, with my existing
configuration. (This seems to work in Chrome and Firefox, but not IE,
which corroborates Maxim's above observations re: individual browser
behavior.)

The whole reason for which I was seeking to disable output buffering is
that I need to test nginx's ability to handle multiple requests
simultaneously. This need is inspired by yet another problem, about
which I asked on this list in late August: "504 Gateway Time-out when
calling curl_exec() in PHP with SSL peer verification
(CURLOPT_SSL_VERIFYPEER) off".

Some folks suggested that the cURL problem could result from nginx not
being able to serve more than one request for a PHP file at a time. So,
that's why I cooked up this test with sleep() and so forth.

Now that output buffering is disabled, I am able to test concurrency.
Sure enough, if I request my concurrency test script in two different
browser tabs, the second tab will not begin producing output until the
first tab has finished. I set the test time to 120 seconds and at
exactly 120 seconds, the second script begins producing output.

Also, while one of these tests is running, I am unable to request a
"normal PHP web page" from the same server (localhost). The request
"hangs" until the concurrency test in the other tab is finished.

I even tried requesting the test script from two different browsers, and
the second browser always hangs until the first completes.

These observations lend credence to the notion that my cURL script is
failing due to dead-locking of some kind. (I'll refrain from discussing
this other problem here, as it has its own thread.)

Is this inability to handle concurrent requests a limitation of nginx on
Windows? Do others on Windows observe this same behavior?

I did see the Windows limitation, "Although several workers can be
started, only one of them actually does any work", but that isn't the
problem here, right? One nginx worker does not mean that only one PHP
request can be satisfied at a time, correct?

Thanks again for all the help, everyone!

-Ben


From mdounin at mdounin.ru Thu Oct 10 15:26:26 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 10 Oct 2013 19:26:26 +0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <5256C424.9080708@indietorrent.org>
References: <52373D89.2000906@indietorrent.org>
<525309E7.6030209@indietorrent.org>
<20131007231845.GG76294@mdounin.ru>
<CALqce=2AjjP5f9jq0WsNyPkdTFe=swJvxCWdceZCkbwnqBPLVQ@mail.gmail.com>
<20131008154851.GQ76294@mdounin.ru>
<5256C424.9080708@indietorrent.org>
Message-ID: <20131010152626.GE76294@mdounin.ru>

Hello!

On Thu, Oct 10, 2013 at 11:13:40AM -0400, Ben Johnson wrote:

[...]

> Well, after all of the configuration changes, both to nginx and PHP, the
> solution was to add the following header to the response:
>
> header('Content-Encoding: none;');

Just in case: this is very-very wrong, there is no such
content-coding. Never use this in real programs.

But the fact that it helps suggests you actually have gzip enabled
somewhere in your nginx config - as gzip doesn't work if it sees
Content-Encoding set.

All this probably doesn't matter, since you only used it as a
debugging tool.

[...]

> The whole reason for which I was seeking to disable output buffering is
> that I need to test nginx's ability to handle multiple requests
> simultaneously. This need is inspired by yet another problem, about
> which I asked on this list in late August: "504 Gateway Time-out when
> calling curl_exec() in PHP with SSL peer verification
> (CURLOPT_SSL_VERIFYPEER) off".
>
> Some folks suggested that the cURL problem could result from nginx not
> being able to serve more than one request for a PHP file at a time. So,
> that's why I cooked up this test with sleep() and so forth.
>
> Now that output buffering is disabled, I am able to test concurrency.
> Sure enough, if I request my concurrency test script in two different
> browser tabs, the second tab will not begin producing output until the
> first tab has finished. I set the test time to 120 seconds and at
> exactly 120 seconds, the second script begins producing output.
>
> Also, while one of these tests is running, I am unable to request a
> "normal PHP web page" from the same server (localhost). The request
> "hangs" until the concurrency test in the other tab is finished.
>
> I even tried requesting the test script from two different browsers, and
> the second browser always hangs until the first completes.
>
> These observations lend credence to the notion that my cURL script is
> failing due to dead-locking of some kind. (I'll refrain from discussing
> this other problem here, as it has its own thread.)
>
> Is this inability to handle concurrent requests a limitation of nginx on
> Windows? Do others on Windows observe this same behavior?

Your problem is that you only have one PHP process running - and
it can only service one request at a time. AFAIK, php-cgi can't
run more than one process on Windows (on Unix it can, with
PHP_FCGI_CHILDREN set). Not sure if there are good options to run
multiple PHP processes on Windows.

Quick-and-dirty solution would be to run multiple php-cgi
processes on different ports and list them all in an upstream{}
block.
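
A minimal sketch of that quick-and-dirty setup (ports and paths hypothetical;
one php-cgi.exe instance would listen on each port):

```nginx
upstream php_pool {
    # each entry is a separately started php-cgi process
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}

server {
    listen 8080;

    location ~ \.php$ {
        fastcgi_pass   php_pool;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
}
```

With this layout, two concurrent requests land on different php-cgi
processes, so one long-running script no longer blocks the other.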

> I did see the Windows limitation, "Although several workers can be
> started, only one of them actually does any work", but that isn't the
> problem here, right? One nginx worker does not mean that only one PHP
> request can be satisfied at a time, correct?

Correct. One nginx process can handle multiple requests, it's one
PHP process which limits you.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Thu Oct 10 15:42:16 2013
From: nginx-forum at nginx.us (mex)
Date: Thu, 10 Oct 2013 11:42:16 -0400
Subject: Getting forward secrecy enabled
In-Reply-To: <524BD86C.8060805@bluerosetech.com>
References: <524BD86C.8060805@bluerosetech.com>
Message-ID: <db6897fca4c649f0038cdf295a0afd11.NginxMailingListEnglish@forum.nginx.org>

hi darren,

your ciphers look very good!

i included your suggestion in my ssl-guide, looking forward to perftest
those
cipher_suites.



regards,

mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243341,243594#msg-243594


From ben at indietorrent.org Thu Oct 10 16:11:58 2013
From: ben at indietorrent.org (Ben Johnson)
Date: Thu, 10 Oct 2013 12:11:58 -0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <20131010152626.GE76294@mdounin.ru>
References: <52373D89.2000906@indietorrent.org>
<525309E7.6030209@indietorrent.org> <20131007231845.GG76294@mdounin.ru>
<CALqce=2AjjP5f9jq0WsNyPkdTFe=swJvxCWdceZCkbwnqBPLVQ@mail.gmail.com>
<20131008154851.GQ76294@mdounin.ru> <5256C424.9080708@indietorrent.org>
<20131010152626.GE76294@mdounin.ru>
Message-ID: <5256D1CE.6000308@indietorrent.org>



On 10/10/2013 11:26 AM, Maxim Dounin wrote:
> Hello!
>
> On Thu, Oct 10, 2013 at 11:13:40AM -0400, Ben Johnson wrote:
>
> [...]
>
>> Well, after all of the configuration changes, both to nginx and PHP, the
>> solution was to add the following header to the response:
>>
>> header('Content-Encoding: none;');
>
> Just in case: this is very-very wrong, there is no such
> content-coding. Never use this in real programs.
>
> But the fact that it helps suggests you actually have gzip enabled
> somewhere in your nginx config - as gzip doesn't work if it sees
> Content-Encoding set.
>
> All this probably doesn't matter due to you only used it as a
> debugging tool.
>
> [...]
>
>> The whole reason for which I was seeking to disable output buffering is
>> that I need to test nginx's ability to handle multiple requests
>> simultaneously. This need is inspired by yet another problem, about
>> which I asked on this list in late August: "504 Gateway Time-out when
>> calling curl_exec() in PHP with SSL peer verification
>> (CURLOPT_SSL_VERIFYPEER) off".
>>
>> Some folks suggested that the cURL problem could result from nginx not
>> being able to serve more than one request for a PHP file at a time. So,
>> that's why I cooked up this test with sleep() and so forth.
>>
>> Now that output buffering is disabled, I am able to test concurrency.
>> Sure enough, if I request my concurrency test script in two different
>> browser tabs, the second tab will not begin producing output until the
>> first tab has finished. I set the test time to 120 seconds and at
>> exactly 120 seconds, the second script begins producing output.
>>
>> Also, while one of these tests is running, I am unable to request a
>> "normal PHP web page" from the same server (localhost). The request
>> "hangs" until the concurrency test in the other tab is finished.
>>
>> I even tried requesting the test script from two different browsers, and
>> the second browser always hangs until the first completes.
>>
>> These observations lend credence to the notion that my cURL script is
>> failing due to dead-locking of some kind. (I'll refrain from discussing
>> this other problem here, as it has its own thread.)
>>
>> Is this inability to handle concurrent requests a limitation of nginx on
>> Windows? Do others on Windows observe this same behavior?
>
> Your problem is that you only have one PHP process running - and
> it can only service one request at a time. AFAIK, php-cgi can't
> run more than one process on Windows (on Unix it can, with
> PHP_FCGI_CHILDREN set). Not sure if there are good options to run
> multiple PHP processes on Windows.
>

Thank you for clarifying this crucial point, Maxim. I believe that this
is indeed the crux of the issue.

> Quick-and-dirty solution would be to run multiple php-cgi
> processes on different ports and list them all in an upstream{}
> block.
>
>> I did see the Windows limitation, "Although several workers can be
>> started, only one of them actually does any work", but that isn't the
>> problem here, right? One nginx worker does not mean that only one PHP
>> request can be satisfied at a time, correct?
>
> Correct. One nginx process can handle multiple requests, it's one
> PHP process which limits you.
>

Understood. This is so hard to believe (the lack of support for multiple
simultaneous PHP processes on Windows) that I had overlooked this as a
possibility.

And, now that you've explained the problem, finding corroborating
evidence is much easier:
http://stackoverflow.com/questions/2793996/php-running-as-a-fastcgi-application-php-cgi-how-to-issue-concurrent-request

An interesting excerpt from the above thread:

"I did look deeper into the PHP source code after that and found that
the section of code which responds to PHP_FCGI_CHILDREN has been
encapsulated by #ifndef WIN32 So the developers must be aware of the issue."

For the time being, I'll have to run these cURL scripts using Apache
with mod_php, instead of nginx. Not the end of the world.

Thanks again for your valuable time and for clearing-up this major
limitation of PHP (NOT nginx) on Windows.

Best regards,

-Ben


From nginx-forum at nginx.us Thu Oct 10 17:35:00 2013
From: nginx-forum at nginx.us (itpp2012)
Date: Thu, 10 Oct 2013 13:35:00 -0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <20131010152626.GE76294@mdounin.ru>
References: <20131010152626.GE76294@mdounin.ru>
Message-ID: <b1f176ec0c83f8076c276f714bdaedf9.NginxMailingListEnglish@forum.nginx.org>

> Correct. One nginx process can handle multiple requests, it's one
> PHP process which limits you.

Not really; use the NTS version of PHP, not the TS one, and use a pool as
suggested, e.g.:

# loadbalancing php
upstream myLoadBalancer {
    server 127.0.0.1:19001 weight=1 fail_timeout=5;
    server 127.0.0.1:19002 weight=1 fail_timeout=5;
    server 127.0.0.1:19003 weight=1 fail_timeout=5;
    server 127.0.0.1:19004 weight=1 fail_timeout=5;
    server 127.0.0.1:19005 weight=1 fail_timeout=5;
    server 127.0.0.1:19006 weight=1 fail_timeout=5;
    server 127.0.0.1:19007 weight=1 fail_timeout=5;
    server 127.0.0.1:19008 weight=1 fail_timeout=5;
    server 127.0.0.1:19009 weight=1 fail_timeout=5;
    server 127.0.0.1:19010 weight=1 fail_timeout=5;
    # usage: fastcgi_pass myLoadBalancer;
}

For a 100 Mbit pipe this is enough to handle many, many concurrent users.

runcgi.cmd
----------------------------------------------------
@ECHO OFF
ECHO Starting PHP FastCGI...
c:
cd \php

del abort.now

start multi_runcgi.cmd 19001
start multi_runcgi.cmd 19002
start multi_runcgi.cmd 19003
start multi_runcgi.cmd 19004
start multi_runcgi.cmd 19005
start multi_runcgi.cmd 19006
start multi_runcgi.cmd 19007
start multi_runcgi.cmd 19008
start multi_runcgi.cmd 19009
start multi_runcgi.cmd 19010
----------------------------------------------------

multi_runcgi.cmd
----------------------------------------------------
@ECHO OFF
ECHO Starting PHP FastCGI...
set PATH=C:\PHP;%PATH%
set TEMP=\webroot\_other\xcache
set TMP=\webroot\_other\xcache
set PHP_FCGI_CHILDREN=0
set PHP_FCGI_MAX_REQUESTS=10000

:loop
c:
cd \php

C:\PHP\php-cgi.exe -b 127.0.0.1:%1

set errorlvl=%errorlevel%
choice /t:y,3

date /t>>\webroot\_other\fzlogs\ServerWatch.log
time /t>>\webroot\_other\fzlogs\ServerWatch.log
echo Process php-cgi %1 restarted>>\webroot\_other\fzlogs\ServerWatch.log
echo Errorlevel = %errorlvl% >>\webroot\_other\fzlogs\ServerWatch.log
echo:>>\webroot\_other\fzlogs\ServerWatch.log

if not exist abort.now goto loop
----------------------------------------------------

Create a service which starts runcgi.cmd at boot.
After the service is running, assign a very limited user to the service.
I.e. always jail nginx and php-cgi separately.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242895,243597#msg-243597


From mdounin at mdounin.ru Thu Oct 10 18:24:09 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 10 Oct 2013 22:24:09 +0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <b1f176ec0c83f8076c276f714bdaedf9.NginxMailingListEnglish@forum.nginx.org>
References: <20131010152626.GE76294@mdounin.ru>
<b1f176ec0c83f8076c276f714bdaedf9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131010182409.GG76294@mdounin.ru>

Hello!

On Thu, Oct 10, 2013 at 01:35:00PM -0400, itpp2012 wrote:

> > Correct. One nginx process can handle multiple requests, it's one
> > PHP process which limits you.
>
> Not really, use the NTS version of php not the TS, and use a pool as
> suggested, e.a.;
>
> # loadbalancing php
> upstream myLoadBalancer {
>     server 127.0.0.1:19001 weight=1 fail_timeout=5;
>     server 127.0.0.1:19002 weight=1 fail_timeout=5;
>     server 127.0.0.1:19003 weight=1 fail_timeout=5;
>     server 127.0.0.1:19004 weight=1 fail_timeout=5;
>     server 127.0.0.1:19005 weight=1 fail_timeout=5;
>     server 127.0.0.1:19006 weight=1 fail_timeout=5;
>     server 127.0.0.1:19007 weight=1 fail_timeout=5;
>     server 127.0.0.1:19008 weight=1 fail_timeout=5;
>     server 127.0.0.1:19009 weight=1 fail_timeout=5;
>     server 127.0.0.1:19010 weight=1 fail_timeout=5;
>     # usage: fastcgi_pass myLoadBalancer;
> }

Just in case: it would be a good idea to use the least_conn balancer
here.

http://nginx.org/r/least_conn
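For example, just adding the directive to the pool quoted above (a
sketch, the remaining backends stay as before):

upstream myLoadBalancer {
    least_conn;
    server 127.0.0.1:19001 weight=1 fail_timeout=5;
    server 127.0.0.1:19002 weight=1 fail_timeout=5;
    # ... remaining backends as above ...
}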

--
Maxim Dounin
http://nginx.org/en/donation.html


From lists at ruby-forum.com Thu Oct 10 22:27:34 2013
From: lists at ruby-forum.com (Matt Spitz)
Date: Fri, 11 Oct 2013 00:27:34 +0200
Subject: Logging header values from the first upstream
Message-ID: <53e06ac5162eb789bb0601f01cdffabd@ruby-forum.com>

The HttpUpstreamModule states that variable values for
$upstream_http_$HEADER are only valid for the last upstream accessed
during a request. I'd like to know if there's a workaround.

Specifically, I'm setting (and then clearing) headers in my first
upstream to get request-specific information like 'username'. Here's
what I've got:

===

http {
    log_format myapp '$remote_addr [$time_local] '
                     '$request_time '
                     '"$request" $status '
                     '$request_length $body_bytes_sent '
                     '"$upstream_addr" $upstream_response_time '
                     '$upstream_http_x_myapp_username';
    ...

    server {
        location /api/ {
            ...
            access_log /var/log/nginx/myapp_access.log myapp;

            proxy_pass http://myapp_upstream;

            more_clear_headers 'X-MyApp-Username';
            ...
        }

        location ~* /internal/media_url/(.*) {
            # only allowed for internal redirects (X-Accel-Redirect included)
            internal;

            # still need to set our access log to get the original request
            access_log /var/log/nginx/myapp_access.log myapp;

            proxy_pass $1;
        }
    }
}

===

Some of my /api accesses result in internal media redirects, so I use
the X-Accel-Redirect header to redirect to an internal location.
Obviously, when the first location redirects to /internal/media_url/*,
the reference to $upstream_http_x_myapp_username in my log line refers
to the X-MyApp-Username header that is returned by the media server,
which is empty.

How can I log the username with my requests that result in an internal
redirect when the information is only available from the first upstream?

Thanks for your help!

--
Posted via http://www.ruby-forum.com/.


From mdounin at mdounin.ru Thu Oct 10 22:56:01 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 11 Oct 2013 02:56:01 +0400
Subject: Logging header values from the first upstream
In-Reply-To: <53e06ac5162eb789bb0601f01cdffabd@ruby-forum.com>
References: <53e06ac5162eb789bb0601f01cdffabd@ruby-forum.com>
Message-ID: <20131010225601.GK76294@mdounin.ru>

Hello!

On Fri, Oct 11, 2013 at 12:27:34AM +0200, Matt Spitz wrote:

> The HttpUpstreamModule states that variable values for
> $upstream_http_$HEADER are only valid for the last upstream accessed
> during a request. I'd like to know if there's a workaround.

[...]

The $upstream_* variables are cleared once nginx starts to work
with a new upstream. If you want to preserve values before
X-Accel-Redirect, you may do so by using "set" in a destination
location of X-Accel-Redirect (this works as rewrite directives are
executed before upstream data is re-initialized). E.g.:

location /api/ {
    proxy_pass ...
}

location /internal/ {
    set $previous_upstream_http_foo $upstream_http_foo;
    proxy_pass ...
}
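The saved value can then be referenced later in the request, e.g. in a
log_format (the names here are just placeholders):

log_format myfmt '... $previous_upstream_http_foo ...';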

--
Maxim Dounin
http://nginx.org/en/donation.html


From ville.j.mattila at gmail.com Fri Oct 11 01:23:38 2013
From: ville.j.mattila at gmail.com (Ville Mattila)
Date: Fri, 11 Oct 2013 04:23:38 +0300
Subject: How to tell Nginx not to decode URL for proxy_pass?
In-Reply-To: <CAPf1p=OnC05Lyt2JHeW-Z68UOYR7_uQt6xgrJ9=1uXw0F7MrKQ@mail.gmail.com>
References: <CAPf1p=OnC05Lyt2JHeW-Z68UOYR7_uQt6xgrJ9=1uXw0F7MrKQ@mail.gmail.com>
Message-ID: <CAPf1p=NJE-DUKLd7C2EC9zFaB72TSZj5eb6Qv2J_YfYXogV_6g@mail.gmail.com>

Hi,

I need to pass a certain URI namespace to upstream servers while
stripping away the prefix. Consider the following configuration:

location ^~ /going-to-upstream/ {
    access_log off;
    rewrite /upstream(/.*) $1 break;
    proxy_pass http://upstream;
}

location / {
    # Actual server
}

So whenever I get a request for
http://server/going-to-upstream/something, my upstream server should see
a request for "/something". And it does.

However, as soon as the upstream part contains something URL-encoded,
for example a URL, nginx decodes it and passes it to the upstream in
decoded form. An example:

http://server/going-to-upstream/something/http%3A%2F%2Fserver%2F
will cause an upstream request for "/something/http://server/", while I
need literally "/something/http%3A%2F%2Fserver%2F".

How can I make nginx not decode the URI in the rewrite?

(My actual use case is related to using Thumbor, see
http://tech.yipit.com/2013/01/03/how-yipit-scales-thumbnailing-with-thumbor-and-cloudfront/.
They have a dedicated nginx server { } for this, but I need to use an
existing one so that the Thumbor URLs live under our main application domain.)

Best regards,
Ville
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131011/48ead6c5/attachment.html>

From lists at ruby-forum.com Fri Oct 11 03:33:37 2013
From: lists at ruby-forum.com (Matt Spitz)
Date: Fri, 11 Oct 2013 05:33:37 +0200
Subject: Logging header values from the first upstream
In-Reply-To: <53e06ac5162eb789bb0601f01cdffabd@ruby-forum.com>
References: <53e06ac5162eb789bb0601f01cdffabd@ruby-forum.com>
Message-ID: <e06dcc2d12c7122760167a625e872ef6@ruby-forum.com>

Perfect. Thank you very much!

--
Posted via http://www.ruby-forum.com/.


From nmilas at noa.gr Fri Oct 11 08:07:22 2013
From: nmilas at noa.gr (Nikolaos Milas)
Date: Fri, 11 Oct 2013 11:07:22 +0300
Subject: Quick performance deterioration when No. of clients increases
Message-ID: <5257B1BA.1050702@noa.gr>

Hello,

I am trying to migrate a Joomla 2.5.8 website from Apache to NGINX 1.4.2
with php-fpm 5.3.3 (and MySQL 5.5.34) on a CentOS 6.4 x86_64 Virtual
Machine (running under KVM).

The goal is to achieve better peak performance: this site has occasional
high peaks; while the normal traffic is ~10 req/sec, it may reach > 3000
req/sec for periods of a few hours (due to the type of services the site
provides - it is a non-profit, real-time seismicity-related site - so
the PHP cache lifetime should not be more than 10 seconds).

The new VM (using Nginx) is currently in testing mode and only has a
1-core CPU / 3 GB of RAM. We tested performance with loadimpact and the
results are attached.

You can see in the load graph that as the load approaches 250 clients,
the response time increases sharply and is already unacceptable (this
happens consistently). I expected better performance, esp. since caching
is enabled. Despite many efforts, I cannot find the cause of the
bottleneck or how to deal with it. We would like to achieve better
scaling, esp. since NGINX is famous for its scaling capabilities. Having
very little experience with Nginx, I would like to ask for your
assistance with a better configuration.

When this performance deterioration occurs, we see neither very high CPU
load (Unix load peaks at 2.5) nor RAM exhaustion (system RAM usage
appears to stay below 30%). [Monitoring is through Nagios.]

Can you please guide me on how to correct this issue? Any and all
suggestions will be appreciated.

The current configuration, based on info available on the Internet, is
as follows (the true domain/website name and public IP addresses have
been replaced):

=================== Nginx.conf ===================

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

worker_rlimit_nofile 200000;

events {
    worker_connections 8192;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server_names_hash_bucket_size 64;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    log_format cache '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $upstream_cache_status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';

    fastcgi_cache_path /var/cache/nginx levels=1:2
                       keys_zone=microcache:5m max_size=1000m;

    access_log /var/log/nginx/access.log main;

    sendfile on;

    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 2;

    types_hash_max_size 2048;
    server_tokens off;

    keepalive_requests 30;

    open_file_cache max=5000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    gzip on;
    gzip_static on;
    gzip_disable "msie6";
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript
               text/xml application/xml application/xml+rss text/javascript
               application/javascript text/x-js;
    gzip_buffers 16 8k;

    include /etc/nginx/conf.d/*.conf;
}

==================================================

================ website config ==================

server {
    listen 80;
    server_name www.example.com;
    access_log /var/webs/wwwexample/log/access_log main;
    error_log /var/webs/wwwexample/log/error_log warn;
    root /var/webs/wwwexample/www/;

    index index.php index.html index.htm index.cgi default.html
          default.htm default.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 10.10.10.0/24;
        deny all;
    }

    location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
        return 403;
        error_page 403 /403_error.html;
    }

    location ~ /\.ht {
        deny all;
    }

    location /administrator {
        allow 10.10.10.0/24;
        deny all;
    }

    location ~ \.php$ {

        # Setup var defaults
        set $no_cache "";

        # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }

        # Drop no-cache cookie if need be
        # (for some reason, add_header fails if included in prior if-block)
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }

        # Bypass cache if no-cache cookie is set
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }

        # Bypass cache if flag is set
        fastcgi_no_cache $no_cache;
        fastcgi_cache_bypass $no_cache;
        fastcgi_cache microcache;
        fastcgi_cache_key $scheme$host$request_uri$request_method;
        fastcgi_cache_valid 200 301 302 10s;
        fastcgi_cache_use_stale updating error timeout invalid_header http_500;
        fastcgi_pass_header Set-Cookie;
        fastcgi_pass_header Cookie;
        fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_intercept_errors on;

        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 16k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_read_timeout 240;

        fastcgi_pass unix:/tmp/php-fpm.sock;

        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~* \.(ico|pdf|flv)$ {
        expires 1d;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
        expires 1d;
    }
}
==================================================

================= php-fpm.conf ===================
include=/etc/php-fpm.d/*.conf
[global]
pid = /var/run/php-fpm/php-fpm.pid
error_log = /var/log/php-fpm/error.log

daemonize = no
==================================================

============== php-fpm.d/www.conf ================

[www]
listen = /tmp/php-fpm.sock
listen.allowed_clients = 127.0.0.1
user = nginx
group = nginx

pm = dynamic
pm.max_children = 1024
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35

slowlog = /var/log/php-fpm/www-slow.log

php_flag[display_errors] = off
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 128M

php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session

==================================================

================ mysql my.cnf ====================

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
user=mysql

query_cache_limit = 2M
query_cache_size = 200M
query_cache_type=1
thread_cache_size=128
key_buffer = 100M
join_buffer = 2M
table_cache= 150M
sort_buffer= 2M
read_rnd_buffer_size=10M
tmp_table_size=200M
max_heap_table_size=200M

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

==================================================

=============== mysqltuner report ================

>> MySQLTuner 1.2.0 - Major Hayden <major at mhtx.net>

-------- General Statistics
--------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.5.34
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics
-------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 9M (Tables: 80)
[--] Data in InnoDB tables: 1M (Tables: 65)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[--] Data in MEMORY tables: 0B (Tables: 4)
[!!] Total fragmented tables: 66

-------- Security Recommendations
-------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics
-------------------------------------------------
[--] Up for: 12h 51m 16s (21K q [0.471 qps], 1K conn, TX: 10M, RX: 1M)
[--] Reads / Writes: 55% / 45%
[--] Total buffers: 694.0M global + 21.4M per thread (151 max threads)
[!!] Maximum possible memory usage: 3.8G (135% of installed RAM)
[OK] Slow queries: 0% (0/21K)
[OK] Highest usage of available connections: 23% (36/151)
[OK] Key buffer size / total MyISAM indexes: 150.0M/5.1M
[OK] Key buffer hit rate: 99.3% (51K cached / 358 reads)
[OK] Query cache efficiency: 80.9% (10K cached / 13K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 55 sorts)
[OK] Temporary tables created on disk: 8% (5 on disk / 60 total)
[OK] Thread cache hit rate: 98% (36 created / 1K connections)
[OK] Table cache hit rate: 20% (192 open / 937 opened)
[OK] Open file limit used: 0% (210/200K)
[OK] Table locks acquired immediately: 99% (4K immediate / 4K locks)
[!!] Connections aborted: 8%
[OK] InnoDB data size / buffer pool: 1.1M/128.0M

==================================================

Please advise.

Thanks and Regards,
Nick

-------------- next part --------------
A non-text attachment was scrubbed...
Name: load_impact_1.png
Type: image/png
Size: 56882 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131011/c8d6445e/attachment-0001.png>

From steve at greengecko.co.nz Fri Oct 11 08:18:39 2013
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Fri, 11 Oct 2013 21:18:39 +1300
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <5257B1BA.1050702@noa.gr>
References: <5257B1BA.1050702@noa.gr>
Message-ID: <5257B45F.504@greengecko.co.nz>

The ultimate bottleneck in any setup like this is usually raw cpu
power. A single virtual core doesn't look like it'll hack it. You've
got 35 php processes serving 250 users, and I think it's just spread a
bit thin.

Apart from adding cores, there are two things I'd suggest looking at:

- Are you using an opcode cacher? APC (install via pecl to get the
latest) works really well with php-fpm... allocate plenty of memory to
it too.
- Check the bandwidth at the network interface. The usual 100Mbit
connection can easily get swamped by a graphics-rich site, especially
with 250 concurrent users. If this is a problem, then look at using a
CDN to ease things.
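For APC, a minimal starting point could look like this (the values are
only a sketch, tune them per site), e.g. in /etc/php.d/apc.ini:

extension = apc.so
apc.enabled = 1
apc.shm_size = 128M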

hth,

Steve

On 11/10/13 21:07, Nikolaos Milas wrote:
> Hello,
>
> I am trying to migrate a Joomla 2.5.8 website from Apache to NGINX
> 1.4.2 with php-fpm 5.3.3 (and MySQL 5.5.34) on a CentOS 6.4 x86_64
> Virtual Machine (running under KMS).
>
> The goal is to achieve better peak performance: This site has
> occasional high peaks; while the normal traffic is ~10 req/sec, it may
> reach > 3000 req/sec for periods of a few hours (due to the type of
> services the site provides - it is a non-profit, real-time
> seismicity-related site - so php caching should not be more than 10
> seconds).
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131011/bbbe5124/attachment-0001.html>

From richard at kearsley.me Fri Oct 11 11:11:10 2013
From: richard at kearsley.me (Richard Kearsley)
Date: Fri, 11 Oct 2013 12:11:10 +0100
Subject: log timeouts to access log
Message-ID: <5257DCCE.6000503@kearsley.me>

Hi
I would like to log, in the access.log, an indication of whether a
request ended because of a client timeout.

e.g.

log_format normal '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $bytes_sent '
                  '"$http_referer" "$http_user_agent" "$client_send_timed_out"';

where $client_send_timed_out would be a true/false or a 0/1 etc.

Is there anything in there that would allow me to log that?

Many thanks
Richard


From dennisml at conversis.de Fri Oct 11 11:24:33 2013
From: dennisml at conversis.de (Dennis Jacobfeuerborn)
Date: Fri, 11 Oct 2013 13:24:33 +0200
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <5257B45F.504@greengecko.co.nz>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
Message-ID: <5257DFF1.4050709@conversis.de>

On 11.10.2013 10:18, Steve Holdoway wrote:
> The ultimate bottleneck in any setup like this is usually raw cpu
> power. A single virtual core doesn't look like it'll hack it. You've
> got 35 php processes serving 250 users, and I think it's just spread a
> bit thin.
>
> Apart from adding cores, there are 2 things I'd suggest looking at
>
> - are you using an opcode cacher? APC ( install via pecl to get the
> latest ) works really well with php in fpm... allocate plenty of memory
> to it too

APC is sort of deprecated though (at least the opcode cache part) in
favor of zend-opcache, which is integrated into PHP 5.5.

Regards,
Dennis


From mdounin at mdounin.ru Fri Oct 11 11:25:13 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 11 Oct 2013 15:25:13 +0400
Subject: log timeouts to access log
In-Reply-To: <5257DCCE.6000503@kearsley.me>
References: <5257DCCE.6000503@kearsley.me>
Message-ID: <20131011112513.GP76294@mdounin.ru>

Hello!

On Fri, Oct 11, 2013 at 12:11:10PM +0100, Richard Kearsley wrote:

> Hi
> I would like to log an indication of weather a request ended because
> of a client timeout - in the access.log
>
> e.g.
>
> log_format normal '$remote_addr - $remote_user [$time_local] '
> '"$request" $status $bytes_sent ' '"$http_referer"
> "$http_user_agent" "$client_send_timed_out"';
> where $client_send_timed_out would be a true/false or an 0/1 etc
>
> Is there anything in there that would allow me to log that?

The closest thing to what you ask about that I can think of is the
$request_completion variable, though it marks not only timeouts
but whether a request was completely served at all.

http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_completion
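For example, something like:

log_format timed '$remote_addr [$time_local] "$request" $status '
                 '"$request_completion"';

The variable is set to "OK" if the request completed, and is empty
otherwise.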

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Fri Oct 11 11:35:56 2013
From: nginx-forum at nginx.us (ddutra)
Date: Fri, 11 Oct 2013 07:35:56 -0400
Subject: Each website with its own fastcgi_cache_path
Message-ID: <23f327b170491dc006cfbd30c98c8cd0.NginxMailingListEnglish@forum.nginx.org>

Hello guys,

I would like to know if it is possible to have multiple fastcgi_cache_path /
keys_zone.

If I host multiple websites and all share the same keys_zone, it becomes a
problem if I have to purge the cache. I cannot purge it for a single
website, only for all of them.

This is more out of curiosity than a real problem.

Best regards.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243622,243622#msg-243622


From aidan at aodhandigital.com Fri Oct 11 12:34:40 2013
From: aidan at aodhandigital.com (Aidan Scheller)
Date: Fri, 11 Oct 2013 07:34:40 -0500
Subject: Quick performance deterioration when No. of clients increases
Message-ID: <CADNPKbN2sB1YWdAtRwUb+Cjo9UNizOaezkO_HjhfWX=utNmLiw@mail.gmail.com>

Hi Nick,

There was a discussion recently that showed a marked performance
difference between lower and higher gzip compression levels. I'll
echo the concerns over CPU power and would also suggest trying
gzip_comp_level 1.

Thanks,

Aidan

On Fri, Oct 11, 2013 at 3:07 AM, <nginx-request at nginx.org> wrote:

Message: 4
Date: Fri, 11 Oct 2013 11:07:22 +0300
From: Nikolaos Milas <nmilas at noa.gr>
To: nginx at nginx.org
Subject: Quick performance deterioration when No. of clients increases
Message-ID: <5257B1BA.1050702 at noa.gr>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hello,

I am trying to migrate a Joomla 2.5.8 website from Apache to NGINX 1.4.2
with php-fpm 5.3.3 (and MySQL 5.5.34) on a CentOS 6.4 x86_64 Virtual
Machine (running under KVM).

The goal is to achieve better peak performance: This site has occasional
high peaks; while the normal traffic is ~10 req/sec, it may reach > 3000
req/sec for periods of a few hours (due to the type of services the site
provides - it is a non-profit, real-time seismicity-related site - so
php caching should not be more than 10 seconds).

The new VM (using Nginx) currently is in testing mode and it only has
1-core CPU / 3 GB of RAM. We tested performance with loadimpact and the
results are attached.

You can see in the load graph that as the load approaches 250 clients,
the response time increases sharply and is already unacceptable (this
happens consistently). I expected better performance, esp. since caching
is enabled. Despite many efforts, I cannot find the cause of the
bottleneck, and how to deal with it. We would like to achieve better
scaling, esp. since NGINX is famous for its scaling capabilities. Having
very little experience with Nginx, I would like to ask for your
assistance for a better configuration.

When this performance deterioration occurs, we don't see very high CPU
load (Unix load peaks at 2.5), nor RAM exhaustion (system RAM usage
appears to be below 30%). [Monitoring is through Nagios.]

Can you please guide me on how to correct this issue? Any and all
suggestions will be appreciated.

Current configuration, based on info available on the Internet, is as
follows (replaced true domain/website name and public IP address(es)):

=================== Nginx.conf ===================

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

worker_rlimit_nofile 200000;

events {
worker_connections 8192;
multi_accept on;
use epoll;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_names_hash_bucket_size 64;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

log_format cache '$remote_addr - $remote_user [$time_local] "$request" '
'$status $upstream_cache_status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:5m max_size=1000m;

access_log /var/log/nginx/access.log main;

sendfile on;

tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 2;

types_hash_max_size 2048;
server_tokens off;

keepalive_requests 30;

open_file_cache max=5000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

gzip on;
gzip_static on;
gzip_disable "msie6";
gzip_http_version 1.1;
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
gzip_types text/plain text/css application/json
application/x-javascript text/xml application/xml application/xml+rss
text/javascript application/javascript text/x-js;
gzip_buffers 16 8k;

include /etc/nginx/conf.d/*.conf;
}

==================================================

================ website config ==================

server {
listen 80;
server_name www.example.com;
access_log /var/webs/wwwexample/log/access_log main;
error_log /var/webs/wwwexample/log/error_log warn;
root /var/webs/wwwexample/www/;

index index.php index.html index.htm index.cgi default.html default.htm default.php;
location / {
try_files $uri $uri/ /index.php?$args;
}

location /nginx_status {
stub_status on;
access_log off;
allow 10.10.10.0/24;
deny all;
}

location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
return 403;
error_page 403 /403_error.html;
}

location ~ /\.ht {
deny all;
}

location /administrator {
allow 10.10.10.0/24;
deny all;
}

location ~ \.php$ {

# Setup var defaults
set $no_cache "";
# If non-GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
if ($request_method !~ ^(GET|HEAD)$) {
set $no_cache "1";
}
# Drop no cache cookie if need be
# (for some reason, add_header fails if included in prior if-block)
if ($no_cache = "1") {
add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
add_header X-Microcachable "0";
}
# Bypass cache if no-cache cookie is set
if ($http_cookie ~* "_mcnc") {
set $no_cache "1";
}
# Bypass cache if flag is set
fastcgi_no_cache $no_cache;
fastcgi_cache_bypass $no_cache;
fastcgi_cache microcache;
fastcgi_cache_key $scheme$host$request_uri$request_method;
fastcgi_cache_valid 200 301 302 10s;
fastcgi_cache_use_stale updating error timeout invalid_header http_500;
fastcgi_pass_header Set-Cookie;
fastcgi_pass_header Cookie;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

try_files $uri =404;
include /etc/nginx/fastcgi_params;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_intercept_errors on;

fastcgi_buffer_size 128k;
fastcgi_buffers 256 16k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_read_timeout 240;

fastcgi_pass unix:/tmp/php-fpm.sock;

fastcgi_index index.php;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

}

location ~* \.(ico|pdf|flv)$ {
expires 1d;
}

location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
expires 1d;
}

}
==================================================

================= php-fpm.conf ===================
include=/etc/php-fpm.d/*.conf
[global]
pid = /var/run/php-fpm/php-fpm.pid
error_log = /var/log/php-fpm/error.log

daemonize = no
==================================================

============== php-fpm.d/www.conf ================

[www]
listen = /tmp/php-fpm.sock
listen.allowed_clients = 127.0.0.1
user = nginx
group = nginx

pm = dynamic
pm.max_children = 1024
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35

slowlog = /var/log/php-fpm/www-slow.log

php_flag[display_errors] = off
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 128M

php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session

==================================================

================ mysql my.cnf ====================

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
user=mysql

query_cache_limit = 2M
query_cache_size = 200M
query_cache_type=1
thread_cache_size=128
key_buffer = 100M
join_buffer = 2M
table_cache= 150M
sort_buffer= 2M
read_rnd_buffer_size=10M
tmp_table_size=200M
max_heap_table_size=200M

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

==================================================

=============== mysqltuner report ================

>> MySQLTuner 1.2.0 - Major Hayden <major at mhtx.net>

-------- General Statistics
--------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.5.34
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics
-------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 9M (Tables: 80)
[--] Data in InnoDB tables: 1M (Tables: 65)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[--] Data in MEMORY tables: 0B (Tables: 4)
[!!] Total fragmented tables: 66

-------- Security Recommendations
-------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics
-------------------------------------------------
[--] Up for: 12h 51m 16s (21K q [0.471 qps], 1K conn, TX: 10M, RX: 1M)
[--] Reads / Writes: 55% / 45%
[--] Total buffers: 694.0M global + 21.4M per thread (151 max threads)
[!!] Maximum possible memory usage: 3.8G (135% of installed RAM)
[OK] Slow queries: 0% (0/21K)
[OK] Highest usage of available connections: 23% (36/151)
[OK] Key buffer size / total MyISAM indexes: 150.0M/5.1M
[OK] Key buffer hit rate: 99.3% (51K cached / 358 reads)
[OK] Query cache efficiency: 80.9% (10K cached / 13K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 55 sorts)
[OK] Temporary tables created on disk: 8% (5 on disk / 60 total)
[OK] Thread cache hit rate: 98% (36 created / 1K connections)
[OK] Table cache hit rate: 20% (192 open / 937 opened)
[OK] Open file limit used: 0% (210/200K)
[OK] Table locks acquired immediately: 99% (4K immediate / 4K locks)
[!!] Connections aborted: 8%
[OK] InnoDB data size / buffer pool: 1.1M/128.0M

==================================================

Please advise.

Thanks and Regards,
Nick
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131011/9d1ff4a5/attachment.html>

From mdounin at mdounin.ru Fri Oct 11 14:35:08 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 11 Oct 2013 18:35:08 +0400
Subject: Each website with its own fastcgi_cache_path
In-Reply-To: <23f327b170491dc006cfbd30c98c8cd0.NginxMailingListEnglish@forum.nginx.org>
References: <23f327b170491dc006cfbd30c98c8cd0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131011143508.GR76294@mdounin.ru>

Hello!

On Fri, Oct 11, 2013 at 07:35:56AM -0400, ddutra wrote:

> I would like to know if it is possible to have multiple fastcgi_cache_path /
> keys_zone.

Yes, it is possible. And that's actually why the fastcgi_cache
directive needs zone name as a parameter.
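
For example, something along these lines (an untested sketch; the zone
names, paths, and sizes are hypothetical). Since each site then caches
into its own directory, you can purge a single site by clearing just
that directory:

```nginx
# http {} context: one cache path + keys_zone per website
fastcgi_cache_path /var/cache/nginx/site1 levels=1:2 keys_zone=site1cache:10m max_size=500m;
fastcgi_cache_path /var/cache/nginx/site2 levels=1:2 keys_zone=site2cache:10m max_size=500m;

server {
    server_name site1.example.com;
    location ~ \.php$ {
        fastcgi_cache site1cache;    # this vhost uses its own zone
        fastcgi_cache_key $scheme$host$request_uri$request_method;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/tmp/php-fpm.sock;
    }
}

# a second server {} block for site2 would use "fastcgi_cache site2cache;"
```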

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Fri Oct 11 16:51:19 2013
From: nginx-forum at nginx.us (ddutra)
Date: Fri, 11 Oct 2013 12:51:19 -0400
Subject: Each website with its own fastcgi_cache_path
In-Reply-To: <20131011143508.GR76294@mdounin.ru>
References: <20131011143508.GR76294@mdounin.ru>
Message-ID: <45d22866b81c00cb5223642b0e6bf383.NginxMailingListEnglish@forum.nginx.org>

Maxim
Thanks for your time.

It really works. Thanks a lot!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243622,243627#msg-243627


From richard at kearsley.me Fri Oct 11 17:07:10 2013
From: richard at kearsley.me (Richard Kearsley)
Date: Fri, 11 Oct 2013 18:07:10 +0100
Subject: log timeouts to access log
In-Reply-To: <20131011112513.GP76294@mdounin.ru>
References: <5257DCCE.6000503@kearsley.me> <20131011112513.GP76294@mdounin.ru>
Message-ID: <5258303E.40805@kearsley.me>

On 11/10/13 12:25, Maxim Dounin wrote:
> The closest thing to what you ask about that I can think of is the
> $request_completion variable. Though it marks not only timeouts,
> but whether a request was completely served or not.
> http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_completion


Thanks, that might be what I need (as long as I ignore mp4/flv seeking
requests)

Cheers
Richard


From nginx-forum at nginx.us Fri Oct 11 18:28:16 2013
From: nginx-forum at nginx.us (Peleke)
Date: Fri, 11 Oct 2013 14:28:16 -0400
Subject: 98: Address already in use
In-Reply-To: <20110419152042.GA56867@mdounin.ru>
References: <20110419152042.GA56867@mdounin.ru>
Message-ID: <41f6ce8f64b80cd42e2895b8eb12165f.NginxMailingListEnglish@forum.nginx.org>

I have the same problem: even after I stop nginx with /etc/init.d/nginx stop,
it is still running, on Debian Wheezy:

PID PPID %CPU VSZ WCHAN COMMAND
2709 1 0.0 127476 ep_pol nginx: worker process
2710 1 0.0 127716 ep_pol nginx: worker process
2711 1 0.0 127476 ep_pol nginx: worker process
2713 1 0.0 127444 ep_pol nginx: worker process
2714 1 0.0 125208 ep_pol nginx: cache manager process
3458 3218 0.0 6392 pipe_w egrep (nginx|PID)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,191227,243631#msg-243631


From nginx-forum at nginx.us Fri Oct 11 23:42:28 2013
From: nginx-forum at nginx.us (justin)
Date: Fri, 11 Oct 2013 19:42:28 -0400
Subject: An official yum repo for 1.5x
Message-ID: <caf2e31e849f9919b095599808da2b17.NginxMailingListEnglish@forum.nginx.org>

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
enabled=1

This is the 1.4.x branch. Is it possible to get an official 1.5.x repo? Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243633,243633#msg-243633


From nginx-forum at nginx.us Sat Oct 12 05:19:51 2013
From: nginx-forum at nginx.us (justin)
Date: Sat, 12 Oct 2013 01:19:51 -0400
Subject: bug in spdy - 499 response code on long running requests
In-Reply-To: <ba1a3f5497dbaf8157706a681002b85d.NginxMailingListEnglish@forum.nginx.org>
References: <ba1a3f5497dbaf8157706a681002b85d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <79252d02b13bac1840cada32cdb4cf05.NginxMailingListEnglish@forum.nginx.org>

Just upgraded to nginx 1.5.6 and still seeing this behavior where long
running requests are being called twice with SPDY enabled. As soon as I
disabled SPDY it goes away.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240278,243634#msg-243634


From sb at nginx.com Sat Oct 12 13:23:48 2013
From: sb at nginx.com (Sergey Budnevitch)
Date: Sat, 12 Oct 2013 17:23:48 +0400
Subject: An official yum repo for 1.5x
In-Reply-To: <caf2e31e849f9919b095599808da2b17.NginxMailingListEnglish@forum.nginx.org>
References: <caf2e31e849f9919b095599808da2b17.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <52E3B10D-91E7-4A8E-BCDD-11CCF0D09304@nginx.com>


On 12 Oct 2013, at 03:42, justin <nginx-forum at nginx.us> wrote:

> [nginx]
> name=nginx repo
> baseurl=http://nginx.org/packages/centos/6/$basearch/
> gpgcheck=0
> enabled=1
>
> This is the 1.4.x branch. Is it possible to get an official 1.5.x repo? Thanks.

http://nginx.org/en/linux_packages.html#mainline
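
Per that page, the mainline repo config looks like the stable one with a
"mainline" path component in the baseurl (check the page above for the
current URL before relying on this):

```ini
[nginx]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/6/$basearch/
gpgcheck=0
enabled=1
```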


From nmilas at noa.gr Sat Oct 12 13:47:50 2013
From: nmilas at noa.gr (Nikolaos Milas)
Date: Sat, 12 Oct 2013 16:47:50 +0300
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <5257B45F.504@greengecko.co.nz>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
Message-ID: <52595306.8030100@noa.gr>

On 11/10/2013 11:18, Steve Holdoway wrote:

> Apart from adding cores, there are 2 things I'd suggest looking at
>
> - are you using an opcode cacher? APC ( install via pecl to get the
> latest ) works really well with php in fpm... allocate plenty of
> memory to it too
> - check the bandwidth at the network interface. The usual 100Mbit
> connection can easily get swamped by a graphics rich site - especially
> with 250 concurrent users. If this is a problem, then look at using a
> CDN to ease things.
>

Thanks for the hints.

The strange thing is that unix load does not seem to be over-strained
when this performance deterioration occurs.

APCu seems to be enabled:

extension = apcu.so
apc.enabled=1
apc.mmap_file_mask=/tmp/apc.XXXXXX

All other params are default.

The network interface is Gigabit and should not be a problem.

We'll add virtual RAM and cores. Any other suggestions?

I wish there were a tool which would benchmark/analyze the box and the
running services and produce suggestions for the whole LEMP stack config:
mysqld, php, php-fpm, apc, nginx! Some magic would help!!

Thanks,
Nick


From ben at indietorrent.org Sat Oct 12 20:38:59 2013
From: ben at indietorrent.org (Ben Johnson)
Date: Sat, 12 Oct 2013 16:38:59 -0400
Subject: How to disable output buffering with PHP and nginx
In-Reply-To: <20131010182409.GG76294@mdounin.ru>
References: <20131010152626.GE76294@mdounin.ru>
<b1f176ec0c83f8076c276f714bdaedf9.NginxMailingListEnglish@forum.nginx.org>
<20131010182409.GG76294@mdounin.ru>
Message-ID: <5259B363.1050305@indietorrent.org>



On 10/10/2013 2:24 PM, Maxim Dounin wrote:
> Hello!
>
> On Thu, Oct 10, 2013 at 01:35:00PM -0400, itpp2012 wrote:
>
>>> Correct. One nginx process can handle multiple requests, it's one
>>> PHP process which limits you.
>>
>> Not really, use the NTS version of php not the TS, and use a pool as
>> suggested, e.a.;
>>
>> # loadbalancing php
>> upstream myLoadBalancer {
>> server 127.0.0.1:19001 weight=1 fail_timeout=5;
>> server 127.0.0.1:19002 weight=1 fail_timeout=5;
>> server 127.0.0.1:19003 weight=1 fail_timeout=5;
>> server 127.0.0.1:19004 weight=1 fail_timeout=5;
>> server 127.0.0.1:19005 weight=1 fail_timeout=5;
>> server 127.0.0.1:19006 weight=1 fail_timeout=5;
>> server 127.0.0.1:19007 weight=1 fail_timeout=5;
>> server 127.0.0.1:19008 weight=1 fail_timeout=5;
>> server 127.0.0.1:19009 weight=1 fail_timeout=5;
>> server 127.0.0.1:19010 weight=1 fail_timeout=5;
>> # usage: fastcgi_pass myLoadBalancer;
>> }
>
> Just in case, it would be good idea to use least_conn balancer
> here.
>
> http://nginx.org/r/least_conn
>

Cool, this looks great.

Thanks for providing a full, concrete example, itpp2012! That's hugely
helpful!

I'll bear in mind your advice regarding least_conn balancer, too, Maxim.

Thanks again, guys!

-Ben


From julien at linuxwall.info Sat Oct 12 21:54:59 2013
From: julien at linuxwall.info (Julien Vehent)
Date: Sat, 12 Oct 2013 23:54:59 +0200
Subject: "A" Grade SSL/TLS with Nginx and StartSSL
Message-ID: <edcdcea48aa956b9c30c4a900ecf4ca6@linuxwall.info>

Hi Nginx folks,

I spent some time hacking on my SSL conf recently. Nothing new, but I
figured I'd share it with the group:
https://jve.linuxwall.info/blog/index.php?post/2013/10/12/A-grade-SSL/TLS-with-Nginx-and-StartSSL

Feel free to comment here.

Cheers

--
Julien Vehent
http://jve.linuxwall.info



From nginx-forum at nginx.us Mon Oct 14 01:55:12 2013
From: nginx-forum at nginx.us (mzabani)
Date: Sun, 13 Oct 2013 21:55:12 -0400
Subject: modules and thread creation
Message-ID: <39c292fe9b5a14d6c975c460f79293cb.NginxMailingListEnglish@forum.nginx.org>

I've seen an older thread saying that nginx modules shouldn't create
threads. I was wondering how valid that still is, and if there is a guide or
pointers on how to deal with threads in a module.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243662,243662#msg-243662


From support-nginx at oeko.net Mon Oct 14 11:59:23 2013
From: support-nginx at oeko.net (Toni Mueller)
Date: Mon, 14 Oct 2013 13:59:23 +0200
Subject: limit_req for spiders only
Message-ID: <20131014115922.GA8111@birch.wiehl.oeko.net>


Hi,

I would like to put a brake on spiders which are hammering a site with
dynamic content generation. They should still get to see the content,
but not generate excessive load. I therefore constructed a map to
identify spiders, which works well, and then tried to

limit_req_zone $binary_remote_addr zone=slow:10m ...;

if ($is_spider) {
limit_req zone=slow;
}


Unfortunately, limit_req is not allowed inside "if", and I don't see an
obvious way to achieve this effect otherwise.

If you have any tips, that would be much appreciated!


Kind regards,
--Toni++


From calin.don at gmail.com Mon Oct 14 12:56:09 2013
From: calin.don at gmail.com (Calin Don)
Date: Mon, 14 Oct 2013 15:56:09 +0300
Subject: Nginx configuration variable max size
Message-ID: <CAEOe2JyCcwZyAYH6p9NRxUawgJEh4eQ1ZAPux3kdd3k31_hraA@mail.gmail.com>

Hi,

Is there any limit on what amount of data an nginx config variable can hold?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131014/48e554a7/attachment.html>

From nginx-forum at nginx.us Mon Oct 14 13:25:24 2013
From: nginx-forum at nginx.us (Sylvia)
Date: Mon, 14 Oct 2013 09:25:24 -0400
Subject: limit_req for spiders only
In-Reply-To: <20131014115922.GA8111@birch.wiehl.oeko.net>
References: <20131014115922.GA8111@birch.wiehl.oeko.net>
Message-ID: <1b2ca79ea9b409ed8ba30b600eac9d63.NginxMailingListEnglish@forum.nginx.org>

Hello.

Doesn't the robots.txt "Crawl-Delay" directive satisfy your needs?
Normal spiders should obey robots.txt; if they don't, they can be banned.
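
For example (note that Crawl-Delay is a non-standard extension and not
all spiders honour it; the delay value is just an illustration):

```
User-agent: *
Crawl-delay: 10
```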

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243670,243674#msg-243674


From support-nginx at oeko.net Mon Oct 14 14:02:39 2013
From: support-nginx at oeko.net (Toni Mueller)
Date: Mon, 14 Oct 2013 16:02:39 +0200
Subject: limit_req for spiders only
In-Reply-To: <1b2ca79ea9b409ed8ba30b600eac9d63.NginxMailingListEnglish@forum.nginx.org>
References: <20131014115922.GA8111@birch.wiehl.oeko.net>
<1b2ca79ea9b409ed8ba30b600eac9d63.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131014140238.GA21524@spruce.wiehl.oeko.net>


Hello,

On Mon, Oct 14, 2013 at 09:25:24AM -0400, Sylvia wrote:
> Doesn't the robots.txt "Crawl-Delay" directive satisfy your needs?

I have it already there, but I don't know how long it takes for such a
directive, or any changes to robots.txt for that matter, to take effect.
Observing the logs, I'd say that this delay between changing robots.txt
and a change in robot behaviour would take several days, as I cannot see
any effects so far.

> Normal spiders should obey robots.txt; if they don't, they can be banned.

Banning Google is not a good idea, no matter how abusive they might be,
and they incidentally operate one of those robots which keep hammering
the site. I'd much prefer a technical solution to enforce such limits
over convention.

I'd also like to limit the request frequency over an entire pool, so
that I can say "clients from this pool can make requests only with this
frequency, combined, not per client IP", because it doesn't buy me
anything if I can limit the individual search robot to a decent
frequency, but then get hammered by 1000 search robots in parallel, each
one observing the request limit. Right?


Kind regards,
--Toni++


From francis at daoine.org Mon Oct 14 14:23:03 2013
From: francis at daoine.org (Francis Daly)
Date: Mon, 14 Oct 2013 15:23:03 +0100
Subject: limit_req for spiders only
In-Reply-To: <20131014115922.GA8111@birch.wiehl.oeko.net>
References: <20131014115922.GA8111@birch.wiehl.oeko.net>
Message-ID: <20131014142303.GY19345@craic.sysops.org>

On Mon, Oct 14, 2013 at 01:59:23PM +0200, Toni Mueller wrote:

Hi there,

This is untested, but follows the docs at http://nginx.org/r/limit_req_zone:

> I therefore constructed a map to
> identify spiders, which works well, and then tried to
>
> limit_req_zone $binary_remote_addr zone=slow:10m ...;
>
> if ($is_spider) {
> limit_req zone=slow;
> }
>

> If you have any tips, that would be much appreciated!

In your map, let $is_spider be empty if the client is not a spider
("default", presumably), and be something else if it is a spider
(possibly $binary_remote_addr if every client should be counted
individually, or something else if you want to group some spiders together).

Then define

limit_req_zone $is_spider zone=slow:10m ...;

instead of what you currently have.
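
Putting the pieces together, an untested sketch (the user-agent regex
and rate are placeholders; requests with an empty key are not accounted,
so non-spiders are never limited):

```nginx
map $http_user_agent $is_spider {
    default                            "";
    # per-spider-IP limiting; use a fixed string instead of
    # $binary_remote_addr to limit all spiders as one combined pool
    ~*(googlebot|bingbot|baiduspider)  $binary_remote_addr;
}

limit_req_zone $is_spider zone=slow:10m rate=30r/m;

server {
    listen 80;
    location / {
        limit_req zone=slow burst=5;
    }
}
```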

f
--
Francis Daly francis at daoine.org


From support-nginx at oeko.net Mon Oct 14 14:47:38 2013
From: support-nginx at oeko.net (Toni Mueller)
Date: Mon, 14 Oct 2013 16:47:38 +0200
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <52595306.8030100@noa.gr>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
<52595306.8030100@noa.gr>
Message-ID: <20131014144737.GB21524@spruce.wiehl.oeko.net>


Hi Nick,

On Sat, Oct 12, 2013 at 04:47:50PM +0300, Nikolaos Milas wrote:
> We'll add virtual RAM and cores. *Any other suggestions? *

did you investigate disk I/O?

I found this to be the limiting factor. If you have shell access and if
it is a Linux machine, you can run 'top', 'dstat' and 'htop' to get an
idea about what is happening. 'dstat' gives you disk I/O and network
I/O.


Kind regards,
--Toni++


From support-nginx at oeko.net Mon Oct 14 14:52:18 2013
From: support-nginx at oeko.net (Toni Mueller)
Date: Mon, 14 Oct 2013 16:52:18 +0200
Subject: limit_req for spiders only
In-Reply-To: <20131014142303.GY19345@craic.sysops.org>
References: <20131014115922.GA8111@birch.wiehl.oeko.net>
<20131014142303.GY19345@craic.sysops.org>
Message-ID: <20131014145218.GC21524@spruce.wiehl.oeko.net>



Hi Francis,

On Mon, Oct 14, 2013 at 03:23:03PM +0100, Francis Daly wrote:
> In your map, let $is_spider be empty if is not a spider ("default",
> presumably), and be something else if it is a spider (possibly
> $binary_remote_addr if every client should be counted individually,
> or something else if you want to group some spiders together.)

thanks a bunch! This works like a charm!


Kind regards,
--Toni++


From nginx-forum at nginx.us Mon Oct 14 16:01:32 2013
From: nginx-forum at nginx.us (codemonkey)
Date: Mon, 14 Oct 2013 12:01:32 -0400
Subject: Any rough ETA on SPDY/3 & push?
Message-ID: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org>

Contemplating switching my site over to Jetty to take advantage of spdy/3
and push, but would rather stay with nginx really...

Is there a "rough" ETA on spdy3 in nginx? 1 month? 6 months? 2 years?

Thanks, sorry if this is a frequent request...

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243684,243684#msg-243684


From andrew at nginx.com Mon Oct 14 16:37:50 2013
From: andrew at nginx.com (Andrew Alexeev)
Date: Mon, 14 Oct 2013 20:37:50 +0400
Subject: Any rough ETA on SPDY/3 & push?
In-Reply-To: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org>
References: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2ECA7C2F-AFC0-402A-A6EF-46B79B6A3C9D@nginx.com>

On Oct 14, 2013, at 8:01 PM, codemonkey <nginx-forum at nginx.us> wrote:

> Contemplating switching my site over to Jetty to take advantage of spdy/3
> and push, but would rather stay with nginx really...
>
> Is there a "rough" ETA on spdy3 in nginx? 1 month? 6 months? 2 years?
>
> Thanks, sorry if this is a frequent request...

It is!

We're considering it, but frankly, for a better ETA we wouldn't reject a
corporate sponsor, if there's anybody here who'd be open to sponsoring an
implementation similar to
http://barry.wordpress.com/2012/06/16/nginx-spdy-and-automattic/

:)

> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243684,243684#msg-243684
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 1776 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131014/1846e747/attachment.bin>

From nginx-forum at nginx.us Mon Oct 14 22:16:14 2013
From: nginx-forum at nginx.us (sfrazer)
Date: Mon, 14 Oct 2013 18:16:14 -0400
Subject: cookie and source IP logic in server block
Message-ID: <c23cbf2c66a171aabc9f0e85ef9660b6.NginxMailingListEnglish@forum.nginx.org>

Hello,

I'm trying to block certain IP ranges at my nginx server, but would like to
offer the ability to bypass the block by completing a back-end CAPTCHA,
which would set a cookie.

Currently I set the block like so:

geo $remote_addr $blocked {
default 0;
include /etc/nginx/conf/nginx-blocked-ips.conf;
}

...

recursive_error_pages on;
error_page 429 = @banned;
if ($blocked = 1) {
return 429;
}

location @banned {
set $args "";
rewrite ^ /banned/ ;
}

Since I can't nest "if" statements and I can't make a compound check using
"&&" or "||" or something similar, how can I check both the blocked variable
and look to see if a cookie is set?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243687,243687#msg-243687


From francis at daoine.org Mon Oct 14 22:35:12 2013
From: francis at daoine.org (Francis Daly)
Date: Mon, 14 Oct 2013 23:35:12 +0100
Subject: cookie and source IP logic in server block
In-Reply-To: <c23cbf2c66a171aabc9f0e85ef9660b6.NginxMailingListEnglish@forum.nginx.org>
References: <c23cbf2c66a171aabc9f0e85ef9660b6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131014223512.GZ19345@craic.sysops.org>

On Mon, Oct 14, 2013 at 06:16:14PM -0400, sfrazer wrote:

Hi there,

untested, but...

> geo $remote_addr $blocked {
> default 0;
> include /etc/nginx/conf/nginx-blocked-ips.conf;
> }

map $blocked$cookie_whatever $reallyblocked {
default 0;
1 1;
}

If it is blocked by geo, and has no cookie_whatever, then $reallyblocked
is 1. If it has any value for cookie_whatever, or $blocked is not 1,
then $reallyblocked is 0.
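
In full, an untested sketch combining this with your existing config
(the cookie name "whatever" is a placeholder for whatever your CAPTCHA
back-end sets):

```nginx
geo $remote_addr $blocked {
    default 0;
    include /etc/nginx/conf/nginx-blocked-ips.conf;
}

# the key "1" matches only when $blocked is 1 AND the cookie is empty;
# any cookie value appended to "1" makes the string longer than "1"
map $blocked$cookie_whatever $reallyblocked {
    default 0;
    1       1;
}

server {
    recursive_error_pages on;
    error_page 429 = @banned;
    if ($reallyblocked = 1) {
        return 429;
    }

    location @banned {
        set $args "";
        rewrite ^ /banned/;
    }
}
```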

f
--
Francis Daly francis at daoine.org


From piotr at cloudflare.com Tue Oct 15 04:39:42 2013
From: piotr at cloudflare.com (Piotr Sikora)
Date: Mon, 14 Oct 2013 21:39:42 -0700
Subject: "A" Grade SSL/TLS with Nginx and StartSSL
In-Reply-To: <edcdcea48aa956b9c30c4a900ecf4ca6@linuxwall.info>
References: <edcdcea48aa956b9c30c4a900ecf4ca6@linuxwall.info>
Message-ID: <CADMhe6f82e+asyc6W6-WP2tkdqJyiGX3XH_wcfgTM7QmLopf6A@mail.gmail.com>

Hi Julien,

> I spent some time hacking on my SSL conf recently. Nothing new, but I
> figured I'd share it with the group:
> https://jve.linuxwall.info/blog/index.php?post/2013/10/12/A-grade-SSL/TLS-with-Nginx-and-StartSSL
>
> Feel free to comment here.

> a few pointers for configuring state-of-the-art TLS on Nginx.

Far from it, from the top:

> build_static_nginx.sh

You should be using:

--with-openssl=../openssl-1.0.1e
--with-openssl-opt="enable-ec_nistp_64_gcc_128"

instead of compiling OpenSSL yourself and playing with CFLAGS & LDFLAGS.

> listen 443;
> ssl on;

That's deprecated syntax, you should be using:

listen 443 ssl;

> ssl_dhparam /path/to/dhparam.pem;

While there is nothing wrong with it per se, DH params are only used
by DHE, which is simply too slow to be used.

> ssl_session_timeout 5m;

Not only does it not change anything (5m is the default value), but
it's also way too low a value to be used.

Few examples from the real world:

Google : 28h
Facebook : 24h
CloudFlare: 18h
Twitter : 4h

> ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

SSLv3 is still out there, so you shouldn't be dropping support for it
unless you know the consequences very well... This definitely
shouldn't be a general recommendation.

> ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';

Why would you put ECDSA cipher suites here when you're using RSA certificate?

You should also disable:
- DHE cipher suites, because they're too slow compared to the alternative,
- CAMELLIA cipher suites (if you're using AES-NI), because they're too
slow compared to the alternative.

Overall, that's far from the state-of-the-art SSL configuration for
nginx. The only good thing about it is that it's using OCSP and
achieves "A" grade on ssllabs.com, which can tell you a lot about the
quality of the tests they're running.

Best regards,
Piotr Sikora


From nginx-forum at nginx.us Tue Oct 15 06:03:10 2013
From: nginx-forum at nginx.us (justin)
Date: Tue, 15 Oct 2013 02:03:10 -0400
Subject: Multiple DNS servers in resolver directive
Message-ID: <07379d2eee4bd437cd60719f2f347325.NginxMailingListEnglish@forum.nginx.org>

The documentation is not clear. Can I provide two IP addresses in the
resolver config directive?

Example:

resolver 208.67.222.222 208.67.220.220;

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243692,243692#msg-243692


From ru at nginx.com Tue Oct 15 06:48:49 2013
From: ru at nginx.com (Ruslan Ermilov)
Date: Tue, 15 Oct 2013 10:48:49 +0400
Subject: Multiple DNS servers in resolver directive
In-Reply-To: <07379d2eee4bd437cd60719f2f347325.NginxMailingListEnglish@forum.nginx.org>
References: <07379d2eee4bd437cd60719f2f347325.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131015064849.GY56191@lo0.su>

On Tue, Oct 15, 2013 at 02:03:10AM -0400, justin wrote:
> The documentation is not clear. Can I provide two IP addresses in the
> resolver config directive?
>
> Example:
>
> resolver 208.67.222.222 208.67.220.220;

Please use official docs, not Wiki:

http://nginx.org/r/resolver


From nginx-forum at nginx.us Tue Oct 15 11:41:45 2013
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 15 Oct 2013 07:41:45 -0400
Subject: Accessing binding nginx via Lua
Message-ID: <e70664c676916fda7c1bdf877faaf168.NginxMailingListEnglish@forum.nginx.org>

Would it be possible (and how) to access the bindings inside nginx via Lua?
For an experiment, I'd like to change the listening port of a running nginx
process.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243705,243705#msg-243705


From nginx-forum at nginx.us Tue Oct 15 13:13:52 2013
From: nginx-forum at nginx.us (gaspy)
Date: Tue, 15 Oct 2013 09:13:52 -0400
Subject: SSL certificate not loaded
Message-ID: <67c57bfd5bab794b871c81ed61c8db82.NginxMailingListEnglish@forum.nginx.org>

I have a strange problem with SSL.

I purchased an SSL cert and combined the intermediate files into one:
cat www_mydomain_com.crt PositiveSSLCA2.crt AddTrustExternalCARoot.crt >>
mydomain-budle.crt

In the server conf I have the following:

server
{
    listen 80;
    listen 443 ssl;

    server_name www.mydomain.com;
    root /var/www/mydomain/;

    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
    ssl_certificate /etc/nginx/conf/mydomain-bundle.crt;
    ssl_certificate_key /etc/nginx/conf/server.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_verify_depth 2;
    ...
}

SSL doesn't work and error log shows
no "ssl_certificate" is defined in server listening on SSL port while SSL
handshaking, client: x.x.x.x, server: 0.0.0.0:443

What's wrong? Of course, the file exists, I restarted the server. I tried
everything I could think of (absolute path, I added ssl_verify_depth,
verified that in the crt file the END/BEGIN blocks are on separate lines)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243713,243713#msg-243713


From julien at linuxwall.info Tue Oct 15 13:27:17 2013
From: julien at linuxwall.info (Julien Vehent)
Date: Tue, 15 Oct 2013 09:27:17 -0400
Subject: "A" Grade SSL/TLS with Nginx and StartSSL
In-Reply-To: <CADMhe6f82e+asyc6W6-WP2tkdqJyiGX3XH_wcfgTM7QmLopf6A@mail.gmail.com>
References: <edcdcea48aa956b9c30c4a900ecf4ca6@linuxwall.info>
<CADMhe6f82e+asyc6W6-WP2tkdqJyiGX3XH_wcfgTM7QmLopf6A@mail.gmail.com>
Message-ID: <dbb403bf42c4898bf52343e9be5024aa@linuxwall.info>

On 2013-10-15 00:39, Piotr Sikora wrote:
> Hi Julien,
>
>> I spent some time hacking on my SSL conf recently. Nothing new, but I
>> figured I'd share it with the group:
>>
>> https://jve.linuxwall.info/blog/index.php?post/2013/10/12/A-grade-SSL/TLS-with-Nginx-and-StartSSL
>>
>> Feel free to comment here.
>
>> a few pointers for configuring state-of-the-art TLS on Nginx.
>
> Far from it, from the top:
>
>> build_static_nginx.sh
>
> You should be using:
>
> --with-openssl=../openssl-1.0.1e
> --with-openssl-opt="enable-ec_nistp_64_gcc_128"
>
> instead of compiling OpenSSL yourself and playing with CFLAGS & LDFLAGS.
>

Afaik, the above dynamically links openssl. Am I wrong?

>> listen 443;
>> ssl on;
>
> That's deprecated syntax, you should be using:
>
> listen 443 ssl;
>

noted, but that doesn't impact security

>> ssl_dhparam /path/to/dhparam.pem;
>
> While there is nothing wrong with it per se, DH params are only used
> by DHE, which is simply too slow to be used.
>

Are you saying you would rather use non-PFS ciphers than wait an extra 15ms
to complete a DHE handshake? I wouldn't.

>> ssl_session_timeout 5m;
>
> Not only does it not change anything (5m is the default value), it's
> also way too low a value to be used.
>
> Few examples from the real world:
>
> Google : 28h
> Facebook : 24h
> CloudFlare: 18h
> Twitter : 4h
>

Interesting information, which I didn't have before. May I ask how you
collected it?

>> ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
>
> SSLv3 is still out there, so you shouldn't be dropping support for it
> unless you know the consequences very well... This definitely
> shouldn't be a general recommendation.
>
>> ssl_ciphers
>> 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';
>
> Why would you put ECDSA cipher suites here when you're using RSA
> certificate?
>

Because someone else might use DSA certificates.

> You should also disable:
> - DHE cipher suites, because they're too slow compared to the alternative,

No. The alternatives aren't available everywhere.

> - CAMELLIA cipher suites (if you're using AES-NI), because they're too
> slow compared to the alternative.

Again, I don't control clients. I push down unwanted ciphers, but I won't
disable them unless they are obviously broken (MD5, ...).

>
> Overall, that's far from the state-of-the-art SSL configuration for
> nginx. The only good thing about it is that it's using OCSP and
> achieves "A" grade on ssllabs.com, which can tell you a lot about the
> quality of the tests they're running.
>

I appreciate the feedback, but no need to be rude about it ;)

- Julien


From mdounin at mdounin.ru Tue Oct 15 13:29:34 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 15 Oct 2013 17:29:34 +0400
Subject: Accessing binding nginx via Lua
In-Reply-To: <e70664c676916fda7c1bdf877faaf168.NginxMailingListEnglish@forum.nginx.org>
References: <e70664c676916fda7c1bdf877faaf168.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131015132933.GI2144@mdounin.ru>

Hello!

On Tue, Oct 15, 2013 at 07:41:45AM -0400, itpp2012 wrote:

> Would it be possible (and how) to access the bindings inside nginx via Lua?
> for an experiment I'd like to change the listening port of a running nginx
> process.

I can't really speak of Lua, but given the nginx architecture it's
highly unlikely to be ever possible. Listen sockets are created
by master process and inherited by workers. In most cases,
workers just can't open listening sockets due to security
restrictions.

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Tue Oct 15 13:48:42 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 15 Oct 2013 17:48:42 +0400
Subject: SSL certificate not loaded
In-Reply-To: <67c57bfd5bab794b871c81ed61c8db82.NginxMailingListEnglish@forum.nginx.org>
References: <67c57bfd5bab794b871c81ed61c8db82.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131015134842.GK2144@mdounin.ru>

Hello!

On Tue, Oct 15, 2013 at 09:13:52AM -0400, gaspy wrote:

> I have a strange problem with SSL.
>
> I purchased an SSL cert and combined the intermediate files into one:
> cat www_mydomain_com.crt PositiveSSLCA2.crt AddTrustExternalCARoot.crt >>
> mydomain-budle.crt
>
> In the server conf I have the following:
>
> server
> {
> listen 80;
> listen 443 ssl;
>
> server_name www.mydomain.com;
> root /var/www/mydomain/;
>
> ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
> ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
> ssl_certificate /etc/nginx/conf/mydomain-bundle.crt;
> ssl_certificate_key /etc/nginx/conf/server.key;
> ssl_session_cache shared:SSL:10m;
> ssl_session_timeout 10m;
> ssl_verify_depth 2;
> ...
> }
>
> SSL doesn't work and error log shows
> no "ssl_certificate" is defined in server listening on SSL port while SSL
> handshaking, client: x.x.x.x, server: 0.0.0.0:443
>
> What's wrong? Of course, the file exists, I restarted the server. I tried
> everything I could think of (absolute path, I added ssl_verify_depth,
> verified that in the crt file the END/BEGIN blocks are on separate lines)

The message suggests you have another server{} listening on the
same port, without ssl_certificate defined, and it's selected
based on SNI.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Tue Oct 15 14:11:10 2013
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 15 Oct 2013 10:11:10 -0400
Subject: Accessing binding nginx via Lua
In-Reply-To: <20131015132933.GI2144@mdounin.ru>
References: <20131015132933.GI2144@mdounin.ru>
Message-ID: <d01eeb43dce9d54029f1defb33378b03.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------
> In most cases, workers just can't open listening sockets due to security
restrictions.

I'd still like to try; can you point me to where a worker binds to the
inherited values?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243705,243718#msg-243718


From mdounin at mdounin.ru Tue Oct 15 14:37:54 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 15 Oct 2013 18:37:54 +0400
Subject: Accessing binding nginx via Lua
In-Reply-To: <d01eeb43dce9d54029f1defb33378b03.NginxMailingListEnglish@forum.nginx.org>
References: <20131015132933.GI2144@mdounin.ru>
<d01eeb43dce9d54029f1defb33378b03.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131015143754.GM2144@mdounin.ru>

Hello!

On Tue, Oct 15, 2013 at 10:11:10AM -0400, itpp2012 wrote:

> Maxim Dounin Wrote:
> -------------------------------------------------------
> > In most cases, workers just can't open listening sockets due to security
> restrictions.
>
> I'd still like to try, can you point me where a worker binds to the
> inherited values?

It just has them in the cycle->listening array.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Tue Oct 15 15:14:42 2013
From: nginx-forum at nginx.us (gaspy)
Date: Tue, 15 Oct 2013 11:14:42 -0400
Subject: SSL certificate not loaded
In-Reply-To: <20131015134842.GK2144@mdounin.ru>
References: <20131015134842.GK2144@mdounin.ru>
Message-ID: <1fec78bd22149a2298d356bebecb2fbe.NginxMailingListEnglish@forum.nginx.org>

> The message suggests you have another server{} listening on the
> same port, without ssl_certificate defined, and it's selected
> based on SNI.

Hi Maxim and thanks for the quick reply.

I have another server block just for a redirect; I disabled SSL on it but
the problem persists.
Here's what the other block looks like:

server
{
    listen 80;
    #listen 443 ssl;
    server_name mydomain.com;
    return 301 $scheme://www.mydomain.com$request_uri;
}

If it helps, I'm using nginx/1.1.19 on Ubuntu 12.04 32bit / XEN VPS.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243713,243721#msg-243721


From mdounin at mdounin.ru Tue Oct 15 15:42:58 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 15 Oct 2013 19:42:58 +0400
Subject: SSL certificate not loaded
In-Reply-To: <1fec78bd22149a2298d356bebecb2fbe.NginxMailingListEnglish@forum.nginx.org>
References: <20131015134842.GK2144@mdounin.ru>
<1fec78bd22149a2298d356bebecb2fbe.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131015154258.GP2144@mdounin.ru>

Hello!

On Tue, Oct 15, 2013 at 11:14:42AM -0400, gaspy wrote:

> > The message suggests you have another server{} listening on the
> > same port, without ssl_certificate defined, and it's selected
> > based on SNI.
>
> Hi Maxim and thanks for the quick reply.
>
> I have another server block just for redirect, I disabled SSL on it but the
> problem persists.
> Here's how the other block looks like:
>
> server
> {
> listen 80;
> #listen 443 ssl;
> server_name mydomain.com;
> return 301 $scheme://www.mydomain.com$request_uri;
> }

If the problem persists, it means that you either didn't reload
the configuration or there is one more server{} block. Just for
testing, you may want to configure ssl_certificate at the http{} level.
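For example (untested sketch; the paths are the ones from your config and may differ):

```nginx
http {
    # fallback for any server{} on an SSL port that lacks its own certificate
    ssl_certificate     /etc/nginx/conf/mydomain-bundle.crt;
    ssl_certificate_key /etc/nginx/conf/server.key;

    server {
        listen 443 ssl;
        server_name www.mydomain.com;
    }
}
```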

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Tue Oct 15 20:39:37 2013
From: nginx-forum at nginx.us (sfrazer)
Date: Tue, 15 Oct 2013 16:39:37 -0400
Subject: cookie and source IP logic in server block
In-Reply-To: <20131014223512.GZ19345@craic.sysops.org>
References: <20131014223512.GZ19345@craic.sysops.org>
Message-ID: <0fc587c6d44e62550c2de2a8b1a5259e.NginxMailingListEnglish@forum.nginx.org>

Thanks! I wasn't aware you could combine variables like that in a map
statement. handy.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243687,243736#msg-243736


From piotr at cloudflare.com Tue Oct 15 22:00:51 2013
From: piotr at cloudflare.com (Piotr Sikora)
Date: Tue, 15 Oct 2013 15:00:51 -0700
Subject: "A" Grade SSL/TLS with Nginx and StartSSL
In-Reply-To: <dbb403bf42c4898bf52343e9be5024aa@linuxwall.info>
References: <edcdcea48aa956b9c30c4a900ecf4ca6@linuxwall.info>
<CADMhe6f82e+asyc6W6-WP2tkdqJyiGX3XH_wcfgTM7QmLopf6A@mail.gmail.com>
<dbb403bf42c4898bf52343e9be5024aa@linuxwall.info>
Message-ID: <CADMhe6eQV=J27YqSHhMOPMedrBQnDOOxFp7MKdxu63Rv7oH9rQ@mail.gmail.com>

Hi Julien,

> Afaik, the above dynamically links openssl. Am I wrong?

Yes, you're wrong.

> Are you saying you would rather use non-PFS ciphers than wait an extra 15ms
> to complete a DHE handshake? I wouldn't.

No, I'm saying that since you're compiling against OpenSSL-1.0.1,
you've got ECDHE cipher suites, which are much faster than DHE and all
modern browsers support ECDHE.

I know this kind of contradicts my "you shouldn't be dropping SSLv3
support" statement (since SSLv3 doesn't support ECDHE, so it would end
up without PFS cipher suite), but you cannot have everything.

Also, while this isn't the best reason to do things, none of the "big"
players offers DHE.

> Interesting information, which I didn't have before. May I ask how you
> collected it?

openssl s_client -connect <host>:443 </dev/null 2>/dev/null | grep lifetime

While this only shows you the Session Ticket lifetime hint and not the
internal session cache expire policy, it shows you the value they are
aiming for with resumption. Also, in nginx's case both values are the
same.

Trust me, you want this to be high :)

> Because someone else might use DSA certificates.

It's ECDSA, not DSA... And I have yet to see a site that offers an ECDSA
certificate instead of RSA.

> No. The alternatives aren't available everywhere.

Virtually everywhere ;)

> Again, I don't control clients. I push down unwanted ciphers, but I won't
> disable them unless they are obviously broken (MD5, ...).

Kind of the same reasoning as for DHE - AES (with AES-NI) is much
faster than CAMELLIA, and I dare you to find software that supports
CAMELLIA but not AES.

Keep in mind that the reason for disabling slow cipher suites is not
to limit interoperability, but to limit impact of attacks that use
time-consuming crypto... For example, AES (with AES-NI) is 4x faster
than CAMELLIA while essentially providing the same level of security,
which means that (D)DoS attacks on SSL require 4x less resources if
you don't disable it.

> I appreciate the feedback, but no need to be rude about it ;)

Actually, I was trying hard to not sound rude (apparently I failed),
but the fact is that calling it "A grade" and "state of the art"
configuration results in people that don't know any better picking up
your recommendations and deploying them in production.

Best regards,
Piotr Sikora


From nginx-forum at nginx.us Wed Oct 16 07:43:53 2013
From: nginx-forum at nginx.us (hcmnttan)
Date: Wed, 16 Oct 2013 03:43:53 -0400
Subject: Config Mail Proxy for POP3/SMTP microsoft exchange
Message-ID: <728ec0b019d65f776e3a1a6d547a4948.NginxMailingListEnglish@forum.nginx.org>

Hi there,

Does NGINX support Microsoft Exchange POP3 / SMTP?
If yes, can anyone help me configure NGINX as a reverse proxy for MS Exchange.

I followed the link http://wiki.nginx.org/ImapProxyExample but I still
don't understand where to put the target POP3/IMAP server's IP.

Many thanks.
Tan

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243743,243743#msg-243743


From smainklh at free.fr Wed Oct 16 09:53:20 2013
From: smainklh at free.fr (smainklh at free.fr)
Date: Wed, 16 Oct 2013 11:53:20 +0200 (CEST)
Subject: Nginx Webdav & POST method
In-Reply-To: <770690612.403571735.1381916312436.JavaMail.root@zimbra23-e3.priv.proxad.net>
Message-ID: <517031284.403622188.1381917200395.JavaMail.root@zimbra23-e3.priv.proxad.net>


Hello,

We have an appliance which is trying to perform POST methods against the Nginx server.
However, this doesn't seem to be supported.
Could you please confirm that?
And is there a workaround to allow POST requests?

Regards,
Smana


From nmilas at noa.gr Wed Oct 16 10:32:55 2013
From: nmilas at noa.gr (Nikolaos Milas)
Date: Wed, 16 Oct 2013 13:32:55 +0300
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <20131014144737.GB21524@spruce.wiehl.oeko.net>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
<52595306.8030100@noa.gr> <20131014144737.GB21524@spruce.wiehl.oeko.net>
Message-ID: <525E6B57.30509@noa.gr>

On 14/10/2013 5:47 PM, Toni Mueller wrote:

> did you investigate disk I/O?

Hi again,

Thanks for your suggestions (see below on that).

In the meantime, we have increased CPU power to 4 cores and the behavior
of the server is much better.

I found that the server was hitting a bottleneck (in php-fpm) because the
microcache was effectively NOT being used: most pages were returning
codes 303 and 502, and these return codes are not included in
fastcgi_cache_valid by default. When I set:

fastcgi_cache_valid 200 301 302 303 502 3s;

then I saw immediate performance gains, and the Unix load dropped to
almost 0 (from 100 - not a typo) under load.
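For reference, the microcache setup I'm describing boils down to something like this (a sketch; the cache path, zone name and php-fpm socket are illustrative):

```nginx
http {
    fastcgi_cache_path /var/cache/nginx/micro levels=1:2
                       keys_zone=microcache:10m max_size=1g inactive=1m;

    server {
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php-fpm.sock;
            include fastcgi_params;

            fastcgi_cache microcache;
            fastcgi_cache_key $scheme$host$request_uri;
            # cache the 303/502 responses too, so php-fpm is not hit
            # on every request
            fastcgi_cache_valid 200 301 302 303 502 3s;
        }
    }
}
```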

I used iostat during a load test and I didn't see any serious stress on
I/O. The worst (max load) recorded entry is:

==========================================================================================================
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
          85.43   0.00    12.96     0.38    0.00   1.23

Device:  rrqm/s  wrqm/s   r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
vda        0.00  136.50  0.00   21.20     0.00  1260.00     59.43      1.15  54.25   3.92   8.30
dm-0       0.00    0.00  0.00  157.50     0.00  1260.00      8.00     13.39  85.04   0.53   8.29
dm-1       0.00    0.00  0.00    0.00     0.00     0.00      0.00      0.00   0.00   0.00   0.00
==========================================================================================================

Can you see a serious problem here? (I am not an expert, but, judging
from what I've read on the Internet, it should not be bad.)

Now my problem is that performance seems to be capped at around
1200 req/sec (which is not too bad, anyway), although CPU and
memory remain ample throughout the tests. Increasing the stress load
beyond that (I am using tsung for load testing) only results in a
growing number of "error_connect_emfile" errors.

See results of a test attached. (100 users arriving per second for 5
minutes (with max 10000 users), each of them hitting the homepage 100
times. Details of the test at the bottom of this mail.)

My research suggests this is the result of file descriptor exhaustion,
but I could not find the root cause. The following all seem OK:

# cat /proc/sys/fs/file-max
592940
# ulimit -n
200000
# ulimit -Hn
200000
# ulimit -Sn
200000
# grep nofile /etc/security/limits.conf
* - nofile 200000
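(One more check worth noting: the shell's ulimit does not necessarily apply to a daemon, so it can be useful to look at the limit the running workers actually inherited. An untested sketch; for nginx, substitute a worker PID, e.g. from pgrep -f "nginx: worker".)

```shell
# Show the file-descriptor limit a process actually inherited;
# /proc/self/limits is used here for illustration.
grep 'Max open files' /proc/self/limits
```

nginx also has a worker_rlimit_nofile directive for raising this without touching system-wide limits.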

Could you please guide me on how to resolve this issue? What is the real
bottleneck here, and how can I overcome it?

My config remains as was initially posted (it can also be seen here:
https://www.ruby-forum.com/topic/4417776), with the difference of:
"worker_processes 4" (since we now have 4 CPU cores).

Please advise.

============================= tsung.xml <start>
=============================

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">

<tsung loglevel="debug" dumptraffic="false" version="1.0">

  <clients>
    <client host="localhost" use_controller_vm="true" maxusers="10000"/>
  </clients>

  <servers>
    <server host="www.example.com" port="80" type="tcp"></server>
  </servers>

  <load duration="5" unit="minute">
    <arrivalphase phase="1" duration="5" unit="minute">
      <users arrivalrate="100" unit="second"/>
    </arrivalphase>
  </load>

  <sessions>
    <session probability="100" name="hit_en_homepage" type="ts_http">
      <for from="1" to="100" var="i">
        <request><http url='/' version='1.1' method='GET'></http></request>
        <thinktime random='true' value='1'/>
      </for>
    </session>
  </sessions>

</tsung>

============================== tsung.xml <end>
===============================

Thanks and Regards,
Nick

-------------- next part --------------
A non-text attachment was scrubbed...
Name: graphes-Perfs-rate_tn.png
Type: image/png
Size: 3023 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131016/64ab8551/attachment-0005.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graphes-Users_Arrival-rate_tn.png
Type: image/png
Size: 3530 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131016/64ab8551/attachment-0006.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graphes-Users-simultaneous_tn.png
Type: image/png
Size: 2924 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131016/64ab8551/attachment-0007.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graphes-Errors-rate_tn.png
Type: image/png
Size: 3370 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131016/64ab8551/attachment-0008.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graphes-Perfs-mean_tn.png
Type: image/png
Size: 3223 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131016/64ab8551/attachment-0009.png>

From mdounin at mdounin.ru Wed Oct 16 10:47:39 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 16 Oct 2013 14:47:39 +0400
Subject: Config Mail Proxy for POP3/SMTP microsoft exchange
In-Reply-To: <728ec0b019d65f776e3a1a6d547a4948.NginxMailingListEnglish@forum.nginx.org>
References: <728ec0b019d65f776e3a1a6d547a4948.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131016104739.GS2144@mdounin.ru>

Hello!

On Wed, Oct 16, 2013 at 03:43:53AM -0400, hcmnttan wrote:

> Hi there,
>
> Does NGINX support Microsoft Exchange POP3 / SMTP?
> If yes, can anyone help me configure NGINX as a reverse proxy for MS Exchange.
>
> I followed the link http://wiki.nginx.org/ImapProxyExample but I still
> don't understand where to put the target POP3/IMAP server's IP.

Backend server IP address should be returned by auth_http in the
Auth-Server header. See authentication protocol description here:

http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol
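For example, a successful response from the auth_http script carries the backend address roughly like this (the address and port below are illustrative):

```
HTTP/1.0 200 OK
Auth-Status: OK
Auth-Server: 192.0.2.10
Auth-Port: 110
```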

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Wed Oct 16 11:13:53 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 16 Oct 2013 15:13:53 +0400
Subject: Re: Nginx Webdav & POST method
In-Reply-To: <517031284.403622188.1381917200395.JavaMail.root@zimbra23-e3.priv.proxad.net>
References: <770690612.403571735.1381916312436.JavaMail.root@zimbra23-e3.priv.proxad.net>
<517031284.403622188.1381917200395.JavaMail.root@zimbra23-e3.priv.proxad.net>
Message-ID: <20131016111352.GT2144@mdounin.ru>

Hello!

On Wed, Oct 16, 2013 at 11:53:20AM +0200, smainklh at free.fr wrote:

> We have an appliance which is trying to perform POST methods against the Nginx server.
> However, this doesn't seem to be supported.
> Could you please confirm that?
> And is there a workaround to allow POST requests?

POST requests are more or less undefined in WebDAV (apart from
relatively new RFC5995, which defines a discovery mechanism
through which servers can advertise support for POST requests with
"add collection member" semantics).

What do you expect to happen on POST with WebDAV resources?

--
Maxim Dounin
http://nginx.org/en/donation.html


From smainklh at free.fr Wed Oct 16 11:32:19 2013
From: smainklh at free.fr (smainklh at free.fr)
Date: Wed, 16 Oct 2013 13:32:19 +0200 (CEST)
Subject: Re: Nginx Webdav & POST method
In-Reply-To: <20131016111352.GT2144@mdounin.ru>
Message-ID: <2041990160.403882993.1381923139749.JavaMail.root@zimbra23-e3.priv.proxad.net>

Thank you Maxim,

Actually it's a video encoding appliance which seems to push files with POST requests.
Please find below the error logs :

2013/10/16 09:19:14 [error] 17204#0: *237 no user/password was provided for basic authentication, client: x.x.x.x, server: localhost, request: "PROPFIND /864/ HTTP/1.1", host: "x.x.x.x"

2013/10/16 09:19:27 [error] 17204#0: *253 no user/password was provided for basic authentication, client: x.x.x.x, server: localhost, request: "PROPFIND /864/ HTTP/1.1", host: "x.x.x.x"

2013/10/16 09:20:23 [error] 17204#0: *282 no user/password was provided for basic authentication, client: x.x.x.x, server: localhost, request: "OPTIONS /864/ HTTP/1.1", host: "x.x.x.x"

2013/10/16 09:20:33 [error] 17204#0: *283 no user/password was provided for basic authentication, client: x.x.x.x, server: localhost, request: "OPTIONS /864/ HTTP/1.1", host: "x.x.x.x"

2013/10/16 09:20:38 [error] 17204#0: *284 user "(" was not found in "/etc/nginx/.864_htpasswd", client: x.x.x.x, server: localhost, request: "OPTIONS /864/ HTTP/1.1", host: "x.x.x.x"

2013/10/16 09:20:39 [error] 17204#0: *285 no user/password was provided for basic authentication, client: x.x.x.x, server: localhost, request: "POST /864/index1_00001.ts HTTP/1.1", host: "x.x.x.x"

2013/10/16 09:20:39 [error] 17204#0: *286 no user/password was provided for basic authentication, client: x.x.x.x, server: localhost, request: "OPTIONS /864/ HTTP/1.1", host: "95.81.159.200"

2013/10/16 09:20:39 [error] 17204#0: *287 no user/password was provided for basic authentication, client: x.x.x.x, server: localhost, request: "POST /864/index2_00001.ts HTTP/1.1", host: "x.x.x.x"

2013/10/16 09:20:41 [error] 17204#0: *288 no user/password was provided for basic authentication, client: x.x.x.x, server: localhost, request: "POST /864/index1_00001.ts HTTP/1.1", host: "x.x.x.x"

Don't worry about the authentication errors; that's a separate issue.


Smana




----- Original Message -----
From: "Maxim Dounin" <mdounin at mdounin.ru>
To: nginx at nginx.org
Sent: Wednesday, 16 October 2013 13:13:53
Subject: Re: Nginx Webdav & POST method

Hello!

On Wed, Oct 16, 2013 at 11:53:20AM +0200, smainklh at free.fr wrote:

> We have an appliance which is trying to perform POST methods against the Nginx server.
> However, this doesn't seem to be supported.
> Could you please confirm that?
> And is there a workaround to allow POST requests?

POST requests are more or less undefined in WebDAV (apart from
relatively new RFC5995, which defines a discovery mechanism
through which servers can advertise support for POST requests with
"add collection member" semantics).

What do you expect to happen on POST with WebDAV resources?

--
Maxim Dounin
http://nginx.org/en/donation.html

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


From mdounin at mdounin.ru Wed Oct 16 11:35:13 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 16 Oct 2013 15:35:13 +0400
Subject: Re: Nginx Webdav & POST method
In-Reply-To: <2041990160.403882993.1381923139749.JavaMail.root@zimbra23-e3.priv.proxad.net>
References: <20131016111352.GT2144@mdounin.ru>
<2041990160.403882993.1381923139749.JavaMail.root@zimbra23-e3.priv.proxad.net>
Message-ID: <20131016113513.GV2144@mdounin.ru>

Hello!

On Wed, Oct 16, 2013 at 01:32:19PM +0200, smainklh at free.fr wrote:

> Actually it's a video encoding appliance which seems to push files with POST requests.

It seems that the appliance needs to be fixed to use WebDAV properly;
there is the PUT method for putting files.

--
Maxim Dounin
http://nginx.org/en/donation.html


From smainklh at free.fr Wed Oct 16 12:08:38 2013
From: smainklh at free.fr (smainklh at free.fr)
Date: Wed, 16 Oct 2013 14:08:38 +0200 (CEST)
Subject: Re: Nginx Webdav & POST method
In-Reply-To: <20131016113513.GV2144@mdounin.ru>
Message-ID: <1506438817.403951684.1381925318049.JavaMail.root@zimbra23-e3.priv.proxad.net>

Thanks Maxim,
I'll contact their support in order to understand its behavior.

See you,
Smana

----- Original Message -----
From: "Maxim Dounin" <mdounin at mdounin.ru>
To: nginx at nginx.org
Sent: Wednesday, 16 October 2013 13:35:13
Subject: Re: Nginx Webdav & POST method

Hello!

On Wed, Oct 16, 2013 at 01:32:19PM +0200, smainklh at free.fr wrote:

> Actually it's a video encoding appliance which seems to push files with POST requests.

It seems that the appliance needs to be fixed to use WebDAV properly;
there is the PUT method for putting files.

--
Maxim Dounin
http://nginx.org/en/donation.html



From contact at jpluscplusm.com Wed Oct 16 13:40:42 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Wed, 16 Oct 2013 14:40:42 +0100
Subject: Nginx Webdav & POST method
In-Reply-To: <1506438817.403951684.1381925318049.JavaMail.root@zimbra23-e3.priv.proxad.net>
References: <20131016113513.GV2144@mdounin.ru>
<1506438817.403951684.1381925318049.JavaMail.root@zimbra23-e3.priv.proxad.net>
Message-ID: <CAKsTx7DuH6JemQNRffrE0WRn8v3Yp_PGoHGexDrP68R3naf26g@mail.gmail.com>

On 16 Oct 2013 13:09, <smainklh at free.fr> wrote:
>
> Thanks Maxim,
> I'll contact their support in order to understand its behavior.

If you discover that it does indeed use POSTs in an nginx-incompatible way,
you could use nginx to hack the request into something usable. [ NB I'd
only do this for an absolutely immutable appliance; in any other situation
I'd personally tell the devs their code was broken and we couldn't help:
don't inherit other people's technical debt without a commitment to a fix! ]

There's a directive (proxy_method?) which changes the verb when used in a
proxy_pass'd context.

You could just have a double pass through nginx, with the
publicly-listening server{} solely being responsible for doing s/POST/PUT/
, before proxy_pass'ing to the actual webdav server via a
127.0.0.0/8 address. Use a map to define the verb, I suggest.
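The double-pass idea might look roughly like this (an untested sketch; the loopback port, storage path, and the decision to rewrite every verb statically are my assumptions, reasonable if the appliance only ever sends POSTs):

```nginx
# Public listener: turn the appliance's POSTs into PUTs, then hand
# off to the real WebDAV server on a loopback address.
server {
    listen 80;
    location / {
        proxy_method PUT;                  # the s/POST/PUT/ step
        proxy_pass   http://127.0.0.1:8080;
    }
}

# Internal WebDAV server (hypothetical storage path).
server {
    listen 127.0.0.1:8080;
    location / {
        root        /var/www/dav;
        dav_methods PUT DELETE MKCOL COPY MOVE;
    }
}
```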

If it's not clear from the above how to do this, let me know and I'll run
up a test and guide you towards some config. I suggest it's not very
difficult to do, however ;-)

Yes, this is an utterly horrible hack. No, I have never used it in
production. Yes, there is a lie hidden in this paragraph.

Cheers,
Jonathan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131016/e1ae52ed/attachment.html>

From smainklh at free.fr Wed Oct 16 13:50:47 2013
From: smainklh at free.fr (smainklh at free.fr)
Date: Wed, 16 Oct 2013 15:50:47 +0200 (CEST)
Subject: Nginx Webdav & POST method
In-Reply-To: <CAKsTx7DuH6JemQNRffrE0WRn8v3Yp_PGoHGexDrP68R3naf26g@mail.gmail.com>
Message-ID: <650702470.404141180.1381931447316.JavaMail.root@zimbra23-e3.priv.proxad.net>

Lol, thanks Jonathan.
I'll let you know what the devs will reply to this issue.

You're right, this should be fixed on their side, as Nginx is RFC compliant ^^

But your hack can be useful under certain circumstances :p

Regards,
Smana


----- Original Message -----
From: "Jonathan Matthews" <contact at jpluscplusm.com>
To: nginx at nginx.org
Sent: Wednesday, 16 October 2013 15:40:42
Subject: Re: Nginx Webdav & POST method

On 16 Oct 2013 13:09, < smainklh at free.fr > wrote:
>
> Thanks Maxim,
> I'll contact their support in order to understand its behavior.

If you discover that it does indeed use POSTs in an nginx-incompatible way, you could use nginx to hack the request into something usable. [ NB I'd only do this for an absolutely immutable appliance; in any other situation I'd personally tell the devs their code was broken and we couldn't help: don't inherit other people's technical debt without a commitment to a fix! ]

There's a directive (proxy_method?) which changes the verb when used in a proxy_pass'd context.

You could just have a double pass through nginx, with the publicly-listening server{} solely being responsible for doing s/POST/PUT/ , before proxy_pass'ing to the actual webdav server via a 127.0.0.0/8 address. Use a map to define the verb, I suggest.

If it's not clear from the above how to do this, let me know and I'll run up a test and guide you towards some config. I suggest it's not very difficult to do, however ;-)

Yes, this is an utterly horrible hack. No, I have never used it in production. Yes, there is a lie hidden in this paragraph.

Cheers,
Jonathan

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


From nginx-forum at nginx.us Wed Oct 16 14:25:25 2013
From: nginx-forum at nginx.us (gaspy)
Date: Wed, 16 Oct 2013 10:25:25 -0400
Subject: SSL certificate not loaded
In-Reply-To: <20131015154258.GP2144@mdounin.ru>
References: <20131015154258.GP2144@mdounin.ru>
Message-ID: <a75a47e28ed1815206cbf2cd1bb4c135.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------

> > I have another server block just for redirect, I disabled SSL on it
> but the
> > problem persists.
> > Here's how the other block looks like:
> >
> > server
> > {
> > listen 80;
> > #listen 443 ssl;
> > server_name mydomain.com;
> > return 301 $scheme://www.mydomain.com$request_uri;
> > }
>
> If the problem persists, it means that you either didn't reload
> the configuration or there is one more server{} block. Just for
> testing you may want to configure ssl_certificate at http{} level.

Maxim, it works now. I re-enabled SSL on this redirection server block and
added the certificates to it. Reloaded and all is fine.
It's strange, because previously that server was listening only on port 80
(note that the 443 listen directive was commented out).
Anyway, all is well now, thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243713,243764#msg-243764


From nmilas at noa.gr Wed Oct 16 16:07:29 2013
From: nmilas at noa.gr (Nikolaos Milas)
Date: Wed, 16 Oct 2013 19:07:29 +0300
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <525E6B57.30509@noa.gr>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
<52595306.8030100@noa.gr> <20131014144737.GB21524@spruce.wiehl.oeko.net>
<525E6B57.30509@noa.gr>
Message-ID: <525EB9C1.7070900@noa.gr>

On 16/10/2013 1:32 PM, Nikolaos Milas wrote:

> Now my problem is that there seems to be a limit of performance...
>
> Increasing stress load more than that (I am using tsung for load
> testing), results only to increasing "error_connect_emfile" errors.

I have been trying to resolve this behavior, and I increased the file
descriptor limit to 400,000:

# ulimit -n
400000

since:

# cat /proc/sys/fs/file-max
592940
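For reference, one persistent way to raise the per-process descriptor limit is via limits.conf (a sketch only; the user name "nginx" and the exact file location are assumptions that vary by distro, and nginx itself also needs worker_rlimit_nofile set accordingly):

```conf
# /etc/security/limits.conf additions (sketch; "nginx" is the worker user)
nginx  soft  nofile  400000
nginx  hard  nofile  400000
```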

Now, I am running the following test: X number of users per sec visit
the homepage and each one of them refreshes the page 4 times (at random
intervals).

Although the test scales OK until 500 users per sec, then
"error_connect_emfile" errors start again and performance deteriorates.
See the attached comparative chart.

So, I have two questions:

1. Is there a way we can tweak settings to make the web server scale
gracefully up to the limit of its resources (and not deteriorate
performance) as load increases? Can we leverage additional RAM (the
box always uses up to 3.5 GB RAM, despite the load, and despite the
fact that the VM now has 6 GB)?
2. If not, how can we safeguard the web server by setting a suitable
limit which cannot be surpassed to cause performance deterioration?

Please advise.

Thanks and regards,
Nick
-------------- next part --------------
A non-text attachment was scrubbed...
Name: compare.png
Type: image/png
Size: 24032 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131016/42adbebf/attachment-0001.png>

From scott_ribe at elevated-dev.com Wed Oct 16 16:10:00 2013
From: scott_ribe at elevated-dev.com (Scott Ribe)
Date: Wed, 16 Oct 2013 10:10:00 -0600
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <525EB9C1.7070900@noa.gr>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
<52595306.8030100@noa.gr> <20131014144737.GB21524@spruce.wiehl.oeko.net>
<525E6B57.30509@noa.gr> <525EB9C1.7070900@noa.gr>
Message-ID: <51BECB3D-2EB3-497E-B097-A1FCF5745371@elevated-dev.com>

On Oct 16, 2013, at 10:07 AM, Nikolaos Milas <nmilas at noa.gr> wrote:

> 2. If not, how can we safeguard the web server by setting a suitable
> limit which cannot be surpassed to cause performance deterioration?

Have you considered not having vastly more worker processes than you have cores? (IIRC, you have configured things that way...)

--
Scott Ribe
scott_ribe at elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





From nmilas at noa.gr Wed Oct 16 16:16:21 2013
From: nmilas at noa.gr (Nikolaos Milas)
Date: Wed, 16 Oct 2013 19:16:21 +0300
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <51BECB3D-2EB3-497E-B097-A1FCF5745371@elevated-dev.com>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
<52595306.8030100@noa.gr> <20131014144737.GB21524@spruce.wiehl.oeko.net>
<525E6B57.30509@noa.gr> <525EB9C1.7070900@noa.gr>
<51BECB3D-2EB3-497E-B097-A1FCF5745371@elevated-dev.com>
Message-ID: <525EBBD5.40601@noa.gr>

On 16/10/2013 7:10 PM, Scott Ribe wrote:

> Have you considered not having vastly more worker processes than you have cores? (IIRC, you have configured things that way...)

I have (4 CPU cores and):

worker_processes 4;
worker_rlimit_nofile 400000;

events {
    worker_connections 8192;
    multi_accept on;
    use epoll;
}

Any ideas will be appreciated!

Nick


From scott_ribe at elevated-dev.com Wed Oct 16 16:21:41 2013
From: scott_ribe at elevated-dev.com (Scott Ribe)
Date: Wed, 16 Oct 2013 10:21:41 -0600
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <525EBBD5.40601@noa.gr>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
<52595306.8030100@noa.gr> <20131014144737.GB21524@spruce.wiehl.oeko.net>
<525E6B57.30509@noa.gr> <525EB9C1.7070900@noa.gr>
<51BECB3D-2EB3-497E-B097-A1FCF5745371@elevated-dev.com>
<525EBBD5.40601@noa.gr>
Message-ID: <BCB6CA0A-DEEA-430F-8D80-6D0D523E8BE3@elevated-dev.com>

On Oct 16, 2013, at 10:16 AM, Nikolaos Milas <nmilas at noa.gr> wrote:

> I have (4 CPU cores and):
>
> worker_processes 4;
> worker_rlimit_nofile 400000;
>
> events {
> worker_connections 8192;
> multi_accept on;
> use epoll;
> }

Then I have confused this thread with a different one. Sorry for the noise.

--
Scott Ribe
scott_ribe at elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





From nginx-forum at nginx.us Thu Oct 17 02:22:35 2013
From: nginx-forum at nginx.us (eiji-gravion)
Date: Wed, 16 Oct 2013 22:22:35 -0400
Subject: "A" Grade SSL/TLS with Nginx and StartSSL
In-Reply-To: <CADMhe6f82e+asyc6W6-WP2tkdqJyiGX3XH_wcfgTM7QmLopf6A@mail.gmail.com>
References: <CADMhe6f82e+asyc6W6-WP2tkdqJyiGX3XH_wcfgTM7QmLopf6A@mail.gmail.com>
Message-ID: <f56e41553eb9e148f1b0e636d0512a14.NginxMailingListEnglish@forum.nginx.org>

Piotr Sikora Wrote:
-------------------------------------------------------
> > ssl_session_timeout 5m;
>
> Not only doesn't it change anything (5m is the default value), but
> it's way too low a value to be used.
>
> Few examples from the real world:
>
> Google : 28h
> Facebook : 24h
> CloudFlare: 18h
> Twitter : 4h

Wouldn't having a timeout that high lower the effectiveness of forward
secrecy? You'd have the potential to be using the same key for up to 28
hours on Google.

I suppose most sites don't even rotate their session tickets that often, so
it probably doesn't matter for a lot of people.
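For context, a longer session timeout is configured like this (a sketch only; the values are illustrative, echoing the figures quoted above, not a recommendation from this thread, and the cache size should be tuned to traffic):

```nginx
ssl_session_cache   shared:SSL:20m;   # shared cache sized for longer-lived sessions
ssl_session_timeout 4h;               # e.g. the 4h figure reported for Twitter
```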

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243653,243779#msg-243779


From nginx-forum at nginx.us Thu Oct 17 02:42:24 2013
From: nginx-forum at nginx.us (hcmnttan)
Date: Wed, 16 Oct 2013 22:42:24 -0400
Subject: Config Mail Proxy for POP3/SMTP microsoft exchange
In-Reply-To: <20131016104739.GS2144@mdounin.ru>
References: <20131016104739.GS2144@mdounin.ru>
Message-ID: <dcb351992b24849ae1d441c6f6d7a644.NginxMailingListEnglish@forum.nginx.org>

Thanks for your response.
So we must first set up an HTTP authentication server (PHP or something) for
auth_http, right?
Could you tell me a little more about how to set up this HTTP auth URL?

In my example:
NGINX server IP : 192.168.1.100
POP3 / SMTP server IP : 192.168.1.101

Thanks.
Tan

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243743,243780#msg-243780


From nmilas at noa.gr Thu Oct 17 07:51:27 2013
From: nmilas at noa.gr (Nikolaos Milas)
Date: Thu, 17 Oct 2013 10:51:27 +0300
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <525EB9C1.7070900@noa.gr>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
<52595306.8030100@noa.gr> <20131014144737.GB21524@spruce.wiehl.oeko.net>
<525E6B57.30509@noa.gr> <525EB9C1.7070900@noa.gr>
Message-ID: <525F96FF.804@noa.gr>

On 16/10/2013 7:07 PM, Nikolaos Milas wrote:

> Although the test scales OK until 500 users per sec, then
> "error_connect_emfile" errors start again and performance
> deteriorates. See the attached comparative chart.

I resolved the "error_connect_emfile" errors by increasing the file
descriptors on the tsung machine. However, the behavior remains the same
(although no errors occur). I suspect that the problem may not be on the
nginx side but on the tsung box side: the latter may be unable to
generate a higher number of requests and handle the load.

So, I think this case might be considered "closed" until further testing
confirms findings (or rejects them).

Regards,
Nick


From mdounin at mdounin.ru Thu Oct 17 09:06:27 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 17 Oct 2013 13:06:27 +0400
Subject: Config Mail Proxy for POP3/SMTP microsoft exchange
In-Reply-To: <dcb351992b24849ae1d441c6f6d7a644.NginxMailingListEnglish@forum.nginx.org>
References: <20131016104739.GS2144@mdounin.ru>
<dcb351992b24849ae1d441c6f6d7a644.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131017090626.GE2144@mdounin.ru>

Hello!

On Wed, Oct 16, 2013 at 10:42:24PM -0400, hcmnttan wrote:

> Thanks for your respond.
> So we must 1st setup an HTTP authentication server (PHP or something ) for
> auth_http, right ?

Yes.

> Could you tell me a little more how to setup this HTTP authen URL ?
>
> In my example:
> NGINX server IP : 192.168.1.100
> POP3 / SMTP server IP : 192.168.1.101

There are a couple of examples here:

http://wiki.nginx.org/Configuration#Mail_examples

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Thu Oct 17 09:55:46 2013
From: nginx-forum at nginx.us (hcmnttan)
Date: Thu, 17 Oct 2013 05:55:46 -0400
Subject: Config Mail Proxy for POP3/SMTP microsoft exchange
In-Reply-To: <20131017090626.GE2144@mdounin.ru>
References: <20131017090626.GE2144@mdounin.ru>
Message-ID: <f5da003b147534f5cdefde00ee1285fc.NginxMailingListEnglish@forum.nginx.org>

I found that link before.
What I wonder is where "localhost:9000/cgi-bin/auth;" comes
from. Is it an HTTP URL?

I don't know how to define the "localhost:9000/cgi-bin/auth" URL for auth_http.
Sorry if my question is silly. I'm very new to NGINX.

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243743,243787#msg-243787


From mdounin at mdounin.ru Thu Oct 17 12:03:46 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 17 Oct 2013 16:03:46 +0400
Subject: Config Mail Proxy for POP3/SMTP microsoft exchange
In-Reply-To: <f5da003b147534f5cdefde00ee1285fc.NginxMailingListEnglish@forum.nginx.org>
References: <20131017090626.GE2144@mdounin.ru>
<f5da003b147534f5cdefde00ee1285fc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131017120346.GF2144@mdounin.ru>

Hello!

On Thu, Oct 17, 2013 at 05:55:46AM -0400, hcmnttan wrote:

> I found that link before,
> Things I wonder is that where "localhost:9000/cgi-bin/auth;" is coming
> from? Is it a http URL ?
>
> I don't know how to define "localhost:9000/cgi-bin/auth" URL for auth_http
> Sorry if my question is so silly. I'm very new to NGINX.

It's the URL of an auth_http script - a script you are expected to
write that checks passwords and returns the appropriate backend for
your system. Try looking at the other configuration examples at the
link provided to see complete examples with some simple auth scripts
included.
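The overall shape of such a setup is roughly this (an untested sketch; the auth script URL is the one from the wiki examples, and the script itself still has to be written to answer with Auth-Status, Auth-Server, and Auth-Port response headers):

```nginx
mail {
    # URL of the hand-written auth script (from the wiki examples).
    auth_http localhost:9000/cgi-bin/auth;

    server {
        listen   110;       # nginx proxies POP3 here
        protocol pop3;
    }
    server {
        listen   25;        # nginx proxies SMTP here
        protocol smtp;
    }
}
```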

--
Maxim Dounin
http://nginx.org/en/donation.html


From rob.stradling at comodo.com Thu Oct 17 14:05:14 2013
From: rob.stradling at comodo.com (Rob Stradling)
Date: Thu, 17 Oct 2013 15:05:14 +0100
Subject: "A" Grade SSL/TLS with Nginx and StartSSL
In-Reply-To: <CADMhe6eQV=J27YqSHhMOPMedrBQnDOOxFp7MKdxu63Rv7oH9rQ@mail.gmail.com>
References: <edcdcea48aa956b9c30c4a900ecf4ca6@linuxwall.info>
<CADMhe6f82e+asyc6W6-WP2tkdqJyiGX3XH_wcfgTM7QmLopf6A@mail.gmail.com>
<dbb403bf42c4898bf52343e9be5024aa@linuxwall.info>
<CADMhe6eQV=J27YqSHhMOPMedrBQnDOOxFp7MKdxu63Rv7oH9rQ@mail.gmail.com>
Message-ID: <525FEE9A.7040902@comodo.com>

On 15/10/13 23:00, Piotr Sikora wrote:
<snip>
>> Because someone else might use DSA certificates.
>
> It's ECDSA, not DSA... And I'm yet to see a site that offers ECDSA
> instead of RSA certificate.

There are some sites that offer an ECDSA cert where possible, but
fallback to an RSA cert when the client doesn't offer any ECDSA ciphers.
AFAIK, Apache httpd is the only major webserver that can currently be
configured this way.
I expect to see this configuration become more common in the (near?)
future, given that some commercial CAs are now actively selling ECDSA certs.

Nginx currently only allows one cert to be configured, and I too am yet
to see a site that offers _only_ an ECDSA cert. I expect this is due to
the large proportion (I estimate ~20%) of clients that support RSA certs
but not ECDSA certs.

I'd love to see the ECDSA cert + RSA cert feature implemented in Nginx
too. OpenSSL does most of the hard work already. I've written a PoC
patch, but I'll post it to a different thread.

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online


From rob.stradling at comodo.com Thu Oct 17 14:05:17 2013
From: rob.stradling at comodo.com (Rob Stradling)
Date: Thu, 17 Oct 2013 15:05:17 +0100
Subject: [PATCH] Re: RSA+DSA+ECC bundles
In-Reply-To: <006701ce048e$d1b9e090$752da1b0$@slo-tech.com>
References: <006701ce048e$d1b9e090$752da1b0$@slo-tech.com>
Message-ID: <525FEE9D.5090302@comodo.com>

On 06/02/13 17:24, Primoz Bratanic wrote:
> Hi,
>
> Apache supports specifying multiple certificates (different types) for same
> host in line with OpenSSL support (RSA, DSA, ECC). This allows using ECC key
> exchange methods with clients that support it and it's backwards compatible.
> I wonder how much work would it be to add support for this to nginx. Is it
> just allowing specifying 2-3 certificates (and checking they have different
> key type) + adding support for returning proper key chain or are the any
> other obvious roadblocks (that are not obvious to me).

Here's a first stab at a patch. I hope this is a useful starting point
for getting this feature added to Nginx. :-)

To specify an RSA cert plus an ECC cert, use...
ssl_certificate my_rsa.crt my_ecc.crt;
ssl_certificate_key my_rsa.key my_ecc.key;
ssl_prefer_server_ciphers on;
Also, configure ssl_ciphers to prefer at least 1 ECDSA cipher and permit
at least 1 RSA cipher.

I think DSA certs should work too, but I've not tested this.


Issues I'm aware of with this patch:

- It doesn't check that each of the certs has a different key type
(but perhaps it should). If you specify multiple certs with the same
algorithm, all but the last one will be ignored.

- The certs and keys need to be specified in the correct order. If
you specify "my_rsa.crt my_ecc.crt" and "my_ecc.key my_rsa.key", Nginx
will start but it won't be able to complete any SSL handshakes. This
could be improved.

- It doesn't add the new feature to mail_ssl_module. Perhaps it should.

- The changes I made to ngx_conf_set_str_array_slot() work for me,
but do they break anything?

- An RSA cert and an ECC cert might well be issued by different CAs.
On Apache httpd, you have to use SSLCACertificatePath to persuade
OpenSSL to send different Intermediate certs for each one.
Nginx doesn't currently have an equivalent directive, and Maxim has
previously said it's unlikely to be added [1].
I haven't researched this properly yet, but I think it might be possible
to do "certificate path" in memory (i.e. without syscalls and disk
access on each certificate check) using the OpenSSL X509_LOOKUP API.

- I expect Maxim will have other comments. :-)


[1] http://forum.nginx.org/read.php?2,229129,229151

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx_multiple_certs.patch
Type: text/x-patch
Size: 11873 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131017/8588a6f8/attachment.bin>

From rob.stradling at comodo.com Thu Oct 17 14:07:00 2013
From: rob.stradling at comodo.com (Rob Stradling)
Date: Thu, 17 Oct 2013 15:07:00 +0100
Subject: [PATCH] Re: RSA+DSA+ECC bundles
In-Reply-To: <525FEE9D.5090302@comodo.com>
References: <006701ce048e$d1b9e090$752da1b0$@slo-tech.com>
<525FEE9D.5090302@comodo.com>
Message-ID: <525FEF04.2090104@comodo.com>

Hmmm, I guess I should've posted this to nginx-devel. Reposting...

On 17/10/13 15:05, Rob Stradling wrote:
> On 06/02/13 17:24, Primoz Bratanic wrote:
>> Hi,
>>
>> Apache supports specifying multiple certificates (different types) for
>> same
>> host in line with OpenSSL support (RSA, DSA, ECC). This allows using
>> ECC key
>> exchange methods with clients that support it and it's backwards
>> compatible.
>> I wonder how much work would it be to add support for this to nginx.
>> Is it
>> just allowing specifying 2-3 certificates (and checking they have
>> different
>> key type) + adding support for returning proper key chain or are the any
>> other obvious roadblocks (that are not obvious to me).
>
> Here's a first stab at a patch. I hope this is a useful starting point
> for getting this feature added to Nginx. :-)
>
> To specify an RSA cert plus an ECC cert, use...
> ssl_certificate my_rsa.crt my_ecc.crt;
> ssl_certificate_key my_rsa.key my_ecc.key;
> ssl_prefer_server_ciphers on;
> Also, configure ssl_ciphers to prefer at least 1 ECDSA cipher and permit
> at least 1 RSA cipher.
>
> I think DSA certs should work too, but I've not tested this.
>
>
> Issues I'm aware of with this patch:
>
> - It doesn't check that each of the certs has a different key type
> (but perhaps it should). If you specify multiple certs with the same
> algorithm, all but the last one will be ignored.
>
> - The certs and keys need to be specified in the correct order. If
> you specify "my_rsa.crt my_ecc.crt" and "my_ecc.key my_rsa.key", Nginx
> will start but it won't be able to complete any SSL handshakes. This
> could be improved.
>
> - It doesn't add the new feature to mail_ssl_module. Perhaps it should.
>
> - The changes I made to ngx_conf_set_str_array_slot() work for me,
> but do they break anything?
>
> - An RSA cert and an ECC cert might well be issued by different CAs.
> On Apache httpd, you have to use SSLCACertificatePath to persuade
> OpenSSL to send different Intermediate certs for each one.
> Nginx doesn't currently have an equivalent directive, and Maxim has
> previously said it's unlikely to be added [1].
> I haven't researched this properly yet, but I think it might be possible
> to do "certificate path" in memory (i.e. without syscalls and disk
> access on each certificate check) using the OpenSSL X509_LOOKUP API.
>
> - I expect Maxim will have other comments. :-)
>
>
> [1] http://forum.nginx.org/read.php?2,229129,229151
>

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online
Office Tel: +44.(0)1274.730505
Office Fax: +44.(0)1274.730909
www.comodo.com

COMODO CA Limited, Registered in England No. 04058690
Registered Office:
3rd Floor, 26 Office Village, Exchange Quay,
Trafford Road, Salford, Manchester M5 3EQ

This e-mail and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they are
addressed. If you have received this email in error please notify the
sender by replying to the e-mail containing this attachment. Replies to
this email may be monitored by COMODO for operational or business
reasons. Whilst every endeavour is taken to ensure that e-mails are free
from viruses, no liability can be accepted and the recipient is
requested to use their own virus checking software.


From mdounin at mdounin.ru Thu Oct 17 15:18:31 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 17 Oct 2013 19:18:31 +0400
Subject: [PATCH] Re: RSA+DSA+ECC bundles
In-Reply-To: <525FEF04.2090104@comodo.com>
References: <006701ce048e$d1b9e090$752da1b0$@slo-tech.com>
<525FEE9D.5090302@comodo.com> <525FEF04.2090104@comodo.com>
Message-ID: <20131017151831.GJ2144@mdounin.ru>

Hello!

On Thu, Oct 17, 2013 at 03:07:00PM +0100, Rob Stradling wrote:

> Hmmm, I guess I should've posted this to nginx-devel. Reposting...

Answered there.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Fri Oct 18 13:07:21 2013
From: nginx-forum at nginx.us (ddutra)
Date: Fri, 18 Oct 2013 09:07:21 -0400
Subject: Fastcgi_cache + ngx_pagespeed
Message-ID: <ce08aa0a3b15504b19fdad93a241d853.NginxMailingListEnglish@forum.nginx.org>

Hi guys,

First of all, I am aware that this is not the place to get ngx_pagespeed
support. I am only coming here because I am this close to achieving the
performance I need, and this is the place where I have had the most success
with my questions so far. Forgive me if it is out of place.

Second, I would like to know if there is a NGINX workaround for this
problem, not a ngx_pagespeed solution.

So I was able to get all the optimization I need (aggressive) from
ngx_pagespeed and offload my static assets (after optimization) to a CDN
(pull origin). The only performance problem I have now is when serving
content that HIT's the fastcgi_cache.

It seems that ngx_pagespeed has to do its thing on the rendered output html
every time a request is made to the page. I thought the content cached by
fastcgi_cache was already the ngx_pagespeed-optimized version. Instead,
ngx_pagespeed seems to run in front of nginx's fastcgi_cache: it is
ngx_pagespeed that fetches the html from fastcgi_cache and passes it on.

Are you guys aware of any way to make fastcgi_cache cache the optimized
output, after everything is said and done?

I think there is a solution if I use varnish, but I do not want to add
another moving part to my setup.

Best regards.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243824,243824#msg-243824


From rainer at ultra-secure.de Fri Oct 18 15:34:22 2013
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Fri, 18 Oct 2013 17:34:22 +0200
Subject: Strange proxy_pass problem
Message-ID: <20131018173422.7aa1e66b@suse3>

Hi,


I recently upgraded a server from nginx 1.2.8 to 1.4.3 (on FreeBSD
amd64).

nginx is a reverse-proxy to apache, intended to serve static files
directly and pass all php requests zu apache - with one exception: the
default vhost on both nginx and apache.

It looks like this (on apache):

<VirtualHost _default_:8080>

    Alias /phpmyadmin "/usr/local/www/phpMyAdmin/"

    FastCgiExternalServer /home/www/fastcgi/www.server -socket www.sock -flush -idle-timeout 120
    Alias /php.fcgi /home/www/fastcgi/www.server

    <Directory "/usr/local/www/phpMyAdmin">
        AllowOverride None
        Options FollowSymLinks
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
(php is chrooted with php-fpm)

and then there's a "normal" vhost like this:

<VirtualHost *:8080>
    ServerAdmin rdudemotest2 at bla
    ServerName rdudemotest2.bla
    DocumentRoot /home/rdudemotest2/FTPROOT/htdocs/
    CustomLog /home/rdudemotest2/logs/nginx_access_log combined
    ErrorLog /home/rdudemotest2/logs/error_log
    <Directory /home/rdudemotest2/FTPROOT/htdocs/>
        AllowOverride All
    </Directory>
    FastCgiExternalServer /home/www/fastcgi/rdudemotest2.server -socket rdudemotest2.sock
    Alias /php.fcgi /home/www/fastcgi/rdudemotest2.server
</VirtualHost>

For nginx, I have in nginx.conf:

upstream apache8080 {
    server 127.0.0.1:8080;
    keepalive 16;
}


and then a default vhost like this:

server {
    listen 80 default_server;
    access_log /home/nginx/logs/default-access_log;
    error_log /home/nginx/logs/default-error_log;
    location / {
        include proxy.conf;
        proxy_pass http://127.0.0.1:8080;
    }
    location /phpmyadmin/ {
        include proxy.conf;
        proxy_pass http://127.0.0.1:8080;
    }
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}

and the vhost from above:

server {
    listen our.ip;
    server_name rdudemotest2.bla;
    access_log /home/rdudemotest2/logs/nginx_access_log;
    error_log /home/rdudemotest2/logs/nginx_error_log;
    root /home/rdudemotest2/FTPROOT/htdocs;
    location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|woff|mp3)$ {
        expires 24h;
    }
    location / {
        include proxy.conf;
        proxy_pass http://apache8080;
    }
}


Now, the problem is that while I can in principle access phpmyadmin,
(via http://our.ip/phpmyadmin/ - I can login, databases are displayed
etc.) the images aren't found anymore, because the requests for the
images end up at the non-default vhost rdudemotest2.

I haven't checked for a while, but I'm pretty sure this worked
previously.


Can anyone shed some light on this?




From nginx-forum at nginx.us Fri Oct 18 16:38:29 2013
From: nginx-forum at nginx.us (ddutra)
Date: Fri, 18 Oct 2013 12:38:29 -0400
Subject: Fastcgi_cache + ngx_pagespeed
In-Reply-To: <ce08aa0a3b15504b19fdad93a241d853.NginxMailingListEnglish@forum.nginx.org>
References: <ce08aa0a3b15504b19fdad93a241d853.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <a245e013b6fa4e31345f6651132753c0.NginxMailingListEnglish@forum.nginx.org>

I just looked into the topic a little more, and I believe it is not
possible.

I would have to put something in front of nginx (another nginx) or Varnish -
but that is a shame, since nginx's fastcgi_cache works so well.

Best regards.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243824,243834#msg-243834


From francis at daoine.org Fri Oct 18 16:50:23 2013
From: francis at daoine.org (Francis Daly)
Date: Fri, 18 Oct 2013 17:50:23 +0100
Subject: Strange proxy_pass problem
In-Reply-To: <20131018173422.7aa1e66b@suse3>
References: <20131018173422.7aa1e66b@suse3>
Message-ID: <20131018165023.GA2204@craic.sysops.org>

On Fri, Oct 18, 2013 at 05:34:22PM +0200, Rainer Duffner wrote:

Hi there,

> server {
> listen 80 default_server;

> server {
> listen our.ip;

If your nginx.conf has only those two server{} blocks with only those two
listen directives, then I would expect that every request that connects
to our.ip will be handled by the second block; and every request that
connects to any other IP address on the machine will be handled by the
first block.

> Now, the problem is that while I can in principle access phpmyadmin,
> (via http://our.ip/phpmyadmin/ - I can login, databases are displayed
> etc.) the images aren't found anymore, because the requests for the
> images end up at the non-default vhost rdudemotest2.

What happens if you remove the line "listen our.ip;", or replace it with
"listen 80;"?

If that doesn't fix everything, can you describe one http request which
does not do what you expect it to? And, in case it is not clear, describe
how what it does do is not what you expect.
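In other words, something along these lines (a sketch of the suggested listen change only; untested, with the rest of each block left as in your config):

```nginx
server {
    listen 80 default_server;       # handles requests for any unmatched name
    # (rest of the default block unchanged)
}

server {
    listen 80;                      # was "listen our.ip;"
    server_name rdudemotest2.bla;
    # (rest of the vhost unchanged)
}
```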

Best would probably be if you can show the "curl -i" or "curl -v" output.

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Fri Oct 18 21:12:41 2013
From: nginx-forum at nginx.us (grosser)
Date: Fri, 18 Oct 2013 17:12:41 -0400
Subject: Weak ETags and on-the-fly gzipping
In-Reply-To: <20130616020820.GA72282@mdounin.ru>
References: <20130616020820.GA72282@mdounin.ru>
Message-ID: <ff677e78a4f445df28d8189055ed3efa.NginxMailingListEnglish@forum.nginx.org>

Please, someone, implement weak ETags or a "do_not_strip_etags" option for
this; it's a big hit on our performance / page response time, and we cannot
simply replace it with Last-Modified.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240120,243845#msg-243845


From nginx-forum at nginx.us Fri Oct 18 21:19:42 2013
From: nginx-forum at nginx.us (grosser)
Date: Fri, 18 Oct 2013 17:19:42 -0400
Subject: Weak ETags and on-the-fly gzipping
In-Reply-To: <CAN5dZtZbye04B_XDQuUiU037AdMsKywGUpbnw4OPwbK4BPDWZA@mail.gmail.com>
References: <CAN5dZtZbye04B_XDQuUiU037AdMsKywGUpbnw4OPwbK4BPDWZA@mail.gmail.com>
Message-ID: <573776711195e2a11f3d7fad64f54b89.NginxMailingListEnglish@forum.nginx.org>

Here, take this patch and just apply it :)


--- nginx-1.3.8/src/http/modules/ngx_http_gzip_filter_module.c	2012-07-07 17:22:27.000000000 -0400
+++ nginx-1.3.8-weak-etags-shorter/src/http/modules/ngx_http_gzip_filter_module.c	2012-11-21 17:05:12.758389000 -0500
@@ -306,7 +306,15 @@

ngx_http_clear_content_length(r);
ngx_http_clear_accept_ranges(r);
- ngx_http_clear_etag(r);
+
+ /* Clear etags unless they're marked as weak (prefixed with 'W/') */
+ h = r->headers_out.etag;
+ if (h && !(h->value.len >= 3 &&
+ h->value.data[0] == 'W' &&
+ h->value.data[1] == '/' &&
+ h->value.data[2] == '"')) {
+ ngx_http_clear_etag(r);
+ }

return ngx_http_next_header_filter(r);
}
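For readers skimming the patch, the weak-ETag test it adds can be sketched
outside of C; the helper below is my illustration, not part of the patch:

```python
def is_weak_etag(value: bytes) -> bool:
    """True for weak validators, which start with W/ and an opening quote,
    e.g. b'W/"5f2-47a"'. The patch keeps these and clears the rest."""
    return len(value) >= 3 and value[:3] == b'W/"'

# A weak ETag survives the gzip filter; a strong one is still cleared:
assert is_weak_etag(b'W/"5f2-47a"')
assert not is_weak_etag(b'"5f2-47a"')
```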

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240120,243846#msg-243846


From mailtohemantkumar at gmail.com Sat Oct 19 02:09:37 2013
From: mailtohemantkumar at gmail.com (Hemant Kumar)
Date: Fri, 18 Oct 2013 19:09:37 -0700
Subject: Issue in configuring nginx for libwebsocket
Message-ID: <CAKdgOZ=5JQCeQM+HhCZf4fxFBp_vVcpFAFAU-OKY-Lvdoc5f2g@mail.gmail.com>

Hi All

I am a newbie with the nginx server. I am trying to get a websocket
configuration working on CentOS (Linux 2.6.32-358.18.1.el6.x86_64). Below
is my nginx config file:

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
            proxy_pass http://localhost:80;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_read_timeout 3600;
            proxy_send_timeout 3600;
        }

        #error_page  404  /404.html;
        #location / {
        #    root   html;
        #    index  index.html index.htm;
        #}
    }

    # HTTPS server
    #
    #server {
    #    listen       443;
    #    server_name  localhost;

    #    ssl on;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_timeout  5m;

    #    ssl_protocols  SSLv2 SSLv3 TLSv1;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}


With the above configuration, and with proxy_pass set, I get a 502 error
when I try to access ws://127.0.0.1:80 using the Chrome websocket client.

I would highly appreciate it if someone could give me the right pointer to
resolving this.

Thanks a ton

Hemant
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131018/563535d3/attachment.html>

From mailtohemantkumar at gmail.com Sat Oct 19 02:32:35 2013
From: mailtohemantkumar at gmail.com (Hemant Kumar)
Date: Fri, 18 Oct 2013 19:32:35 -0700
Subject: Issue in configuring nginx for libwebsocket
In-Reply-To: <CAKdgOZ=5JQCeQM+HhCZf4fxFBp_vVcpFAFAU-OKY-Lvdoc5f2g@mail.gmail.com>
References: <CAKdgOZ=5JQCeQM+HhCZf4fxFBp_vVcpFAFAU-OKY-Lvdoc5f2g@mail.gmail.com>
Message-ID: <CAKdgOZ=WXm0Q1frk7gFb-XYkXQL1BtpUgT0Z8EPdTKomnOARcg@mail.gmail.com>

I changed the configuration to :

location /hello {
    hello;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://localhost:80;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}



With a hello module compiled in, when I send the GET request with an
Upgrade header for the protocol switch, I no longer get the 502 Bad
Gateway error, but I end up with a normal 200 OK instead of the 101
protocol-switch response.

Please suggest where I am going wrong.

Thanks
Hemant
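For reference, a minimal websocket proxy block normally targets the port
where the backend websocket server actually listens; proxying to
localhost:80 from a server that itself listens on 80 just loops back to
nginx. A sketch, where the backend address and port 8080 are only assumed
placeholders:

```nginx
location /ws {
    # Assumption: the libwebsockets server listens here, NOT on nginx's
    # own port 80 (which would create a proxy loop and 502s).
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;

    # Forward the handshake headers so the backend can answer 101.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;

    proxy_read_timeout 3600;
    proxy_send_timeout 3600;
}
```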



On Fri, Oct 18, 2013 at 7:09 PM, Hemant Kumar
<mailtohemantkumar at gmail.com>wrote:

> Hi All
>
> I am a newbie with nginx server. I am trying to get websocket
> configuration working on cent-os.(linux 2.6.32-358.18.1.el6.x86_64).
> Following is my nginx config file;
>
> http {
> include mime.types;
> default_type application/octet-stream;
>
> #log_format main '$remote_addr - $remote_user [$time_local]
> "$request" '
> # '$status $body_bytes_sent "$http_referer" '
> # '"$http_user_agent" "$http_x_forwarded_for"';
>
> #access_log logs/access.log main;
>
> sendfile on;
> #tcp_nopush on;
>
> #keepalive_timeout 0;
> keepalive_timeout 65;
>
> #gzip on;
>
> server {
> listen 80;
> server_name localhost;
>
> #charset koi8-r;
>
> #access_log logs/host.access.log main;
>
> location / {
> root html;
> index index.html index.htm;
> proxy_pass http://localhost:80;
> proxy_http_version 1.1;
> proxy_set_header Upgrade $http_upgrade;
> proxy_set_header Connection "upgrade";
> proxy_set_header Host $host;
> proxy_read_timeout 3600;
> proxy_send_timeout 3600;
> }
>
> #error_page 404 /404.html;
> location / {
> # root html;
> # index index.html index.htm;
> # }
> #}
>
>
> # HTTPS server
> #
> #server {
> # listen 443;
> # server_name localhost;
>
> # ssl on;
> # ssl_certificate cert.pem;
> # ssl_certificate_key cert.key;
>
> # ssl_session_timeout 5m;
>
> # ssl_protocols SSLv2 SSLv3 TLSv1;
> # ssl_ciphers HIGH:!aNULL:!MD5;
> # ssl_prefer_server_ciphers on;
>
> # location / {
> # root html;
> # index index.html index.htm;
> # }
> #}
>
> }
>
>
> With the above configuration, and with proxy_pass set, I get a 502 error
> when I try to access ws://127.0.0.1:80 using the Chrome websocket client.
>
> I would highly appreciate it if someone could give me the right pointer
> to resolving this.
>
> Thanks a ton
>
> Hemant
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131018/f7da9d2b/attachment-0001.html>

From nginx-forum at nginx.us Sat Oct 19 09:36:54 2013
From: nginx-forum at nginx.us (hcmnttan)
Date: Sat, 19 Oct 2013 05:36:54 -0400
Subject: Config Mail Proxy for POP3/SMTP microsoft exchange
In-Reply-To: <20131017120346.GF2144@mdounin.ru>
References: <20131017120346.GF2144@mdounin.ru>
Message-ID: <23a970d6aebbcca2f3db603a428ce771.NginxMailingListEnglish@forum.nginx.org>

Thanks Max,

I could configure nginx to work for POP3, but with SMTP I can only do AUTH
LOGIN; when I send a test email, an error message appears (using telnet):
--------------------------------------------------
telnet 192.168.1.15 25

220 mailproxy ESMTP ready
auth login
334 VXNlcm5hbWU6
xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
334 UGFzc3dvcmQ6
xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
235 2.0.0 OK
mail from: nt.tan at abc.com.vn
250 2.1.0 Sender OK
rcpt to: nt.tan at abc.com.vn
250 2.1.5 Recipient OK
data
354 Start mail input; end with <CRLF>.<CRLF>
subject: Test mail
test
.
550 5.7.1 Client does not have permissions to send as this sender
--------------------------------------------------
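(For reference, the 334 challenges in the transcript above are just
base64-encoded prompts; decoding them, e.g. in Python, shows what the
server is asking for:)

```python
import base64

# The SMTP AUTH LOGIN exchange sends its prompts base64-encoded:
assert base64.b64decode("VXNlcm5hbWU6") == b"Username:"
assert base64.b64decode("UGFzc3dvcmQ6") == b"Password:"

# The client replies (the xxxxx lines in the transcript) are likewise
# base64 of the raw username and password.
```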

I can telnet successfully from the nginx host to the backend SMTP server.
Could you help?

Below is my config file:


--------------------------------------------------
nginx.conf

user nobody;
worker_processes 1;
error_log logs/error.log info;
pid run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    perl_modules perl/lib;
    perl_require mailauth.pm;

    server {
        location /auth {
            perl mailauth::handler;
        }
    }
}

mail {
    auth_http 127.0.0.1:80/auth;

    pop3_capabilities "TOP" "USER";
    smtp_capabilities "PIPELINING" "SIZE 10240000" "VRFY" "ETRN"
                      "ENHANCEDSTATUSCODES" "8BITMIME" "DSN";
    smtp_auth LOGIN;
    xclient off;

    server {
        listen 110;
        protocol pop3;
        proxy on;
    }

    server {
        listen 25;
        protocol smtp;
        proxy on;
    }
}
--------------------------------------------------

mailauth.pm

package mailauth;
use nginx;

our $auth_ok;
our $protocol_ports = {};
$cas = "172.16.3.22";
$protocol_ports->{'pop3'} = 110;
$protocol_ports->{'smtp'} = 25;

sub handler {
    my $r = shift;

    $r->header_out("Auth-Status", "OK");
    $r->header_out("Auth-Server", $cas);
    $r->header_out("Auth-Port",
                   $protocol_ports->{$r->header_in("Auth-Protocol")});
    $r->send_http_header("text/html");

    return OK;
}

1;
__END__

--------------------------------------------------


Thanks in advance
Tan.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243743,243856#msg-243856


From mdounin at mdounin.ru Sat Oct 19 10:04:53 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 19 Oct 2013 14:04:53 +0400
Subject: Config Mail Proxy for POP3/SMTP microsoft exchange
In-Reply-To: <23a970d6aebbcca2f3db603a428ce771.NginxMailingListEnglish@forum.nginx.org>
References: <20131017120346.GF2144@mdounin.ru>
<23a970d6aebbcca2f3db603a428ce771.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131019100453.GQ2144@mdounin.ru>

Hello!

On Sat, Oct 19, 2013 at 05:36:54AM -0400, hcmnttan wrote:

> Thanks Max,
>
> I could config NGINX work for POP3,
> But in SMTP, I just could do auth login only, when send a test email, an
> error message appear ( using telnet )
> --------------------------------------------------
> telnet 192.168.1.15 25
>
> 220 mailproxy ESMTP ready
> auth login
> 334 VXNlcm5hbWU6
> xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> 334 UGFzc3dvcmQ6
> xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> 235 2.0.0 OK
> mail from: nt.tan at abc.com.vn
> 250 2.1.0 Sender OK
> rcpt to: nt.tan at abc.com.vn
> 250 2.1.5 Recipient OK
> data
> 354 Start mail input; end with <CRLF>.<CRLF>
> subject: Test mail
> test
> .
> 550 5.7.1 Client does not have permissions to send as this sender
> --------------------------------------------------
>
> I could do a test telnet from nginx to backend SMTP server. Could you help
> ??
> Below is my config file

It's an error from your backend server. Please note that nginx
doesn't try to authenticate against SMTP backends. Instead, it
uses xclient to pass username to a backend, but in your config
it's switched off.
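A sketch of the corresponding change, assuming the Exchange backend is
willing to trust XCLIENT from the proxy (an illustration, not a tested
config):

```nginx
mail {
    auth_http 127.0.0.1:80/auth;

    smtp_auth LOGIN;

    # Pass the authenticated LOGIN name and the client IP on to the
    # backend, so it can authorize the MAIL FROM sender itself.
    xclient on;
}
```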

--
Maxim Dounin
http://nginx.org/en/donation.html


From jgehrcke at googlemail.com Sat Oct 19 11:55:54 2013
From: jgehrcke at googlemail.com (Jan-Philip Gehrcke)
Date: Sat, 19 Oct 2013 13:55:54 +0200
Subject: Quick performance deterioration when No. of clients increases
In-Reply-To: <525EBBD5.40601@noa.gr>
References: <5257B1BA.1050702@noa.gr> <5257B45F.504@greengecko.co.nz>
<52595306.8030100@noa.gr> <20131014144737.GB21524@spruce.wiehl.oeko.net>
<525E6B57.30509@noa.gr> <525EB9C1.7070900@noa.gr>
<51BECB3D-2EB3-497E-B097-A1FCF5745371@elevated-dev.com>
<525EBBD5.40601@noa.gr>
Message-ID: <5262734A.8040209@googlemail.com>

Hi Nikolaos,

just a small follow-up on this. In your initial mail you stated

> The new VM (using Nginx) currently is in testing mode and it only has
> 1-core CPU

as well as

> When this performance deterioration occurs, we don't see very high CPU
> load (Unix load peaks 2.5)

These numbers already tell you that your initial tests were CPU bound. A
simple way to describe the situation: you had loaded your system with 2.5
times as much as it was able to handle "simultaneously". On average, 1.5
processes were sitting in the scheduler's run queue, waiting for a slice
of CPU time.

In this configuration, you observed

> You can see at the load graph that as the load approaches 250 clients,
> the response time increases very much and is already unacceptable

Later on, you wrote

> In the meantime, we have increased CPU power to 4 cores and the behavior
> of the server is much better.

and

> Now my problem is that there seems to be a limit of performance to
> around 1200 req/sec

Do you see that the rate increased by about a factor of 4? That is no
coincidence; I think these numbers clarify where the major bottleneck was
in your initial setup.

Also, there was this part of the discussion:

> On 16/10/2013 7:10 ??, Scott Ribe wrote:
>
>> Have you considered not having vastly more worker processes than you
>> have cores? (IIRC, you have configured things that way...)
>
> I have (4 CPU cores and):
>
> worker_processes 4;


Obviously, here you also need to consider the PHP-FPM and possibly other
processes involved in your web stack.

Ultimately, what you want at all times is a load average below the actual
number of cores in your machine (N), because you want your machine to stay
responsive, at least to internal events.

If you run more processes than N that potentially create huge CPU load,
the load average is easily pushed beyond this limit. Via a large request
rate, your users can then drive your machine to its knees. If you don't
spawn more than N worker processes in the first place, this helps
already a lot in preventing such a user-driven lockup situation.
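The rule of thumb above (keep the load average below N) can be put into
numbers; the helper below is my illustration, not something from the
thread:

```python
def headroom(load1: float, ncores: int) -> float:
    """Fraction of CPU capacity left over; a negative value means
    runnable processes are queueing for CPU time."""
    return (ncores - load1) / ncores

# The 1-core VM at load 2.5 was oversubscribed by 150%:
assert headroom(2.5, 1) == -1.5

# The same absolute load on 4 cores leaves ~37% headroom:
assert headroom(2.5, 4) == 0.375
```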

Cheers,

Jan-Philip


On 16.10.2013 18:16, Nikolaos Milas wrote:
> On 16/10/2013 7:10 ??, Scott Ribe wrote:
>
>> Have you considered not having vastly more worker processes than you
>> have cores? (IIRC, you have configured things that way...)
>
> I have (4 CPU cores and):
>
> worker_processes 4;
> worker_rlimit_nofile 400000;
>
> events {
> worker_connections 8192;
> multi_accept on;
> use epoll;
> }
>
> Any ideas will be appreciated!
>
> Nick
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


From steve at greengecko.co.nz Sat Oct 19 18:10:29 2013
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Sun, 20 Oct 2013 07:10:29 +1300
Subject: Quick performance deterioration when No. of clients increases
Message-ID: <9nh8adqgdcwe0brddll2x32t.1382206229271@email.android.com>

This is a slight oversimplification, as processes in wait-I/O state also
add to the load average. Programs like top, iotop, mytop, etc. will give
you a clearer idea of what is going on, and where your bottleneck lies.

Steve

Jan-Philip Gehrcke <jgehrcke at googlemail.com> wrote:

>Hi Nikolaos,
>
>just a small follow-up on this. In your initial mail you stated
>
> > The new VM (using Nginx) currently is in testing mode and it only has
> > 1-core CPU
>
>as well as
>
> > When this performance deterioration occurs, we don't see very high CPU
> > load (Unix load peaks 2.5)
>
>These numbers already tell you that your initial tests were CPU bound. A
>simple way to describe the situation would be that you have loaded your
>system with 2.5 as much as it was able to handle "simultaneously". On
>average, 1.5 processes were in the run queue of the scheduler just
>"waiting" for a slice of CPU time.
>
>In this configuration, you observed
>
> > You can see at the load graph that as the load approaches 250 clients,
> > the response time increases very much and is already unacceptable
>
>Later on, you wrote
>
> > In the meantime, we have increased CPU power to 4 cores and the behavior
> > of the server is much better.
>
>and
>
> > Now my problem is that there seems to be a limit of performance to
> > around 1200 req/sec
>
>Do you see that the rate increased by about factor 4? No coincidence, I
>think these numbers clarify where the major bottleneck was in your
>initial setup.
>
>Also, there was this part of the discussion:
>
> > On 16/10/2013 7:10 ??, Scott Ribe wrote:
> >
> >> Have you considered not having vastly more worker processes than you
> >> have cores? (IIRC, you have configured things that way...)
> >
> > I have (4 CPU cores and):
> >
> > worker_processes 4;
>
>
>Obviously, here you also need to consider the PHP-FPM and possibly other
>processes involved in your web stack.
>
>Eventually, what you want at all times is to have a load average below
>the actual number of cores in your machine (N) , because you want your
>machine to stay responsive, at least to internal events.
>
>If you run more processes than N that potentially create huge CPU load,
>the load average is easily pushed beyond this limit. Via a large request
>rate, your users can then drive your machine to its knees. If you don't
>spawn more than N worker processes in the first place, this helps
>already a lot in preventing such a user-driven lockup situation.
>
>Cheers,
>
>Jan-Philip
>
>
>
>
>
>
>
>
>
>
>
>On 16.10.2013 18:16, Nikolaos Milas wrote:
>> On 16/10/2013 7:10 ??, Scott Ribe wrote:
>>
>>> Have you considered not having vastly more worker processes than you
>>> have cores? (IIRC, you have configured things that way...)
>>
>> I have (4 CPU cores and):
>>
>> worker_processes 4;
>> worker_rlimit_nofile 400000;
>>
>> events {
>> worker_connections 8192;
>> multi_accept on;
>> use epoll;
>> }
>>
>> Any ideas will be appreciated!
>>
>> Nick
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
>_______________________________________________
>nginx mailing list
>nginx at nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx
>

From list_nginx at bluerosetech.com Sun Oct 20 00:07:56 2013
From: list_nginx at bluerosetech.com (Darren Pilgrim)
Date: Sat, 19 Oct 2013 17:07:56 -0700
Subject: Any rough ETA on SPDY/3 & push?
In-Reply-To: <2ECA7C2F-AFC0-402A-A6EF-46B79B6A3C9D@nginx.com>
References: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org>
<2ECA7C2F-AFC0-402A-A6EF-46B79B6A3C9D@nginx.com>
Message-ID: <52631EDC.1000602@bluerosetech.com>

On 10/14/2013 9:37 AM, Andrew Alexeev wrote:
> On Oct 14, 2013, at 8:01 PM, codemonkey <nginx-forum at nginx.us> wrote:
>
>> Contemplating switching my site over to Jetty to take advantage of spdy/3
>> and push, but would rather stay with nginx really...
>>
>> Is there a "rough" ETA on spdy3 in nginx? 1 month? 6 months? 2 years?
>>
>> Thanks, sorry if this is a frequent request...
>
> It is!
>
> Considering it, but frankly, for a better ETA wouldn't reject a corporate sponsor
> if there's anybody here who'd be open to sponsoring an implementation similar to
>
> http://barry.wordpress.com/2012/06/16/nginx-spdy-and-automattic/

How much capital would you need to do this? I'd contribute to a
crowd-funding campaign for this and I can likely get work to match or
beat what I put in.


From nginx-forum at nginx.us Sun Oct 20 10:00:04 2013
From: nginx-forum at nginx.us (talkingnews)
Date: Sun, 20 Oct 2013 06:00:04 -0400
Subject: Any sources for Saucy Salamander? (Ubuntu 13.10).
Message-ID: <e48b8a66df61883f71341421e2b07f9a.NginxMailingListEnglish@forum.nginx.org>

I seem to be "stuck" on the five-month-old nginx 1.4.1 on my Ubuntu 13.04,
and reading the 13.10 (Saucy Salamander) Ubuntu repo notes, I see it's
STILL that version.

So I found this post:
http://www.devcu.com/forums/topic/633-upgrade-nginx-to-latest-stable-release-ubuntu-1204/

It says to replace "raring" (or whatever) with the latest release.

But if I look at http://nginx.org/packages/ubuntu/dists/ I see that raring
is the latest release listed there.

So, how can I best ensure that I can apt-get update nginx without having to
completely remove and re-install?

Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243865,243865#msg-243865


From richard at kearsley.me Sun Oct 20 13:36:59 2013
From: richard at kearsley.me (Richard Kearsley)
Date: Sun, 20 Oct 2013 14:36:59 +0100
Subject: Any sources for Saucy Salamander? (Ubuntu 13.10).
In-Reply-To: <e48b8a66df61883f71341421e2b07f9a.NginxMailingListEnglish@forum.nginx.org>
References: <e48b8a66df61883f71341421e2b07f9a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <5263DC7B.1050509@kearsley.me>

On 20/10/13 11:00, talkingnews wrote:
>
> It says to replace "raring" (or whatever) with the latest release.
>
> But if I look at http://nginx.org/packages/ubuntu/dists/ I see that only
> raring is the latest.
>
> So, how can I best ensure that I can apt-get update nginx without having to
> completely remove and re-install?

Hi
I'm not sure if there's a way within APT to see the version before
installing it, but if you look directly in the 'Packages' file, it says:

Package: nginx
Version: 1.4.3-1~raring
...


From nginx-forum at nginx.us Sun Oct 20 14:23:42 2013
From: nginx-forum at nginx.us (talkingnews)
Date: Sun, 20 Oct 2013 10:23:42 -0400
Subject: Any sources for Saucy Salamander? (Ubuntu 13.10).
In-Reply-To: <5263DC7B.1050509@kearsley.me>
References: <5263DC7B.1050509@kearsley.me>
Message-ID: <1415ea59737e4e579d875838d0d5a8bb.NginxMailingListEnglish@forum.nginx.org>

Richard Kearsley Wrote:
-------------------------------------------------------
> On 20/10/13 11:00, talkingnews wrote:
> I'm not sure if there's a way within APT to see the version before
> installing it, but if you look directly in the 'Packages' file it says
> :
>
> Package: nginx
> Version: 1.4.3-1~raring

Ah, does this mean it's OK to just put the raring (13.04) repo/source into
the sources list of a saucy (13.10) sources.list?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243865,243868#msg-243868


From nginx-forum at nginx.us Sun Oct 20 16:02:55 2013
From: nginx-forum at nginx.us (agriz)
Date: Sun, 20 Oct 2013 12:02:55 -0400
Subject: Which 404 file does nginx calls?
Message-ID: <2eeb0cfaa09c9273e72aa8bfa2722859.NginxMailingListEnglish@forum.nginx.org>

Somewhere, something happened, and I am not able to fix it.

During testing, I had this line in the site.conf file:

error_page 404 404.html;

404.html contained "File not found."

Later I created a proper error page and changed the directive like this:

error_page 404 404.php;

But nginx is still throwing "File not found.", even though I have already
deleted that 404.html file from the server.

When I check the error log, I see "FastCGI sent in stderr: "Primary
script unknown" while reading response header from upstream".

How do I fix it?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243869,243869#msg-243869


From nginx-forum at nginx.us Sun Oct 20 19:11:04 2013
From: nginx-forum at nginx.us (agriz)
Date: Sun, 20 Oct 2013 15:11:04 -0400
Subject: Which 404 file does nginx calls?
In-Reply-To: <2eeb0cfaa09c9273e72aa8bfa2722859.NginxMailingListEnglish@forum.nginx.org>
References: <2eeb0cfaa09c9273e72aa8bfa2722859.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <9ce5399096956b6a86c60ce78b75d1f8.NginxMailingListEnglish@forum.nginx.org>

server_name .site.com;

root /var/www/site.com;
error_page 404 /404.php;
access_log /var/log/nginx/site.access.log;
index index.html index.php;

if ($http_host != "www.site.com") {
    rewrite ^ http://www.site.com$request_uri permanent;
}

location ~* \.php$ {
    # try_files $uri =404;
    fastcgi_index index.php;
    fastcgi_pass 127.0.0.1:xxxx;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 4k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_read_timeout 240;

    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # fastcgi_param SCRIPT_NAME $fastcgi_script_name;
}

After adding error_page 404 /404.php, it works for files which are not
PHP: the error page is called. But if the request is for a PHP file, it
still shows "File not found."

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243869,243870#msg-243870


From nginx-forum at nginx.us Sun Oct 20 20:17:34 2013
From: nginx-forum at nginx.us (dalmolin)
Date: Sun, 20 Oct 2013 16:17:34 -0400
Subject: Authentication error or maybe it isn't? - no user/password was
provided
Message-ID: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>

I have set up nginx as a front end to manage secure connections and
authorization for the Radicale calendar server, which I use to sync my
Lightning calendar on my desktop, laptop and Android phone (the phone uses
the CalDAV-Sync app). It all works fine, after a long and steep learning
curve, for me at least. One strange thing I have noticed is that I keep
getting an error in the nginx error log for the Android device; here are
the abbreviated logs from nginx's access.log and error.log:

access.log entries:
199.7.156.144 - - [20/Oct/2013:15:38:49 -0400] "OPTIONS
/USERID/testcalhttps/ HTTP/1.1" 401 194 "-" "CalDAV-Sync (Android) (like
iOS/5.0.1 (9A405) dataaccessd/1.0) gzip" "-"

199.1.1.1 - MYUSERID [20/Oct/2013:15:38:50 -0400] "OPTIONS
/USERID/testcalhttps/ HTTP/1.1" 200 5 "-" "CalDAV-Sync (Android) (like
iOS/5.0.1 (9A405) dataaccessd/1.0) gzip" "-"

199.1.1.1 - MYUSERID [20/Oct/2013:15:38:50 -0400] "PROPFIND
/USERID/testcalhttps/ HTTP/1.1" 207 1646 "-" "CalDAV-Sync (Android) (like
iOS/5.0.1 (9A405) dataaccessd/1.0) gzip" "-"

199.1.1.1 - MYUSERID [20/Oct/2013:15:38:51 -0400] "REPORT
/USERID/testcalhttps/ HTTP/1.1" 207 8525 "-" "CalDAV-Sync (Android) (like
iOS/5.0.1 (9A405) dataaccessd/1.0) gzip" "-"

199.1.1.1 - MYUSERID [20/Oct/2013:15:38:53 -0400] "PUT
/USERID/testcalhttps/e024b939-a58c-4b42-8f65-77d75942541c.ics HTTP/1.1" 201
5 "-" "CalDAV-Sync (Android) (like iOS/5.0.1 (9A405) dataaccessd/1.0) gzip"
"-"

error.log entries:
2013/10/20 15:38:49 [error] 6797#0: *241 no user/password was provided for
basic authentication, client: 199.1.1.1, server: , request: "OPTIONS
/USERID/testcalhttps/ HTTP/1.1", host: "mysserver.com:1905"

The userid and password were correctly entered during setup of the calendar
on the client and as I said it all works fine... I can add, delete, modify
calendar entries and synch across all my devices. But I keep getting this
pesky error when using my phone.

I noticed that the first access in the access.log corresponds to the error
message in the error.log... and then there are four more access entries
with my USERID appended to the IP address. So it looks like the
userid/password are somehow processed only after the first request. Also,
it may be helpful to know that the phone is connecting to nginx via the
internet, with port forwarding through my router to the server. Might this
error message simply be the result of the way I am accessing the server? I
don't get the same error when I access the server via the LAN from my
laptop.

Thanks in advance!

Joseph

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243871,243871#msg-243871


From francis at daoine.org Sun Oct 20 20:32:57 2013
From: francis at daoine.org (Francis Daly)
Date: Sun, 20 Oct 2013 21:32:57 +0100
Subject: Which 404 file does nginx calls?
In-Reply-To: <9ce5399096956b6a86c60ce78b75d1f8.NginxMailingListEnglish@forum.nginx.org>
References: <2eeb0cfaa09c9273e72aa8bfa2722859.NginxMailingListEnglish@forum.nginx.org>
<9ce5399096956b6a86c60ce78b75d1f8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131020203257.GB2204@craic.sysops.org>

On Sun, Oct 20, 2013 at 03:11:04PM -0400, agriz wrote:

> after adding error_page 404 /404.php
>
> it works for other files which are not php. It calls the error page.
> But if it is a php file, it shows "File not found."

http://nginx.org/r/fastcgi_intercept_errors
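Spelled out against the config posted earlier in the thread (the directive
placement and the backend address below are assumptions, not the poster's
actual values):

```nginx
location ~* \.php$ {
    fastcgi_pass 127.0.0.1:9000;      # backend port is a placeholder

    # Without this, the FastCGI backend's own error body ("File not
    # found.") is passed through to the client; with it, nginx applies
    # the configured error_page to 4xx/5xx backend responses.
    fastcgi_intercept_errors on;

    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```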

f
--
Francis Daly francis at daoine.org


From francis at daoine.org Sun Oct 20 20:45:36 2013
From: francis at daoine.org (Francis Daly)
Date: Sun, 20 Oct 2013 21:45:36 +0100
Subject: Authentication error or maybe it isn't? - no user/password was
provided
In-Reply-To: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>
References: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131020204536.GC2204@craic.sysops.org>

On Sun, Oct 20, 2013 at 04:17:34PM -0400, dalmolin wrote:

Hi there,

> One strange thing I have noticed is that I keep getting an
> error in the Nginx error log for the Android device, here are the
> abbreviated logs from Nginx's access.log and error.log:
>
> access.log entries:
> 199.7.156.144 - - [20/Oct/2013:15:38:49 -0400] "OPTIONS
> /USERID/testcalhttps/ HTTP/1.1" 401 194 "-" "CalDAV-Sync (Android) (like
> iOS/5.0.1 (9A405) dataaccessd/1.0) gzip" "-"
>
> 199.1.1.1 - MYUSERID [20/Oct/2013:15:38:50 -0400] "OPTIONS
> /USERID/testcalhttps/ HTTP/1.1" 200 5 "-" "CalDAV-Sync (Android) (like
> iOS/5.0.1 (9A405) dataaccessd/1.0) gzip" "-"

That indicates that the client made an http request, was told "need auth",
and then repeated the request with authentication credentials, which were
accepted.

That's pretty much how things are supposed to work.

The only odd thing I see is that the source IP address changed between
the two requests.

> error.log entries:
> 2013/10/20 15:38:49 [error] 6797#0: *241 no user/password was provided for
> basic authentication, client: 199.1.1.1, server: , request: "OPTIONS
> /USERID/testcalhttps/ HTTP/1.1", host: "mysserver.com:1905"

That matches what access.log shows.

> The userid and password were correctly entered during setup of the calendar
> on the client and as I said it all works fine... I can add, delete, modify
> calendar entries and synch across all my devices. But I keep getting this
> pesky error when using my phone.

Either accept that the phone is correct, and all of the other clients
are sending your password before they were asked for it; or change the
phone app to take the same shortcut.

It probably isn't a config setting in the phone app.

> So it looks like the
> userid/password are processed after the first request somehow.

Yes; the phone doesn't send the userid/password on the first request.

> Also, it may
> be helpful to know that the phone is connecting to Nginx via the internet
> and portforwarding via my router to the server. Might this error message
> simply be the result of the way I am accessing the server... as I don't get
> the same error when I access the server via the LAN and my laptop.

Probably not. Does the laptop via the internet show the same behaviour?

f
--
Francis Daly francis at daoine.org


From mdounin at mdounin.ru Sun Oct 20 20:51:36 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 21 Oct 2013 00:51:36 +0400
Subject: Authentication error or maybe it isn't? - no user/password was
provided
In-Reply-To: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>
References: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131020205136.GB7074@mdounin.ru>

Hello!

On Sun, Oct 20, 2013 at 04:17:34PM -0400, dalmolin wrote:

> I have set up an Nginx as a front end to manage secure connection and
> authorization for the Radicale calendar server which I use to synch my
> Lightning calendar on my desktop, laptop and Android phone (it uses the
> Caldav Synch app). It all works fine after a long and steep learning curve
> for me at least. One strange thing I have noticed is that I keep getting an
> error in the Nginx error log for the Android device, here are the
> abbreviated logs from Nginx's access.log and error.log:
>
> access.log entries:
> 199.7.156.144 - - [20/Oct/2013:15:38:49 -0400] "OPTIONS
> /USERID/testcalhttps/ HTTP/1.1" 401 194 "-" "CalDAV-Sync (Android) (like
> iOS/5.0.1 (9A405) dataaccessd/1.0) gzip" "-"

[...]

> error.log entries:
> 2013/10/20 15:38:49 [error] 6797#0: *241 no user/password was provided for
> basic authentication, client: 199.1.1.1, server: , request: "OPTIONS
> /USERID/testcalhttps/ HTTP/1.1", host: "mysserver.com:1905"
>
> The userid and password were correctly entered during setup of the calendar
> on the client and as I said it all works fine... I can add, delete, modify
> calendar entries and synch across all my devices. But I keep getting this
> pesky error when using my phone.
>
> I noticed that in the access.log the first access corresponds to the error
> message in the error log.... and then there are four more access messages
> with my USERID appended to the IP address. So it looks like the
> userid/password are processed after the first request somehow. Also, it may
> be helpful to know that the phone is connecting to Nginx via the internet
> and portforwarding via my router to the server. Might this error message
> simply be the result of the way I am accessing the server... as I don't get
> the same error when I access the server via the LAN and my laptop.

It looks like your client doesn't provide auth credentials on the
first request, but does so on subsequent ones after a 401
response. It's likely coded to work this way.
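As an illustration of what a "preemptive" client sends to avoid that first
401 (the helper name is mine, not from any of the clients discussed):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization header a client can send on its FIRST
    request, skipping the 401 round-trip (and the resulting error.log
    line about missing user/password)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

assert basic_auth_header("user", "pass") == "Authorization: Basic dXNlcjpwYXNz"
```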

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Sun Oct 20 21:11:23 2013
From: nginx-forum at nginx.us (agriz)
Date: Sun, 20 Oct 2013 17:11:23 -0400
Subject: Which 404 file does nginx calls?
In-Reply-To: <20131020203257.GB2204@craic.sysops.org>
References: <20131020203257.GB2204@craic.sysops.org>
Message-ID: <7ac108bb1bcd1b73a7801c00c60ff73d.NginxMailingListEnglish@forum.nginx.org>

Thanks a ton to you!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243869,243875#msg-243875


From wmark+nginx at hurrikane.de Sun Oct 20 21:11:45 2013
From: wmark+nginx at hurrikane.de (W-Mark Kubacki)
Date: Sun, 20 Oct 2013 23:11:45 +0200
Subject: "A" Grade SSL/TLS with Nginx and StartSSL
In-Reply-To: <dbb403bf42c4898bf52343e9be5024aa@linuxwall.info>
References: <edcdcea48aa956b9c30c4a900ecf4ca6@linuxwall.info>
<CADMhe6f82e+asyc6W6-WP2tkdqJyiGX3XH_wcfgTM7QmLopf6A@mail.gmail.com>
<dbb403bf42c4898bf52343e9be5024aa@linuxwall.info>
Message-ID: <CAHw5cr+b5RfF5JMeyf2jUk+G9yBZotr8MVPgxEhicQ==+Hq3Tw@mail.gmail.com>

2013-10-15 Piotr Sikora <piotr at cloudflare.com>
has cited Julien Vehent <julien at linuxwall.info>:
>
> ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';

Why did you sort the ciphers in this particular order?

If you wanted to prefer AES128 over AES256 over RC4, you could write:
# ssl_ciphers 'AES128:AES256:RC4+SHA:!aNULL:!PSK:!SRP';
See the output of:
# openssl ciphers -v 'AES128:AES256:RC4+SHA:!aNULL:!PSK'
OpenSSL will order the combinations by strength and include new cipher
modes by default.

Why do you include the weak RC4?
You don't use SSLv3, and the subset of outdated clients unable to use
TLSv1.1 *and* AES properly is diminishing. (Such a client would have
gone unpatched for more than two years, and would need to request the
same binary data repeatedly (think: millions of times) without Nginx
changing the response.)

Given that an attack on AES256 boils down to 2**99.5 (time/step)
complexity [1], and on AES128 to 2**100 if you agree with [2], I would
suggest this:
# ssl_ciphers 'AES128:!aNULL:!PSK:!SRP';
Include PSK and/or SRP if you need them, which almost no webserver
operator does. Optionally add !ECDH if you don't trust the origin of
the random seed values for the NIST curves.
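
Applied in an nginx server block, that suggestion would look something
like the following sketch (certificate paths are placeholders; the
cipher string is the one proposed above):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /path/to/cert.pem;   # placeholder paths
    ssl_certificate_key /path/to/key.pem;

    # Let the server's cipher order win, and keep the string compact;
    # OpenSSL expands and orders the combinations by strength.
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'AES128:!aNULL:!PSK:!SRP';
}
```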

--
Mark
http://mark.ossdl.de/

[1] http://eprint.iacr.org/2009/317
[2] http://eprint.iacr.org/2002/044


From reallfqq-nginx at yahoo.fr Sun Oct 20 21:17:37 2013
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sun, 20 Oct 2013 17:17:37 -0400
Subject: Authentication error or maybe it isn't? - no user/password was
provided
In-Reply-To: <20131020205136.GB7074@mdounin.ru>
References: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>
<20131020205136.GB7074@mdounin.ru>
Message-ID: <CALqce=0SRkg5HJvwKA7jr9Lgpfk6jMW=7ozDw=bPAp3wxY61MA@mail.gmail.com>

It's something a lot of people are bumping into.

HTTP 401 covers both failed and missing authentication, but wouldn't it
be possible for Nginx to differentiate those states and generate an
error message only on a failed attempt (i.e. non-empty credentials,
with either the user or the password containing something)?
That would make the error log more useful, as parsing it would more
directly surface failed attempts to access a particular resource.

Is it the standard way of doing things or is it your own?
Are there some use cases or reasons against differentiating 401 answers?
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131020/acd04e3c/attachment.html>

From nginx-forum at nginx.us Mon Oct 21 02:30:07 2013
From: nginx-forum at nginx.us (hcmnttan)
Date: Sun, 20 Oct 2013 22:30:07 -0400
Subject: Config Mail Proxy for POP3/SMTP microsoft exchange
In-Reply-To: <20131019100453.GQ2144@mdounin.ru>
References: <20131019100453.GQ2144@mdounin.ru>
Message-ID: <7986248cc7b70d2df948ef02939277b8.NginxMailingListEnglish@forum.nginx.org>

Thanks,
It works for me now.
The error "550 5.7.1 Client does not have permissions to send as this
sender" occurred because our SMTP back-end did not accept SMTP relay
from NGINX. Configuring the SMTP backend to allow relaying from NGINX
fixed my error.

With "xclient on;", when I try AUTH LOGIN the SMTP backend returns the
error "500 5.3.3 Unrecognized command", as if my SMTP server does not
support XCLIENT (I guess).

Thanks
Tan

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243743,243879#msg-243879


From nginx-forum at nginx.us Mon Oct 21 03:58:06 2013
From: nginx-forum at nginx.us (Jonah)
Date: Sun, 20 Oct 2013 23:58:06 -0400
Subject: Make 1 root file accessible, and serve it for all requests
Message-ID: <6e92b1c8cbad465a271ad6ecab4a67e7.NginxMailingListEnglish@forum.nginx.org>

I have a version of this working, but I suspect my solution is not the best
one.

Please suggest any improvements I can make to my conf file. I am attempting
to do the following:

1. If any file is requested from the root, we should always serve
"index.html". No other file should be accessible, and requesting anything
else should be treated as if you requested "index.html". Currently I'm
using rewrite, but a redirect would be okay too, and possibly preferable.

2. Any file under "/css" or "/js" can be requested, and requesting files
from those directories that don't exist should return a 404.

Here's my current working conf file:

--- BEGIN CODE ----
server {
    listen 80;
    server_name www.example.com;
    client_max_body_size 50M;

    root /var/www/mysite;

    location = /index.html {
    }

    # map everything in base dir to one file
    location ~ ^/[^/]*$ {
        rewrite ^/[^/]*$ /index.html;
    }

    location ~ ^/css/ {
    }

    location ~ ^/js/ {
    }
}
--- END CODE ----

Thanks!
Jonah

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243881,243881#msg-243881


From reallfqq-nginx at yahoo.fr Mon Oct 21 05:01:44 2013
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 21 Oct 2013 01:01:44 -0400
Subject: Make 1 root file accessible, and serve it for all requests
In-Reply-To: <6e92b1c8cbad465a271ad6ecab4a67e7.NginxMailingListEnglish@forum.nginx.org>
References: <6e92b1c8cbad465a271ad6ecab4a67e7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CALqce=2t17K3tVjcx9HCRcgn0=WtnTKNkPx4crNMkXJ8aSbzhA@mail.gmail.com>

server {
    listen 80;
    server_name www.example.com;

    root /var/www/mysite;

    location / {
        # Default location; requests fall back here if no other
        # location block matches
        rewrite ^.*$ / permanent; # permanent (HTTP 301) redirection
                                  # to the 'root' location '/'
    }

    location = / {
        # 'root' location, served by the index directive (which
        # defaults to the 'index.html' value)
        # You could also use '/index.html' here and skip the index
        # directive (don't forget to change the rewrite in the
        # previous location block accordingly)
    }

    location /css/ {
        # Serves the corresponding directory. Avoid regex locations
        # whenever possible; that will hopefully save some CPU
    }

    location /js/ {
        # Same as the previous block
    }
}

Note 1:
Your configuration includes a
'client_max_body_size<http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size>'
directive.
That directive limits the size of the client *request* body (as given
by the request's "Content-Length" header field), e.g. for uploads; it
does not affect the size of the responses you serve.
I just checked my own configuration to be sure (I serve files bigger
than 1 MiB): you don't need this directive to serve a file, whatever
its size.

Note 2:
Check the directives in the parent 'http' block to ensure that no
'autoindex<http://nginx.org/en/docs/http/ngx_http_autoindex_module.html#autoindex>'
directive is to be found or if it is, that it is set to 'off'. When not
set, it defaults to 'off'.
Check also there that the
'index<http://nginx.org/en/docs/http/ngx_http_index_module.html#index>'
directive is not set since all you want is the default 'index.html' value.
You can override it in your server block if you need another value at the
'http' block level.
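
For reference, the defaults discussed in this note correspond to the
following purely illustrative 'http' block (both values shown are what
nginx uses when the directives are absent):

```nginx
http {
    autoindex off;        # the default: no directory listings
    index     index.html; # the default value of the 'index' directive
}
```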
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131021/44d29a8f/attachment.html>

From nginx-forum at nginx.us Mon Oct 21 05:13:56 2013
From: nginx-forum at nginx.us (andrewc)
Date: Mon, 21 Oct 2013 01:13:56 -0400
Subject: Modules behaving differently on 32-bit and 64-bit systems?
Message-ID: <93f8cec27df53fc477f9b2fcd8d2b30b.NginxMailingListEnglish@forum.nginx.org>

Hi there,

I have built nginx 1.5.6 from source, with a 3rd party module (xtoken -
http://code.google.com/p/nginx-xtoken-module/).

I have it working fine on a 32-bit Debian Squeeze system. An identical build
on a 64-bit Centos 6.4 system, with an identical configuration file results
in the error: "nginx: [crit] ngx_slab_alloc() failed: no memory" on
startup.

I have narrowed the problem to the xtoken module, in as much as removing
references to it on the 64-bit system results in nginx starting correctly.

I have had a quick look at the module source code, and can't see anything
that is obviously 32-bit -centric, other than a couple of variables that
have been declared as uint32_t.

Is it correct to assume that a properly written module will work correctly
on both 32 and 64-bit systems?
Is there any additional nginx configuration that needs to be performed on
64-bit systems?

Thanks,

Andrew

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243883,243883#msg-243883


From artemrts at ukr.net Mon Oct 21 09:38:30 2013
From: artemrts at ukr.net (wishmaster)
Date: Mon, 21 Oct 2013 12:38:30 +0300
Subject: nginx and GeoLite2
Message-ID: <1382348034.316319762.brau78gn@frv34.ukr.net>

Hi
I am planning to use GeoLite with nginx. On the MaxMind website there is an announcement:

Announcement
Free access to the latest in IP geolocation databases is now available in our GeoLite2 Databases

I've tried this db, but nginx returned an error. Is it possible to use GeoLite2?

Nginx version - latest devel.


From nginx-forum at nginx.us Mon Oct 21 09:52:57 2013
From: nginx-forum at nginx.us (bogdanb)
Date: Mon, 21 Oct 2013 05:52:57 -0400
Subject: Expires headers for url rewrite rules
Message-ID: <34030c848d6b87babbaf976288b569d4.NginxMailingListEnglish@forum.nginx.org>

In my config I have some url rewrite rules for images as seen below:

location / {
rewrite ^/custom/path/(.*)/(.*)-(.*).jpg$
/media/images/products/$1/$3.jpg last;
}

They work just fine. I'm also trying to set Expire headers for all static
resources (images, css, js). I've added the following block for that:

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires 1y;
}

This works fine for everything except for the images that have the url
rewrite rules (which return 404 Not found). Anyone know what I'm doing wrong
here?

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243888,243888#msg-243888


From francis at daoine.org Mon Oct 21 09:58:06 2013
From: francis at daoine.org (Francis Daly)
Date: Mon, 21 Oct 2013 10:58:06 +0100
Subject: Expires headers for url rewrite rules
In-Reply-To: <34030c848d6b87babbaf976288b569d4.NginxMailingListEnglish@forum.nginx.org>
References: <34030c848d6b87babbaf976288b569d4.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131021095806.GD2204@craic.sysops.org>

On Mon, Oct 21, 2013 at 05:52:57AM -0400, bogdanb wrote:

Hi there,

> location / {

> location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {

> This works fine for everything except for the images that have the url
> rewrite rules (which return 404 Not found). Anyone know what I'm doing wrong
> here?

One request is handled in one location.

Put all of the configuration that you want to apply to a request, within
the one location{} block that handles that request.
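
For the rewrite-plus-expires case in this thread, one way to follow
that advice is a sketch like this (regex and paths taken from the
earlier messages):

```nginx
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    # Both the rewrite and the expires apply to the same request,
    # because this single location handles it.
    rewrite ^/custom/path/(.*)/(.*)-(.*)\.jpg$
            /media/images/products/$1/$3.jpg last;
    expires 1y;
}
```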

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Mon Oct 21 10:23:53 2013
From: nginx-forum at nginx.us (Jonah)
Date: Mon, 21 Oct 2013 06:23:53 -0400
Subject: Make 1 root file accessible, and serve it for all requests
In-Reply-To: <CALqce=2t17K3tVjcx9HCRcgn0=WtnTKNkPx4crNMkXJ8aSbzhA@mail.gmail.com>
References: <CALqce=2t17K3tVjcx9HCRcgn0=WtnTKNkPx4crNMkXJ8aSbzhA@mail.gmail.com>
Message-ID: <dc7757ae468fa182634bcf2217a97dca.NginxMailingListEnglish@forum.nginx.org>

B.R.,

Thanks very much! That was incredibly helpful. I had to make a couple
minor tweaks to get it working, and I wanted a 302 instead of a 301, but the
following is what I ended up with, and it had the added benefit of cutting
response time almost in half when I did a load test:

server {
    listen 80;
    server_name example.com;

    root /var/www/register;

    location = /index.html {
    }

    # Default location, request will fallback here if none other
    # location block matches
    location / {
        rewrite ^.*$ /index.html redirect; # 'root' location '/'
    }

    location /css/ {
    }

    location /js/ {
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243881,243893#msg-243893


From nginx-forum at nginx.us Mon Oct 21 10:52:08 2013
From: nginx-forum at nginx.us (bogdanb)
Date: Mon, 21 Oct 2013 06:52:08 -0400
Subject: Expires headers for url rewrite rules
In-Reply-To: <20131021095806.GD2204@craic.sysops.org>
References: <20131021095806.GD2204@craic.sysops.org>
Message-ID: <af931393f17175047ea39df49011e416.NginxMailingListEnglish@forum.nginx.org>

Moving the rewrite rules inside the " ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
" block did the trick.

Thanks a lot for the quick reply.

Francis Daly Wrote:
-------------------------------------------------------
> On Mon, Oct 21, 2013 at 05:52:57AM -0400, bogdanb wrote:
>
> Hi there,
>
> > location / {
>
> > location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
>
> > This works fine for everything except for the images that have the
> url
> > rewrite rules (which return 404 Not found). Anyone know what I'm
> doing wrong
> > here?
>
> One request is handled in one location.
>
> Put all of the configuration that you want to apply to a request,
> within
> the one location{} block that handles that request.
>
> f
> --
> Francis Daly francis at daoine.org
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243888,243895#msg-243895


From mdounin at mdounin.ru Mon Oct 21 11:14:20 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 21 Oct 2013 15:14:20 +0400
Subject: Modules behaving differently on 32-bit and 64-bit systems?
In-Reply-To: <93f8cec27df53fc477f9b2fcd8d2b30b.NginxMailingListEnglish@forum.nginx.org>
References: <93f8cec27df53fc477f9b2fcd8d2b30b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131021111420.GC7074@mdounin.ru>

Hello!

On Mon, Oct 21, 2013 at 01:13:56AM -0400, andrewc wrote:

> Hi there,
>
> I have built nginx 1.5.6 from source, with a 3rd party module (xtoken -
> http://code.google.com/p/nginx-xtoken-module/).
>
> I have it working fine on a 32-bit Debian Squeeze system. An identical build
> on a 64-bit Centos 6.4 system, with an identical configuration file results
> in the error: "nginx: [crit] ngx_slab_alloc() failed: no memory" on
> startup.
>
> I have narrowed the problem to the xtoken module, in as much as removing
> references to it on the 64-bit system results in nginx starting correctly.
>
> I have had a quick look at the module source code, and can't see anything
> that is obviously 32-bit -centric, other than a couple of variables that
> have been declared as uint32_t.
>
> Is it correct to assume that a properly written module will work correctly
> on both 32 and 64-bit systems?

Yes.

> Is there any additional nginx configuration that needs to be performed on
> 64-bit systems?

In some cases, additional configuration may be required due to
different data sizes.

A quick look suggests that the problem in the xtoken module is likely
here:

https://code.google.com/p/nginx-xtoken-module/source/browse/trunk/ngx_http_xtoken_module.c#660

It tries to estimate the size of the shared memory zone needed to keep
its data, but the estimate likely falls short on 64-bit platforms
because the internal structures of the slab allocator are bigger on
these platforms.

The same code may also unexpectedly break in the future if the slab
allocator's internals change.

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Mon Oct 21 11:53:46 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 21 Oct 2013 15:53:46 +0400
Subject: Authentication error or maybe it isn't? - no user/password was
provided
In-Reply-To: <CALqce=0SRkg5HJvwKA7jr9Lgpfk6jMW=7ozDw=bPAp3wxY61MA@mail.gmail.com>
References: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>
<20131020205136.GB7074@mdounin.ru>
<CALqce=0SRkg5HJvwKA7jr9Lgpfk6jMW=7ozDw=bPAp3wxY61MA@mail.gmail.com>
Message-ID: <20131021115346.GD7074@mdounin.ru>

Hello!

On Sun, Oct 20, 2013 at 05:17:37PM -0400, B.R. wrote:

> It's something a lot of people are bumping on.
>
> 401 HTTP covers both failed and missing authentication but isn't possible
> for Nginx to differentiate those states and thus only generate an error
> message on a failed (ie not empty credentials, either user or password
> containing something) attempt?
> That would make the error log more efficient as parsing it would provide
> more directly failed attempt to access a particular resource.
>
> Is it the standard way of doing things or is it your own?
> Are there some use cases or reasons against differentiating 401 answers?

The difference is already here.

The message "no user/password was provided for basic
authentication", as in original message, means exactly that: there
are no credentials provided.

On failed authentication, the "user ...: password mismatch"
message is logged. On unknown user, the "user ... was not
found in ..." message is logged.

It might make sense to downgrade the "no user/password ..."
message severity. Not sure though.

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Mon Oct 21 13:12:51 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 21 Oct 2013 17:12:51 +0400
Subject: nginx and GeoLite2
In-Reply-To: <1382348034.316319762.brau78gn@frv34.ukr.net>
References: <1382348034.316319762.brau78gn@frv34.ukr.net>
Message-ID: <20131021131251.GF7074@mdounin.ru>

Hello!

On Mon, Oct 21, 2013 at 12:38:30PM +0300, wishmaster wrote:

> Hi
> I am planning to use GeoLite with nginx. On the MaxMind website there is an announcement:
>
> Announcement
> Free access to the latest in IP geolocation databases is now available in our GeoLite2 Databases
>
> I've used this db but nginx returned the error. Is it possible to use GeoLite2?

GeoLite2 databases use a different format, and different libraries are
needed to access them. They are new and not supported by nginx.

Use the GeoLite databases instead (without the "2"). Or use a CSV
version; a perl script that converts MaxMind CSV files into nginx
ngx_http_geo_module configuration is in contrib.
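
The converted output is an ordinary geo{} block; a hypothetical sketch
of its shape (the ranges, codes, and variable name below are invented,
not real MaxMind data):

```nginx
# In the http{} context; $country is an arbitrary variable name.
geo $country {
    default         ZZ;
    10.0.0.0/8      ZZ;  # invented example ranges
    192.0.2.0/24    US;
    198.51.100.0/24 DE;
}
```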

--
Maxim Dounin
http://nginx.org/en/donation.html


From rainer at ultra-secure.de Mon Oct 21 13:55:48 2013
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Mon, 21 Oct 2013 15:55:48 +0200
Subject: nginx and GeoLite2
In-Reply-To: <20131021131251.GF7074@mdounin.ru>
References: <1382348034.316319762.brau78gn@frv34.ukr.net>
<20131021131251.GF7074@mdounin.ru>
Message-ID: <20131021155548.47e0ec31@suse3>

Am Mon, 21 Oct 2013 17:12:51 +0400
schrieb Maxim Dounin <mdounin at mdounin.ru>:

> Hello!
>
> On Mon, Oct 21, 2013 at 12:38:30PM +0300, wishmaster wrote:
>
> > Hi
> > I am planning to use GeoLite with nginx. On the MaxMind website
> > there is an announcement:
> >
> > Announcement
> > Free access to the latest in IP geolocation databases is now
> > available in our GeoLite2 Databases
> >
> > I've used this db but nginx returned the error. Is it possible to
> > use GeoLite2?
>
> GeoLite2 databases use different format and different libraries
> are needed to access them. They are new and not supported by
> nginx.
>
> Use GeoLite databases instead (without "2").



Does anyone know whether the GeoLite databases will still be around?
Or will we all be forced to go the CSV-export route?


From reallfqq-nginx at yahoo.fr Mon Oct 21 14:15:51 2013
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 21 Oct 2013 10:15:51 -0400
Subject: Authentication error or maybe it isn't? - no user/password was
provided
In-Reply-To: <20131021115346.GD7074@mdounin.ru>
References: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>
<20131020205136.GB7074@mdounin.ru>
<CALqce=0SRkg5HJvwKA7jr9Lgpfk6jMW=7ozDw=bPAp3wxY61MA@mail.gmail.com>
<20131021115346.GD7074@mdounin.ru>
Message-ID: <CALqce=27Z69GkagQvWQHp1yLAeK+k-Jg9mBh_d=e5ympHSRhnA@mail.gmail.com>

Thanks Maxim!

I didn't really pay attention to the difference in the error messages.
Thanks for remembering them.

The question of the severity of the last message has no simple answer,
I'm afraid, since it depends on the use case.
Maybe someone wishes to log any attempt to access a protected resource?
Not sending credentials is then considered an 'error'.
Maybe someone only considers a 'void' attempt an error if it's not the
1st access in a short time. The problem I see here is that, HTTP being
stateless, there is no way of knowing whether a request is a '1st
access' or not.
More than that, I think that's a user-centric definition, not really an
'error' as such.

The main problem here is that this message is generated when the
credentials are asked for, which is the normal flow of a use-case
scenario for a standard protected resource.
Filtering for errors against a specific resource will then generate
loads of meaningless entries, potentially hiding interesting ones.

1) Since cancelling sending credentials when requested generates a 403,
there must be a way for Nginx to differentiate the 1st connection
attempt from the following ones: can't that be used to avoid logging an
error message on the 1st attempt (and log it in the access log
instead), downgrading the message severity for that case?
2) As an extra feature, is there a way for Nginx to remember (at the
cost of memory) access attempts over a (conf-defined) short time,
logging errors only if a trigger of (conf-defined) multiple attempts is
reached?
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131021/0a579d40/attachment.html>

From reallfqq-nginx at yahoo.fr Mon Oct 21 14:22:59 2013
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 21 Oct 2013 10:22:59 -0400
Subject: Make 1 root file accessible, and serve it for all requests
In-Reply-To: <dc7757ae468fa182634bcf2217a97dca.NginxMailingListEnglish@forum.nginx.org>
References: <CALqce=2t17K3tVjcx9HCRcgn0=WtnTKNkPx4crNMkXJ8aSbzhA@mail.gmail.com>
<dc7757ae468fa182634bcf2217a97dca.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CALqce=1ge-9t7vaRkoaUjQxjDwdLsHQ0W7BDbE987R2x50uW5Q@mail.gmail.com>

I'm glad it helped. Maybe there are still some possible improvements;
I'll let others point them out...

You must have good reasons for preferring temporary redirects over
permanent ones, but if you really want to redirect all traffic to the
index and you don't serve existing files by their direct URIs later,
the permanent redirection mechanism takes advantage of the browser
cache to avoid getting hit with the same request again... Saves
bandwidth + request processing time.

However, if the client is in some way authenticated and may access those
protected resources later, then that may be a problem indeed... ^^

I guess you know what you are doing already. Just sayin'.
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131021/2b061520/attachment.html>

From francis at daoine.org Mon Oct 21 14:34:51 2013
From: francis at daoine.org (Francis Daly)
Date: Mon, 21 Oct 2013 15:34:51 +0100
Subject: Authentication error or maybe it isn't? - no user/password was
provided
In-Reply-To: <CALqce=27Z69GkagQvWQHp1yLAeK+k-Jg9mBh_d=e5ympHSRhnA@mail.gmail.com>
References: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>
<20131020205136.GB7074@mdounin.ru>
<CALqce=0SRkg5HJvwKA7jr9Lgpfk6jMW=7ozDw=bPAp3wxY61MA@mail.gmail.com>
<20131021115346.GD7074@mdounin.ru>
<CALqce=27Z69GkagQvWQHp1yLAeK+k-Jg9mBh_d=e5ympHSRhnA@mail.gmail.com>
Message-ID: <20131021143451.GE2204@craic.sysops.org>

On Mon, Oct 21, 2013 at 10:15:51AM -0400, B.R. wrote:

Hi there,

> Maybe someone wishes to log any attempt to access a protected ressource ?
> Not sending credential is then considered as an 'error'

Access log includes $status = 401.

> Maybe someone only consider a 'void' attempt as an error if it's not the
> 1st access in a short time.

Access log includes $status = 401, and the user analysing the log can
choose to ignore the first of a set in a short time.

> The main problem here is that this message is generated when the
> credentials are asked for, which is a normal flow of a use-case scenario
> for a standard protected resource.

Access log includes $status = 401 and $remote_user = -, probably.

> Filtering for errors against a specific resources will then generated loads
> of unmeaningful entries, potentially hiding interesting ones.

Access log includes $status = 401 and $remote_user != -, probably.

(There are cases when an "Authorization: Basic" header will include a
value that shows as "-", but they probably aren't worth worrying about
if you are looking for "normal" bad authentication attempts.)

> 1?) Since cancelling sending credentials when requested generates 403,

No, it doesn't.

Cancelling sending credentials will usually get the browser to show
you the 401 page that nginx sent on the previous request, which had
invalid credentials.

> there must be a way for Nginx to differentiate 1st connection attempt to
> the followings: can't that be used to avoid logging an error message on 1st
> attempt (and log it in access log instead)? Downgrading this message
> severity for this case.

HTTP is stateless.

Each request includes appropriate credentials, or it doesn't.
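
For context, the basic-auth setup under discussion is just a
per-location configuration like the following sketch (the location and
file path are placeholders):

```nginx
location /protected/ {
    # Every request to this location must carry an Authorization
    # header; nginx checks it against the user file each time,
    # since no state is kept between requests.
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```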

You can drop the first-log-line-in-a-sequence in your analysis program,
where you decide exactly what you mean by sequence. nginx should not
decide what you consider a sequence.

> 2?) As an extra feature, is there a way for Nginx to remember (at cost of
> memory) access attempts on (conf defined) short time, logging errors only
> if a trigger made of (conf defined) multiple attempts is reached?

A patch would probably be considered; but I suspect that it's going to
be easier for whatever is reading the full log file to be told which
lines to heed and which to ignore.

f
--
Francis Daly francis at daoine.org


From reallfqq-nginx at yahoo.fr Mon Oct 21 14:50:15 2013
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 21 Oct 2013 10:50:15 -0400
Subject: Authentication error or maybe it isn't? - no user/password was
provided
In-Reply-To: <20131021143451.GE2204@craic.sysops.org>
References: <b88e6f6a277cfaf0d8bf10a949f5b340.NginxMailingListEnglish@forum.nginx.org>
<20131020205136.GB7074@mdounin.ru>
<CALqce=0SRkg5HJvwKA7jr9Lgpfk6jMW=7ozDw=bPAp3wxY61MA@mail.gmail.com>
<20131021115346.GD7074@mdounin.ru>
<CALqce=27Z69GkagQvWQHp1yLAeK+k-Jg9mBh_d=e5ympHSRhnA@mail.gmail.com>
<20131021143451.GE2204@craic.sysops.org>
Message-ID: <CALqce=01zd3DsTsDAQRdY9cbzkgoxCxuU=Rd9m_juy_OJ9YypQ@mail.gmail.com>

On Mon, Oct 21, 2013 at 10:34 AM, Francis Daly <francis at daoine.org> wrote:

> On Mon, Oct 21, 2013 at 10:15:51AM -0400, B.R. wrote:
>
> Hi there,
>
> > Maybe someone wishes to log any attempt to access a protected ressource ?
> > Not sending credential is then considered as an 'error'
>
> Access log includes $status = 401.
>
> > Maybe someone only consider a 'void' attempt as an error if it's not the
> > 1st access in a short time.
>
> Access log includes $status = 401, and the user analysing the log can
> choose to ignore the first of a set in a short time.
>
> > The main problem here is that this message is generated when the
> > credentials are asked for, which is a normal flow of a use-case scenario
> > for a standard protected resource.
>
> Access log includes $status = 401 and $remote_user = -, probably.
>
> > Filtering for errors against a specific resources will then generated
> loads
> > of unmeaningful entries, potentially hiding interesting ones.
>
> Access log includes $status = 401 and $remote_user != -, probably.
>
> (There are cases when an "Authorization: Basic" header will include a
> value that shows as "-", but they probably aren't worth worrying about
> if you are looking for "normal" bad authentication attempts.)
>

Thanks for all those tips. The access log already contains all the
required information, then.


> > 1?) Since cancelling sending credentials when requested generates 403,
>
> No, it doesn't.
>
> Cancelling sending credentials will usually get the browser to show
> you the 401 page that nginx sent on the previous request, which had
> invalid credentials.
>

Sorry for this, that was pure conjecture. Thanks for the explanation.


> > there must be a way for Nginx to differentiate 1st connection attempt to
> > the followings: can't that be used to avoid logging an error message on
> 1st
> > attempt (and log it in access log instead)? Downgrading this message
> > severity for this case.
>
> HTTP is stateless.
>
> Each request includes appropriate credentials, or it doesn't.
>

I know, that's why I was talking about creating some 'memory' (mapping
against IP addresses?).

You can drop the first-log-line-in-a-sequence in your analysis program,
> where you decide exactly what you mean by sequence. nginx should not
> decide what you consider a sequence.
>

Talking about what Nginx should or shouldn't decide for the user, what
about the error log entry when a 401 is accessed? :o)
As you mentioned earlier, this is already logged in the access log.
What about what Maxim was suggesting: downgrading the message's
importance? Maybe that would mean no longer logging it in the error
logfile.


> > 2?) As an extra feature, is there a way for Nginx to remember (at cost of
> > memory) access attempts on (conf defined) short time, logging errors only
> > if a trigger made of (conf defined) multiple attempts is reached?
>
> A patch would probably be considered; but I suspect that it's going to
> be easier for whatever is reading the full log file to be told which
> lines to heed and which to ignore.
>

As stated, that's extra, and it's a new feature; more important matters
must eat your time first.
It's not the best way of addressing our issue either, so for now a
redefinition of what Nginx should or shouldn't decide by itself would
probably be more effective.

What about Maxim's proposition, then? Is there something we missed
about the mechanics of accessing a protected resource?
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131021/a9004b0c/attachment-0001.html>

From cubicdaiya at gmail.com Mon Oct 21 16:31:17 2013
From: cubicdaiya at gmail.com (cubicdaiya)
Date: Tue, 22 Oct 2013 01:31:17 +0900
Subject: correct notation of nginx
Message-ID: <CABWmZaOcFLpZP4nYPfcKPAiH7swn+uEyUp8NXh+AQ4yFn7X0=A@mail.gmail.com>

Hello.

I'm sometimes stumped about how to write the name of nginx in text: is
there an official notation?

Specifically, I'm never sure which of the following to choose:

* nginx
* Nginx
* NGINX

--
Tatsuhiko Kubo

E-Mail : cubicdaiya at gmail.com
HP : http://cccis.jp/index_en.html
Blog : http://cubicdaiya.github.com/blog/en/
Twitter : http://twitter.com/cubicdaiya
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131022/655fc03e/attachment.html>

From artemrts at ukr.net Mon Oct 21 16:51:23 2013
From: artemrts at ukr.net (wishmaster)
Date: Mon, 21 Oct 2013 19:51:23 +0300
Subject: nginx and GeoLite2
In-Reply-To: <20131021131251.GF7074@mdounin.ru>
References: <1382348034.316319762.brau78gn@frv34.ukr.net>
<20131021131251.GF7074@mdounin.ru>
Message-ID: <1382374057.182717639.998c7c0x@frv34.ukr.net>


Hi,

--- Original message ---
From: "Maxim Dounin" <mdounin at mdounin.ru>
Date: 21 October 2013, 16:13:01


> Hello!
>
> On Mon, Oct 21, 2013 at 12:38:30PM +0300, wishmaster wrote:
>
> > Hi
> > I am planning to use GeoLite with nginx. On the MaxMind website there is an announcement:
> >
> > Announcement
> > Free access to the latest in IP geolocation databases is now available in our GeoLite2 Databases
> >
> > I've used this db but nginx returned the error. Is it possible to use GeoLite2?
>
> GeoLite2 databases use different format and different libraries
> are needed to access them. They are new and not supported by
> nginx.
>
> Use GeoLite databases instead (without "2"). Or use a CSV
> version, a perl script to convert CSV files from MaxMind to nginx
> ngx_http_geo_module configuration is in contrib.
>
Unfortunately, v2 of this db is not shipped in CSV format. Binary format only.
See http://dev.maxmind.com/geoip/geoip2/geolite2/

Max, are you planning to add support for GeoLite2 in nginx in the near future?


From mdounin at mdounin.ru Mon Oct 21 17:53:42 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 21 Oct 2013 21:53:42 +0400
Subject: nginx and GeoLite2
In-Reply-To: <1382374057.182717639.998c7c0x@frv34.ukr.net>
References: <1382348034.316319762.brau78gn@frv34.ukr.net>
<20131021131251.GF7074@mdounin.ru>
<1382374057.182717639.998c7c0x@frv34.ukr.net>
Message-ID: <20131021175341.GJ7074@mdounin.ru>

Hello!

On Mon, Oct 21, 2013 at 07:51:23PM +0300, wishmaster wrote:

>
> Hi,
>
> --- Original message ---
> From: "Maxim Dounin" <mdounin at mdounin.ru>
> Date: 21 October 2013, 16:13:01
>
>
> > Hello!
> >
> > On Mon, Oct 21, 2013 at 12:38:30PM +0300, wishmaster wrote:
> >
> > > Hi
> > > I am planning to use GeoLite with nginx. On the MaxMind website there is an announcement:
> > >
> > > Announcement
> > > Free access to the latest in IP geolocation databases is now available in our GeoLite2 Databases
> > >
> > > I've used this db but nginx returned the error. Is it possible to use GeoLite2?
> >
> > GeoLite2 databases use different format and different libraries
> > are needed to access them. They are new and not supported by
> > nginx.
> >
> > Use GeoLite databases instead (without "2"). Or use a CSV
> > version, a perl script to convert CSV files from MaxMind to nginx
> > ngx_http_geo_module configuration is in contrib.
> >
> Unfortunately, v2 of this db is not shipped in CSV format. Binary format only.
> See http://dev.maxmind.com/geoip/geoip2/geolite2/

As far as I can tell, the only difference between GeoLite and
GeoLite2 is format.

At least the http://www.maxmind.com/en/opensource page makes me
think so, as the link to the GeoLite2 page there is annotated as
"(New format)".

> Max, are you planning to add support of GeoLite2 in nginx in nearest future?

No.

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Mon Oct 21 18:03:34 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 21 Oct 2013 22:03:34 +0400
Subject: correct notation of nginx
In-Reply-To: <CABWmZaOcFLpZP4nYPfcKPAiH7swn+uEyUp8NXh+AQ4yFn7X0=A@mail.gmail.com>
References: <CABWmZaOcFLpZP4nYPfcKPAiH7swn+uEyUp8NXh+AQ4yFn7X0=A@mail.gmail.com>
Message-ID: <20131021180334.GK7074@mdounin.ru>

Hello!

On Tue, Oct 22, 2013 at 01:31:17AM +0900, cubicdaiya wrote:

> Hello.
>
> Though I'm sometimes stumped about how to describing the notation of nginx
> in text,
> is there an official notation of nginx?
>
> In detail, I'm sometimes stumped about whether selecting from the
> followings.
>
> * nginx
> * Nginx
> * NGINX

Preferred variant is "nginx". Sometimes "NGINX" is used, too. Use
of "Nginx" is discouraged as Igor thinks it looks ugly.

But, actually, most of us don't really care.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Mon Oct 21 18:40:25 2013
From: nginx-forum at nginx.us (dalmolin)
Date: Mon, 21 Oct 2013 14:40:25 -0400
Subject: Authentication error or maybe it isn't? - no user/password was
provided
In-Reply-To: <20131020204536.GC2204@craic.sysops.org>
References: <20131020204536.GC2204@craic.sysops.org>
Message-ID: <ced97125fb73cc10f176ffa494dcc74d.NginxMailingListEnglish@forum.nginx.org>

Thank you Francis... it all makes sense! By the way, I modified the IP
addresses before posting which explains why they changed... :-)

Joseph

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243871,243927#msg-243927


From nginx-forum at nginx.us Mon Oct 21 20:12:56 2013
From: nginx-forum at nginx.us (agriz)
Date: Mon, 21 Oct 2013 16:12:56 -0400
Subject: Do i need mod_security for nginx?
Message-ID: <58a3ef11192aaa72382c65f72d6db883.NginxMailingListEnglish@forum.nginx.org>

Today I found one particular IP address that was trying a lot of things on
my server.

Within a single second it was sending at least 50 requests.
It kept accessing my admin login page with POST and GET requests.
That IP tried "proxy GET http://...".
It tried to inject something into a script with a -d parameter.

I added "limit_req_zone $binary_remote_addr zone=app:10m rate=2r/s;" to the
http block, and:

    location / {
        limit_req zone=app burst=50;
    }

I believe this will block too many connections per second from an IP.
How do I secure the server against other attacks?

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243933,243933#msg-243933


From nginx-list at puzzled.xs4all.nl Mon Oct 21 21:17:23 2013
From: nginx-list at puzzled.xs4all.nl (Patrick Lists)
Date: Mon, 21 Oct 2013 23:17:23 +0200
Subject: Do i need mod_security for nginx?
In-Reply-To: <58a3ef11192aaa72382c65f72d6db883.NginxMailingListEnglish@forum.nginx.org>
References: <58a3ef11192aaa72382c65f72d6db883.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <526599E3.3050005@puzzled.xs4all.nl>

On 10/21/2013 10:12 PM, agriz wrote:
> Today i found one particular IP address which was trying a lot of things in
> my server.
>
> For a second, it was sending atleast 50 requests.
> It was keep on accessing my admin login page with post and get request
> That IP tried proxy GET http://...
> It tried to inject something in the script with -d parameter.
>
> i added "limit_req_zone $binary_remote_addr zone=app:10m rate=2r/s; " in
> http block and
> location / {
> limit_req zone=app burst=50;
> }
>
> I believe it will block too many connections per second from a ip.
> How do i secure the server from other attacks?

Have a look at fail2ban.

Regards,
Patrick


From nginx-forum at nginx.us Mon Oct 21 21:41:13 2013
From: nginx-forum at nginx.us (agriz)
Date: Mon, 21 Oct 2013 17:41:13 -0400
Subject: Do i need mod_security for nginx?
In-Reply-To: <526599E3.3050005@puzzled.xs4all.nl>
References: <526599E3.3050005@puzzled.xs4all.nl>
Message-ID: <c89b3367ce0244a5644431de947f9ec9.NginxMailingListEnglish@forum.nginx.org>

[nginx-auth]
enabled = true
filter = nginx-auth
action = iptables-multiport[name=NoAuthFailures, port="http,https"]
logpath = /var/log/nginx*/*error*.log
bantime = 600 # 10 minutes
maxretry = 6

[nginx-login]
enabled = true
filter = nginx-login
action = iptables-multiport[name=NoLoginFailures, port="http,https"]
logpath = /var/log/nginx*/*access*.log
bantime = 600 # 10 minutes
maxretry = 6

[nginx-badbots]
enabled = true
filter = apache-badbots
action = iptables-multiport[name=BadBots, port="http,https"]
logpath = /var/log/nginx*/*access*.log
bantime = 86400 # 1 day
maxretry = 1

[nginx-noscript]
enabled = true
action = iptables-multiport[name=NoScript, port="http,https"]
filter = nginx-noscript
logpath = /var/log/nginx*/*access*.log
maxretry = 6
bantime = 86400 # 1 day

[nginx-proxy]
enabled = true
action = iptables-multiport[name=NoProxy, port="http,https"]
filter = nginx-proxy
logpath = /var/log/nginx*/*access*.log
maxretry = 0
bantime = 86400 # 1 day


filters.d/nginx-proxy.conf
[Definition]
failregex = ^<HOST> -.*GET http.*
ignoreregex =


nginx-noscript.conf

[Definition]
failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
ignoreregex =

nginx-auth.conf

[Definition]

failregex = no user/password was provided for basic authentication.*client:
<HOST>
user .* was not found in.*client: <HOST>
user .* password mismatch.*client: <HOST>

ignoreregex =

nginx-login.conf

[Definition]
failregex = ^<HOST> -.*POST /sessions HTTP/1\.." 200
ignoreregex =


I'm using nginx with php-fpm.
I looked at the fail2ban Apache config files and created the ones above
with the help of internet searches.

I still have a doubt about:

failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)

Do I really need to have .php in this regex?
I haven't restarted the fail2ban service yet.

Am I good to restart it?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243933,243936#msg-243936


From nginx-forum at nginx.us Tue Oct 22 00:27:01 2013
From: nginx-forum at nginx.us (mzabani)
Date: Mon, 21 Oct 2013 20:27:01 -0400
Subject: Running code in nginx's master process
Message-ID: <ea032561a015d275252fa77955fa2084.NginxMailingListEnglish@forum.nginx.org>

Is it possible to have a module run code in nginx's master process, i.e. not
in a worker process?
I know init_master is just a stub for now. Are there any chances this gets
implemented in the near future?

Thanks in advance,
Marcelo.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243942,243942#msg-243942


From nginx-forum at nginx.us Tue Oct 22 02:35:06 2013
From: nginx-forum at nginx.us (andrewc)
Date: Mon, 21 Oct 2013 22:35:06 -0400
Subject: Modules behaving differently on 32-bit and 64-bit systems?
In-Reply-To: <20131021111420.GC7074@mdounin.ru>
References: <20131021111420.GC7074@mdounin.ru>
Message-ID: <29b9f5d37018c88a01d36b3edcb9ef8f.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------

> Quick looks suggests that the problem in xtoken module is likely
> here:
>
> https://code.google.com/p/nginx-xtoken-module/source/browse/trunk/ngx_
> http_xtoken_module.c#660
>
> It tries to estimate size of shared memory zone needed to keep
> it's data, but the estimate likely fails on 64-bit platforms due
> to internal structures of slab allocator being bigger on these
> platforms.

Thanks for the tip, Maxim. On line 13 of ngx_http_xtoken_module.c I
replaced:

#define XTOKEN_SHM_SIZE (sizeof(ngx_http_xtoken_shctx_t))

with:

#define XTOKEN_SHM_SIZE 65536

and rebuilt. It seems to work fine now.

65536 is purely arbitrary, and obviously not the most efficient use of
memory. I'll do some reading and see if I can rework that memory allocation
line.

Thanks,

Andrew

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243883,243943#msg-243943


From cubicdaiya at gmail.com Tue Oct 22 03:07:05 2013
From: cubicdaiya at gmail.com (cubicdaiya)
Date: Tue, 22 Oct 2013 12:07:05 +0900
Subject: correct notation of nginx
Message-ID: <CABWmZaM-8WmoH6ppkjzJN6kGN25KiVGyf3MaX-+mYhk5BUOUiA@mail.gmail.com>

Hello!

2013/10/22 Maxim Dounin <mdounin at mdounin.ru>

> Hello!
>
> On Tue, Oct 22, 2013 at 01:31:17AM +0900, cubicdaiya wrote:
>
> > Hello.
> >
> > Though I'm sometimes stumped about how to describing the notation of
> nginx
> > in text,
> > is there an official notation of nginx?
> >
> > In detail, I'm sometimes stumped about whether selecting from the
> > followings.
> >
> > * nginx
> > * Nginx
> > * NGINX
>
> Preferred variant is "nginx". Sometimes "NGINX" is used, too. Use
> of "Nginx" is discouraged as Igor thinks it looks ugly.
>
> But, actually, most of us don't really care.
>



Thanks. I'll use "nginx" from now on.


--
Tatsuhiko Kubo

E-Mail : cubicdaiya at gmail.com
HP : http://cccis.jp/index_en.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131022/ae9085d7/attachment-0001.html>

From ru at nginx.com Tue Oct 22 07:35:59 2013
From: ru at nginx.com (Ruslan Ermilov)
Date: Tue, 22 Oct 2013 11:35:59 +0400
Subject: nginx and GeoLite2
In-Reply-To: <20131021175341.GJ7074@mdounin.ru>
References: <1382348034.316319762.brau78gn@frv34.ukr.net>
<20131021131251.GF7074@mdounin.ru>
<1382374057.182717639.998c7c0x@frv34.ukr.net>
<20131021175341.GJ7074@mdounin.ru>
Message-ID: <20131022073559.GF89843@lo0.su>

On Mon, Oct 21, 2013 at 09:53:42PM +0400, Maxim Dounin wrote:
> On Mon, Oct 21, 2013 at 07:51:23PM +0300, wishmaster wrote:
>
> >
> > Hi,
> >
> > --- Original message ---
> > From: "Maxim Dounin" <mdounin at mdounin.ru>
> > Date: 21 October 2013, 16:13:01
> >
> >
> > > Hello!
> > >
> > > On Mon, Oct 21, 2013 at 12:38:30PM +0300, wishmaster wrote:
> > >
> > > > Hi
> > > > I am planning to use GeoLite with nginx. On the MaxMind website there is an announcement:
> > > >
> > > > Announcement
> > > > Free access to the latest in IP geolocation databases is now available in our GeoLite2 Databases
> > > >
> > > > I've used this db but nginx returned the error. Is it possible to use GeoLite2?
> > >
> > > GeoLite2 databases use different format and different libraries
> > > are needed to access them. They are new and not supported by
> > > nginx.
> > >
> > > Use GeoLite databases instead (without "2"). Or use a CSV
> > > version, a perl script to convert CSV files from MaxMind to nginx
> > > ngx_http_geo_module configuration is in contrib.
> > >
> > Unfortunately, v2 of this db is not shipped in CSV format. Binary format only.
> > See http://dev.maxmind.com/geoip/geoip2/geolite2/
>
> As far as I can tell, the only difference between GeoLite and
> GeoLite2 is format.
>
> At least the http://www.maxmind.com/en/opensource page makes me
> think so, as a link to the GeoLite2 page annotated as "(New
> format)".

v2 databases require access via v2 API:
https://github.com/maxmind/libmaxminddb/blob/master/doc/libmaxminddb.md

http://dev.maxmind.com/geoip/geoip2/whats-new-in-geoip2/
http://dev.maxmind.com/geoip/geoip2/geolite2/

> > Max, are you planning to add support of GeoLite2 in nginx in nearest future?
>
> No.

I'd expand on this: not until the new C API is released.
(Currently, it's declared beta and subject to change.)


From list_nginx at bluerosetech.com Tue Oct 22 07:37:00 2013
From: list_nginx at bluerosetech.com (Darren Pilgrim)
Date: Tue, 22 Oct 2013 00:37:00 -0700
Subject: Any rough ETA on SPDY/3 & push?
In-Reply-To: <52631EDC.1000602@bluerosetech.com>
References: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org>
<2ECA7C2F-AFC0-402A-A6EF-46B79B6A3C9D@nginx.com>
<52631EDC.1000602@bluerosetech.com>
Message-ID: <52662B1C.7010402@bluerosetech.com>

On 10/19/2013 5:07 PM, Darren Pilgrim wrote:
> On 10/14/2013 9:37 AM, Andrew Alexeev wrote:
>> http://barry.wordpress.com/2012/06/16/nginx-spdy-and-automattic/
>
> How much capital would you need to do this? I'd contribute to a
> crowd-funding campaign for this and I can likely get work to match or
> beat what I put in.

I was quite serious about this offer. Is it some kind of faux pas
throwing money at an open source project?


From andrew at nginx.com Tue Oct 22 08:27:41 2013
From: andrew at nginx.com (Andrew Alexeev)
Date: Tue, 22 Oct 2013 12:27:41 +0400
Subject: Any rough ETA on SPDY/3 & push?
In-Reply-To: <52662B1C.7010402@bluerosetech.com>
References: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org>
<2ECA7C2F-AFC0-402A-A6EF-46B79B6A3C9D@nginx.com>
<52631EDC.1000602@bluerosetech.com> <52662B1C.7010402@bluerosetech.com>
Message-ID: <406AFE54-3305-4D14-9012-D336AB0258EC@nginx.com>

On Oct 22, 2013, at 11:37 AM, Darren Pilgrim <list_nginx at bluerosetech.com> wrote:

> On 10/19/2013 5:07 PM, Darren Pilgrim wrote:
>> On 10/14/2013 9:37 AM, Andrew Alexeev wrote:
>>> http://barry.wordpress.com/2012/06/16/nginx-spdy-and-automattic/
>>
>> How much capital would you need to do this? I'd contribute to a
>> crowd-funding campaign for this and I can likely get work to match or
>> beat what I put in.
>
> I was quite serious about this offer. Is it some kind of faux pas throwing money at an open source project?

It isn't. Let's move if off the list, though, as it's a topic which is separate from
technical discussions :)



From valjohn1647 at gmail.com Tue Oct 22 09:09:19 2013
From: valjohn1647 at gmail.com (val john)
Date: Tue, 22 Oct 2013 14:39:19 +0530
Subject: Ngginx reverse proxy issue
Message-ID: <CALUd1bMmKz69wMU+Z8QwyPs4Or0ywiMRDY-KEA7xcQfepYfM_A@mail.gmail.com>

Hi

we are using nginx to proxy requests to a back-end web application, but
sometimes, when users access the webapp via the proxy, they see some
unusual behavior: buttons don't respond to clicks, and drop-down lists
don't show their data.

This is my nginx reverse proxy configuration. Is there any modification I
need to make to avoid such issues? Please advise.

    location /webapp {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.1.16:8090/webapp;
        proxy_redirect off;
    }

Thank You
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131022/36d1658d/attachment.html>

From nginx-forum at nginx.us Tue Oct 22 10:03:40 2013
From: nginx-forum at nginx.us (cyrus_the_great)
Date: Tue, 22 Oct 2013 06:03:40 -0400
Subject: phpbb problems
Message-ID: <b6c301816c260ba69bebcd66aac7fa1f.NginxMailingListEnglish@forum.nginx.org>

I am having some problems with phpbb loading the style sheets for the admin
area, and the URLs also look strange, which could be related. I am using an
nginx server config I pulled off the web which is supposed to be equivalent
to the Apache httpd rewrite rules, but it doesn't seem to be enough.

server {
    server_name forumobfuscated.onion;
    root /var/www/sites/obfuscated_forum;
    access_log /home/obfuse/access_log;
    error_log /home/obfuse/error_log;
    index index.php index.html index.htm;

    location ~ /(config\.php|common\.php|cache|files|images/avatars/upload|includes|store) {
        deny all;
        return 403;
    }

    location ~* \.(gif|jpeg|jpg|png|css)$ {
        expires 30d;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9002;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/sites/obfuscated_forum$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_read_timeout 360;

        include fastcgi_params;
    }
}

That is what I am using; the URLs appear strange.

This is what a link to the admin area looks like:
http://obfuscatedforum.onion/adm/index.php/index.php?sid=e74213251b1867be1ca25ba667c4cb6b

It does load and I can use the pages, but the CSS stylesheet isn't coming
through, nor are the images. It would be good if phpbb were mentioned in the
nginx wiki like drupal and other software.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243973,243973#msg-243973


From mdounin at mdounin.ru Tue Oct 22 12:02:19 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 22 Oct 2013 16:02:19 +0400
Subject: Running code in nginx's master process
In-Reply-To: <ea032561a015d275252fa77955fa2084.NginxMailingListEnglish@forum.nginx.org>
References: <ea032561a015d275252fa77955fa2084.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131022120219.GT7074@mdounin.ru>

Hello!

On Mon, Oct 21, 2013 at 08:27:01PM -0400, mzabani wrote:

> Is it possible to have a module run code in nginx's master process, i.e. not
> in a worker process?
> I know init_master is just a stub for now. Are there any chances this gets
> implemented in the near future?

I wouldn't recommend running a code in master process, but if you
have to for some reason, you may try using init_module callback.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Tue Oct 22 12:04:20 2013
From: nginx-forum at nginx.us (gaspy)
Date: Tue, 22 Oct 2013 08:04:20 -0400
Subject: Generating/Updating .gz files for gzip_static
Message-ID: <7da1e7d71fad5215985cfcd71c6c2db1.NginxMailingListEnglish@forum.nginx.org>

I'm new to nginx. I love the gzip_static option and I've been thinking about
the best way to generate, update and delete these files.
I wrote an article here:
http://www.richnetapps.com/generation-of-gzip-files-for-nginx/

My method uses inotifywait.
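For the archive, here is a complementary one-shot sketch of the same idea
(the function name and file patterns are illustrative, not from the linked
article); it can be paired with inotifywait or cron:

```shell
# Pre-compress text assets so nginx's gzip_static can serve "foo.css.gz"
# alongside "foo.css". One-shot: safe to re-run; it only rebuilds stale files.
precompress_dir() {
    find "$1" -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \) |
    while read -r f; do
        # Rebuild the .gz only when it is missing or older than the source.
        if [ ! -e "$f.gz" ] || [ "$f" -nt "$f.gz" ]; then
            gzip -9 -c "$f" > "$f.gz"
        fi
    done
}
```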

I hope it's useful and that I didn't make any glaring errors.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243987,243987#msg-243987


From nginx-forum at nginx.us Tue Oct 22 13:41:23 2013
From: nginx-forum at nginx.us (jfountain)
Date: Tue, 22 Oct 2013 09:41:23 -0400
Subject: Rewrite Rule Assistance
Message-ID: <6419534c6e0752bd42caea5d70f51a27.NginxMailingListEnglish@forum.nginx.org>

I am trying to create a rewrite rule that will append .jpg to a dynamically
created URL while the browser still shows the original URL.

E.g.:

http://server/user/info/image is served as http://server/user/info/image.jpg,
but the browser still shows http://server/user/info/image.

Thanks for any tips!
-J

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243993,243993#msg-243993


From contact at jpluscplusm.com Tue Oct 22 15:12:15 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Tue, 22 Oct 2013 16:12:15 +0100
Subject: Rewrite Rule Assistance
In-Reply-To: <6419534c6e0752bd42caea5d70f51a27.NginxMailingListEnglish@forum.nginx.org>
References: <6419534c6e0752bd42caea5d70f51a27.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAKsTx7AYPf2QnNP-27cTNvcwJOKtDvLQ3FOWL25u7hQArH87MQ@mail.gmail.com>

On 22 October 2013 14:41, jfountain <nginx-forum at nginx.us> wrote:
> I am trying to create a rewrite rule that will append a JPG to dynamically
> created URL but have the browser still go to the original URL.

Why don't you show us where you've managed to get to, what you
expected to happen, and what doesn't work as you expected?
Then we can help you learn, without just doing your job for you ;-)

Jonathan
--
Jonathan Matthews
Oxford, London, UK
http://www.jpluscplusm.com/contact.html


From artemrts at ukr.net Tue Oct 22 15:34:26 2013
From: artemrts at ukr.net (wishmaster)
Date: Tue, 22 Oct 2013 18:34:26 +0300
Subject: Generating/Updating .gz files for gzip_static
In-Reply-To: <7da1e7d71fad5215985cfcd71c6c2db1.NginxMailingListEnglish@forum.nginx.org>
References: <7da1e7d71fad5215985cfcd71c6c2db1.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1382455855.301006086.3knmmyyx@frv34.ukr.net>



--- Original message ---
From: "gaspy" <nginx-forum at nginx.us>
Date: 22 October 2013, 15:12:54


> I'm new to nginx. I love the gzip_static option and I;ve been thinking about
> the best way to generate, update and delete these files.
> I wrote an article here:
> http://www.richnetapps.com/generation-of-gzip-files-for-nginx/
>
> My method uses inotifywait.
>
> I hope it's useful and that I didn't make any glaring errors.

Nice implementation. But it is for Linux. What about FreeBSD? Maybe you know of a solution? (I think it would be kqueue-based.)



From nginx-forum at nginx.us Tue Oct 22 19:05:56 2013
From: nginx-forum at nginx.us (jfountain)
Date: Tue, 22 Oct 2013 15:05:56 -0400
Subject: Rewrite Rule Assistance
In-Reply-To: <6419534c6e0752bd42caea5d70f51a27.NginxMailingListEnglish@forum.nginx.org>
References: <6419534c6e0752bd42caea5d70f51a27.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <183f172c19d0d822b0d24bb0b8006c41.NginxMailingListEnglish@forum.nginx.org>

Disregard, I figured it out. Thanks.
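For the archive, one common way to do this kind of internal extension
mapping is sketched below (paths illustrative; this may not match the
poster's actual solution):

```nginx
# Serve /user/info/image from the on-disk file image.jpg without a
# redirect, so the browser keeps the extension-less URL.
location /user/info/ {
    try_files $uri $uri.jpg =404;
}
```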

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243993,244009#msg-244009


From francis at daoine.org Tue Oct 22 20:23:49 2013
From: francis at daoine.org (Francis Daly)
Date: Tue, 22 Oct 2013 21:23:49 +0100
Subject: phpbb problems
In-Reply-To: <b6c301816c260ba69bebcd66aac7fa1f.NginxMailingListEnglish@forum.nginx.org>
References: <b6c301816c260ba69bebcd66aac7fa1f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131022202349.GI2204@craic.sysops.org>

On Tue, Oct 22, 2013 at 06:03:40AM -0400, cyrus_the_great wrote:

Hi there,

> I am having some problems with phpbb loading the style sheets for the admin
> area, and also the URLs look strange which could be related.

Can you identify the exact urls of the style sheets that do not respond
the way you want?

Can you tell where the strange-looking URLs come from -- nginx config
or phpbb config or something else?

> root /var/www/sites/obfuscated_forum;

> location ~
> /(config\.php|common\.php|cache|files|images/avatars/upload|includes|store)
> {
> deny all;
> return 403;
> }

So, if the url includes any of those strings, it is denied.

> location ~* \.(gif|jpeg|jpg|png|css)$ {
> expires 30d;
> }

If the url ends in one of those strings, it should be served directly
from /var/www/sites/obfuscated_forum.

Does that include your style sheet url?

Is the matching file present?

> location ~ \.php$ {
> fastcgi_pass 127.0.0.1:9002;

> fastcgi_param SCRIPT_FILENAME
> /var/www/sites/obfuscated_forum$fastcgi_script_name;

> }

If the url ends in .php, it gets sent to the fastcgi server.

> This is what a link to the admin area looks like:
> http://obfuscatedforum.onion/adm/index.php/index.php?sid=e74213251b1867be1ca25ba667c4cb6b

The repeated /index.php looks odd. Did you type that? Or did something link to it?

> It does load and I can use the pages but the CSS stylesheet isn't coming
> through, nor are the images.

What do the nginx log files say about the CSS stylesheet and images requests?

> It would be good if phpbb was mentioned in the
> nginx wiki like drupal and other sites.

http://wiki.nginx.org/Configuration links to a suggested config file for phpBB3.

It is not identical to what you have here.

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Wed Oct 23 03:42:35 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Tue, 22 Oct 2013 23:42:35 -0400
Subject: limit_req_zone: How to apply only to some requests containing some
string in the URL?
Message-ID: <f4e573dfc9ce01bcfb8531f09840ef3c.NginxMailingListEnglish@forum.nginx.org>

Hi,

I'm using the limit_req_zone module. I would like it to act only on some
requests that have a certain string in one variable of the URL's query
string. For example, let's say that I'm providing an IP geolocation service,
and that in the URL there is a query string like this:

http://api.acme.com/ipgeolocation/locate?key=NANDSBFHGWHWN2X&ip=146.105.11.59

I would like the rule to detect when the "key" parameter ends with "2X", and
in such a case to apply the limitation.
What I really need is to give nginx a secret message. The "key" parameter
would end in "01X", "02X", "03X" (etc.). This would tell nginx the
queries-per-minute limit, and nginx would apply a different rate to each
request, depending on the "message".

Is there a way to do that?

Thanks in advance!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244015#msg-244015


From valjohn1647 at gmail.com Wed Oct 23 04:33:08 2013
From: valjohn1647 at gmail.com (val john)
Date: Wed, 23 Oct 2013 10:03:08 +0530
Subject: Ngginx reverse proxy issue
In-Reply-To: <CALUd1bMmKz69wMU+Z8QwyPs4Or0ywiMRDY-KEA7xcQfepYfM_A@mail.gmail.com>
References: <CALUd1bMmKz69wMU+Z8QwyPs4Or0ywiMRDY-KEA7xcQfepYfM_A@mail.gmail.com>
Message-ID: <CALUd1bPYA3Fsy2uLCthOftuDm6JkBOyTBz7WF0VHUkjqF-5HSQ@mail.gmail.com>

Hi..

Does anyone have an idea what may be causing this issue?

Thank You
John


On 22 October 2013 14:39, val john <valjohn1647 at gmail.com> wrote:

> Hi
>
> we are using nginx as proxy the request to back-end t web application ,
> but some times when users accessing webapp via proxy , they faced some
> unusual behaviors like buttons are not clicking , Dropdown lists are not
> listing data .
>
> this is my nginx reverse proxy configuration , is there any modification
> that i need to do to avoid such a issues , please advice
>
> location /webapp {
> proxy_set_header X-Forwarded-Host $host;
> proxy_set_header X-Forwarded-Server $host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_pass http://192.168.1.16:8090/webapp;
> proxy_redirect off;
> }
>
> Thank You
> John
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131023/ea5f1f7b/attachment.html>

From francis at daoine.org Wed Oct 23 08:09:10 2013
From: francis at daoine.org (Francis Daly)
Date: Wed, 23 Oct 2013 09:09:10 +0100
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <f4e573dfc9ce01bcfb8531f09840ef3c.NginxMailingListEnglish@forum.nginx.org>
References: <f4e573dfc9ce01bcfb8531f09840ef3c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131023080910.GK2204@craic.sysops.org>

On Tue, Oct 22, 2013 at 11:42:35PM -0400, Brian08275660 wrote:

Hi there,

> I'm using the limit_req_zone module. I would like it to act only on some
> requests that have a certain string in one variable in the query string of
> the URL.

> http://api.acme.com/ipgeolocation/locate?key=NANDSBFHGWHWN2X&ip=146.105.11.59
>
> I would like the rule to detect when the "key" parameter ends with "2X", and
> in such case to apply the limitation.

http://nginx.org/r/limit_req_zone

See $variable.

Use, for example, "map" to make your variable have a value when you want
the limit to apply, and be empty when you don't.
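
A minimal sketch of that idea (the zone name, rate, and location are
assumptions, not from the original question):

```nginx
# Requests whose "key" query argument ends in "2X" get a per-client limit;
# all others map to an empty key, and limit_req_zone skips empty keys.
map $arg_key $limit_key {
    default  "";
    ~2X$     $binary_remote_addr;
}

limit_req_zone $limit_key zone=keylimit:10m rate=10r/m;

server {
    listen 80;

    location /ipgeolocation/ {
        limit_req zone=keylimit burst=5 nodelay;
    }
}
```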

> What I really need is to give NGINX a secret message. The "key" parameter
> would end in "01X", "02X", "03X" (etc). This would indicate Nginx the
> limitation of queries per minute, and Nginx would apply a different rate for
> each request, depending on the "message".

For that, I think you'd need a different limit_req zone for each rate.

After you've got the first part working, it shouldn't be too hard to
set up a test system to see if you can get this part to work too.

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Wed Oct 23 09:46:35 2013
From: nginx-forum at nginx.us (PieterVI)
Date: Wed, 23 Oct 2013 05:46:35 -0400
Subject: Nginx lua capture_multi fire and forget
Message-ID: <00663ce8f58a9c2387a5d86d341d4a9f.NginxMailingListEnglish@forum.nginx.org>

Hi all,

We're currently using the Lua capture_multi call to send production
requests to test systems, and sometimes we also 'fork' these requests to
multiple test systems at once.

But if one of the test systems is slow to respond, the Lua code waits until
it gets all the responses, which is something we would rather avoid.

Is there a way to have the Lua module launch the requests without waiting
for the responses? It should fire the requests, continue directly
afterwards, and forget about the response handling.
I could try to modify the Lua module, but I'm not really familiar with
C/C++ coding.

Here's a config sample:
http://pastebin.com/U5qgXCFh


Thanks in advance,
Pieter

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244032,244032#msg-244032


From nginx-forum at nginx.us Wed Oct 23 12:56:33 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Wed, 23 Oct 2013 08:56:33 -0400
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <20131023080910.GK2204@craic.sysops.org>
References: <20131023080910.GK2204@craic.sysops.org>
Message-ID: <266829ba72a6c93fd23abcc01af0acef.NginxMailingListEnglish@forum.nginx.org>

Could you please give me an example? A few lines of code would be great!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244035#msg-244035


From nginx-forum at nginx.us Wed Oct 23 14:13:36 2013
From: nginx-forum at nginx.us (timofrenzel)
Date: Wed, 23 Oct 2013 10:13:36 -0400
Subject: NGINX and php-fpm -> HTML runs, php gives 502
Message-ID: <765cc1e3ea2d125a9a227137872a4139.NginxMailingListEnglish@forum.nginx.org>

hi,
apparently I have some problems here that I cannot solve myself.
Unfortunately, past attempts using information from other posts have
brought nothing, so I really hope someone here can help ^^

I have nginx and php-fpm installed.

When I request index.html, everything works.
When I request index.php, I only get back a 502.

For various reasons I won't go into here, the www directory is not in
/var/www but in /www, with a folder per host, like so:

/www/devubuntu.loc
/www/devubuntu.loc/index.html
/www/devubuntu.loc/index_test.php


My default conf looks like this:

[code]
server {
    listen 80;
    server_name devubuntu.loc;

    access_log /www/log/access/devubuntu.loc.access.log;
    error_log /www/log/error/devubuntu.loc.error.log;

    location / {
        root /www/devubuntu.loc;
        index index.html index.htm index.php;
    }

    location ~ \.php$ {
        root /www/devubuntu.loc;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
[/code]


Does anyone here have an idea ?
Timo

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244039,244039#msg-244039


From paulnpace at gmail.com Wed Oct 23 14:30:19 2013
From: paulnpace at gmail.com (Paul N. Pace)
Date: Wed, 23 Oct 2013 07:30:19 -0700
Subject: Passing / denying PHP requests
Message-ID: <CAHUM0kDXTXkr=qtdUT52Sb7HsoqMTDNPUw=2XadvDLypmaBDig@mail.gmail.com>

Hello-

I am trying to allow only the PHP files required for a given PHP
package to function correctly, then deny access to all other PHP files
to prevent people from snooping on the site's configuration. I have
created the location block, but I'm not so good with regular
expressions and the block is assembled mostly through copy & paste.

location /installdirectory/ {
    # from nginx pitfalls page
    location ~* (installdirectory/file_a|installdirectory/file_b|installdirectory/file_c)\.php$ {
        include global-configs/php.conf;
    }
    location ~* installdirectory/.*\.php$ {
        deny all;
    }
}

If someone can let me know if I am at least on the right track, I
would appreciate it.

Thanks!

Paul


From francis at daoine.org Wed Oct 23 14:38:29 2013
From: francis at daoine.org (Francis Daly)
Date: Wed, 23 Oct 2013 15:38:29 +0100
Subject: NGINX and php-fpm -> HTML runs, php gives 502
In-Reply-To: <765cc1e3ea2d125a9a227137872a4139.NginxMailingListEnglish@forum.nginx.org>
References: <765cc1e3ea2d125a9a227137872a4139.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131023143829.GM2204@craic.sysops.org>

On Wed, Oct 23, 2013 at 10:13:36AM -0400, timofrenzel wrote:

Hi there,

> When I request index.html, everything works.
> When I request index.php, I only get back a 502.

When you do

curl -i http://devubuntu.loc/index.php

what response do you get, and what do the nginx log files say happened?

Copy-paste is best.

> fastcgi_pass unix:/var/run/php5-fpm.sock;

Is there a fastcgi server listening on that socket? What do its log
files say happened?

f
--
Francis Daly francis at daoine.org


From francis at daoine.org Wed Oct 23 16:42:25 2013
From: francis at daoine.org (Francis Daly)
Date: Wed, 23 Oct 2013 17:42:25 +0100
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <266829ba72a6c93fd23abcc01af0acef.NginxMailingListEnglish@forum.nginx.org>
References: <20131023080910.GK2204@craic.sysops.org>
<266829ba72a6c93fd23abcc01af0acef.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131023164225.GN2204@craic.sysops.org>

On Wed, Oct 23, 2013 at 08:56:33AM -0400, Brian08275660 wrote:

Hi there,

> Could you please give me an example? A few lines of code would be great!

Completely untested, but something like:

map $arg_key $key_ends_in_02X {
default "";
~02X$ "A";
}

(where "A" might be, for example, $binary_remote_addr, if you want to
limit the requests per client IP, rather than enforce the limit across
all clients where "key" ends "02X")

and then

limit_req_zone $key_ends_in_02X zone=two:10m rate=2r/s;

and

limit_req zone=two;

should probably do what you asked for.
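
Put together, the whole thing would look something like this (untested; the zone name, rate, and location are only examples, and note that map{} and limit_req_zone must sit at http{} level):

```nginx
http {
    # requests whose "key" argument does not end in "02X" get an
    # empty key, and empty keys are never rate-limited
    map $arg_key $key_ends_in_02X {
        default "";
        ~02X$   $binary_remote_addr;
    }

    limit_req_zone $key_ends_in_02X zone=two:10m rate=2r/s;

    server {
        listen 80;

        location / {
            limit_req zone=two;
            # ... the rest of your normal config ...
        }
    }
}
```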

If anything is unclear or broken, and the documentation doesn't make
it clear how it should be, then feel free to respond pointing out which
specific parts need fixing.

Cheers,

f
--
Francis Daly francis at daoine.org


From francis at daoine.org Wed Oct 23 16:49:20 2013
From: francis at daoine.org (Francis Daly)
Date: Wed, 23 Oct 2013 17:49:20 +0100
Subject: Passing / denying PHP requests
In-Reply-To: <CAHUM0kDXTXkr=qtdUT52Sb7HsoqMTDNPUw=2XadvDLypmaBDig@mail.gmail.com>
References: <CAHUM0kDXTXkr=qtdUT52Sb7HsoqMTDNPUw=2XadvDLypmaBDig@mail.gmail.com>
Message-ID: <20131023164920.GO2204@craic.sysops.org>

On Wed, Oct 23, 2013 at 07:30:19AM -0700, Paul N. Pace wrote:

Hi there,

> created the location block, but I'm not so good with regular
> expressions and the block is assembled mostly through copy & paste.

If you don't like regex, don't use regex.

location = /installdirectory/file_a.php {
    include global-configs/php.conf;
}
location = /installdirectory/file_b.php {
    include global-configs/php.conf;
}
location = /installdirectory/file_c.php {
    include global-configs/php.conf;
}

You probably want another location{} to "deny", and that might be
"location ~ php$ {}", or it might be that nested inside

location ^~ /installdirectory/ {}

depending on what else you want in the server config.
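
As a sketch of the nested variant (untested; the filenames are of course yours to adjust):

```nginx
location ^~ /installdirectory/ {
    # exact matches are checked first and terminate the search
    location = /installdirectory/file_a.php {
        include global-configs/php.conf;
    }
    location = /installdirectory/file_b.php {
        include global-configs/php.conf;
    }
    # any other .php file under /installdirectory/ is refused
    location ~ \.php$ {
        deny all;
    }
}
```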

http://nginx.org/r/location for how the one location{} is chosen to
handle a request.

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Wed Oct 23 17:38:28 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Wed, 23 Oct 2013 13:38:28 -0400
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <20131023164225.GN2204@craic.sysops.org>
References: <20131023164225.GN2204@craic.sysops.org>
Message-ID: <1cedf6f362cbeb9925f48be3364ec6f8.NginxMailingListEnglish@forum.nginx.org>

Thanks a lot Francis! Now I just have to learn how to use custom
variables and the "map" directive. I have never used them before; I never
needed them.
By the way, it will be easier than I thought. I decided to ask my users
to add an extra parameter, something like "&capacity=3X", instead of "hiding"
some characters in the key that would indicate what to do. So at least I
won't have to make a lot of effort with the regular expressions; the
parameter will be very clear. My logic will check its value and apply the
respective limit.

It's interesting how unusual Nginx's configuration is. No XML tags, but
weird directives that are not really straightforward.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244049#msg-244049


From nginx-forum at nginx.us Wed Oct 23 18:15:48 2013
From: nginx-forum at nginx.us (hemant_psu)
Date: Wed, 23 Oct 2013 14:15:48 -0400
Subject: Whats is the correct message sequence for nginx server?
Message-ID: <3ad4f5e46b9bbadeac37deb460fcfc22.NginxMailingListEnglish@forum.nginx.org>

I am running an nginx-1.5 server and sending it a simple websocket
request from the Chrome browser. My nginx server is configured with a hello
module which is supposed to send "Hello world" in response.

As per the RFC for websockets, a server is supposed to send a "switching
protocols" message as the ACK for a connection upgrade request message.
In my wireshark capture, I do see the connection upgrade request message,
but in response I see an HTTP/1.1 200 OK message with the "Hello World"
text in the data portion of the HTTP payload.

I am confused by this response, thinking my nginx server should have
first sent the connection upgrade acknowledgement before responding
with text data. Can anyone please suggest the right behaviour of an nginx
server when it gets a Connection upgrade request message.


My nginx config is:

server {
    listen 80;

    #access_log /var/log/nginx/access.log;
    #error_log /var/log/nginx/error.log;

    server_name localhost;

    # prevents 502 bad gateway error
    large_client_header_buffers 8 64k;

    location /hello {
        hello;
        #root html;
        #index index.html index.htm;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        # prevents 502 bad gateway error
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 256k;
        proxy_read_timeout 300;

        #proxy_pass http://backend;
        proxy_redirect off;

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}




Also, when I enable #proxy_pass http://backend, the server stops responding
and gives an HTTP/1.1 400 Bad Request saying
the cookie size is too large. I am unable to understand why enabling this
line leads to this error, when the request is the same in both cases.


Any help will be appreciated!



Thanks
Hemant

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244050,244050#msg-244050


From paulnpace at gmail.com Wed Oct 23 18:32:33 2013
From: paulnpace at gmail.com (Paul N. Pace)
Date: Wed, 23 Oct 2013 11:32:33 -0700
Subject: Passing / denying PHP requests
In-Reply-To: <20131023164920.GO2204@craic.sysops.org>
References: <CAHUM0kDXTXkr=qtdUT52Sb7HsoqMTDNPUw=2XadvDLypmaBDig@mail.gmail.com>
<20131023164920.GO2204@craic.sysops.org>
Message-ID: <CAHUM0kBcbs3MnG6N4i3sUR42x=_LYV7+cM_UsLcQmFHYpk0hfQ@mail.gmail.com>

Thank you, Francis.

On Wed, Oct 23, 2013 at 9:49 AM, Francis Daly <francis at daoine.org> wrote:
> If you don't like regex, don't use regex.
>
> You probably want another location{} to "deny", and that might be
> "location ~ php$ {}", or it might be that nested inside
>
> location ^~ /installdirectory/ {}
>
> depending on what else you want in the server config.

"location ~ php$ { deny all; }" does not deny access to any php files,
even when nested in "location ^~ /installdirectory/ {}". The previous
configuration "location ~* installdirectory/.*\.php$ { deny all; }"
did block access to all php files. The ".*\." - is that why one works
and the other doesn't?

> http://nginx.org/r/location for how the one location{} is chosen to
> handle a request.

I read through the nginx.org explanation of the location directive,
but it isn't helping me understand how to build the deny
statement.


From francis at daoine.org Wed Oct 23 18:41:26 2013
From: francis at daoine.org (Francis Daly)
Date: Wed, 23 Oct 2013 19:41:26 +0100
Subject: Passing / denying PHP requests
In-Reply-To: <CAHUM0kBcbs3MnG6N4i3sUR42x=_LYV7+cM_UsLcQmFHYpk0hfQ@mail.gmail.com>
References: <CAHUM0kDXTXkr=qtdUT52Sb7HsoqMTDNPUw=2XadvDLypmaBDig@mail.gmail.com>
<20131023164920.GO2204@craic.sysops.org>
<CAHUM0kBcbs3MnG6N4i3sUR42x=_LYV7+cM_UsLcQmFHYpk0hfQ@mail.gmail.com>
Message-ID: <20131023184126.GP2204@craic.sysops.org>

On Wed, Oct 23, 2013 at 11:32:33AM -0700, Paul N. Pace wrote:
> On Wed, Oct 23, 2013 at 9:49 AM, Francis Daly <francis at daoine.org> wrote:

Hi there,

> "location ~ php$ { deny all; }" does not deny access to any php files,
> even when nested in "location ^~ /installdirectory/ {}". The previous
> configuration "location ~* installdirectory/.*\.php$ { deny all; }"
> did block access to all php files. The ".*\." - is that why one works
> and the other doesn't?

I suspect not.

What "location" lines do you have in the appropriate server{} block in
your config file?

What one request do you make?

From that, which one location{} block is used to handle this one request?

> > http://nginx.org/r/location for how the one location{} is chosen to
> > handle a request.
>
> I read through the nginx.org explanation of the location directive,
> but it isn't helping me with understanding how to build the deny
> statement.

Do whatever it takes to have these requests handled in a known location{}
block.

Put the config you want inside that block.

If you enable the debug log, you will see lots of output, but it will tell
you exactly which block is used, if it isn't clear from the "location"
documentation.
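
For reference, turning the debug log on is a one-line change, assuming your nginx binary was built with --with-debug:

```nginx
# top-level (main) context of nginx.conf
error_log /var/log/nginx/error.log debug;
```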

Cheers,

f
--
Francis Daly francis at daoine.org


From mdounin at mdounin.ru Wed Oct 23 21:28:16 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 24 Oct 2013 01:28:16 +0400
Subject: Whats is the correct message sequence for nginx server?
In-Reply-To: <3ad4f5e46b9bbadeac37deb460fcfc22.NginxMailingListEnglish@forum.nginx.org>
References: <3ad4f5e46b9bbadeac37deb460fcfc22.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131023212816.GI7074@mdounin.ru>

Hello!

On Wed, Oct 23, 2013 at 02:15:48PM -0400, hemant_psu wrote:

> I am running an nginx-1.5 server and sending it a simple websocket
> request from the Chrome browser. My nginx server is configured with a hello
> module which is supposed to send "Hello world" in response.
>
> As per the RFC for websockets, a server is supposed to send a "switching
> protocols" message as the ACK for a connection upgrade request message.
> In my wireshark capture, I do see the connection upgrade request message,
> but in response I see an HTTP/1.1 200 OK message with the "Hello World"
> text in the data portion of the HTTP payload.
>
> I am confused by this response, thinking my nginx server should have
> first sent the connection upgrade acknowledgement before responding
> with text data. Can anyone please suggest the right behaviour of an nginx
> server when it gets a Connection upgrade request message.

You can't make an arbitrary nginx module talk the WebSocket
protocol; this isn't going to work.

What you can do is use nginx to proxy WebSocket connections to
some WebSocket backend server using proxy_pass, as documented
here:

http://nginx.org/en/docs/http/websocket.html

Note that you need an actual backend server handling WebSocket
connections for proxying to work.
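
The configuration pattern from that page looks roughly like this (the upstream name and location here are placeholders, and the backend must itself speak WebSocket):

```nginx
http {
    # close the connection unless the client actually asked for an upgrade
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        location /ws/ {
            proxy_pass http://backend;    # your WebSocket-capable server
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
```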

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Thu Oct 24 12:37:03 2013
From: nginx-forum at nginx.us (mzabani)
Date: Thu, 24 Oct 2013 08:37:03 -0400
Subject: Running code in nginx's master process
In-Reply-To: <20131022120219.GT7074@mdounin.ru>
References: <20131022120219.GT7074@mdounin.ru>
Message-ID: <65ba6081611a55b32c82f2c5fc1b6800.NginxMailingListEnglish@forum.nginx.org>

It seems like the init_module callback is not executed in nginx's master
process. At least, I printed the result of getpid() and the pid returned was
that of a process that was no longer running, which makes me think it is
executed in the process that forks the master process itself.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243942,244069#msg-244069


From nginx-forum at nginx.us Thu Oct 24 13:17:58 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Thu, 24 Oct 2013 09:17:58 -0400
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <20131023164225.GN2204@craic.sysops.org>
References: <20131023164225.GN2204@craic.sysops.org>
Message-ID: <83edf8e751ce72b170efb5eca1f3a615.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

Thanks a lot! You saved me probably a couple of days of research. It is
working now!

I did this:

The user will send me a "capacity" parameter, with a value of 2X or 3X or 4X
or ... (etc.)

map $arg_capacity $2X_key{~*2X $http_x_forwarded_for;default "";}
map $arg_capacity $3X_key{~*3X $http_x_forwarded_for;default "";}
map $arg_capacity $4X_key{~*4X $http_x_forwarded_for;default "";}
map $arg_capacity $5X_key{~*5X $http_x_forwarded_for;default "";}
map $arg_capacity $6X_key{~*6X $http_x_forwarded_for;default "";}
map $arg_capacity $7X_key{~*7X $http_x_forwarded_for;default "";}
map $arg_capacity $8X_key{~*8X $http_x_forwarded_for;default "";}
map $arg_capacity $9X_key{~*9X $http_x_forwarded_for;default "";}
map $arg_capacity $10X_key{~*10X $http_x_forwarded_for;default "";}

limit_req_zone $2X_key zone=2X:1m rate=600r/m;
limit_req_zone $3X_key zone=3X:1m rate=900r/m;
limit_req_zone $4X_key zone=4X:1m rate=1200r/m;
limit_req_zone $5X_key zone=5X:1m rate=1500r/m;
limit_req_zone $6X_key zone=6X:1m rate=1800r/m;
limit_req_zone $7X_key zone=7X:1m rate=2100r/m;
limit_req_zone $8X_key zone=8X:1m rate=2400r/m;
limit_req_zone $9X_key zone=9X:1m rate=2700r/m;
limit_req_zone $10X_key zone=10X:1m rate=3000r/m;

limit_req zone=2X burst=600;
limit_req zone=3X burst=900;
limit_req zone=4X burst=1200;
limit_req zone=5X burst=1500;
limit_req zone=6X burst=1800;
limit_req zone=7X burst=2100;
limit_req zone=8X burst=2400;
limit_req zone=9X burst=2700;
limit_req zone=10X burst=3000;

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244064#msg-244064


From nginx-forum at nginx.us Thu Oct 24 14:17:03 2013
From: nginx-forum at nginx.us (mohamed0205)
Date: Thu, 24 Oct 2013 10:17:03 -0400
Subject: Nginx and iis for sharepoint
Message-ID: <44f984b720092f452364bad852813a37.NginxMailingListEnglish@forum.nginx.org>

Hi all,

I am attempting to implement nginx as a reverse proxy to a
sharepoint server with SSL and user authentication. The problem I am
having is that nginx does not appear to pass the credentials to the real
server.

I'm wondering if anybody can help me with this issue.

Thanks a lot.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244072,244072#msg-244072


From mdounin at mdounin.ru Thu Oct 24 16:49:16 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 24 Oct 2013 20:49:16 +0400
Subject: Running code in nginx's master process
In-Reply-To: <65ba6081611a55b32c82f2c5fc1b6800.NginxMailingListEnglish@forum.nginx.org>
References: <20131022120219.GT7074@mdounin.ru>
<65ba6081611a55b32c82f2c5fc1b6800.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131024164916.GP7074@mdounin.ru>

Hello!

On Thu, Oct 24, 2013 at 08:37:03AM -0400, mzabani wrote:

> It seems like the init_module callback is not executed in nginx's master
> process. At least, I've printed a call to getpid() and the pid returned was
> that of a process that wasn't running, which makes me think it is executed
> in a process that forks the master process itself.

Yes, the init_module callback is called after configuration parsing,
and on initial startup this happens before daemonization, hence
there is no master process yet. On reconfiguration it is executed
in the master process itself.

--
Maxim Dounin
http://nginx.org/en/donation.html


From philipp.kraus at tu-clausthal.de Fri Oct 25 05:57:01 2013
From: philipp.kraus at tu-clausthal.de (Philipp Kraus)
Date: Fri, 25 Oct 2013 07:57:01 +0200
Subject: proxy pass with rewrite
Message-ID: <917B0AF2-65B4-4F85-A6FA-DBBA34933480@tu-clausthal.de>

Hello,

I would like to configure nginx with Jenkins; nginx should be a proxy for the Jenkins instance. I have configured the proxy pass options this way:

location /jenkins {
    proxy_pass http://localhost:8080/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

My Jenkins instance uses port 8080, and I would like to pass the data from the URL http://mydomain/jenkins to the Jenkins instance.
If I change the location to / everything works fine, but with the subdirectory alias I get an error with the URL. Jenkins result pages use only
URLs of the form http://mydomain/<jenkins part>, but in my case it should be http://mydomain/jenkins/<jenkins data>.
I have modified the Jenkins URL (in the admin panel) to http://mydomain/jenkins/, but there seems to be an error on the reverse data call.
I must substitute every URL which comes back via the proxy_pass upstream with http://mydomain/jenkins.
How can I do this in a correct way?

Thanks

Phil

From francis at daoine.org Fri Oct 25 08:01:19 2013
From: francis at daoine.org (Francis Daly)
Date: Fri, 25 Oct 2013 09:01:19 +0100
Subject: proxy pass with rewrite
In-Reply-To: <917B0AF2-65B4-4F85-A6FA-DBBA34933480@tu-clausthal.de>
References: <917B0AF2-65B4-4F85-A6FA-DBBA34933480@tu-clausthal.de>
Message-ID: <20131025080119.GA4365@craic.sysops.org>

On Fri, Oct 25, 2013 at 07:57:01AM +0200, Philipp Kraus wrote:

Hi there,

> I would like to configure nginx with Jenkins; nginx should be a proxy for the Jenkins instance. I have configured the proxy pass options this way:
>

https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy
looks like it may have useful information.

> If I change the location to / everything works fine, but with the subdirectory alias I get an error with the URL. Jenkins result pages use only
> URLs of the form http://mydomain/<jenkins part>, but in my case it should be http://mydomain/jenkins/<jenkins data>.

Fix jenkins so that it knows it is below /jenkins/, not below /.

> I must substitute all URL, which comes from the proxy_pass URL with http://mydomain/jenkins
> How can I do this in a correct way?

The best way in general is to arrange the back-end so that it doesn't
need doing.
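
As a sketch (untested; this assumes the Jenkins URL is configured as http://mydomain/jenkins/, so no rewriting of response bodies is needed):

```nginx
location /jenkins/ {
    # no URI part on proxy_pass: the /jenkins/ prefix is passed through
    # unchanged, so Jenkins sees the same path the browser requested
    proxy_pass http://localhost:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```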

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Fri Oct 25 09:46:46 2013
From: nginx-forum at nginx.us (youreright)
Date: Fri, 25 Oct 2013 05:46:46 -0400
Subject: proxy_pass Cannot GET
Message-ID: <df14fca3b8dbffab1ad8fcfb7d3ec15f.NginxMailingListEnglish@forum.nginx.org>

nginx is working, but when doing proxy_pass it gives me: Cannot GET /nodejs

here are the lines I've added to nginx.conf

ProxyPass nodejs localhost:2345
ProxyPassReverse nodejs localhost:2345

2345 is where my node server is listening

I'm pointing chrome browser here:
http://localhost/nodejs

any ideas to fix it? web.js here is my node server:
http://plnkr.co/edit/ZdQqulPp4ovZqR9zikYm?p=preview

with apache ProxyPass[Reverse] this was working, but I'm trying nginx to see
performance gains

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244100,244100#msg-244100


From contact at jpluscplusm.com Fri Oct 25 09:52:12 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Fri, 25 Oct 2013 10:52:12 +0100
Subject: proxy_pass Cannot GET
In-Reply-To: <df14fca3b8dbffab1ad8fcfb7d3ec15f.NginxMailingListEnglish@forum.nginx.org>
References: <df14fca3b8dbffab1ad8fcfb7d3ec15f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAKsTx7AkW7=z3eWTdxmgA3vPe8xknQhWhGZci2C+Z1yS4VxiSA@mail.gmail.com>

On 25 Oct 2013 10:47, "youreright" <nginx-forum at nginx.us> wrote:
>
> nginx is working, but, when doing proxy_pass, gives me: Cannot GET /nodejs
>
> here are the lines I've added to nginx.conf
>
> ProxyPass nodejs localhost:2345
> ProxyPassReverse nodejs localhost:2345

You need to read the documentation, as those aren't valid nginx config
statements.

http://nginx.org/r/proxy_pass
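
The nginx equivalent would be something like this (untested; there is no "reverse" directive because proxy_redirect covers that case, with a sensible default):

```nginx
location /nodejs/ {
    # trailing slash on both sides: /nodejs/foo is passed upstream as /foo
    proxy_pass http://localhost:2345/;
}
```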

J

From nginx-forum at nginx.us Fri Oct 25 09:56:27 2013
From: nginx-forum at nginx.us (youreright)
Date: Fri, 25 Oct 2013 05:56:27 -0400
Subject: proxy_pass Cannot GET
In-Reply-To: <CAKsTx7AkW7=z3eWTdxmgA3vPe8xknQhWhGZci2C+Z1yS4VxiSA@mail.gmail.com>
References: <CAKsTx7AkW7=z3eWTdxmgA3vPe8xknQhWhGZci2C+Z1yS4VxiSA@mail.gmail.com>
Message-ID: <06597dc2354dbad633d88c5e06190375.NginxMailingListEnglish@forum.nginx.org>

sorry, copy/paste error: I have this:

#proxy_pass http://localhost:2345/;
#proxy_pass_reverse http://localhost:2345/;

location /nodejs {
proxy_pass http://localhost:2345/;
proxy_pass_reverse http://localhost:2345/;
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244100,244102#msg-244102


From nginx-forum at nginx.us Fri Oct 25 10:27:16 2013
From: nginx-forum at nginx.us (youreright)
Date: Fri, 25 Oct 2013 06:27:16 -0400
Subject: proxy_pass Cannot GET
In-Reply-To: <df14fca3b8dbffab1ad8fcfb7d3ec15f.NginxMailingListEnglish@forum.nginx.org>
References: <df14fca3b8dbffab1ad8fcfb7d3ec15f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <cfa98b5215b9b7f2b298ded8e1ab689b.NginxMailingListEnglish@forum.nginx.org>

Nevermind, solved. I was looking at my httpd.conf instead of nginx.conf and
then was able to fix from there.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244100,244103#msg-244103


From jan.algermissen at nordsc.com Fri Oct 25 10:30:34 2013
From: jan.algermissen at nordsc.com (Jan Algermissen)
Date: Fri, 25 Oct 2013 12:30:34 +0200
Subject: Handler invokation after upstream server being picked
Message-ID: <0191AA74-18E6-4D74-9C98-4850CCFE2820@nordsc.com>

Hi,

I am writing a module that needs to add or change the HTTP Authorization header for an upstream request.

Since I am using a signature-based authentication scheme where the signature base string includes the request host and port, the header can only be added *after* the upstream module has determined which server to send the request to (e.g. after applying round-robin).

Is it possible to hook my module into that 'phase', and if so, what is the preferred way to do that?

I saw that I can at least access the target host and port set by the proxy module by reading the proxy module variables. However, that (of course) only gives the server group name to be used by the upstream module in the next step.


Jan

From ar at xlrs.de Fri Oct 25 10:37:21 2013
From: ar at xlrs.de (Axel)
Date: Fri, 25 Oct 2013 12:37:21 +0200
Subject: Too many open files and unix
Message-ID: <23c73f29bffccf8a031ac62e1bbdfbaa@xlrs.de>

Hello,

today I ran into "too many open files" errors and I need some advice on
investigating this issue.

I googled around and found a lot of information on how to solve it
(e.g. http://forum.nginx.org/read.php?2,234191,234191)
and I was finally able to get rid of the error.

But I wondered about nginx's behaviour with unix sockets and open files on
my machine, because every worker process on my machine opened 15892
sockets, which seems very high compared to other machines I looked
at. On other machines I only have as many open sockets as worker
processes.

Perhaps someone can point me to the right direction to debug and solve
this.

The system has 16 cores and 16 GB RAM (10 GB RAM used, load average:
0.3). Ulimits are set to

> su - www-data -s /bin/bash
> $ ulimit -Hn
> 32768
> $ ulimit -Sn
> 16384

In nginx.conf I set

> user www-data;
> worker_processes 12;
> worker_rlimit_nofile 65536; # raised from 16384
> worker_connections 65536; # raised from 16384

Checking the open files of the master process returns

> root at nginx:~ # lsof -p 31022 | wc -l
> 16066

> root at nginx:~ # lsof -p 31022 | grep socket | wc -l
> 15904

Checking the open sockets returns 15892 for every single worker process.
I restarted nginx several times, but this does not change anything.

This is what I get when I raise errorlog level to debug:

2013/10/25 12:31:40 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:31:40 [debug] 45534#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:31:40 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:31:40 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
00000000029B2B80:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113558 accept: 78.46.79.55
fd:21
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
000000000276D520:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113558 event timer add: 21:
60000:1382697160518
2013/10/25 12:31:40 [debug] 45534#0: *257113558 reusable connection: 1
2013/10/25 12:31:40 [debug] 45534#0: *257113558 epoll add event: fd:21
op:1 ev:80000001
2013/10/25 12:31:40 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:40 [debug] 45534#0: *257113558 post event
00007F7BB9FCEB70
2013/10/25 12:31:40 [debug] 45534#0: *257113558 delete posted event
00007F7BB9FCEB70
2013/10/25 12:31:40 [debug] 45534#0: *257113558 http wait request
handler
2013/10/25 12:31:40 [debug] 45534#0: *257113558 malloc:
0000000002BD1B50:1024
2013/10/25 12:31:40 [debug] 45534#0: *257113558 recv: fd:21 39 of 1024
2013/10/25 12:31:40 [debug] 45534#0: *257113558 reusable connection: 0
2013/10/25 12:31:40 [debug] 45534#0: *257113558 posix_memalign:
0000000002B2BE80:4096 @16
2013/10/25 12:31:40 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
0000000002A78700:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113587 accept: 78.46.79.55
fd:19
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
0000000002716BF0:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113587 event timer add: 19:
60000:1382697160776
2013/10/25 12:31:40 [debug] 45534#0: *257113587 reusable connection: 1
2013/10/25 12:31:40 [debug] 45534#0: *257113587 epoll add event: fd:19
op:1 ev:80000001
2013/10/25 12:31:40 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:40 [debug] 45534#0: *257113587 post event
00007F7BB9FCEAA0
2013/10/25 12:31:40 [debug] 45534#0: *257113587 delete posted event
00007F7BB9FCEAA0
2013/10/25 12:31:40 [debug] 45534#0: *257113587 http wait request
handler
2013/10/25 12:31:40 [debug] 45534#0: *257113587 malloc:
0000000002B89FC0:1024
2013/10/25 12:31:40 [debug] 45534#0: *257113587 recv: fd:19 39 of 1024
2013/10/25 12:31:40 [debug] 45534#0: *257113587 reusable connection: 0
2013/10/25 12:31:40 [debug] 45534#0: *257113587 posix_memalign:
0000000002571760:4096 @16
2013/10/25 12:31:40 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
00000000025F13C0:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113589 accept: 91.118.111.100
fd:19
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
0000000002658200:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113589 event timer add: 19:
60000:1382697160795
2013/10/25 12:31:40 [debug] 45534#0: *257113589 reusable connection: 1
2013/10/25 12:31:40 [debug] 45534#0: *257113589 epoll add event: fd:19
op:1 ev:80000001
2013/10/25 12:31:40 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:40 [debug] 45534#0: *257113589 post event
00007F7BB9FCEAA0
2013/10/25 12:31:40 [debug] 45534#0: *257113589 delete posted event
00007F7BB9FCEAA0
2013/10/25 12:31:40 [debug] 45534#0: *257113589 http wait request
handler
2013/10/25 12:31:40 [debug] 45534#0: *257113589 malloc:
0000000002927660:1024
2013/10/25 12:31:40 [debug] 45534#0: *257113589 recv: fd:19 27 of 1024
2013/10/25 12:31:40 [debug] 45534#0: *257113589 reusable connection: 0
2013/10/25 12:31:40 [debug] 45534#0: *257113589 posix_memalign:
0000000002571760:4096 @16
2013/10/25 12:31:41 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
0000000002BCE320:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113643 accept: 83.169.27.46
fd:47
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
0000000002B75DB0:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113643 event timer add: 47:
60000:1382697161017
2013/10/25 12:31:41 [debug] 45534#0: *257113643 reusable connection: 1
2013/10/25 12:31:41 [debug] 45534#0: *257113643 epoll add event: fd:47
op:1 ev:80000001
2013/10/25 12:31:41 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:41 [debug] 45534#0: *257113643 post event
00007F7BB9FD0028
2013/10/25 12:31:41 [debug] 45534#0: *257113643 delete posted event
00007F7BB9FD0028
2013/10/25 12:31:41 [debug] 45534#0: *257113643 http wait request
handler
2013/10/25 12:31:41 [debug] 45534#0: *257113643 malloc:
0000000002B39900:1024
2013/10/25 12:31:41 [debug] 45534#0: *257113643 recv: fd:47 27 of 1024
2013/10/25 12:31:41 [debug] 45534#0: *257113643 reusable connection: 0
2013/10/25 12:31:41 [debug] 45534#0: *257113643 posix_memalign:
0000000003359E40:4096 @16
2013/10/25 12:31:41 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
0000000002A40C80:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113751 accept: 77.75.254.73
fd:16014
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
00000000029F4AE0:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113751 event timer add: 16014:
60000:1382697161443
2013/10/25 12:31:41 [debug] 45534#0: *257113751 reusable connection: 1
2013/10/25 12:31:41 [debug] 45534#0: *257113751 epoll add event:
fd:16014 op:1 ev:80000001
2013/10/25 12:31:41 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:41 [debug] 45534#0: *257113751 post event
00007F7BB9FD0C58
2013/10/25 12:31:41 [debug] 45534#0: *257113751 delete posted event
00007F7BB9FD0C58
2013/10/25 12:31:41 [debug] 45534#0: *257113751 http wait request
handler
2013/10/25 12:31:41 [debug] 45534#0: *257113751 malloc:
0000000002BB46F0:1024
2013/10/25 12:31:41 [debug] 45534#0: *257113751 recv: fd:16014 27 of
1024
2013/10/25 12:31:41 [debug] 45534#0: *257113751 reusable connection: 0
2013/10/25 12:31:41 [debug] 45534#0: *257113751 posix_memalign:
0000000002ADDDD0:4096 @16
2013/10/25 12:31:41 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
00000000028658D0:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113758 accept: 78.46.79.55
fd:15779
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
00000000026528B0:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113758 event timer add: 15779:
60000:1382697161489
2013/10/25 12:31:41 [debug] 45534#0: *257113758 reusable connection: 1
2013/10/25 12:31:41 [debug] 45534#0: *257113758 epoll add event:
fd:15779 op:1 ev:80000001
2013/10/25 12:31:41 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:41 [debug] 45534#0: *257113758 post event
00007F7BB9FD0918
2013/10/25 12:31:41 [debug] 45534#0: *257113758 delete posted event
00007F7BB9FD0918
2013/10/25 12:31:41 [debug] 45534#0: *257113758 http wait request
handler
2013/10/25 12:31:41 [debug] 45534#0: *257113758 malloc:
0000000002BB46F0:1024
2013/10/25 12:31:41 [debug] 45534#0: *257113758 recv: fd:15779 39 of
1024
2013/10/25 12:31:41 [debug] 45534#0: *257113758 reusable connection: 0
2013/10/25 12:31:41 [debug] 45534#0: *257113758 posix_memalign:
0000000002ADDDD0:4096 @16
2013/10/25 12:31:41 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
000000000276C050:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113763 accept: 134.0.76.216
fd:51
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
00000000029EE590:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113763 event timer add: 51:
60000:1382697161522
2013/10/25 12:31:41 [debug] 45534#0: *257113763 reusable connection: 1
2013/10/25 12:31:41 [debug] 45534#0: *257113763 epoll add event: fd:51
op:1 ev:80000001
2013/10/25 12:31:41 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:41 [debug] 45535#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:31:41 [debug] 45534#0: *257113763 post event
00007F7BB9FD0918
2013/10/25 12:31:41 [debug] 45534#0: *257113763 delete posted event
00007F7BB9FD0918
2013/10/25 12:31:41 [debug] 45534#0: *257113763 http wait request
handler
2013/10/25 12:31:41 [debug] 45534#0: *257113763 malloc:
00000000034A7F00:1024
2013/10/25 12:31:41 [debug] 45534#0: *257113763 recv: fd:51 27 of 1024
2013/10/25 12:31:41 [debug] 45534#0: *257113763 reusable connection: 0
2013/10/25 12:31:41 [debug] 45534#0: *257113763 posix_memalign:
000000000322E2C0:4096 @16
...
...
...
2013/10/25 12:32:16 [debug] 45531#0: *257119454 epoll add event:
fd:14481 op:1 ev:80000001
2013/10/25 12:32:16 [debug] 45531#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:32:16 [debug] 45531#0: *257119454 post event
00007F7BB9FCFC18
2013/10/25 12:32:16 [debug] 45531#0: *257119454 delete posted event
00007F7BB9FCFC18
2013/10/25 12:32:16 [debug] 45531#0: *257119454 http wait request
handler
2013/10/25 12:32:16 [debug] 45531#0: *257119454 malloc:
0000000002BB42E0:1024
2013/10/25 12:32:16 [debug] 45531#0: *257119454 recv: fd:14481 27 of
1024
2013/10/25 12:32:16 [debug] 45535#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45531#0: *257119454 reusable connection: 0
2013/10/25 12:32:16 [debug] 45531#0: *257119454 posix_memalign:
000000000286B6D0:4096 @16
2013/10/25 12:32:16 [debug] 45531#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45535#0: post event 00007F7BB9FCE010
2013/10/25 12:32:16 [debug] 45535#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:32:16 [debug] 45535#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:32:16 [debug] 45535#0: posix_memalign:
0000000002AC72E0:256 @16
2013/10/25 12:32:16 [debug] 45535#0: *257119455 accept: 77.75.254.133
fd:15745
2013/10/25 12:32:16 [debug] 45535#0: posix_memalign:
000000000276A4D0:256 @16
2013/10/25 12:32:16 [debug] 45535#0: *257119455 event timer add: 15745:
60000:1382697196597
2013/10/25 12:32:16 [debug] 45535#0: *257119455 reusable connection: 1
2013/10/25 12:32:16 [debug] 45535#0: *257119455 epoll add event:
fd:15745 op:1 ev:80000001
2013/10/25 12:32:16 [debug] 45535#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:32:16 [debug] 45535#0: *257119455 post event
00007F7BB9FD1000
2013/10/25 12:32:16 [debug] 45535#0: *257119455 delete posted event
00007F7BB9FD1000
2013/10/25 12:32:16 [debug] 45535#0: *257119455 http wait request
handler
2013/10/25 12:32:16 [debug] 45535#0: *257119455 malloc:
00000000033F7A20:1024
2013/10/25 12:32:16 [debug] 45535#0: *257119455 recv: fd:15745 27 of
1024
2013/10/25 12:32:16 [debug] 45535#0: *257119455 reusable connection: 0
2013/10/25 12:32:16 [debug] 45535#0: *257119455 posix_memalign:
00000000026AF8C0:4096 @16
2013/10/25 12:32:16 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45535#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45534#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [info] 45879#0: Using 32768KiB of shared memory for
push module in /etc/nginx/nginx.conf:100
2013/10/25 12:32:16 [debug] 45535#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45530#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45535#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45530#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45530#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45534#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45530#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45530#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45530#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: post event 00007F7BB9FCE010
2013/10/25 12:32:16 [debug] 45536#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:32:16 [debug] 45536#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:32:16 [debug] 45536#0: posix_memalign:
000000000298FF90:256 @16
2013/10/25 12:32:16 [debug] 45536#0: *257119502 accept: 46.252.18.244
fd:16232
2013/10/25 12:32:16 [debug] 45536#0: posix_memalign:
0000000003CC5040:256 @16
2013/10/25 12:32:16 [debug] 45536#0: *257119502 event timer add: 16232:
60000:1382697196773
2013/10/25 12:32:16 [debug] 45536#0: *257119502 reusable connection: 1
2013/10/25 12:32:16 [debug] 45536#0: *257119502 epoll add event:
fd:16232 op:1 ev:80000001
2013/10/25 12:32:16 [debug] 45536#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:32:16 [debug] 45536#0: *257119502 post event
00007F7BB9FD3358
2013/10/25 12:32:16 [debug] 45536#0: *257119502 delete posted event
00007F7BB9FD3358
2013/10/25 12:32:16 [debug] 45536#0: *257119502 http wait request
handler
2013/10/25 12:32:16 [debug] 45536#0: *257119502 malloc:
000000000263BD90:1024
2013/10/25 12:32:16 [debug] 45536#0: *257119502 recv: fd:16232 27 of
1024
2013/10/25 12:32:16 [debug] 45536#0: *257119502 reusable connection: 0
2013/10/25 12:32:16 [debug] 45536#0: *257119502 posix_memalign:
000000000323C850:4096 @16
2013/10/25 12:32:16 [debug] 45530#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45532#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45530#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45532#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45535#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45535#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45534#0: epoll del event: fd:97 op:2
ev:00000000


Thanks in advance,
Axel


From yuanjianyong at huawei.com Fri Oct 25 11:05:22 2013
From: yuanjianyong at huawei.com (Yuanjianyong (George))
Date: Fri, 25 Oct 2013 11:05:22 +0000
Subject: help: How to cache video in nginx when dynamic link request
Message-ID: <916A264FBA40E948B1BE327829D0C7072178184C@szxeml557-mbx.china.huawei.com>

Hi, everybody,

Please give me a hand.

In our VOD system, Nginx is the reverse proxy and Lighttpd is the application server holding the video files. Video files are requested via dynamic links of the form "play.jsp?videoid=123456".

Now I want to cache the video files on the Nginx server even though they are requested through dynamic links containing "?". How should I configure Nginx for this?
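For reference, a minimal sketch of one common approach using proxy_cache (the cache path, zone name, backend address, and sizes below are hypothetical examples, not a verified config):

```nginx
# Hypothetical cache zone; adjust path and sizes to the actual system.
proxy_cache_path /var/cache/nginx/vod levels=1:2 keys_zone=vod_cache:10m
                 max_size=10g inactive=60m;

server {
    listen 80;

    location /play.jsp {
        proxy_pass http://127.0.0.1:8081;   # hypothetical Lighttpd backend
        proxy_cache vod_cache;
        # Include the query string so "?videoid=123456" produces a
        # distinct cache entry per video.
        proxy_cache_key "$scheme$host$uri$is_args$args";
        proxy_cache_valid 200 60m;
    }
}
```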



Thanks & Regards

George yuan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131025/dde3d08a/attachment.html>

From philipp.kraus at tu-clausthal.de Fri Oct 25 11:25:10 2013
From: philipp.kraus at tu-clausthal.de (Philipp Kraus)
Date: Fri, 25 Oct 2013 13:25:10 +0200
Subject: proxy pass with rewrite
In-Reply-To: <20131025080119.GA4365@craic.sysops.org>
References: <917B0AF2-65B4-4F85-A6FA-DBBA34933480@tu-clausthal.de>
<20131025080119.GA4365@craic.sysops.org>
Message-ID: <BC6AF3CF-6489-4C69-9423-49D83658BA73@tu-clausthal.de>

Hi,

Am 25.10.2013 um 10:01 schrieb Francis Daly <francis at daoine.org>:
> https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy
> looks like it may have useful information.

I have used this how to, but I cannot create a working solution

> Fix jenkins so that it knows it is below /jenkins/, not below /.

I have set in Jenkins System panel the field "Jenkins URL" to "myserver/jenkins/"

Some URLs work correctly, but others do not. CSS files are not working because the /jenkins/ part
does not exist in their URLs. So it seems that nginx removes the /jenkins/ part from some URLs.

Phil

From francis at daoine.org Fri Oct 25 11:43:17 2013
From: francis at daoine.org (Francis Daly)
Date: Fri, 25 Oct 2013 12:43:17 +0100
Subject: proxy pass with rewrite
In-Reply-To: <BC6AF3CF-6489-4C69-9423-49D83658BA73@tu-clausthal.de>
References: <917B0AF2-65B4-4F85-A6FA-DBBA34933480@tu-clausthal.de>
<20131025080119.GA4365@craic.sysops.org>
<BC6AF3CF-6489-4C69-9423-49D83658BA73@tu-clausthal.de>
Message-ID: <20131025114317.GB4365@craic.sysops.org>

On Fri, Oct 25, 2013 at 01:25:10PM +0200, Philipp Kraus wrote:
> Am 25.10.2013 um 10:01 schrieb Francis Daly <francis at daoine.org>:

Hi there,

> > https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy
> > looks like it may have useful information.
>
> I have used this how to, but I cannot create a working solution

The config in this howto, and the config that you showed, are not the same.

Note particularly the "location" and the "proxy_pass" lines.

> > Fix jenkins so that it knows it is below /jenkins/, not below /.
>
> I have set in Jenkins System panel the field "Jenkins URL" to "myserver/jenkins/"

The howto says to do something else as well.

I don't know if it is correct -- I don't use Jenkins -- but I also don't
know what you've done, because you haven't said.

> Some URLs work correctly, but others do not. CSS files are not working because the /jenkins/ part
> does not exist in their URLs. So it seems that nginx removes the /jenkins/ part from some URLs.

That could be due to proxy_pass.

http://nginx.org/r/proxy_pass

"specified with a URI" means "any slash after the host:port part".

If that doesn't fix things, then can you find one request which does
not work as you expect it to, and show what url nginx gets, and what
url jenkins gets, and see if that helps find where things go wrong?
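To illustrate the distinction (the backend address and prefix here are assumed examples, not the poster's actual config), two variants of the same location, of which only one would be used:

```nginx
# Variant 1: proxy_pass without a URI part. The request URI is passed
# to the backend unchanged, so Jenkins sees "/jenkins/...".
location /jenkins/ {
    proxy_pass http://127.0.0.1:8080;
}

# Variant 2: proxy_pass with a URI part ("/"). The matched "/jenkins/"
# prefix is replaced by "/", so the backend sees URLs without it.
# This behaviour can make CSS and other asset links appear to "lose"
# the /jenkins/ prefix.
location /jenkins/ {
    proxy_pass http://127.0.0.1:8080/;
}
```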

Good luck with it,

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Fri Oct 25 12:28:27 2013
From: nginx-forum at nginx.us (youreright)
Date: Fri, 25 Oct 2013 08:28:27 -0400
Subject: performance testing
Message-ID: <270dad5267769b3270203fc7cd4bb348.NginxMailingListEnglish@forum.nginx.org>

I'm getting under 5000 requests per second hitting just the nginx welcome
page on a default configuration.

Any tips to improve this please?

setup is:
==
macbook pro retina 10.8.3
RoverMR:webserver rover$ nginx -v
nginx version: nginx/1.4.3

== Running ab test ==

RoverMR:webserver rover$ ab -n 1000 -c 100 http://127.0.0.1/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: nginx/1.4.3
Server Hostname: 127.0.0.1
Server Port: 80

Document Path: /
Document Length: 612 bytes

Concurrency Level: 100
Time taken for tests: 0.268 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 844000 bytes
HTML transferred: 612000 bytes
Requests per second: 3729.66 [#/sec] (mean)
Time per request: 26.812 [ms] (mean)
Time per request: 0.268 [ms] (mean, across all concurrent requests)
Transfer rate: 3074.06 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 12 3.1 13 17
Processing: 1 13 2.5 14 18
Waiting: 1 13 2.6 14 18
Total: 17 25 3.5 26 30

Percentage of the requests served within a certain time (ms)
50% 26
66% 27
75% 28
80% 28
90% 29
95% 29
98% 30
99% 30
100% 30 (longest request)
RoverMR:webserver rover$

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244110,244110#msg-244110


From hartz.geoffrey at gmail.com Fri Oct 25 12:32:34 2013
From: hartz.geoffrey at gmail.com (Geoffrey Hartz)
Date: Fri, 25 Oct 2013 14:32:34 +0200
Subject: performance testing
In-Reply-To: <270dad5267769b3270203fc7cd4bb348.NginxMailingListEnglish@forum.nginx.org>
References: <270dad5267769b3270203fc7cd4bb348.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAPU8t+p3w+3gk4GFtoWvhSmPcdTvBhgmShGqhPTCOvQYP=LySA@mail.gmail.com>

ab is not good enough for this kind of testing...

Use http://redmine.lighttpd.net/projects/weighttp/wiki

On Debian-like systems, install "libevent-dev" first.

ab doesn't use multiple cores/threads.

With weighttp I was able to hit 100,000 req/s.
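An illustrative invocation (the flag values here are just an example, to be adjusted to the test at hand):

```shell
# 100000 requests, 100 concurrent connections, 4 threads, HTTP keep-alive
weighttp -n 100000 -c 100 -t 4 -k http://127.0.0.1/
```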

2013/10/25 youreright <nginx-forum at nginx.us>:
> I'm getting under 5000 requests per second hitting just the nginx welcome
> page on a default configuration.
>
> Any tips to improve this please?
>
> setup is:
> ==
> macbook pro retina 10.8.3
> RoverMR:webserver rover$ nginx -v
> nginx version: nginx/1.4.3
>
> == Running ab test ==
>
> RoverMR:webserver rover$ ab -n 1000 -c 100 http://127.0.0.1/
> This is ApacheBench, Version 2.3 <$Revision: 655654 $>
> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
> Licensed to The Apache Software Foundation, http://www.apache.org/
>
> Benchmarking 127.0.0.1 (be patient)
> Completed 100 requests
> Completed 200 requests
> Completed 300 requests
> Completed 400 requests
> Completed 500 requests
> Completed 600 requests
> Completed 700 requests
> Completed 800 requests
> Completed 900 requests
> Completed 1000 requests
> Finished 1000 requests
>
>
> Server Software: nginx/1.4.3
> Server Hostname: 127.0.0.1
> Server Port: 80
>
> Document Path: /
> Document Length: 612 bytes
>
> Concurrency Level: 100
> Time taken for tests: 0.268 seconds
> Complete requests: 1000
> Failed requests: 0
> Write errors: 0
> Total transferred: 844000 bytes
> HTML transferred: 612000 bytes
> Requests per second: 3729.66 [#/sec] (mean)
> Time per request: 26.812 [ms] (mean)
> Time per request: 0.268 [ms] (mean, across all concurrent requests)
> Transfer rate: 3074.06 [Kbytes/sec] received
>
> Connection Times (ms)
> min mean[+/-sd] median max
> Connect: 1 12 3.1 13 17
> Processing: 1 13 2.5 14 18
> Waiting: 1 13 2.6 14 18
> Total: 17 25 3.5 26 30
>
> Percentage of the requests served within a certain time (ms)
> 50% 26
> 66% 27
> 75% 28
> 80% 28
> 90% 29
> 95% 29
> 98% 30
> 99% 30
> 100% 30 (longest request)
> RoverMR:webserver rover$
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244110,244110#msg-244110
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



--
Geoffrey HARTZ


From nginx-forum at nginx.us Fri Oct 25 12:37:32 2013
From: nginx-forum at nginx.us (youreright)
Date: Fri, 25 Oct 2013 08:37:32 -0400
Subject: performance testing
In-Reply-To: <CAPU8t+p3w+3gk4GFtoWvhSmPcdTvBhgmShGqhPTCOvQYP=LySA@mail.gmail.com>
References: <CAPU8t+p3w+3gk4GFtoWvhSmPcdTvBhgmShGqhPTCOvQYP=LySA@mail.gmail.com>
Message-ID: <3c77a76413e6f7fb79dc8ef4433d1797.NginxMailingListEnglish@forum.nginx.org>

And what numbers were you getting with ab? Can you check both real quick?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244110,244112#msg-244112


From mdounin at mdounin.ru Fri Oct 25 14:06:47 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 25 Oct 2013 18:06:47 +0400
Subject: Handler invokation after upstream server being picked
In-Reply-To: <0191AA74-18E6-4D74-9C98-4850CCFE2820@nordsc.com>
References: <0191AA74-18E6-4D74-9C98-4850CCFE2820@nordsc.com>
Message-ID: <20131025140647.GD7074@mdounin.ru>

Hello!

On Fri, Oct 25, 2013 at 12:30:34PM +0200, Jan Algermissen wrote:

> Hi,
>
> I am writing a module that needs to add/change the HTTP
> Authorization server for an upstream request.
>
> Since I am using a signature based authentication scheme where
> the signature base string includes the request host and port the
> header can only be added *after* the upstream module has
> determined which server to send the request to (e.g. after
> applying round-robin).
>
> Is it possible to hook my module into that 'phase' and if so -
> what is the preferred way to do that?
>
> I saw that I at least can access the target host and port set by
> the proxy module by reading the proxy module variables. However,
> that (of course) does only give the server group name to be used
> by the upstream module in the next step.

A request to an upstream is created once, before a particular
server is known, and the same request is used for requests to all
upstream servers. That is, what you are trying to do isn't
something currently possible.

--
Maxim Dounin
http://nginx.org/en/donation.html


From biru at yahoo.com Fri Oct 25 16:35:46 2013
From: biru at yahoo.com (Aron)
Date: Fri, 25 Oct 2013 09:35:46 -0700 (PDT)
Subject: Too many open files and unix
In-Reply-To: <23c73f29bffccf8a031ac62e1bbdfbaa@xlrs.de>
References: <23c73f29bffccf8a031ac62e1bbdfbaa@xlrs.de>
Message-ID: <1382718946.86725.YahooMailNeo@web124903.mail.ne1.yahoo.com>

Hello,
Can you tell us the output of this command: ps -eLF | grep www-data | wc -l ?
And what does cat /proc/sys/fs/nr_open output?


Regards
Aron


________________________________
From: Axel <ar at xlrs.de>
To: nginx at nginx.org
Sent: Friday, October 25, 2013 5:37 PM
Subject: Too many open files and unix


Hello,

today I ran into "too many open files" errors and I need some advice on how
to investigate this issue.

I googled around and found a lot of information on how to solve this issue
(e.g. http://forum.nginx.org/read.php?2,234191,234191)
and I was finally able to get rid of the error.

But I am puzzled by nginx's behaviour with unix sockets and open files on
this machine, because every worker process opened 15892 sockets, which
seems to be a lot compared to other machines I took a look at. On those
machines I only have about as many open sockets as worker processes.

Perhaps someone can point me to the right direction to debug and solve
this.

The system has 16 cores and 16 GB RAM (10 GB RAM used, load average:
0.3). Ulimits are set to:

> su - www-data -s /bin/bash
> $ ulimit -Hn
> 32768
> $ ulimit -Sn
> 16384

In nginx.conf I set

> user www-data;
> worker_processes 12;
> worker_rlimit_nofile 65536; # raised from 16384
> worker_connections 65536; # raised from 16384

Checking the open files of the master process returns

> root at nginx:~ # lsof -p 31022 | wc -l
> 16066

> root at nginx:~ # lsof -p 31022 | grep socket | wc -l
> 15904

Checking the open sockets returns 15892 for every single worker process.
I restarted nginx several times, but this does not change anything.
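(For anyone reproducing this check, a quick /proc-based sketch, assuming Linux and workers running as www-data:)

```shell
# Count open file descriptors per nginx worker process (Linux /proc).
for pid in $(pgrep -u www-data nginx); do
    printf '%s %s\n' "$pid" "$(ls "/proc/$pid/fd" | wc -l)"
done

# For one worker, count how many of its fds are sockets.
# (Substitute an actual worker PID for $pid.)
ls -l "/proc/$pid/fd" | grep -c socket
```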

This is what I get when I raise errorlog level to debug:

2013/10/25 12:31:40 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:31:40 [debug] 45534#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:31:40 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:31:40 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
00000000029B2B80:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113558 accept: 78.46.79.55
fd:21
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
000000000276D520:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113558 event timer add: 21:
60000:1382697160518
2013/10/25 12:31:40 [debug] 45534#0: *257113558 reusable connection: 1
2013/10/25 12:31:40 [debug] 45534#0: *257113558 epoll add event: fd:21
op:1 ev:80000001
2013/10/25 12:31:40 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:40 [debug] 45534#0: *257113558 post event
00007F7BB9FCEB70
2013/10/25 12:31:40 [debug] 45534#0: *257113558 delete posted event
00007F7BB9FCEB70
2013/10/25 12:31:40 [debug] 45534#0: *257113558 http wait request
handler
2013/10/25 12:31:40 [debug] 45534#0: *257113558 malloc:
0000000002BD1B50:1024
2013/10/25 12:31:40 [debug] 45534#0: *257113558 recv: fd:21 39 of 1024
2013/10/25 12:31:40 [debug] 45534#0: *257113558 reusable connection: 0
2013/10/25 12:31:40 [debug] 45534#0: *257113558 posix_memalign:
0000000002B2BE80:4096 @16
2013/10/25 12:31:40 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
0000000002A78700:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113587 accept: 78.46.79.55
fd:19
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
0000000002716BF0:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113587 event timer add: 19:
60000:1382697160776
2013/10/25 12:31:40 [debug] 45534#0: *257113587 reusable connection: 1
2013/10/25 12:31:40 [debug] 45534#0: *257113587 epoll add event: fd:19
op:1 ev:80000001
2013/10/25 12:31:40 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:40 [debug] 45534#0: *257113587 post event
00007F7BB9FCEAA0
2013/10/25 12:31:40 [debug] 45534#0: *257113587 delete posted event
00007F7BB9FCEAA0
2013/10/25 12:31:40 [debug] 45534#0: *257113587 http wait request
handler
2013/10/25 12:31:40 [debug] 45534#0: *257113587 malloc:
0000000002B89FC0:1024
2013/10/25 12:31:40 [debug] 45534#0: *257113587 recv: fd:19 39 of 1024
2013/10/25 12:31:40 [debug] 45534#0: *257113587 reusable connection: 0
2013/10/25 12:31:40 [debug] 45534#0: *257113587 posix_memalign:
0000000002571760:4096 @16
2013/10/25 12:31:40 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:40 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
00000000025F13C0:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113589 accept: 91.118.111.100
fd:19
2013/10/25 12:31:40 [debug] 45534#0: posix_memalign:
0000000002658200:256 @16
2013/10/25 12:31:40 [debug] 45534#0: *257113589 event timer add: 19:
60000:1382697160795
2013/10/25 12:31:40 [debug] 45534#0: *257113589 reusable connection: 1
2013/10/25 12:31:40 [debug] 45534#0: *257113589 epoll add event: fd:19
op:1 ev:80000001
2013/10/25 12:31:40 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:40 [debug] 45534#0: *257113589 post event
00007F7BB9FCEAA0
2013/10/25 12:31:40 [debug] 45534#0: *257113589 delete posted event
00007F7BB9FCEAA0
2013/10/25 12:31:40 [debug] 45534#0: *257113589 http wait request
handler
2013/10/25 12:31:40 [debug] 45534#0: *257113589 malloc:
0000000002927660:1024
2013/10/25 12:31:40 [debug] 45534#0: *257113589 recv: fd:19 27 of 1024
2013/10/25 12:31:40 [debug] 45534#0: *257113589 reusable connection: 0
2013/10/25 12:31:40 [debug] 45534#0: *257113589 posix_memalign:
0000000002571760:4096 @16
2013/10/25 12:31:41 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
0000000002BCE320:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113643 accept: 83.169.27.46
fd:47
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
0000000002B75DB0:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113643 event timer add: 47:
60000:1382697161017
2013/10/25 12:31:41 [debug] 45534#0: *257113643 reusable connection: 1
2013/10/25 12:31:41 [debug] 45534#0: *257113643 epoll add event: fd:47
op:1 ev:80000001
2013/10/25 12:31:41 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:41 [debug] 45534#0: *257113643 post event
00007F7BB9FD0028
2013/10/25 12:31:41 [debug] 45534#0: *257113643 delete posted event
00007F7BB9FD0028
2013/10/25 12:31:41 [debug] 45534#0: *257113643 http wait request
handler
2013/10/25 12:31:41 [debug] 45534#0: *257113643 malloc:
0000000002B39900:1024
2013/10/25 12:31:41 [debug] 45534#0: *257113643 recv: fd:47 27 of 1024
2013/10/25 12:31:41 [debug] 45534#0: *257113643 reusable connection: 0
2013/10/25 12:31:41 [debug] 45534#0: *257113643 posix_memalign:
0000000003359E40:4096 @16
2013/10/25 12:31:41 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
0000000002A40C80:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113751 accept: 77.75.254.73
fd:16014
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
00000000029F4AE0:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113751 event timer add: 16014:
60000:1382697161443
2013/10/25 12:31:41 [debug] 45534#0: *257113751 reusable connection: 1
2013/10/25 12:31:41 [debug] 45534#0: *257113751 epoll add event:
fd:16014 op:1 ev:80000001
2013/10/25 12:31:41 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:41 [debug] 45534#0: *257113751 post event
00007F7BB9FD0C58
2013/10/25 12:31:41 [debug] 45534#0: *257113751 delete posted event
00007F7BB9FD0C58
2013/10/25 12:31:41 [debug] 45534#0: *257113751 http wait request
handler
2013/10/25 12:31:41 [debug] 45534#0: *257113751 malloc:
0000000002BB46F0:1024
2013/10/25 12:31:41 [debug] 45534#0: *257113751 recv: fd:16014 27 of
1024
2013/10/25 12:31:41 [debug] 45534#0: *257113751 reusable connection: 0
2013/10/25 12:31:41 [debug] 45534#0: *257113751 posix_memalign:
0000000002ADDDD0:4096 @16
2013/10/25 12:31:41 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
00000000028658D0:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113758 accept: 78.46.79.55
fd:15779
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
00000000026528B0:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113758 event timer add: 15779:
60000:1382697161489
2013/10/25 12:31:41 [debug] 45534#0: *257113758 reusable connection: 1
2013/10/25 12:31:41 [debug] 45534#0: *257113758 epoll add event:
fd:15779 op:1 ev:80000001
2013/10/25 12:31:41 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:41 [debug] 45534#0: *257113758 post event
00007F7BB9FD0918
2013/10/25 12:31:41 [debug] 45534#0: *257113758 delete posted event
00007F7BB9FD0918
2013/10/25 12:31:41 [debug] 45534#0: *257113758 http wait request
handler
2013/10/25 12:31:41 [debug] 45534#0: *257113758 malloc:
0000000002BB46F0:1024
2013/10/25 12:31:41 [debug] 45534#0: *257113758 recv: fd:15779 39 of
1024
2013/10/25 12:31:41 [debug] 45534#0: *257113758 reusable connection: 0
2013/10/25 12:31:41 [debug] 45534#0: *257113758 posix_memalign:
0000000002ADDDD0:4096 @16
2013/10/25 12:31:41 [debug] 45534#0: post event 00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:31:41 [debug] 45534#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
000000000276C050:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113763 accept: 134.0.76.216
fd:51
2013/10/25 12:31:41 [debug] 45534#0: posix_memalign:
00000000029EE590:256 @16
2013/10/25 12:31:41 [debug] 45534#0: *257113763 event timer add: 51:
60000:1382697161522
2013/10/25 12:31:41 [debug] 45534#0: *257113763 reusable connection: 1
2013/10/25 12:31:41 [debug] 45534#0: *257113763 epoll add event: fd:51
op:1 ev:80000001
2013/10/25 12:31:41 [debug] 45534#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:31:41 [debug] 45535#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:31:41 [debug] 45534#0: *257113763 post event
00007F7BB9FD0918
2013/10/25 12:31:41 [debug] 45534#0: *257113763 delete posted event
00007F7BB9FD0918
2013/10/25 12:31:41 [debug] 45534#0: *257113763 http wait request
handler
2013/10/25 12:31:41 [debug] 45534#0: *257113763 malloc:
00000000034A7F00:1024
2013/10/25 12:31:41 [debug] 45534#0: *257113763 recv: fd:51 27 of 1024
2013/10/25 12:31:41 [debug] 45534#0: *257113763 reusable connection: 0
2013/10/25 12:31:41 [debug] 45534#0: *257113763 posix_memalign:
000000000322E2C0:4096 @16
...
...
...
2013/10/25 12:32:16 [debug] 45531#0: *257119454 epoll add event:
fd:14481 op:1 ev:80000001
2013/10/25 12:32:16 [debug] 45531#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:32:16 [debug] 45531#0: *257119454 post event
00007F7BB9FCFC18
2013/10/25 12:32:16 [debug] 45531#0: *257119454 delete posted event
00007F7BB9FCFC18
2013/10/25 12:32:16 [debug] 45531#0: *257119454 http wait request
handler
2013/10/25 12:32:16 [debug] 45531#0: *257119454 malloc:
0000000002BB42E0:1024
2013/10/25 12:32:16 [debug] 45531#0: *257119454 recv: fd:14481 27 of
1024
2013/10/25 12:32:16 [debug] 45535#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45531#0: *257119454 reusable connection: 0
2013/10/25 12:32:16 [debug] 45531#0: *257119454 posix_memalign:
000000000286B6D0:4096 @16
2013/10/25 12:32:16 [debug] 45531#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45535#0: post event 00007F7BB9FCE010
2013/10/25 12:32:16 [debug] 45535#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:32:16 [debug] 45535#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:32:16 [debug] 45535#0: posix_memalign:
0000000002AC72E0:256 @16
2013/10/25 12:32:16 [debug] 45535#0: *257119455 accept: 77.75.254.133
fd:15745
2013/10/25 12:32:16 [debug] 45535#0: posix_memalign:
000000000276A4D0:256 @16
2013/10/25 12:32:16 [debug] 45535#0: *257119455 event timer add: 15745:
60000:1382697196597
2013/10/25 12:32:16 [debug] 45535#0: *257119455 reusable connection: 1
2013/10/25 12:32:16 [debug] 45535#0: *257119455 epoll add event:
fd:15745 op:1 ev:80000001
2013/10/25 12:32:16 [debug] 45535#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:32:16 [debug] 45535#0: *257119455 post event
00007F7BB9FD1000
2013/10/25 12:32:16 [debug] 45535#0: *257119455 delete posted event
00007F7BB9FD1000
2013/10/25 12:32:16 [debug] 45535#0: *257119455 http wait request
handler
2013/10/25 12:32:16 [debug] 45535#0: *257119455 malloc:
00000000033F7A20:1024
2013/10/25 12:32:16 [debug] 45535#0: *257119455 recv: fd:15745 27 of
1024
2013/10/25 12:32:16 [debug] 45535#0: *257119455 reusable connection: 0
2013/10/25 12:32:16 [debug] 45535#0: *257119455 posix_memalign:
00000000026AF8C0:4096 @16
2013/10/25 12:32:16 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45535#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45534#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [info] 45879#0: Using 32768KiB of shared memory for
push module in /etc/nginx/nginx.conf:100
2013/10/25 12:32:16 [debug] 45535#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45530#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45535#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45530#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45530#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45534#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45530#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45530#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45530#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: post event 00007F7BB9FCE010
2013/10/25 12:32:16 [debug] 45536#0: delete posted event
00007F7BB9FCE010
2013/10/25 12:32:16 [debug] 45536#0: accept on 0.0.0.0:80, ready: 1
2013/10/25 12:32:16 [debug] 45536#0: posix_memalign:
000000000298FF90:256 @16
2013/10/25 12:32:16 [debug] 45536#0: *257119502 accept: 46.252.18.244
fd:16232
2013/10/25 12:32:16 [debug] 45536#0: posix_memalign:
0000000003CC5040:256 @16
2013/10/25 12:32:16 [debug] 45536#0: *257119502 event timer add: 16232:
60000:1382697196773
2013/10/25 12:32:16 [debug] 45536#0: *257119502 reusable connection: 1
2013/10/25 12:32:16 [debug] 45536#0: *257119502 epoll add event:
fd:16232 op:1 ev:80000001
2013/10/25 12:32:16 [debug] 45536#0: accept() not ready (11: Resource
temporarily unavailable)
2013/10/25 12:32:16 [debug] 45536#0: *257119502 post event
00007F7BB9FD3358
2013/10/25 12:32:16 [debug] 45536#0: *257119502 delete posted event
00007F7BB9FD3358
2013/10/25 12:32:16 [debug] 45536#0: *257119502 http wait request
handler
2013/10/25 12:32:16 [debug] 45536#0: *257119502 malloc:
000000000263BD90:1024
2013/10/25 12:32:16 [debug] 45536#0: *257119502 recv: fd:16232 27 of
1024
2013/10/25 12:32:16 [debug] 45536#0: *257119502 reusable connection: 0
2013/10/25 12:32:16 [debug] 45536#0: *257119502 posix_memalign:
000000000323C850:4096 @16
2013/10/25 12:32:16 [debug] 45530#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45532#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45530#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45532#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45535#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45536#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45535#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45534#0: epoll add event: fd:97 op:1
ev:00000001
2013/10/25 12:32:16 [debug] 45536#0: epoll del event: fd:97 op:2
ev:00000000
2013/10/25 12:32:16 [debug] 45534#0: epoll del event: fd:97 op:2
ev:00000000


Thanks in advance,
Axel

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131025/780c6acd/attachment-0001.html>

From nginx-forum at nginx.us Fri Oct 25 17:31:25 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Fri, 25 Oct 2013 13:31:25 -0400
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <83edf8e751ce72b170efb5eca1f3a615.NginxMailingListEnglish@forum.nginx.org>
References: <20131023164225.GN2204@craic.sysops.org>
<83edf8e751ce72b170efb5eca1f3a615.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <545a1f43c733b5e598d3a930f0c1acaa.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

Now I need to create a limit that acts when the parameter ("capacity") is
empty (not provided). How do I do that? I can't figure out how, at least
not using "map".
(If it is provided, the other rules will evaluate it and one of them will
act according to the value.)

Thanks in advance!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244123#msg-244123


From jaderhs5 at gmail.com Fri Oct 25 17:58:40 2013
From: jaderhs5 at gmail.com (Jader H. Silva)
Date: Fri, 25 Oct 2013 15:58:40 -0200
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <545a1f43c733b5e598d3a930f0c1acaa.NginxMailingListEnglish@forum.nginx.org>
References: <20131023164225.GN2204@craic.sysops.org>
<83edf8e751ce72b170efb5eca1f3a615.NginxMailingListEnglish@forum.nginx.org>
<545a1f43c733b5e598d3a930f0c1acaa.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAKy9XyC33qr8EaubMygCsHCgfLUjNXHT0KfT8VvJfBF1d1chTw@mail.gmail.com>

Hi,

you could use a map that maps the expected values to an empty string,
with a non-empty default for everything else:

map $arg_capacity $my_default_key {
    ~*([2-9]|10)X  "";
    default        $http_x_forwarded_for;
}

If it matches 2X to 10X, $my_default_key will be empty.
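
A key that maps to the empty string is simply not counted by limit_req, so
a map like this can act as a fallback limit for requests that did not
supply a matching "capacity". A minimal sketch of the wiring (the zone
name, size, rate and location below are placeholders, not values from this
thread):

```nginx
http {
    map $arg_capacity $my_default_key {
        ~*([2-9]|10)X  "";
        default        $http_x_forwarded_for;
    }

    # only requests whose key is non-empty (i.e. no matching "capacity"
    # was given) are counted against this zone
    limit_req_zone $my_default_key zone=fallback:10m rate=10r/m;

    server {
        location /ipgeolocation/ {
            limit_req zone=fallback burst=5;
        }
    }
}
```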

Cheers
Jader H. Silva


2013/10/25 Brian08275660 <nginx-forum at nginx.us>

> Hi Francis,
>
> Now I need to create a limit that acts if the parameter ("capacity") is
> empty (not provided). How do I do that? I can't find how to, at least not
> using "map".
> (If provided, the other rules will evaluate it and one of them will act
> according to the value).
>
> Thanks in advanced!
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,244015,244123#msg-244123
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131025/6cae043e/attachment.html>

From nginx-forum at nginx.us Fri Oct 25 18:32:06 2013
From: nginx-forum at nginx.us (plan_danex@yahoo.co.id)
Date: Fri, 25 Oct 2013 14:32:06 -0400
Subject: help in nginx in the my paper
Message-ID: <d223978c719f0254f9bca11faea74ca1.NginxMailingListEnglish@forum.nginx.org>

when the instal in nginx in to my paper

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244126,244126#msg-244126


From pchychi at gmail.com Fri Oct 25 21:01:53 2013
From: pchychi at gmail.com (Payam Chychi)
Date: Fri, 25 Oct 2013 14:01:53 -0700
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <f4e573dfc9ce01bcfb8531f09840ef3c.NginxMailingListEnglish@forum.nginx.org>
References: <f4e573dfc9ce01bcfb8531f09840ef3c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <D1E94091FEFB4226A015B032E3DBE809@gmail.com>

Maybe I've misunderstood, but can't you very simply do this by injecting a cookie on the original request page and then have nginx match it, count it, and apply the rates? Or maybe I'm overcomplicating it... if that's even possible.

--
Payam Chychi
Network Engineer / Security Specialist


On Tuesday, October 22, 2013 at 8:42 PM, Brian08275660 wrote:

> Hi,
>
> I'm using the limit_req_zone module. I would like it to act only on some
> requests that have a certain string in one variable in the query string of
> the URL.For example, lets say that I'm providing a IP geolocation service,
> and that in the URL there is a query string like this:
>
> http://api.acme.com/ipgeolocation/locate?key=NANDSBFHGWHWN2X&ip=146.105.11.59
>
> I would like the rule to detect when the "key" parameter ends with "2X", and
> in such case to apply the limitation.
> What I really need is to give NGINX a secret message. The "key" parameter
> would end in "01X", "02X", "03X" (etc). This would indicate Nginx the
> limitation of queries per minute, and Nginx would apply a different rate for
> each request, depending on the "message".
>
> Is there a way to do that?
>
> Thanks in advance!
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244015#msg-244015
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
>


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131025/df7bd0b7/attachment.html>

From nginx-forum at nginx.us Fri Oct 25 21:14:16 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Fri, 25 Oct 2013 17:14:16 -0400
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <CAKy9XyC33qr8EaubMygCsHCgfLUjNXHT0KfT8VvJfBF1d1chTw@mail.gmail.com>
References: <CAKy9XyC33qr8EaubMygCsHCgfLUjNXHT0KfT8VvJfBF1d1chTw@mail.gmail.com>
Message-ID: <bb6ad770b2f234ba7d504a4af5bc3506.NginxMailingListEnglish@forum.nginx.org>

Hi Jader,

Thanks a lot, that looks like a nice solution!

I barely know how to build regular expressions, and I'm too lazy to learn
right now. Just a final question: actually they will send me any value from
2X to 25X, and that could even grow beyond 25. I would like a simpler, more
open regex, something like:

If it is something that includes an "X"

I will use it like this:

map $arg_capacity $my_default_key {
    If it is something that includes an "X"  "";
    default                                  $http_x_forwarded_for;
}

Simple as that. This would catch any case, from 2X to 1000000000X. It would
even catch illegal values, but that is not a problem for me: the Java code
behind Nginx validates the value they send anyway, so if they send me
something like "-1X" my app would return an error status code and force
them to send a valid value. The important thing is that Nginx should apply
the limit if the received value is not empty, and asking whether it
contains an "X" answers that boolean question.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244129#msg-244129


From nginx-forum at nginx.us Fri Oct 25 21:17:49 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Fri, 25 Oct 2013 17:17:49 -0400
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <D1E94091FEFB4226A015B032E3DBE809@gmail.com>
References: <D1E94091FEFB4226A015B032E3DBE809@gmail.com>
Message-ID: <c8b08fb14e94bb063d9e619523f793ce.NginxMailingListEnglish@forum.nginx.org>

Hi Payam,

I don't have that option. My users are not using real browsers but objects
that model an HTTP client, which probably can't handle cookies. And I don't
want to ask them to do so; it would make things more complex for them,
whereas including an extra parameter in the query string is a piece of
cake.

Thanks anyway for the idea!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244130#msg-244130


From paulnpace at gmail.com Fri Oct 25 21:44:57 2013
From: paulnpace at gmail.com (Paul N. Pace)
Date: Fri, 25 Oct 2013 14:44:57 -0700
Subject: Passing / denying PHP requests
In-Reply-To: <20131023184126.GP2204@craic.sysops.org>
References: <CAHUM0kDXTXkr=qtdUT52Sb7HsoqMTDNPUw=2XadvDLypmaBDig@mail.gmail.com>
<20131023164920.GO2204@craic.sysops.org>
<CAHUM0kBcbs3MnG6N4i3sUR42x=_LYV7+cM_UsLcQmFHYpk0hfQ@mail.gmail.com>
<20131023184126.GP2204@craic.sysops.org>
Message-ID: <CAHUM0kDaqcw7gddtiWkqscN02XXsAnFdu67eWDo4eUmuquJRDQ@mail.gmail.com>

Hi Francis, and again thanks for your help in this matter. I would
have responded sooner but the day I was planning to resolve this issue
I had an unseasonably long power outage.

On Wed, Oct 23, 2013 at 11:41 AM, Francis Daly <francis at daoine.org> wrote:
> On Wed, Oct 23, 2013 at 11:32:33AM -0700, Paul N. Pace wrote:
>> On Wed, Oct 23, 2013 at 9:49 AM, Francis Daly <francis at daoine.org> wrote:
>
> Hi there,
>
>> "location ~ php$ { deny all; }" does not deny access to any php files,
>> even when nested in "location ^~ /installdirectory/ {}". The previous
>> configuration "location ~* installdirectory/.*\.php$ { deny all; }"
>> did block access to all php files. The ".*\." - is that why one works
>> and the other doesn't?
>
> I suspect not.
>
> What "location" lines do you have in the appropriate server{} block in
> your config file?

These are the location directives that would apply to the /forums/
directory, the /installdirectory/ of the server block that I'm
currently working on. This is an installation of Vanilla, but I'm
trying to come up with a general template that I can apply to other
packages (not a template as in one single file, but a way to apply
directives to each package I use):

server {

    location = /forums/index.php {
        include global-configs/php.conf;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
    }

    location ^~ forums/ {
        location ~ php$ { deny all; }
    }

    #location ~* forums/.*\.php$ {
    #    deny all;
    #}

    location ~* ^/forums/uploads/.*.(html|htm|shtml|php)$ {
        types { }
        default_type text/plain;
    }

    location /forums/ {
        try_files $uri $uri/ @forum;
        location ~* /categories/([0-9]|[1-9][0-9]|[1-9][0-9][0-9])$ {
            return 404;
        }
    }

    location @forum {
        rewrite ^/forums/(.+)$ /forums/index.php?p=$1 last;
    }
}


>
> What one request do you make?
>
> From that, which one location{} block is used to handle this one request?
>
>> > http://nginx.org/r/location for how the one location{} is chosen to
>> > handle a request.
>>
>> I read through the nginx.org explanation of the location directive,
>> but it isn't helping me with understanding how to build the deny
>> statement.
>
> Do whatever it takes to have these requests handled in a known location{}
> block.
>
> Put the config you want inside that block.

Do you mean that I should single out each php file and create a
location block to deny access to the file?

> If you enable the debug log, you will see lots of output, but it will tell
> you exactly which block is used, if it isn't clear from the "location"
> documentation.

I navigated to /forums/login.php. Here seems to be the pertinent part
of error.log:

2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "forums/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "phpmyadmin/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "forums"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "index.php"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~
"/categories/([0-9]|[1-9][0-9]|[1-9][0-9][0-9])$"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "/\."
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "~$"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "piwik/config/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "piwik/core/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~
"(piwik/index|piwik/piwik|piwik/js/index)\.php$"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~
"^/forums/uploads/.*.(html|htm|shtml|php)$"
2013/10/25 21:39:19 [debug] 2771#0: *1 using configuration "/forums/"

I'm not sure which location block is "/forums/". The login.php file is
served as a downloadable file.

Thanks!


Paul


From nginx-forum at nginx.us Fri Oct 25 23:07:20 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Fri, 25 Oct 2013 19:07:20 -0400
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <bb6ad770b2f234ba7d504a4af5bc3506.NginxMailingListEnglish@forum.nginx.org>
References: <CAKy9XyC33qr8EaubMygCsHCgfLUjNXHT0KfT8VvJfBF1d1chTw@mail.gmail.com>
<bb6ad770b2f234ba7d504a4af5bc3506.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <def944e417b5ebf8c7099a125e5e7e14.NginxMailingListEnglish@forum.nginx.org>

I read some information about regexes and I think I found the way to
express "X or x, preceded by something":

~*(.*)X

I think that the first two characters mean "case-insensitive match", then
the "(.*)" means "any quantity of characters", and the "X" means that
specific letter.

Am I right?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244136#msg-244136


From ar at xlrs.de Sat Oct 26 00:06:39 2013
From: ar at xlrs.de (Axel)
Date: Sat, 26 Oct 2013 02:06:39 +0200
Subject: Too many open files and unix
In-Reply-To: <1382718946.86725.YahooMailNeo@web124903.mail.ne1.yahoo.com>
References: <23c73f29bffccf8a031ac62e1bbdfbaa@xlrs.de>
<1382718946.86725.YahooMailNeo@web124903.mail.ne1.yahoo.com>
Message-ID: <ca2b8e113aa12a729587f25c8a223fb6@xlrs.de>

Hello Aron,

Am 25.10.2013 18:35, schrieb Aron:
> Hello,
> Can you tell the output from this command ps -eLF |grep www-data |wc
> -l ?

Yes, it's 13

> And What does /proc/sys/fs/nr_open output ?

1048576

Thanks, Axel


From agentzh at gmail.com Sat Oct 26 06:09:04 2013
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Fri, 25 Oct 2013 23:09:04 -0700
Subject: Nginx lua capture_multi fire and forget
In-Reply-To: <00663ce8f58a9c2387a5d86d341d4a9f.NginxMailingListEnglish@forum.nginx.org>
References: <00663ce8f58a9c2387a5d86d341d4a9f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAB4Tn6O225X67Gw0BJ1WNeW=XEDg9K0KMmwWF56S3ba9sjdz7Q@mail.gmail.com>

Hello!

On Wed, Oct 23, 2013 at 2:46 AM, PieterVI wrote:
> Is there a way to have the lua module launch the requests without waiting
> for the response?
> It should fire the requests and continue directly afterwards and forget
> about the response handling.
> I could try and modify the lua module, but I'm not really familiar with
> C/C++ coding.
>

Completely forgetting about the response handling is not really a good
thing to do here. You can, however, make the upstream requests and response
handling run in the "background" without adding any latency to your
downstream requests. Basically, you can just use ngx_lua's timer API
and lua-resty-http-simple library for this:

https://github.com/chaoslawful/lua-nginx-module#ngxtimerat
https://github.com/bakins/lua-resty-http-simple

The only limitation (right now) is that lua-resty-http-simple does not
support https yet. Hopefully you don't need it.
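
The timer pattern can be sketched as an nginx config fragment like the one
below (the location name, upstream host/port/path, and the
lua-resty-http-simple call are placeholders based on that library's
README, not code from this thread):

```nginx
location /notify {
    content_by_lua '
        local function fire(premature)
            if premature then return end
            -- the background request; errors are only logged, never
            -- propagated to the downstream client
            local http = require "resty.http.simple"
            local res, err = http.request("127.0.0.1", 8080, { path = "/log" })
            if not res then
                ngx.log(ngx.ERR, "background request failed: ", err)
            end
        end

        -- a 0-delay timer runs detached from the current request, so the
        -- client is answered immediately
        local ok, err = ngx.timer.at(0, fire)
        if not ok then
            ngx.log(ngx.ERR, "failed to create timer: ", err)
        end
        ngx.exit(ngx.HTTP_OK)
    ';
}
```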

BTW, it looks like you're trying to do something that tcpcopy may help you:

https://github.com/wangbin579/tcpcopy

BTW 2: you may want to join the openresty-en mailing list for such
questions so that you may get faster responses:

https://groups.google.com/group/openresty-en

This is also recommended in ngx_lua's official documentation:

https://github.com/chaoslawful/lua-nginx-module#community

Best regards,
-agentzh


From nginx-forum at nginx.us Sat Oct 26 12:09:18 2013
From: nginx-forum at nginx.us (mex)
Date: Sat, 26 Oct 2013 08:09:18 -0400
Subject: bad performance with static files + keepalive
Message-ID: <2d258c01c542d47cb1b507834cff701f.NginxMailingListEnglish@forum.nginx.org>

Hi List,

I have a strange performance issue on a server that serves
static files only (HTTP + HTTPS) when files are bigger than 5k:

- rps drops from 6500 rps (empty file) to 13 rps when requesting a file >
5k
- perftest with location /perftest/ is at 8000 rps (https) / 15000 rps
(http)
- perftest with empty.html is 6500 rps (https) / 13000 rps (http)
- perftest with 5k script.js is 1500 rps (https) / 12000 rps (http)
- perftest with 30k script.js is 13 rps (https) / 300 rps (http)
- besides the bad performance we have a lot of complaints about
slow servers, and I can confirm that loading these resources takes up
to 15 seconds

- OS is SLES11.2, system is a kvm virtual-machine, 2 cores, 1GB ram, 270mb
free, 420mb cached
- fresh reboot
- no iowait
- no shortage of ram
- error_log/debug shows nothing.

what i played with so far, with no improvements:

- open_file_cache
- keepalive_requests 10.....100000
- keepalive_timeout
- sendfile/tcp_*
- various ssl_ciphers (PFS is not needed here)
- different nginx-version (os is 1.0.10, self-compiled is 1.4.2)

I'm scratching my head and wondering: what did I miss?
There must be something ... on a similar setup with Debian/SSL
we receive an average of 4000 rps for static files with
PFS algos on.


ssl/config
-------------------------------------------------
worker_processes 2;

worker_rlimit_nofile 10000;

#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;

#pid /var/run/nginx.pid;


events {
worker_connections 1000;
use epoll;
multi_accept on;
}



http {
include mime.types;
default_type application/octet-stream;

#log_format main '$remote_addr - $remote_user [$time_local] "$request"
'
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';


# access_log off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;

send_timeout 15s;


#keepalive_timeout 0s;
keepalive_timeout 15s;
keepalive_requests 10;

server_tokens off;

open_file_cache max=1000 inactive=20s;
open_file_cache_valid 60s;
open_file_cache_min_uses 2;
open_file_cache_errors on;


}

server {
...

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1440m;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM;
# ssl_ciphers RC4:HIGH:!aNULL:!MD5;
#ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-RC4-SHA:ECDHE-RSA-RC4-SHA:ECDH-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!CBC:!EDH:!kEDH:!PSK:!SRP:!kECDH;

ssl_prefer_server_ciphers on;

ssl_ciphers HIGH:!aNULL:!MD5:!kEDH:!kECDH;


location / {
root /srv/htdocs/domain;
expires 1w;

}

location /perftest {
return 200;
}
}

regards & thanx in advance

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244146,244146#msg-244146


From francis at daoine.org Sat Oct 26 12:48:40 2013
From: francis at daoine.org (Francis Daly)
Date: Sat, 26 Oct 2013 13:48:40 +0100
Subject: Passing / denying PHP requests
In-Reply-To: <CAHUM0kDaqcw7gddtiWkqscN02XXsAnFdu67eWDo4eUmuquJRDQ@mail.gmail.com>
References: <CAHUM0kDXTXkr=qtdUT52Sb7HsoqMTDNPUw=2XadvDLypmaBDig@mail.gmail.com>
<20131023164920.GO2204@craic.sysops.org>
<CAHUM0kBcbs3MnG6N4i3sUR42x=_LYV7+cM_UsLcQmFHYpk0hfQ@mail.gmail.com>
<20131023184126.GP2204@craic.sysops.org>
<CAHUM0kDaqcw7gddtiWkqscN02XXsAnFdu67eWDo4eUmuquJRDQ@mail.gmail.com>
Message-ID: <20131026124840.GC4365@craic.sysops.org>

On Fri, Oct 25, 2013 at 02:44:57PM -0700, Paul N. Pace wrote:
> On Wed, Oct 23, 2013 at 11:41 AM, Francis Daly <francis at daoine.org> wrote:
> > On Wed, Oct 23, 2013 at 11:32:33AM -0700, Paul N. Pace wrote:

Hi there,

> Hi Francis, and again thanks for your help in this matter. I would
> have responded sooner but the day I was planning to resolve this issue
> I had an unseasonably long power outage.

No worries, there's no rush on this.

> >> "location ~ php$ { deny all; }" does not deny access to any php files,
> >> even when nested in "location ^~ /installdirectory/ {}".
> >
> > What "location" lines do you have in the appropriate server{} block in
> > your config file?
>
> hese are the location directives that would apply to the /forums/
> directory, the /installdirectory/ of the server block that I'm
> currently working on.

nginx has very specific rules on how the one location to handle this
request is chosen. Until you understand those rules, you will be guessing
whether or not your config will work. It's simpler not to have to guess.

In this case, the request is "/forums/login.php".

> location = /forums/index.php {

It cannot match that, because the request and the location are not equal
(up to the first ? or # in the request).

> location ^~ forums/ {

It cannot match that (in fact, nothing normal can) because if the first
character of a prefix match is not "/" it is unlikely to be useful.

> location ~ php$ { deny all;}

It could match that, except it doesn't because only top-level location{}s
are considered initially.

> #location ~* forums/.*\.php$ {

It could have matched that, except it is commented out.

> location ~* ^/forums/uploads/.*.(html|htm|shtml|php)$ {

It does not match that, because the request does not include "/uploads".

> location /forums/ {

It can match that, because the location and the request have the same prefix.

> location ~* /categories/([0-9]|[1-9][0-9]|[1-9][0-9][0-9])$ {

It does not match that.

> location @forum {

It does not match that, because named locations are not considered for
an initial request.

So, of that set the only possible match is /forums/, where the configuration

> try_files $uri $uri/ @forum;

says "serve it if the file exists". The file exists, and nothing says
to proxy_pass or fastcgi_pass or anything, so the file content should
be sent as-is.

> > Do whatever it takes to have these requests handled in a known location{}
> > block.
> >
> > Put the config you want inside that block.
>
> Do you mean that I should single out each php file and create a
> location block to deny access the file?

You have one group of requests that you want handled in one way, and
another group handled in another. You need some way of distinguishing
those groups - either singly or using patterns.
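
For example, assuming /forums/ really is the install directory, one
arrangement that separates the two groups by pattern could look like this
(a sketch only; note the leading "/" that the earlier
"location ^~ forums/" was missing):

```nginx
location ^~ /forums/ {
    # nested regex locations are consulted once this prefix is chosen
    location ~ \.php$ { deny all; }
    try_files $uri $uri/ @forum;
}
```

The exact-match "location = /forums/index.php" block is still checked
before any prefix match, so the front controller keeps working.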

> I navigated to /forums/login.php. Here seems to be the pertinent part
> of error.log:
>
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "forums/"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "/"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "phpmyadmin/"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "forums"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "/"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "index.php"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~
> "/categories/([0-9]|[1-9][0-9]|[1-9][0-9][0-9])$"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "/\."
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "~$"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "piwik/config/"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "piwik/core/"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~
> "(piwik/index|piwik/piwik|piwik/js/index)\.php$"
> 2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~
> "^/forums/uploads/.*.(html|htm|shtml|php)$"

The bit after "location: " on each line should look familiar from the
complete list of location blocks in your real config file.

> 2013/10/25 21:39:19 [debug] 2771#0: *1 using configuration "/forums/"
>
> I'm not sure which location block is "/forums/". The login.php file is
> served as a downloadable file.

You've shown only one location block that has "/forums/" as its
uri. That's the one.

In that block, you say "send the file", so that's what nginx does.

I don't think I can explain it any better than the combination of nginx
documentation and previous list mails.

Perhaps if you put 'return 200 "this is location N";' (for varying values
of N) as the first line inside each location block, then when you use
"curl" to issue each request in turn, it will become clear which
location{} is chosen for each request, and it will become clear why that
location was chosen.
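As a hypothetical sketch of that debugging technique (the location uris here are only examples, not your real config):

```nginx
location /forums/ {
    return 200 "this is location 1\n";
}

location @forum {
    return 200 "this is location 2\n";
}
```

Requesting each url with "curl -i" then prints which block nginx selected, before any try_files or proxying complicates the picture.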

Then you'll be able to put the config you want in the location{} that
is chosen.

Good luck with it,

f
--
Francis Daly francis at daoine.org


From francis at daoine.org Sat Oct 26 12:53:13 2013
From: francis at daoine.org (Francis Daly)
Date: Sat, 26 Oct 2013 13:53:13 +0100
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <def944e417b5ebf8c7099a125e5e7e14.NginxMailingListEnglish@forum.nginx.org>
References: <CAKy9XyC33qr8EaubMygCsHCgfLUjNXHT0KfT8VvJfBF1d1chTw@mail.gmail.com>
<bb6ad770b2f234ba7d504a4af5bc3506.NginxMailingListEnglish@forum.nginx.org>
<def944e417b5ebf8c7099a125e5e7e14.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131026125313.GD4365@craic.sysops.org>

On Fri, Oct 25, 2013 at 07:07:20PM -0400, Brian08275660 wrote:

Hi there,

> ~*(.*)X
>
> I think that the first two characters mean "match anycase", then the "(.*)"
> would mean "any quantity of characters" and the "X" would mean that specific
> letter.
>
> Am I right?

Yes.

But unless you're going to do something with the bit before the X,
then in the context of a map{}, it is equivalent to ~*X, since all you
care about is whether it matches.

It will just as easily match x123 or 123x123.
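To connect this back to the subject of the thread, a map keyed that way might feed limit_req_zone roughly like this (a sketch only; the zone name and rate are invented, and it relies on the documented behaviour that requests whose key is an empty string are not counted):

```nginx
http {
    # Empty key => the request is not subject to the limit
    map $request_uri $limit_key {
        default "";
        ~*X     $binary_remote_addr;
    }

    limit_req_zone $limit_key zone=selective:10m rate=10r/s;

    server {
        listen 80;

        location / {
            limit_req zone=selective burst=5;
        }
    }
}
```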

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Sat Oct 26 13:38:16 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Sat, 26 Oct 2013 09:38:16 -0400
Subject: limit_req_zone: How to apply only to some requests containing
some string in the URL?
In-Reply-To: <20131026125313.GD4365@craic.sysops.org>
References: <20131026125313.GD4365@craic.sysops.org>
Message-ID: <87e557e43e65f769781b93f92ec3158d.NginxMailingListEnglish@forum.nginx.org>

Oh, ok. Then it is similar to regex in Java.
Well, then I think I have a nice and elegant solution. Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244015,244149#msg-244149


From ar at xlrs.de Sat Oct 26 15:08:57 2013
From: ar at xlrs.de (Axel)
Date: Sat, 26 Oct 2013 17:08:57 +0200
Subject: Too many open files and unix
In-Reply-To: <ca2b8e113aa12a729587f25c8a223fb6@xlrs.de>
References: <23c73f29bffccf8a031ac62e1bbdfbaa@xlrs.de>
<1382718946.86725.YahooMailNeo@web124903.mail.ne1.yahoo.com>
<ca2b8e113aa12a729587f25c8a223fb6@xlrs.de>
Message-ID: <497f34c0aa60b4c116bb502f8c845b9b@xlrs.de>

Hi,

tonight I force-restarted nginx and everything seems to be fixed.
Checking the sockets shortly after restart returned 48 sockets for the
master process and 36 sockets for worker processes.

Now, 14 hours later, the socket count is rising again - there are now 1128
sockets for the master and 1116 for the workers.

I noticed the difference of 12 sockets between master and workers. These
seem to be the ones used for communication between the master and the
workers, but I don't know how to interpret this information.

Is this the intended behaviour of nginx, or is it a bug in Ubuntu's
PPA package of nginx?

Regards, Axel


Am 26.10.2013 02:06, schrieb Axel:
> Hello Aron,
>
> Am 25.10.2013 18:35, schrieb Aron:
>> Hello,
>> Can you tell me the output of this command: ps -eLF | grep www-data | wc -l ?
>
> Yes, it's 13
>
>> And what does /proc/sys/fs/nr_open output?
>
> 1048576
>
> Thanks, Axel
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


From nginx-forum at nginx.us Sun Oct 27 03:43:29 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Sat, 26 Oct 2013 23:43:29 -0400
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
Message-ID: <a1e5d47552742f2299da04f6b8f2ca55.NginxMailingListEnglish@forum.nginx.org>

Hi,

I'm doing this: limit_req_status 429;

The module HttpLimitReqModule delivers "nginx/1.4.3" (at least with version
1.4.3) as a message for the HTTP status code 429. That is too generic and
not useful at all. Why doesn't it deliver a "Too Many Requests" message
instead of that? It is absurd, useless.

Not only that, but the HTML response that comes with an HTTP 503 status (the
default status when clients exceed the limits) is descriptive: it reports
the HTTP status code and message. But with HTTP status code 429 we get an
empty HTML response!

Can this be solved?

Brian

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244156,244156#msg-244156


From alex at zeitgeist.se Sun Oct 27 09:30:53 2013
From: alex at zeitgeist.se (Alex)
Date: Sun, 27 Oct 2013 10:30:53 +0100
Subject: Various debugging info not shown (
Message-ID: <0492EC89-7790-4DDF-B24E-204FD61D2905@mail.slogh.com>

Hi,

I am trying to debug handshakes and ticket reuse. Lots of debugging
information is shown in my error_log, but some info is skipped.
Specifically, info from /src/event/ngx_event_openssl.c. For example,

ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0,
               "ssl new session: %08XD:%d:%d",
               hash, sess->session_id_length, len);

ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0,
               "ssl get session: %08XD:%d", hash, len);

don't seem to be executed - i.e. neither message is shown in my logs. On
the other hand,

if (SSL_session_reused(c->ssl->connection)) {
    ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0,
                   "SSL reused session");
}

gets executed, i.e. "SSL reused session" is shown in the logs.

Any ideas?

Thanks


---
My setup:

nginx 1.5.6, with --with-debug compiled

in nginx.conf, events { debug_connection <myip>; }

site uses SSL, with ssl_session_cache shared:SSL:10m; and
ssl_session_timeout 1680m; SSL configuration is working fine.


From francis at daoine.org Sun Oct 27 10:05:17 2013
From: francis at daoine.org (Francis Daly)
Date: Sun, 27 Oct 2013 10:05:17 +0000
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
In-Reply-To: <a1e5d47552742f2299da04f6b8f2ca55.NginxMailingListEnglish@forum.nginx.org>
References: <a1e5d47552742f2299da04f6b8f2ca55.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131027100517.GE4365@craic.sysops.org>

On Sat, Oct 26, 2013 at 11:43:29PM -0400, Brian08275660 wrote:

Hi there,

> The module HttpLimitReqModule delivers "nginx/1.4.3" (at least with version
> 1.4.3) as a message for the HTTP status code 429. That is too generic and
> not useful at all.

Are you reporting that nginx generates a malformed response?

Or are you reporting that your http client handles a well-formed response
poorly?

The first sounds like something patchable in nginx. The second doesn't.

> But with the HTTP status code 429
> we get an empty HTML response!

http://nginx.org/r/error_page ?

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Sun Oct 27 13:40:11 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Sun, 27 Oct 2013 09:40:11 -0400
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
In-Reply-To: <20131027100517.GE4365@craic.sysops.org>
References: <20131027100517.GE4365@craic.sysops.org>
Message-ID: <958df91a65ec0b908189e9efd955bde5.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

I think I'm actually reporting a malformed response; it is nginx that sends
"nginx/1.4.3" instead of something useful such as "too many requests". I
have tested it with Firefox and there is no doubt. I also tested it with a
Java HTTP client component, with the same result. The status code received
is correct (429), but not the message that comes with it.

I had just discovered the "error_page" directive after I sent the email; I
will use it, thanks. It is easy to do. But why doesn't nginx deliver a
default HTML response for 429, as it does for 503 and other codes? Why the
empty page? It would make more sense to deliver the typical simple response,
which we could then customize with "error_page" if we wanted something more
special.

Brian

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244156,244162#msg-244162


From mdounin at mdounin.ru Sun Oct 27 14:10:13 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 27 Oct 2013 18:10:13 +0400
Subject: Various debugging info not shown (
In-Reply-To: <0492EC89-7790-4DDF-B24E-204FD61D2905@mail.slogh.com>
References: <0492EC89-7790-4DDF-B24E-204FD61D2905@mail.slogh.com>
Message-ID: <20131027141013.GS7074@mdounin.ru>

Hello!

On Sun, Oct 27, 2013 at 10:30:53AM +0100, Alex wrote:

> I am trying to debug handshakes and ticket reuse. Lots of debugging
> information is shown in my error_log, but some info is skipped.
> Specifically, info from /src/event/ngx_event_openssl.c. For example,
>
> ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0,
>                "ssl new session: %08XD:%d:%d",
>                hash, sess->session_id_length, len);
>
> ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0,
>                "ssl get session: %08XD:%d", hash, len);
>
> don't seem to be executed - i.e. neither message is shown in my logs. On
> the other hand,
>
> if (SSL_session_reused(c->ssl->connection)) {
>     ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0,
>                    "SSL reused session");
> }
>
> gets executed, i.e. "SSL reused session" is shown in the logs.
>
> Any ideas?

What makes you think that "info is skipped", rather than assuming
the relevant code isn't executed for some reason?

--
Maxim Dounin
http://nginx.org/en/donation.html


From contact at jpluscplusm.com Sun Oct 27 14:23:30 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Sun, 27 Oct 2013 14:23:30 +0000
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
In-Reply-To: <958df91a65ec0b908189e9efd955bde5.NginxMailingListEnglish@forum.nginx.org>
References: <20131027100517.GE4365@craic.sysops.org>
<958df91a65ec0b908189e9efd955bde5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CAKsTx7BHYEmWq-yt9bZfcQidmS7_103VugcM6RW1VxZGFmTQ_Q@mail.gmail.com>

On 27 October 2013 13:40, Brian08275660 <nginx-forum at nginx.us> wrote:
> I think I'm actually reporting a malformed response; it is nginx that sends
> "nginx/1.4.3" instead of something useful such as "too many requests".

Please show the complete output of "curl -v <429-uri>".

I don't see any visible output from a 429 with nginx 1.4.1 - I just
see an empty response body.

J


From alex at zeitgeist.se Sun Oct 27 15:04:57 2013
From: alex at zeitgeist.se (Alex)
Date: Sun, 27 Oct 2013 16:04:57 +0100
Subject: Various debugging info not shown (
In-Reply-To: <20131027141013.GS7074@mdounin.ru>
Message-ID: <526D2B99.8000905@zeitgeist.se>

Hi Maxim,

Good question. I have been debugging a SSL configuration for some time,
and one of the things I've been testing for is the renewal of session
tickets. I used a thin client for that purpose:
https://github.com/grooverdan/rfc5077

Anyhow, according to the test, session renewal appears to work as intended:

./gnutls-client -r -d 10 mysite 443

[?] Parse arguments.
[?] Initialize GNU TLS library.
[?] Solve mysite:443:
? Will connect to myip
[?] Initialize TLS session.
[?] Enable use of session tickets (RFC 5077).
[?] Connect to mysite:443.
[?] Start TLS renegotiation.
[?] Check if session was reused:
? SSL session was not used
[?] Get current session:
? Session context:
? Protocol : TLS1.2
? Cipher : AES-256-CBC
? Kx : DHE-RSA
? Compression : NULL
? PSK : (null)
? ID : D18B216F82B277FCA97B95E35E91A323F922873483FD02FB025FE94106CB50C3
[?] Send HTTP GET.
[?] Get HTTP answer:
? HTTP/1.1 301 Moved Permanently
[?] End TLS connection.
[?] waiting 10 seconds.
[?] Initialize TLS session.
[?] Enable use of session tickets (RFC 5077).
[?] Copy old session.
[?] Connect to mysite:443.
[?] Start TLS renegotiation.
[?] Check if session was reused:
? SSL session correctly reused
[?] Get current session:
? Session context:
? Protocol : TLS1.2
? Cipher : AES-256-CBC
? Kx : DHE-RSA
? Compression : NULL
? PSK : (null)
? ID : D18B216F82B277FCA97B95E35E91A323F922873483FD02FB025FE94106CB50C3
[?] Send HTTP GET.
[?] Get HTTP answer:
? HTTP/1.1 301 Moved Permanently
[?] End TLS connection.

So I thought that when I enabled full debugging, I'd see the relevant debug
information in the error log, such as "ssl new session" / "ssl get session"
from ngx_event_openssl.c - none of which is shown, however.

FWIW, the reason why I am actually trying to debug this is that, for
some reason, when I choose a larger delay between the two test
renegotiations (say 3600s instead of 10s), the previous session does
not get reused, despite the fact that in my nginx site config I set a
very large session timeout (1680m).

Cheers,
Alex


From nginx-forum at nginx.us Sun Oct 27 17:22:06 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Sun, 27 Oct 2013 13:22:06 -0400
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
In-Reply-To: <CAKsTx7BHYEmWq-yt9bZfcQidmS7_103VugcM6RW1VxZGFmTQ_Q@mail.gmail.com>
References: <CAKsTx7BHYEmWq-yt9bZfcQidmS7_103VugcM6RW1VxZGFmTQ_Q@mail.gmail.com>
Message-ID: <5d87c17c9d8f0b758513c5d8d2ebe9c4.NginxMailingListEnglish@forum.nginx.org>

This is the output:


root at ip-10-139-33-71:~# curl -v <URL was here >
* About to connect() to api.xxxxxxxxxxxxxx.com port 80 (#0)
* Trying 40.57.235.104... connected
> GET /location/locate-ip?key=BBANBWEDS7UZ6FD8747F76VZ&ip=201.1.1.1 HTTP/1.1
> User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: api.xxxxxxxxxxxx.com
> Accept: */*
>
< HTTP/1.1 429 nginx/1.4.3    <----------------------------------------- THIS IS WRONG
< Date: Sun, 27 Oct 2013 17:12:42 GMT
< Content-Length: 0
< Connection: keep-alive
<
* Connection #0 to host api.xxxxxxxxxxxxxx.com left intact
* Closing connection #0
root at ip-10-139-33-71:~#



Please look at the line I'm marking as wrong. After "429" it says
"nginx/1.4.3", whereas it should say "too many requests" or something like
that.

The HTML output is empty, certainly. I can customize that with the
"error_page" directive, so that's not a big problem (even though I think it
should deliver a standard non-empty response explaining the 429 code).

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244156,244169#msg-244169


From francis at daoine.org Sun Oct 27 18:36:35 2013
From: francis at daoine.org (Francis Daly)
Date: Sun, 27 Oct 2013 18:36:35 +0000
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
In-Reply-To: <958df91a65ec0b908189e9efd955bde5.NginxMailingListEnglish@forum.nginx.org>
References: <20131027100517.GE4365@craic.sysops.org>
<958df91a65ec0b908189e9efd955bde5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131027183635.GF4365@craic.sysops.org>

On Sun, Oct 27, 2013 at 09:40:11AM -0400, Brian08275660 wrote:

Hi there,

> I think I'm actually reporting a malformed response; it is nginx that sends
> "nginx/1.4.3" instead of something useful such as "too many requests".

This is quite confusing to me.

What you have shown looks well-formed to me, but doesn't look as useful
as you want. (They are different things. If it is a well-formed http
429 response, then it is the client's job to know what that means. The
reason-phrase and the http body content are optional enhancements that
the server can choose to send. According to my reading of RFC 2616.)

What is confusing is that when I do something similar, I get different
output which does not look well-formed to me:

===
$ curl -i -s http://localhost:8000/a.html | od -bc | head -n 6
0000000 110 124 124 120 057 061 056 061 040 064 062 071 015 012 123 145
H T T P / 1 . 1 4 2 9 \r \n S e
0000020 162 166 145 162 072 040 156 147 151 156 170 057 061 056 064 056
r v e r : n g i n x / 1 . 4 .
0000040 063 015 012 104 141 164 145 072 040 123 165 156 054 040 062 067
3 \r \n D a t e : S u n , 2 7
===

The lack of a space immediately following "429" up there looks incorrect
to me.

This is when using:

===
$ sbin/nginx -V
nginx version: nginx/1.4.3
built by gcc 4.4.5 (Debian 4.4.5-8)
configure arguments: --with-debug

$ cat conf/nginx.conf

events {
    worker_connections 1024;
    debug_connection 127.0.0.1;
}

http {
    limit_req_zone $binary_remote_addr zone=zone1:128m rate=1r/m;

    limit_req_status 429;
    server {
        location = /a.html {
            limit_req zone=zone1 nodelay;
        }
    }
}
===

> I have
> tested it with Firefox and there is no doubt. Also tested it with a java
> HTTP client component, the same result. The status code received is correct
> (429), but not the message that comes with it.

The message is irrelevant from a correctness point of view.

> I had just discovered the "error_page" directive after I sent the email, I
> will create it, thanks. It is easy to do it. But why doesn't nginx deliver a
> default HTML response for 429, as it does for 503 and other codes?

My guess is that nginx doesn't claim to support whichever standard
defines code 429; so as far as nginx is concerned, you using it is just
like you using, say, code 477. It's a number you choose, so you get to
ensure that the client can handle it.

(It seems to be RFC 6585, currently "Proposed Standard", that defines 429.)

Now I think that it would probably be nice if there were a way to
provide a reason-phrase alongside a status code in nginx. I'm sure that
if somebody cares enough, a patch will be forthcoming.

And if we can find out why the response that you get and the response
that I get differ, and if it turns out that the one that is generated
by pure-nginx is actually malformed and causes a client that matters to
break, then that may become patched too.

But the response you get looks correct to me, for a status code that nginx
doesn't know about. The client knows it is a client error (400-series);
if the client supports RFC 6585 it also knows that it means "Too Many
Requests"; and if it doesn't, it can show the body that you choose to
send with error_page so that the user can work out what it means.

f
--
Francis Daly francis at daoine.org


From nmilas at noa.gr Sun Oct 27 19:09:50 2013
From: nmilas at noa.gr (Nikolaos Milas)
Date: Sun, 27 Oct 2013 21:09:50 +0200
Subject: Nagios check for nginx with separate metrics
Message-ID: <526D64FE.7030004@noa.gr>

Hello,

I am trying to run a Nagios check for nginx (in Opsview Core) but I have
a problem: All of the available (to my knowledge) nginx Nagios checks
(http://exchange.nagios.org/directory/Plugins/Web-Servers/nginx/)
produce comprehensive output which includes all "metrics" together,
while I would want one that can output a selected metric at a time, by
using a parameter, like "-m <metric>," in the following example:

./check_nginx.sh -H localhost -P 80 -p /var/run -n nginx.pid -s
nginx_status -o /tmp -m current_requests
- or -
./check_nginx.sh -H localhost -P 80 -p /var/run -n nginx.pid -s
nginx_status -o /tmp -m requests_per_second
- or -
./check_nginx.sh -H localhost -P 80 -p /var/run -n nginx.pid -s
nginx_status -o /tmp -m accesses
etc.

Does anyone know whether such an nginx Nagios check exists (and where),
or whether we would have to modify the source code of one of these plugins
to achieve the required behavior? (Frankly, I would be surprised if such a
check does not exist yet, but I couldn't find one on the Net, despite my
searches.)

Output with a single metric at a time is important for use in
server/network monitoring systems.

(I know that the issue is more related to Nagios, but I am hoping that
someone on this mailing list has faced this problem with monitoring
NGINX and can point to a solution.)

Thanks and regards,
Nick


From luky-37 at hotmail.com Sun Oct 27 20:45:40 2013
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Sun, 27 Oct 2013 21:45:40 +0100
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
In-Reply-To: <20131027183635.GF4365@craic.sysops.org>
References: <20131027100517.GE4365@craic.sysops.org>,
<958df91a65ec0b908189e9efd955bde5.NginxMailingListEnglish@forum.nginx.org>,
<20131027183635.GF4365@craic.sysops.org>
Message-ID: <DUB107-W18825CEC3BD5D34EAD7DAED0F0@phx.gbl>

Hi!


> What you have shown looks well-formed to me, but doesn't look as useful
> as you want. (They are different things. If it is a well-formed http
> 429 response, then it is the client's job to know what that means. The
> reason-phrase and the http body content are optional enhancements that
> the server can choose to send. According to my reading of RFC 2616.)
>
> What is confusing is that when I do something similar, I get different
> output which does not look well-formed to me:

I think nginx is returning the same thing for you both, and that curl
fails to parse this bogus HTTP response (maybe you are using different
curl releases).

If we look again at Brian's curl output, we don't see any Server header
in the response, which is not configurable in nginx afaik:

< HTTP/1.1 429 nginx/1.4.3  <--- THIS IS WRONG
< Date: Sun, 27 Oct 2013 17:12:42 GMT
< Content-Length: 0
< Connection: keep-alive



From rfc2616#section-6.1 [1]:

> 6.1 Status-Line
>
>    The first line of a Response message is the Status-Line, consisting
>    of the protocol version followed by a numeric status code and its
>    associated textual phrase, with each element separated by SP
>    characters. No CR or LF is allowed except in the final CRLF sequence.
>
>    Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase CRLF

While nginx seems to return:

    HTTP-Version SP Status-Code CRLF

as per Francis' output.

The Reason-Phrase clearly is a hard requirement and cannot be omitted.



Regards,

Lukas


[1] http://tools.ietf.org/html/rfc2616#section-6.1

From alex at zeitgeist.se Sun Oct 27 21:18:07 2013
From: alex at zeitgeist.se (Alex)
Date: Sun, 27 Oct 2013 22:18:07 +0100
Subject: Various debugging info not shown (
In-Reply-To: <526D2B99.8000905@zeitgeist.se>
Message-ID: <526D830F.1030206@zeitgeist.se>

OK, I found out why sessions wouldn't be resumed after 3600s in my
testing... it's not that nginx stopped caching the session; it's the
client. For example, openssl won't cache sessions for longer than two
hours:

/ssl/t1_lib.c (same also for sslv3)

long tls1_default_timeout(void)
{
    /* 2 hours, the 24 hours mentioned in the TLSv1 spec
     * is way too long for http, the cache would over fill */
    return(60*60*2);
}

Oh well. rfc2246 states that cached sessions may be used for up to 24
hours (http://tools.ietf.org/html/rfc2246#appendix-F.1.4).

Curious how popular browsers such as Chrome or Firefox behave in this
regard.

Anyhow, I am still not sure why the nginx debug output didn't show
anything about session resumption in my case, but I guess I won't need
that information now.

Thanks again.
Alex


From francis at daoine.org Sun Oct 27 21:44:59 2013
From: francis at daoine.org (Francis Daly)
Date: Sun, 27 Oct 2013 21:44:59 +0000
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
In-Reply-To: <DUB107-W18825CEC3BD5D34EAD7DAED0F0@phx.gbl>
References: <20131027183635.GF4365@craic.sysops.org>
<DUB107-W18825CEC3BD5D34EAD7DAED0F0@phx.gbl>
Message-ID: <20131027214459.GG4365@craic.sysops.org>

On Sun, Oct 27, 2013 at 09:45:40PM +0100, Lukas Tribus wrote:

Hi there,

> > What you have shown looks well-formed to me, but doesn't look as useful
> > as you want.

> > What is confusing is that when I do something similar, I get different
> > output which does not look well-formed to me:

> I think nginx is returning the same thing for you both, and that curl
> fails to parse this bogus HTTP response (maybe you are using different
> curl releases).

If curl isn't showing exactly what is being returned, that would be
disappointing. I guess we could test with "nc" or "tcpdump" if necessary.

> >    Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase CRLF
>
> While nginx seems to return:
>     HTTP-Version SP Status-Code CRLF
>
> as per Francis' output.
>
> The Reason-Phrase clearly is a hard requirement and cannot be omitted.

It can't be omitted, but it can be zero-length. The SP before it looks
to be the part that shouldn't be omitted.


It looks like a straightforward fix:

diff -pru nginx-1.4.3/src/http/ngx_http_header_filter_module.c nginx-1.4.3-wip/src/http/ngx_http_header_filter_module.c
--- nginx-1.4.3/src/http/ngx_http_header_filter_module.c 2013-10-08 13:07:14.000000000 +0100
+++ nginx-1.4.3/src/http/ngx_http_header_filter_module.c 2013-10-27 21:25:20.693842199 +0000
@@ -448,7 +448,7 @@ ngx_http_header_filter(ngx_http_request_
         b->last = ngx_copy(b->last, status_line->data, status_line->len);
 
     } else {
-        b->last = ngx_sprintf(b->last, "%03ui", status);
+        b->last = ngx_sprintf(b->last, "%03ui ", status);
     }
     *b->last++ = CR; *b->last++ = LF;


I think that line 270 in the file, which currently says

len += NGX_INT_T_LEN;

should probably currently say something like

len += 3; /* sizeof("404") */

and should be changed to say something like

len += 4; /* sizeof("404 ") */

but since NGX_INT_T_LEN is at least 4 anyway, len is big enough to hold
the extra space without changing that line.

Extra eyes to ensure I've not done something stupid are welcome.

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Sun Oct 27 23:25:28 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Sun, 27 Oct 2013 19:25:28 -0400
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
In-Reply-To: <20131027183635.GF4365@craic.sysops.org>
References: <20131027183635.GF4365@craic.sysops.org>
Message-ID: <7bb67fb1805f592904595837efd24a0c.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

Probably I shouldn't have said "malformed" when I chose a word to express
the problem with the response. But I assumed that nginx should show the
phrase that corresponds to the code. I assumed that nginx has been coded so
it knows that 429 means "Too Many Requests" and that we should receive that
string instead of the generic (and not very useful) string
"nginx/<version>". I just expected nginx to behave with HTTP status 429 as
it does with HTTP status 503.
I agree that the client should know what to do. In fact, the most important
thing is the code, and that is being delivered perfectly. I just think the
explanation would be useful.

I don't know why your output is different than mine. Weird!

I know that I chose to send 429 to the client, yes, but given that 429 means
"too many requests" for the whole world (I mean, it's not a status that I
have just invented), wouldn't it be nice if nginx considered this and
delivered the correct phrase?

Brian

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244156,244181#msg-244181


From lists at der-ingo.de Mon Oct 28 00:56:16 2013
From: lists at der-ingo.de (Ingo Schmidt)
Date: Mon, 28 Oct 2013 01:56:16 +0100
Subject: rewrite last using variable
Message-ID: <526DB630.80502@der-ingo.de>

Hi,

consider the following simplified nginx config snippet:

set $var "/url?par=val";

location = /a {
    rewrite ^ $var redirect;
}

location = /b {
    rewrite ^ $var last;
}

location = /url {
    proxy_pass http://some_backend;
}

For location a the redirect goes via the client and everything works
just fine.
For location b the redirect is internal and now the backend cannot
process the request anymore.

I found out that in the 2nd case the backend sees a URL-encoded form
of the contents of $var. So the question mark has become %3F and
naturally the backend can't find the request parameters.
I guess that the URL encoding also takes place in the first case, but
here the client decodes the URL and thus everything is ok again.

So here is my question:
Is it possible in nginx to make rewrites with variables where the
variable contains a URL with parameters like in the example? If yes,
what do I need to change? If no, what other options do I have?

Cheers, Ingo =;->


From appa at perusio.net Mon Oct 28 07:52:54 2013
From: appa at perusio.net (António P. P. Almeida)
Date: Mon, 28 Oct 2013 08:52:54 +0100
Subject: HttpLimitReqModule delivers "nginx/1.4.3" as a message for HTTP
status code 429. Too generic!
In-Reply-To: <7bb67fb1805f592904595837efd24a0c.NginxMailingListEnglish@forum.nginx.org>
References: <20131027183635.GF4365@craic.sysops.org>
<7bb67fb1805f592904595837efd24a0c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <CA+VA=FZDPfVqd6pVWRcdWt2_d2UrcqTN=4=itfbR3hyZLN6prw@mail.gmail.com>

Nginx has actually no support for the 429 code. Either you fix it by
proposing a patch to support the error page in core or you use an
error_page directive.

error_page 429 @toomany;

location @toomany {
    return 429 'Too many requests.\n';
}

Just a simple example.
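For context, a fuller (and still hypothetical) arrangement combining this with the limit_req_status setting discussed earlier in the thread might look like the following sketch; the zone name and rate are invented:

```nginx
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    server {
        listen 80;

        limit_req zone=one;
        limit_req_status 429;
        error_page 429 @toomany;

        location @toomany {
            return 429 'Too many requests.\n';
        }
    }
}
```

With this, rate-limited clients still get the bare 429 status line, but at least the body carries an explanation.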
Le 28 oct. 2013 00:25, "Brian08275660" <nginx-forum at nginx.us> a écrit :

> Hi Francis,
>
> Probably I shouldn't have said "malformed" when I chose a word to express
> the problem with the response. But I assumed that nginx should show the
> phrase that corresponds to the code. I assumed that nginx has been coded so
> it knows that 429 means "Too Many Requests" and that we should receive that
> string instead of the generic (and not very useful) string
> "nginx/<version>". I just expected nginx to behave with HTTP status 429 as
> it does with HTTP status 503.
> I agree that the client should know what to do. In fact, the most important
> thing is the code, and that is being delivered perfectly. I just think the
> explanation would be useful.
>
> I don't know why your output is different than mine. Weird!
>
> I know that I chose to send 429 to the client, yes, but given that 429 means
> "too many requests" for the whole world (I mean, it's not a status that I
> have just invented), wouldn't it be nice if nginx considered this and
> delivered the correct phrase?
>
> Brian
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,244156,244181#msg-244181
>
>

From francis at daoine.org Mon Oct 28 08:09:38 2013
From: francis at daoine.org (Francis Daly)
Date: Mon, 28 Oct 2013 08:09:38 +0000
Subject: rewrite last using variable
In-Reply-To: <526DB630.80502@der-ingo.de>
References: <526DB630.80502@der-ingo.de>
Message-ID: <20131028080938.GH4365@craic.sysops.org>

On Mon, Oct 28, 2013 at 01:56:16AM +0100, Ingo Schmidt wrote:

Hi there,

> set $var "/url?par=val";
> location = /b {
> rewrite ^ $var last;
> }

> For location b the redirect is internal and now the backend cannot
> process the request anymore.

> I guess that the URL encoding also takes place in the first case, but
> here the client decodes the URL and thus everything is ok again.

No; here what is sent to the client includes the bare ?.

> Is it possible in nginx to make rewrites with variables where the
> variable contains a URL with parameters like in the example? If yes,
> what do I need to change? If no, what other options do I have?

I believe it is intended to be "no", for internal rewrites at least.

You can do something like

set $var "/url";
set $vararg "par=val";
rewrite ^ $var?$vararg last;

Although you may need to be more subtle if you know that your backend
handles "/url" and "/url?" differently, in case $vararg is empty.

f
--
Francis Daly francis at daoine.org


From lists at der-ingo.de Mon Oct 28 08:36:20 2013
From: lists at der-ingo.de (Ingo Schmidt)
Date: Mon, 28 Oct 2013 09:36:20 +0100
Subject: rewrite last using variable
In-Reply-To: <20131028080938.GH4365@craic.sysops.org>
References: <526DB630.80502@der-ingo.de>
<20131028080938.GH4365@craic.sysops.org>
Message-ID: <526E2204.6030007@der-ingo.de>

Hi!

> I believe it is intended to be "no", for internal rewrites at least.
Hmm, any reason why it might be intended?
Also, it would be nice if the docs could mention this, because I find it
unintuitive that rewrite behaves differently depending on the rewrite type.

> You can do something like set $var "/url"; set $vararg "par=val";
> rewrite ^ $var?$vararg last;
Well, in my case that is exactly what I am trying to avoid, because my
variable is set in a map and I would have to create two maps then, one
with the URLs, the other with arguments.
And, as you already pointed out, sometimes the argument is optional.

Hmm, but it looks like this is my only option right now.
Anyway, thanks for your answer!

Cheers, Ingo =;->


From francis at daoine.org Mon Oct 28 09:57:10 2013
From: francis at daoine.org (Francis Daly)
Date: Mon, 28 Oct 2013 09:57:10 +0000
Subject: rewrite last using variable
In-Reply-To: <526E2204.6030007@der-ingo.de>
References: <526DB630.80502@der-ingo.de>
<20131028080938.GH4365@craic.sysops.org> <526E2204.6030007@der-ingo.de>
Message-ID: <20131028095710.GI4365@craic.sysops.org>

On Mon, Oct 28, 2013 at 09:36:20AM +0100, Ingo Schmidt wrote:

Hi there,

I was (partly) wrong when I said

"""
> I guess that the URL encoding also takes place in the first case, but
> here the client decodes the URL and thus everything is ok again.

No; here what is sent to the client includes the bare ?.
"""

Reading the code, the string does go through some uri-escaping and some
uri-unescaping within nginx, and the end result is that a ? in $var
remains a ? in what is written to the client. But it's not a straight
copy of the string.

> >I believe it is intended to be "no", for internal rewrites at least.
> Hmm, any reason why it might be intended?

My guess? The value given must either be already uri-escaped, or not. And
nginx chooses "not".

> Also, it would be nice if the docs could mention this, because I find it
> unintuitive, if rewrite behaves differently depending on the rewrite type.

If you presume that the docs are in the directory called "src", then
it's mentioned there ;-) Anything "redirect"ed goes through an unescape.

I expect that a correct clarifying patch to the docs will be welcomed,
if you fancy writing one.

> >You can do something like set $var "/url"; set $vararg "par=val";
> >rewrite ^ $var?$vararg last;
> Well, in my case that is exactly what I am trying to avoid, because my
> variable is set in a map and I would have to create two maps then, one
> with the URLs, the other with arguments.

Generally in nginx, if there's something that you can't do in config,
you can do it in a module you write. Possibly you can do it in config
using one of the scripting language modules.

(Usually, config is simpler in the short term.)
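For completeness, the two-map variant discussed above might look roughly
like this (the keys and values are purely illustrative):

map $uri $var {
    default     /url;
    /special    /other-url;
}

map $uri $vararg {
    default     "";
    /special    "par=val";
}

location = /b {
    # if $vararg can be empty, check how the backend treats a bare "?"
    rewrite ^ $var?$vararg last;
}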

Good luck with it,

f
--
Francis Daly francis at daoine.org


From nginx-forum at nginx.us Mon Oct 28 12:08:45 2013
From: nginx-forum at nginx.us (eddy1234)
Date: Mon, 28 Oct 2013 08:08:45 -0400
Subject: Any rough ETA on SPDY/3 & push?
In-Reply-To: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org>
References: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <3916ee6f3ad2df3c71f7d89bb60151bc.NginxMailingListEnglish@forum.nginx.org>

spdy/2 support has been removed from the Firefox code base (
https://bugzilla.mozilla.org/show_bug.cgi?id=912550 ) and >= Firefox 27 will
only support >= spdy/3. Firefox 27 will be released in January 2014 (
https://wiki.mozilla.org/RapidRelease/Calendar ) so there is some urgency in
getting spdy/3(.1) support into nginx.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243684,244199#msg-244199


From nginx-forum at nginx.us Mon Oct 28 12:25:07 2013
From: nginx-forum at nginx.us (goyal,nitj)
Date: Mon, 28 Oct 2013 08:25:07 -0400
Subject: Nginx Jasig CAS Integration..
Message-ID: <b52fb54a5122aba4fbc05d6d5c874652.NginxMailingListEnglish@forum.nginx.org>

How do we use the CAS login screen to authenticate users behind Nginx?

We have deployed the front end (built with Ext JS) on the Nginx server, and
Nginx reverse-proxies the data requests to a JBoss server (Spring MVC REST
services).

We have deployed the Jasig CAS application in a Tomcat server and configured
our JBoss server for CAS authentication. So far so good.

Since users enter data in the Ext JS forms, we need to enable SSL at the
nginx server level. Also, when a user is not logged in, we have to redirect
them to the CAS login screen. How do we do that?

One way is to check in nginx for the authentication cookie and redirect to
the CAS login screen if the cookie is not present. Is that the right way of
doing this?

Is there any user guide for nginx and CAS integration?
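
A rough sketch of that cookie-check idea (the cookie name, the CAS URL and
the upstream name below are placeholders, not anything nginx or CAS
defines):

server {
    listen 443 ssl;
    server_name app.example.com;
    # ssl_certificate / ssl_certificate_key omitted

    location / {
        # redirect to CAS when our (hypothetical) session cookie is absent
        if ($cookie_app_session = "") {
            return 302 https://cas.example.com/cas/login?service=https://$host$request_uri;
        }
        proxy_pass http://jboss_backend;
    }
}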

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244200,244200#msg-244200


From nginx-forum at nginx.us Mon Oct 28 18:14:03 2013
From: nginx-forum at nginx.us (Brian08275660)
Date: Mon, 28 Oct 2013 14:14:03 -0400
Subject: HttpLimitReqModule: In the error log,
what is the unit of the "excess" value?
Message-ID: <d00987221280c2ed385fa7774b9b57d4.NginxMailingListEnglish@forum.nginx.org>

Hi,

This is an example of an entry in the log, left by HttpLimitReqModule:

2013/10/27 10:00:11 [error] 1402#0: *313355 limiting requests, excess: 0.580
by zone "zone1", client: 20.147.43.103, server: api.acme.com, request: "GET
/location/locate-ip?key=xxxx&ip=85.210.42.204 HTTP/1.1", host:
"api.acme.com"

What does "excess: 0.580" mean exactly? What is the unit of that value?
Requests? Requests per second?
There is no information in the documentation, as far as I know.
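
For reference, the kind of configuration that produces such log lines (the
rate and zone size here are only illustrative):

limit_req_zone $binary_remote_addr zone=zone1:10m rate=5r/s;

server {
    location /location/ {
        limit_req zone=zone1 burst=10;
    }
}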

Thanks in advance,

Brian

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244205,244205#msg-244205


From piotr at cloudflare.com Mon Oct 28 20:16:05 2013
From: piotr at cloudflare.com (Piotr Sikora)
Date: Mon, 28 Oct 2013 13:16:05 -0700
Subject: Various debugging info not shown (
In-Reply-To: <0492EC89-7790-4DDF-B24E-204FD61D2905@mail.slogh.com>
References: <0492EC89-7790-4DDF-B24E-204FD61D2905@mail.slogh.com>
Message-ID: <CADMhe6dYkT5Dhp0i5twaYmrmr-Jesn-Vhztxq0Rm-tuQ9nYOKQ@mail.gmail.com>

Hi,

> Any ideas?

Your client is using TLS Session Tickets (client-side caching), so
nginx-side cache isn't used for that sessions.

Best regards,
Piotr Sikora


From smainklh at free.fr Tue Oct 29 09:23:37 2013
From: smainklh at free.fr (smainklh at free.fr)
Date: Tue, 29 Oct 2013 10:23:37 +0100 (CET)
Subject: Webdav & chunk encoding : error code 500
In-Reply-To: <1562727623.431994376.1383038022357.JavaMail.root@zimbra23-e3.priv.proxad.net>
Message-ID: <546537788.432022153.1383038617365.JavaMail.root@zimbra23-e3.priv.proxad.net>

Hello,

We're trying to use an appliance with the Nginx WebDAV server.
It is a streaming encoding platform which is trying to send chunks.

But for each chunk, we get a 500 error.

We noticed the following error logs :
2013/10/25 16:24:17 [error] 35861#0: *50524 no user/password was provided for basic authentication, client: 109.231.227.156, server: localhost, request: "PROPFIND /864/ HTTP/1.1", host: "95.81.159.200"
2013/10/25 16:24:17 [alert] 32000#0: worker process 35861 exited on signal 11
2013/10/25 16:24:19 [crit] 35865#0: *50527 chmod() "/data/ram/864/.tmp/0000036090" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_300_00001.ts HTTP/1.1", host: "x.x.x.x"
2013/10/25 16:24:19 [crit] 35865#0: *50527 unlink() "/data/ram/864/.tmp/0000036090" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_300_00001.ts HTTP/1.1", host: "x.x.x.x"
2013/10/25 16:24:19 [crit] 35865#0: *50528 chmod() "/data/ram/864/.tmp/0000036091" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_600_00001.ts HTTP/1.1", host: "x.x.x.x"
2013/10/25 16:24:19 [crit] 35865#0: *50528 unlink() "/data/ram/864/.tmp/0000036091" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_600_00001.ts HTTP/1.1", host: "x.x.x.x"
2013/10/25 16:24:19 [crit] 35865#0: *50529 chmod() "/data/ram/864/.tmp/0000036092" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_1000_00001.ts HTTP/1.1", host: "x.x.x.x"
2013/10/25 16:24:19 [crit] 35865#0: *50529 unlink() "/data/ram/864/.tmp/0000036092" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_1000_00001.ts HTTP/1.1", host: "x.x.x.x"
2013/10/25 16:24:19 [alert] 32000#0: worker process 35865 exited on signal 11

Please find enclose my nginx configuration.
Could you please help me ?

Regards,
Smana
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx_test
Type: application/octet-stream
Size: 777 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131029/0a6f70ed/attachment.obj>

From steve at greengecko.co.nz Tue Oct 29 09:56:31 2013
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Tue, 29 Oct 2013 22:56:31 +1300
Subject: Webdav & chunk encoding : error code 500
Message-ID: <kcmstle2f1ek81m6eh6uul9g.1383040548918@email.android.com>

mkdir -p /data/ram/864/.tmp ??

smainklh at free.fr wrote:

>Hello,
>
>We're trying to use an appliance with the Nginx Webdav server.
>It is a streaming encoding plateform which is trying to send chunks.
>
>But for each chunks, we got a 500 error.
>
>We noticed the following error logs :
>2013/10/25 16:24:17 [error] 35861#0: *50524 no user/password was provided for basic authentication, client: 109.231.227.156, server: localhost, request: "PROPFIND /864/ HTTP/1.1", host: "95.81.159.200"
>2013/10/25 16:24:17 [alert] 32000#0: worker process 35861 exited on signal 11
>2013/10/25 16:24:19 [crit] 35865#0: *50527 chmod() "/data/ram/864/.tmp/0000036090" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_300_00001.ts HTTP/1.1", host: "x.x.x.x"
>2013/10/25 16:24:19 [crit] 35865#0: *50527 unlink() "/data/ram/864/.tmp/0000036090" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_300_00001.ts HTTP/1.1", host: "x.x.x.x"
>2013/10/25 16:24:19 [crit] 35865#0: *50528 chmod() "/data/ram/864/.tmp/0000036091" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_600_00001.ts HTTP/1.1", host: "x.x.x.x"
>2013/10/25 16:24:19 [crit] 35865#0: *50528 unlink() "/data/ram/864/.tmp/0000036091" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_600_00001.ts HTTP/1.1", host: "x.x.x.x"
>2013/10/25 16:24:19 [crit] 35865#0: *50529 chmod() "/data/ram/864/.tmp/0000036092" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_1000_00001.ts HTTP/1.1", host: "x.x.x.x"
>2013/10/25 16:24:19 [crit] 35865#0: *50529 unlink() "/data/ram/864/.tmp/0000036092" failed (2: No such file or directory), client: x.x.x.x, server: localhost, request: "PUT /864/test_1000_00001.ts HTTP/1.1", host: "x.x.x.x"
>2013/10/25 16:24:19 [alert] 32000#0: worker process 35865 exited on signal 11
>
>Please find enclose my nginx configuration.
>Could you please help me ?
>
>Regards,
>Smana
>_______________________________________________
>nginx mailing list
>nginx at nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx

From e1c1bac6253dc54a1e89ddc046585792 at posteo.net Tue Oct 29 10:27:48 2013
From: e1c1bac6253dc54a1e89ddc046585792 at posteo.net (e1c1bac6253dc54a1e89ddc046585792 at posteo.net)
Date: Tue, 29 Oct 2013 11:27:48 +0100
Subject: minor misleading configure error-msg around PCRE
Message-ID: <d40c072af72971a43eb1cd0c9701442b@posteo.de>

Hi,

./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using
--without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE
library
statically from the source with nginx by using --with-pcre=<path>
option.

This happens when not having PCRE and using --without-http_rewrite_module
BUT also --with-pcre as an option, like:

./configure --without-http_rewrite_module --with-pcre

For sure that's a stupid combination, but it just happened in a very long,
generated configure line here. FWIW, this was on OpenBSD 5.1.


From mdounin at mdounin.ru Tue Oct 29 10:41:56 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 29 Oct 2013 14:41:56 +0400
Subject: Webdav & chunk encoding : error code 500
In-Reply-To: <546537788.432022153.1383038617365.JavaMail.root@zimbra23-e3.priv.proxad.net>
References: <1562727623.431994376.1383038022357.JavaMail.root@zimbra23-e3.priv.proxad.net>
<546537788.432022153.1383038617365.JavaMail.root@zimbra23-e3.priv.proxad.net>
Message-ID: <20131029104156.GB7074@mdounin.ru>

Hello!

On Tue, Oct 29, 2013 at 10:23:37AM +0100, smainklh at free.fr wrote:

> Hello,
>
> We're trying to use an appliance with the Nginx Webdav server.
> It is a streaming encoding plateform which is trying to send chunks.
>
> But for each chunks, we got a 500 error.
>
> We noticed the following error logs :
> 2013/10/25 16:24:17 [error] 35861#0: *50524 no user/password was provided for basic authentication, client: 109.231.227.156, server: localhost, request: "PROPFIND /864/ HTTP/1.1", host: "95.81.159.200"
> 2013/10/25 16:24:17 [alert] 32000#0: worker process 35861 exited on signal 11

[...]

> Please find enclose my nginx configuration.
> Could you please help me ?

It looks like you are using an old nginx version with the 3rd-party chunkin
module. Upgrade to at least the recent stable version (1.4.3), which has
chunked transfer support available out of the box.

See http://nginx.org/en/download.html for various download links,
including packages for various Linux versions.

--
Maxim Dounin
http://nginx.org/en/donation.html


From mdounin at mdounin.ru Tue Oct 29 10:48:22 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 29 Oct 2013 14:48:22 +0400
Subject: minor misleading configure error-msg around PCRE
In-Reply-To: <d40c072af72971a43eb1cd0c9701442b@posteo.de>
References: <d40c072af72971a43eb1cd0c9701442b@posteo.de>
Message-ID: <20131029104822.GC7074@mdounin.ru>

Hello!

On Tue, Oct 29, 2013 at 11:27:48AM +0100, e1c1bac6253dc54a1e89ddc046585792 at posteo.net wrote:

> Hi,
>
> ./configure: error: the HTTP rewrite module requires the PCRE library.
> You can either disable the module by using
> --without-http_rewrite_module
> option, or install the PCRE library into the system, or build the
> PCRE library
> statically from the source with nginx by using --with-pcre=<path>
> option.
>
> This happens when not having pcre, using
> --without-http_rewrite_module BUT
> also --with-pcre as an option, like:
> ./configure --without-http_rewrite_module --with-pcre
>
> For sure that's a stupid combination, but just happened in a very
> long, generated
> configure line here. FWIW, this was on OpenBSD 5.1.

Actually, this is _valid_ configuration. It will compile nginx
with PCRE support in various places like location matching, but
without the rewrite module.

The error message provided is a bit misleading as it refers to a
module already disabled, but generating proper error messages for
this and similar non-trivial cases is tricky and was considered to
be an overkill.

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Tue Oct 29 16:56:38 2013
From: nginx-forum at nginx.us (Fleshgrinder)
Date: Tue, 29 Oct 2013 12:56:38 -0400
Subject: alternative ssl
In-Reply-To: <A364109F-C116-436C-8767-A119E62C471A@nginx.com>
References: <A364109F-C116-436C-8767-A119E62C471A@nginx.com>
Message-ID: <881c65c2c534df27171e2ee5ed6d6dc4.NginxMailingListEnglish@forum.nginx.org>

Was there ever any progress on this topic? Personally I think it would be
very interesting to see nginx supporting CyaSSL with the NTRU algorithm for
high performance websites. To be honest, I'd love to run some benchmarks of
my own on this.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222763,244238#msg-244238


From philipp.kraus at tu-clausthal.de Tue Oct 29 18:34:15 2013
From: philipp.kraus at tu-clausthal.de (Philipp Kraus)
Date: Tue, 29 Oct 2013 19:34:15 +0100
Subject: location problem with static content
Message-ID: <CB10C977-E755-48F3-BED5-9EB4C5051023@tu-clausthal.de>

Hello,

I have created for my GitLab installation this entries in the configuration:

location /gitlab {
root /home/gitlab/gitlab/public;
try_files $uri $uri/index.html $uri.html @gitlab;
}

location @gitlab {
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;

proxy_pass http://localhost:9080;
}

so I can use GitLab at https://myserver/gitlab. Most functions of GitLab work well, but I have a problem with the static content.
I get this error:

[error] 4573#0: *4 open() "/home/www/static.css" failed (2: No such file or directory)

The message is correct, because the main root dir (above the location) is set to /home/www, so the fallback mechanism works.
But in my case I need a rule that also maps static content under the /gitlab URL prefix into /home/gitlab/gitlab/public.

How can I tell my location rule, that static content is stored in the correct root folder?

Thanks

Phil
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 163 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131029/3645cff4/attachment.bin>

From nginx-forum at nginx.us Tue Oct 29 18:35:15 2013
From: nginx-forum at nginx.us (sgammon)
Date: Tue, 29 Oct 2013 14:35:15 -0400
Subject: Ubuntu+Nginx packet loss / dropped upstream connections
Message-ID: <caa066fb078d02dce172354f15385da1.NginxMailingListEnglish@forum.nginx.org>

Hello Nginx experts,

We make heavy use of Nginx as a reverse-proxy/load-balancer. It communicates
with Apache and Tornado hosts upstream and proxies them publicly on port
80/443 - pretty standard.

The problem is, when pinging the LB host, every 30 pings or so, a ping is
completely dropped and the latency immediately jumps from <20ms to >1000ms,
then after a few pings, calms down again.

We are receiving a lot of messages like:

2013/10/28 08:49:05 [error] 20612#0: *77590567 recv() failed (104:
Connection reset by peer) while reading response header from upstream,
client: 50.xx.xx.169, server: loadbalancer, request: "GET / HTTP/1.1",
upstream: "http://10.xx.xx.84:8014/", host: "loadbalancer"

and:

2013/10/28 08:49:05 [error] 20612#0: *77590567 no live upstreams while
connecting to upstream, client: 50.xx.xx.169, server: loadbalancer, request:
"GET / HTTP/1.1", upstream: "http://api-read-frontends/", host:
"loadbalancer"


Is there configuration in Nginx that could be causing this? We have also
customized sysctl.conf to try to fix it, with no luck so far. There's more
info, ping dumps, and our sysctl file attached to this question:
http://serverfault.com/questions/549273/diagnosing-packet-loss-high-latency-in-ubuntu

Thanks in advance, any help is immensely appreciated :) Nginx is awesome!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244241,244241#msg-244241


From francis at daoine.org Tue Oct 29 20:30:23 2013
From: francis at daoine.org (Francis Daly)
Date: Tue, 29 Oct 2013 20:30:23 +0000
Subject: location problem with static content
In-Reply-To: <CB10C977-E755-48F3-BED5-9EB4C5051023@tu-clausthal.de>
References: <CB10C977-E755-48F3-BED5-9EB4C5051023@tu-clausthal.de>
Message-ID: <20131029203023.GM4365@craic.sysops.org>

On Tue, Oct 29, 2013 at 07:34:15PM +0100, Philipp Kraus wrote:

Hi there,

> location /gitlab {
> root /home/gitlab/gitlab/public;
> try_files $uri $uri/index.html $uri.html @gitlab;

I suspect that the "$uri/index.html" there may cause you problems. You
may be better off using "$uri/" instead.

> I get this error:
>
> [error] 4573#0: *4 open() "/home/www/static.css" failed (2: No such file or directory)

What url did you access to get this message?

What file on the filesystem did you want it to serve for that url?

> How can I tell my location rule, that static content is stored in the correct root folder?

You may need to use "alias" rather than "root"; but that should become
clear when you describe the url -> filename mapping that you want.
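
For example, if the intent is that /gitlab/foo.css should be served from
/home/gitlab/gitlab/public/foo.css, an alias-based sketch (untested) could
be:

location /gitlab/ {
    alias /home/gitlab/gitlab/public/;
    try_files $uri $uri/ @gitlab;
}

(Note that "alias" combined with "try_files" has had quirks in some nginx
versions, so this needs testing.)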

f
--
Francis Daly francis at daoine.org


From agentzh at gmail.com Tue Oct 29 20:49:09 2013
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Tue, 29 Oct 2013 13:49:09 -0700
Subject: [ANN] ngx_openresty mainline version 1.4.3.1 released
Message-ID: <CAB4Tn6PO=qB+e3vpEc40T_aRv0=xmoT7c06iS-CjrYFSue-+cg@mail.gmail.com>

Hello folks!

I am pleased to announce that the new mainline version of
ngx_openresty, 1.4.3.1, is now released:

http://openresty.org/#Download

Special thanks go to all the contributors for making this happen!

Below is the complete change log for this release, as compared to the
last (mainline) release, 1.4.2.9:

* upgraded the Nginx core to 1.4.3.

* see the changes here: <http://nginx.org/en/CHANGES-1.4>

* upgraded LuaNginxModule to 0.9.1.

* feature: added the new configuration directive
"lua_use_default_type" for controlling whether to send out a
default "Content-Type" response header (as defined by the
default_type directive). default on. thanks aviramc for the
patch.

* feature: now the raw request cosocket returned by
ngx.req.socket(true) no longer requires the request body to
be read already, which means that one can use this cosocket
to read the raw request body data as well. thanks aviramc
for the original patch.

* bugfix: when there were no existing "Cache-Control" response
headers, "ngx.header.cache_control = nil" would
(incorrectly) create a new "Cache-Control" header with an
empty value. thanks jinglong for the patch.

* bugfix: the original letter-case of the header name was lost
when creating the "Cache-Control" response header via the
ngx.header.HEADER API.

* bugfix: ngx.req.set_header("Host", value) would overwrite
the value of $host with bad values. thanks aviramc for the
patch.

* bugfix: use of ngx.exit() to abort pending subrequests in
other "light threads" might lead to segfault or request hang
when HTTP 1.0 full buffering is in effect.

* bugfix: removing a request header might lead to memory
corruptions. thanks Bjørnar Ness for the report.

* bugfix: reading ngx.status might get different values than
$status. thanks Kevin Burke for the report.

* bugfix: downstream write events might interfere with
upstream cosockets that are slow to write to. thanks Aviram
Cohen for the report.

* bugfix: the bookkeeping state for already-freed user threads
might be incorrectly used by newly-created threads that were
completely different, which could lead to bad results.
thanks Sam Lawrence for the report.

* bugfix: calling ngx.thread.wait() on a user thread object
that is already waited (i.e., already dead) would hang
forever. thanks Sam Lawrence for the report.

* bugfix: the alert "zero size buf" could be logged when
assigning an empty Lua string ("") to "ngx.arg[1]" in
body_filter_by_lua*.

* bugfix: subrequests initiated by ngx.location.capture* could
trigger unnecessary response header sending actions in the
subrequest because our capturing output header filter did
not set "r->header_sent".

* bugfix: the Lua error message for the case that ngx.sleep()
was used in log_by_lua* was not friendly. thanks Jiale Zhi
for the report.

* bugfix: now ngx.req.socket(true) returns proper error when
there is some other "light thread" reading the request body.

* bugfix: header_filter_by_lua*, body_filter_by_lua*, and
ngx.location.capture* might not work properly with multiple
"http {}" blocks in "nginx.conf". thanks flygoast for the
report.

* optimize: made ngx.re.match and ngx.re.gmatch faster for
LuaJIT 2.x when there are no submatch captures.

* optimize: pre-allocate space for the Lua tables in various
places.

* doc: fixed the context for the lua_package_path and
lua_package_cpath directives. thanks duhoobo for the report.

* upgraded HeadersMoreNginxModule to 0.23.

* bugfix: removing request headers via
more_clear_input_headers might lead to memory corruptions.

* bugfix: more_set_input_headers might overwrite the value of
the $host variable with bad values.

* bugfix: more_set_headers and more_clear_headers might not
work when multiple "http {}" blocks were used in
"nginx.conf".

* bugfix: eliminated use of C global variables during
configuration phase because it might lead to issues when HUP
reload failed.

* upgraded SrcacheNginxModule to 0.23.

* bugfix: this module might not work properly with multiple
"http {}" blocks in "nginx.conf".

* bugfix: we might (incorrectly) return 500 in our output
filters.

* bugfix: we did not set "r->header_sent" when we want to
discard the header in our header filter.

* upgraded RdsJsonNginxModule to 0.12.

* bugfix: in case of multiple "http {}" blocks in
"nginx.conf", our output filters might be disabled even when
this module is configured properly.

* bugfix: we did not check the "NULL" pointer returned by an
Nginx array element allocation.

* upgraded RdsCsvNginxModule to 0.05.

* optimize: we now only register our output filters when this
module is indeed used (the only exception is when multiple
"http {}" blocks are used).

* upgraded XssNginxModule to 0.04.

* optimize: we now only register our output filters when this
module is indeed used (the only exception is when multiple
"http {}" blocks are used).

* upgraded EchoNginxModule to 0.49.

* bugfix: echo_before_body and echo_after_body might not work
properly when multiple "http {}" blocks were used in
"nginx.conf".

* upgraded LuaRestyRedisLibrary to 0.17.

* optimize: added an optional argument "n" to init_pipeline()
as a hint for the number of pipelined commands.

* optimize: use LuaJIT 2.1's new table.new() primitive to
pre-allocate space for Lua tables.

* upgraded LuaRestyUploadLibrary to 0.09.

* bugfix: removed use of the module() function to prevent bad
side-effects.

* optimize: Removed use of lua tables and table.concat() for
simple one-line Lua string concatenations.

* upgraded LuaRestyMySQLLibrary to 0.14.

* bugfix: avoided using Lua 5.1's module() function for
defining our Lua modules because it has bad side effects.

* optimize: added an optional new argument "nrows" to the
query() and read_result() methods, which can speed up things
a bit.

* optimize: use LuaJIT v2.1's new table.new() API to optimize
Lua table allocations. when table.new is missing, just fall
back to the good old "{}" constructor. this gives 12%
overall speed-up for a typical result set with 500 rows when
LuaJIT 2.1 is used.

* optimize: eliminated use of table.insert() because it is
slower than "tb[#tb + 1] = val".

* optimize: switched over to the multi-argument form of
string.char().

* optimize: no longer use Lua tables and table.concat() to
construct simple query strings.

* upgraded LuaRestyWebSocketLibrary to 0.02.

* optimize: use LuaJIT 2.1's table.new() to preallocate space
for Lua tables, eliminating the overhead of Lua table
rehash.

* feature: applied the proxy_host_port_vars patch to the Nginx
core to make $proxy_host and $proxy_port accessible for dynamic
languages like Lua and Perl.

* bugfix: applied the gzip_flush_bug patch to the Nginx core to
fix request hang caused by the ngx_gzip and ngx_gunzip modules
when using ngx.flush(true), for example. Thanks Maxim Dounin for
the review.

* bugfix: applied the cache_lock_hang_in_subreq patch to the Nginx
core to fix the request hang when using proxy_cache_lock in
subrequests and the cache lock timeout happens.

* bugfix: backported Maxim Dounin's patch to fix an issue in the
ngx_gzip module: it did not clear "r->connection->buffered" when
the pending data was already flushed out. this could hang
LuaNginxModule's ngx.flush(true) call, for example.

The HTML version of the change log with lots of helpful hyper-links
can be browsed here:

http://openresty.org/#ChangeLog1004003

OpenResty (aka ngx_openresty) is a full-fledged web application
server that bundles the standard Nginx core, lots of 3rd-party Nginx
modules and Lua libraries, as well as most of their external
dependencies. See OpenResty's homepage for details:

http://openresty.org/

We have run extensive testing on our Amazon EC2 test cluster and
ensured that all the components (including the Nginx core) play well
together. The latest test report can always be found here:

http://qa.openresty.org

Enjoy!
-agentzh


From nginx-forum at nginx.us Wed Oct 30 07:55:31 2013
From: nginx-forum at nginx.us (antjkennedy)
Date: Wed, 30 Oct 2013 03:55:31 -0400
Subject: default virtual host overrides all other virtual hosts
Message-ID: <035f5349d1555c88dedb7aa0b118e1dc.NginxMailingListEnglish@forum.nginx.org>

Hopefully someone can help. I've been trying to set up nginx with virtual
hosts, but the default host always overrides any of the others that I
specify.

Here is my config file, found in /etc/nginx/sites-enabled:

server {
    listen 80;
    server_name sub.example.com;
    return 404;
}

server {
    listen 80 default;
    server_name *.example.com;
    return 501;
}

and the access.log shows

*.*.*.* - - [30/Oct/2013:04:09:11 +0400] "GET / HTTP/1.1" 501 582
"http://sub.example.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_0)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36"
*.*.*.* - - [30/Oct/2013:04:09:14 +0400] "GET / HTTP/1.1" 501 582
"http://www.example.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_0)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36"

Thanks in advance for any help / ideas.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244255,244255#msg-244255
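For reference, nginx chooses the server block by matching the Host header
against server_name values, with exact names tried before wildcards; the
default server is used only when nothing matches. A minimal sketch of the
behaviour the poster expected, reusing the hostnames above (comments mark
what is assumption, not thread content):

```nginx
# Exact server_name matches are checked before wildcards, so this
# block should win for "Host: sub.example.com".
server {
    listen 80;
    server_name sub.example.com;
    return 404;
}

# "default" is an older synonym for default_server; this block should
# answer only for hosts matching no other server_name.
server {
    listen 80 default_server;
    server_name *.example.com;
    return 501;
}
```

If the 501 block still answers for sub.example.com, the usual suspects are
that this file is not actually reached by the include directives in
nginx.conf, or that another server block on the same listen socket is marked
as the default.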


From philipp.kraus at tu-clausthal.de Wed Oct 30 10:15:42 2013
From: philipp.kraus at tu-clausthal.de (Philipp Kraus)
Date: Wed, 30 Oct 2013 11:15:42 +0100
Subject: location problem with static content
In-Reply-To: <20131029203023.GM4365@craic.sysops.org>
References: <CB10C977-E755-48F3-BED5-9EB4C5051023@tu-clausthal.de>
<20131029203023.GM4365@craic.sysops.org>
Message-ID: <9E6557B4-D9C1-4164-83CE-CD2B84E1B9D7@tu-clausthal.de>


On 29.10.2013 at 21:30, Francis Daly <francis at daoine.org> wrote:

> On Tue, Oct 29, 2013 at 07:34:15PM +0100, Philipp Kraus wrote:
>
> Hi there,
>
>> location /gitlab {
>> root /home/gitlab/gitlab/public;
>> try_files $uri $uri/index.html $uri.html @gitlab;
>
> I suspect that the "$uri/index.html" there may cause you problems. You
> may be better off using "$uri/" instead.

I have changed it to

location /gitlab {
    alias /home/gitlab/gitlab/public;
    try_files $uri/ @gitlab;
}

That does not work either. I have also tested it with "try_files $uri $uri.css".

>
>> I get this error:
>>
>> [error] 4573#0: *4 open() "/home/www/static.css" failed (2: No such file or directory)
>
> What url did you access to get this message?
>
> What file on the filesystem did you want it to serve for that url?

The message shows:

2013/10/30 11:10:18 [error] 6692#0: *5 open() "/home/www/static.css" failed (2: No such file or directory), client: <ip>, server: <server>,
request: "GET /static.css HTTP/1.1", host: "<server>", referrer: "https://server/gitlab/profile/keys"

>
>> How can I tell my location rule, that static content is stored in the correct root folder?
>
> You may need to use "alias" rather than "root"; but that should become
> clear when you describe the url -> filename mapping that you want.

I am trying to port this configuration https://github.com/gitlabhq/gitlabhq/blob/master/lib/support/nginx/gitlab
to a subdirectory, so that GitLab is served not at https://myserver/ but rather at https://myserver/gitlab


Thanks for the help

Phil
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131030/71e4d07b/attachment-0001.html>

From monitor.xoyo at gmail.com Wed Oct 30 15:39:31 2013
From: monitor.xoyo at gmail.com (Xiangong Yang)
Date: Wed, 30 Oct 2013 23:39:31 +0800
Subject: Hi, agentzh,
the chunkin-nginx-module is not compatible with nginx 1.5.x
Message-ID: <CAKxrJ_diBVMm0=rGCt0fy2Qik35Bm3AQ6jaoOtgcXJOozwa6vw@mail.gmail.com>

Hi, agentzh, the chunkin-nginx-module is not compatible with nginx 1.5.x.

In nginx 1.5.3, the error when running make is as follows:

cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g
-DNDK_SET_VAR -DNDK_SET_VAR -DNDK_SET_VAR -DNDK_SET_VAR -DNDK_SET_VAR
-DNDK_SET_VAR -DNDK_UPSTREAM_LIST -I src/core -I src/event -I
src/event/modules -I src/os/unix -I ../ngx_devel_kit/objs -I objs/addon/ndk
-I /opt/luajit/include/luajit-2.0 -I ../lua-nginx-module/src/api -I objs -I
src/http -I src/http/modules -I src/http/modules/perl -I
../ngx_devel_kit/src -I src/mail \
-o objs/addon/src/ngx_http_echo_request_info.o \
../echo-nginx-module/src/ngx_http_echo_request_info.c
make[1]: *** [objs/addon/src/chunked_parser.o] Error 1
make[1]: *** Waiting for unfinished jobs....
../chunkin-nginx-module/src/ngx_http_chunkin_util.c: In function
'ngx_http_chunkin_process_request':
../chunkin-nginx-module/src/ngx_http_chunkin_util.c:291: error:
'ngx_http_request_t' has no member named 'plain_http'
make[1]: *** [objs/addon/src/ngx_http_chunkin_util.o] Error 1
make[1]: Leaving directory `/opt/distfiles/nginx/nginx-1.5.3'
make: *** [build] Error 2


In nginx 1.5.5, the error when running make is as follows:

s -I objs/addon/ndk -I /opt/luajit/include/luajit-2.0 -I
../lua-nginx-module/src/api -I objs -I src/http -I src/http/modules -I
src/http/modules/perl -I ../ngx_devel_kit/src -I src/mail \
-o objs/addon/src/ngx_http_encrypted_session_cipher.o \

../encrypted-session-nginx-module/src/ngx_http_encrypted_session_cipher.c
make[1]: *** [objs/addon/src/ngx_http_chunkin_util.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [objs/addon/src/chunked_parser.o] Error 1
make[1]: Leaving directory `/opt/distfiles/nginx/nginx-1.5.5'
make: *** [build] Error 2


In nginx 1.5.6, the error when running make is as follows:
-o objs/addon/src/ngx_http_drizzle_handler.o \
../drizzle-nginx-module/src/ngx_http_drizzle_handler.c
make[1]: *** [objs/addon/src/ngx_http_chunkin_util.o] Error 1
make[1]: *** Waiting for unfinished jobs....
src/chunked_parser.rl: In function 'ngx_http_chunkin_run_chunked_parser':
src/chunked_parser.rl:296: error: 'ngx_http_request_t' has no member named
'plain_http'
make[1]: *** [objs/addon/src/chunked_parser.o] Error 1
make[1]: Leaving directory `/opt/distfiles/nginx/nginx-1.5.6'
make: *** [build] Error 2



Best Regards

Thank you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131030/2d79102a/attachment.html>

From monitor.xoyo at gmail.com Wed Oct 30 15:46:12 2013
From: monitor.xoyo at gmail.com (Xiangong Yang)
Date: Wed, 30 Oct 2013 23:46:12 +0800
Subject: Hi, agentzh,
the chunkin-nginx-module is not compatible with nginx 1.5.x
Message-ID: <CAKxrJ_dqOj94Oq3+JBjqZ560_Qj6BP9RaHkaOetQFHnAoPk+jQ@mail.gmail.com>

Hi, agentzh, the chunkin-nginx-module is not compatible with nginx 1.5.x.

In nginx 1.5.3, the error when running make is as follows:

cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g
-DNDK_SET_VAR -DNDK_SET_VAR -DNDK_SET_VAR -DNDK_SET_VAR -DNDK_SET_VAR
-DNDK_SET_VAR -DNDK_UPSTREAM_LIST -I src/core -I src/event -I
src/event/modules -I src/os/unix -I ../ngx_devel_kit/objs -I objs/addon/ndk
-I /opt/luajit/include/luajit-2.0 -I ../lua-nginx-module/src/api -I objs -I
src/http -I src/http/modules -I src/http/modules/perl -I
../ngx_devel_kit/src -I src/mail \
-o objs/addon/src/ngx_http_echo_request_info.o \
../echo-nginx-module/src/ngx_http_echo_request_info.c
make[1]: *** [objs/addon/src/chunked_parser.o] Error 1
make[1]: *** Waiting for unfinished jobs....
../chunkin-nginx-module/src/ngx_http_chunkin_util.c: In function
'ngx_http_chunkin_process_request':
../chunkin-nginx-module/src/ngx_http_chunkin_util.c:291: error:
'ngx_http_request_t' has no member named 'plain_http'
make[1]: *** [objs/addon/src/ngx_http_chunkin_util.o] Error 1
make[1]: Leaving directory `/opt/distfiles/nginx/nginx-1.5.3'
make: *** [build] Error 2


In nginx 1.5.5, the error when running make is as follows:

s -I objs/addon/ndk -I /opt/luajit/include/luajit-2.0 -I
../lua-nginx-module/src/api -I objs -I src/http -I src/http/modules -I
src/http/modules/perl -I ../ngx_devel_kit/src -I src/mail \
-o objs/addon/src/ngx_http_encrypted_session_cipher.o \

../encrypted-session-nginx-module/src/ngx_http_encrypted_session_cipher.c
make[1]: *** [objs/addon/src/ngx_http_chunkin_util.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [objs/addon/src/chunked_parser.o] Error 1
make[1]: Leaving directory `/opt/distfiles/nginx/nginx-1.5.5'
make: *** [build] Error 2


In nginx 1.5.6, the error when running make is as follows:
-o objs/addon/src/ngx_http_drizzle_handler.o \
../drizzle-nginx-module/src/ngx_http_drizzle_handler.c
make[1]: *** [objs/addon/src/ngx_http_chunkin_util.o] Error 1
make[1]: *** Waiting for unfinished jobs....
src/chunked_parser.rl: In function 'ngx_http_chunkin_run_chunked_parser':
src/chunked_parser.rl:296: error: 'ngx_http_request_t' has no member named
'plain_http'
make[1]: *** [objs/addon/src/chunked_parser.o] Error 1
make[1]: Leaving directory `/opt/distfiles/nginx/nginx-1.5.6'
make: *** [build] Error 2

And the error in nginx 1.4.3:

DNDK_SET_VAR -DNDK_SET_VAR -DNDK_UPSTREAM_LIST -I src/core -I src/event -I
src/event/modules -I src/os/unix -I ../ngx_devel_kit/objs -I objs/addon/ndk
-I /opt/luajit/include/luajit-2.0 -I ../lua-nginx-module/src/api -I objs -I
src/http -I src/http/modules -I src/http/modules/perl -I
../ngx_devel_kit/src -I src/mail \
-o objs/addon/src/ngx_http_drizzle_keepalive.o \
../drizzle-nginx-module/src/ngx_http_drizzle_keepalive.c
make[1]: *** [objs/addon/src/chunked_parser.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [objs/addon/src/ngx_http_chunkin_util.o] Error 1
make[1]: Leaving directory `/opt/distfiles/nginx/nginx-1.4.3'
make: *** [build] Error 2



Best Regards

Thank you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20131030/31de6584/attachment.html>

From francis at daoine.org Wed Oct 30 18:47:52 2013
From: francis at daoine.org (Francis Daly)
Date: Wed, 30 Oct 2013 18:47:52 +0000
Subject: location problem with static content
In-Reply-To: <9E6557B4-D9C1-4164-83CE-CD2B84E1B9D7@tu-clausthal.de>
References: <CB10C977-E755-48F3-BED5-9EB4C5051023@tu-clausthal.de>
<20131029203023.GM4365@craic.sysops.org>
<9E6557B4-D9C1-4164-83CE-CD2B84E1B9D7@tu-clausthal.de>
Message-ID: <20131030184752.GA25969@craic.sysops.org>

On Wed, Oct 30, 2013 at 11:15:42AM +0100, Philipp Kraus wrote:
> On 29.10.2013 at 21:30, Francis Daly <francis at daoine.org> wrote:
> > On Tue, Oct 29, 2013 at 07:34:15PM +0100, Philipp Kraus wrote:

Hi there,

There are a few possible different reasons for things not to be working
the way you want.

The best chance of getting things fixed is if there is clarity about
what was tried and what failed.

The best information is of the form "I did A, I got B, but I expected
to get C".

> >> location /gitlab {
> >> root /home/gitlab/gitlab/public;
> >> try_files $uri $uri/index.html $uri.html @gitlab;
> >
> > I suspect that the "$uri/index.html" there may cause you problems. You
> > may be better off using "$uri/" instead.
>
> I have changed it to
>
> location /gitlab {
> alias /home/gitlab/gitlab/public;
> try_files $uri/ @gitlab;
> }

That's not what I intended to suggest you do.

try_files $uri $uri/ $uri.html @gitlab;

where the $uri.html part is presumably gitlab-specific.

And "root" vs "alias" depends on what url to filename mapping you want to have.

> that does not work also.

So, next time, can you do something like

curl -i http://server/gitlab

and see how what you get differs from what you expect to get?

What I'm guessing is that the file
/home/gitlab/gitlab/public/gitlab/index.html contains something
like <link href="static.css">. When you access the file via the
url http://server/gitlab, the browser follows the link and asks for
http://server/static.css -- which nginx expects to refer to the file
/home/www/static.css.

If you use the suggested try_files, then when you access
the url http://server/gitlab, you will be redirected to the
url http://server/gitlab/; when you access *that* url, you'll
get the same file content, but now the browser will ask for
http://server/gitlab/static.css, which nginx expects to refer to the
file /home/gitlab/gitlab/public/gitlab/static.css.

> > What url did you access to get this message?
> >
> > What file on the filesystem did you want it to serve for that url?
>
> The message shows:
>
> 2013/10/30 11:10:18 [error] 6692#0: *5 open() "/home/www/static.css" failed (2: No such file or directory), client: <ip>, server: <server>,
> request: "GET /static.css HTTP/1.1", host: "<server>", referrer: "https://server/gitlab/profile/keys"

So, the url is /static.css.

nginx tries to return the file /home/www/static.css.

Which file on the filesystem do you want nginx to return, when it gets
a request for /static.css?

> I try to port this configuration https://github.com/gitlabhq/gitlabhq/blob/master/lib/support/nginx/gitlab
> to the subdirectory, so GitLab is not called on https://myserver/ but rather https://myserver/gitlab

From the extra log line, I guess that perhaps the file
/home/gitlab/gitlab/public/gitlab/index.html contains instead something
like <link href="/static.css">. If that is the case, then the best
thing you can do if you want to proxy the application "gitlab" behind
the non-root-url /gitlab/, is to configure the application so that it
knows that its root url is /gitlab/, not /.

After you've done that, the nginx config should be simpler.
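As a sketch of what that simpler config might look like once the application
generates /gitlab/-prefixed links (paths are taken from the thread; the exact
try_files tail is an assumption):

```nginx
# With "alias", the matched prefix /gitlab/ is replaced by the alias
# path, so /gitlab/static.css maps to
# /home/gitlab/gitlab/public/static.css on disk.
location /gitlab/ {
    alias /home/gitlab/gitlab/public/;
    try_files $uri $uri/ @gitlab;
}
```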

Good luck with it,

f
--
Francis Daly francis at daoine.org


From agentzh at gmail.com Wed Oct 30 19:27:47 2013
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Wed, 30 Oct 2013 12:27:47 -0700
Subject: Hi, agentzh,
the chunkin-nginx-module is not compatible with nginx 1.5.x
In-Reply-To: <CAKxrJ_diBVMm0=rGCt0fy2Qik35Bm3AQ6jaoOtgcXJOozwa6vw@mail.gmail.com>
References: <CAKxrJ_diBVMm0=rGCt0fy2Qik35Bm3AQ6jaoOtgcXJOozwa6vw@mail.gmail.com>
Message-ID: <CAB4Tn6NZta=785ftJskZwj=z4ounQL_TOv5suPjN7q4HM+_NVQ@mail.gmail.com>

Hello!

On Wed, Oct 30, 2013 at 8:39 AM, Xiangong Yang wrote:
> the chunkin-nginx-module is not compatible with nginx 1.5.x
>
> In nginx 1.5.3, the error when making is as follows:
>

To answer your question, I'd just quote ngx_chunkin's official
documentation:

"This module is no longer needed for Nginx 1.3.9+ because since 1.3.9,
the Nginx core already has built-in support for the chunked request
bodies."

See also https://github.com/agentzh/chunkin-nginx-module#status

Best regards,
-agentzh


From nginx-forum at nginx.us Thu Oct 31 10:01:20 2013
From: nginx-forum at nginx.us (luckyknight)
Date: Thu, 31 Oct 2013 06:01:20 -0400
Subject: SPDY, SSL termination and proxy_pass
Message-ID: <e5a02b6701db854d771550510c635df9.NginxMailingListEnglish@forum.nginx.org>

I have set up SPDY on my application and have observed some nice reductions
in page load times. However, in a production environment my setup is somewhat
different.

At the moment my setup looks like this:

server 1: running nginx, terminates SSL and uses proxy_pass to server 2
server 2: running nginx, php-fpm, varnish etc. (the actual web application)

I have just set up the SPDY protocol on server 1:

server {
    listen *:443 ssl spdy;

    location / {
        proxy_pass http://someip;
    }
}

Now my question is: with SSL and SPDY being terminated on the first server,
do I still get the relevant performance improvement from using SPDY for the
application that I am trying to use on server 2?

Does server 2 need SPDY enabled on nginx, or does it not matter? Do I need to
use proxy_pass https:// or https://someip:443 (and thus have SSL installed on
server 2)? Do I need to add_header alternative spdy on server 1?

On a related note, I am also planning on setting up load balancing for a
separate application. It will have load balancing set up on server 1, with
ssl/spdy, and then upstream to servers 2/3/4/5/6 etc. Will I also get the
SPDY performance improvements?

Any comments/suggestions welcome!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244286,244286#msg-244286


From pasik at iki.fi Thu Oct 31 12:26:41 2013
From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=)
Date: Thu, 31 Oct 2013 14:26:41 +0200
Subject: nginx http proxy support for backend server health checks / status
monitoring url
Message-ID: <20131031122641.GY2924@reaktio.net>

Hello,

I'm using nginx as an http proxy / loadbalancer for an application
which has the following setup on the backend servers:

- https/443 provides the application at:
- https://hostname-of-backend/app/

- status monitoring url is available at:
- http://hostname-of-backend/foo/server_status
- https://hostname-of-backend/foo/server_status

So the status url is available over both http and https, and the status url tells if the application is fully up and running or not.
Actual application is only available over https.

It's important to decide the backend server availability based on the status url contents/reply,
otherwise you might push traffic to a backend that isn't fully up and running yet,
causing false errors to end users.

So.. I don't think nginx currently provides proper status monitoring URL
support for proxy backends?

I've found some plugins for this, but they seem to have limitations as well:

- http://wiki.nginx.org/HttpHealthcheckModule
- https://github.com/cep21/healthcheck_nginx_upstreams
  - only http 1.0 support, no http 1.1 support
  - doesn't seem to be maintained anymore; the latest version is 2+ years old

- https://github.com/yaoweibin/nginx_upstream_check_module
  - only supports http backends, so health checks must be over http as well, not over https
  - if the actual app is on 443/https, you cannot configure a separate port 80 for health checks over http
  - only an "ssl" health check is possible for https backends


Any suggestions? Or should I start hacking on and improving the existing plugins?
Thanks!

-- Pasi


From vbart at nginx.com Thu Oct 31 12:29:22 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 31 Oct 2013 16:29:22 +0400
Subject: SPDY, SSL termination and proxy_pass
In-Reply-To: <e5a02b6701db854d771550510c635df9.NginxMailingListEnglish@forum.nginx.org>
References: <e5a02b6701db854d771550510c635df9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <201310311629.23070.vbart@nginx.com>

On Thursday 31 October 2013 14:01:20 luckyknight wrote:
> I have setup SPDY on my application and have observed some nice reductions
> in page load times. However in a production environment my setup is
> somewhat different.
>
> At the moment my setup looks like this:
>
> server 1 running nginx, terminates ssl and uses proxy_pass to server 2
> server 2 running nginx, php-fpm and varnish etc. (actual web application)
>
> I have just setup on server 1 spdy protocol:
>
> server {
>
> listen *:443 ssl spdy;
>
> location / {
>
> proxy_pass http://someip;
> }
> }
>
> Now my question is with SSL and SPDY being terminated on the first server,
> do I still get the relevant performance improvement from using SPDY from
> the application that I am trying to use on server 2?
>

It's likely, yes.

> Does server 2 need SPDY enabled on nginx or does it not matter? Do I need
> to use proxy_pass https:// or https://someip:443 (and thus SSL installed
> on server 2?)

No, you don't need to, unless you care about security between your servers.

> Do I need to add_header alternative spdy on server 1?
>

Only if you also serve plain http and want users to switch to spdy when
possible.

> On a related note, I am also planning on setting up load balancing on a
> seperate application. It will have load balancing setup on server 1, with
> ssl/spdy and then upstream to server 2/3/4/5/6 etc. Will I also get SPDY
> performance improvements?
>

You will.
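The whole setup under discussion can be sketched as follows; the certificate
paths, backend address, and the Alternate-Protocol value are assumptions, not
taken from the thread:

```nginx
# Front server (server 1): terminates SSL and SPDY. The SPDY benefits
# (multiplexing, header compression) apply to the client-facing
# connection, independent of the protocol spoken toward the backend.
server {
    listen 443 ssl spdy;
    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    location / {
        proxy_pass http://10.0.0.2;   # backend (server 2) over plain http
    }
}

# Optional: advertise SPDY to plain-http visitors so that capable
# browsers switch to the https/spdy endpoint.
server {
    listen 80;
    add_header Alternate-Protocol "443:npn-spdy/2";
}
```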

wbr, Valentin V. Bartenev


From ru at nginx.com Thu Oct 31 12:41:20 2013
From: ru at nginx.com (Ruslan Ermilov)
Date: Thu, 31 Oct 2013 16:41:20 +0400
Subject: nginx http proxy support for backend server health checks /
status monitoring url
In-Reply-To: <20131031122641.GY2924@reaktio.net>
References: <20131031122641.GY2924@reaktio.net>
Message-ID: <20131031124120.GD90747@lo0.su>

On Thu, Oct 31, 2013 at 02:26:41PM +0200, Pasi K?rkk?inen wrote:
> Hello,
>
> I'm using nginx as an http proxy / loadbalancer for an application
> which has the following setup on the backend servers:
>
> - https/403 provides the application at:
> - https://hostname-of-backend/app/
>
> - status monitoring url is available at:
> - http://hostname-of-backend/foo/server_status
> - https://hostname-of-backend/foo/server_status
>
> So the status url is available over both http and https, and the status url tells if the application is fully up and running or not.
> Actual application is only available over https.
>
> It's important to decide the backend server availability based on the status url contents/reply,
> otherwise you might push traffic to a backend that isn't fully up and running yet,
> causing false errors to end users.
>
> So.. I don't think nginx currently provides proper status monitoring url support for proxy backends ?
>
> I've found some plugins for this, but they seem to have limitations as well:
>
> - http://wiki.nginx.org/HttpHealthcheckModule
> - https://github.com/cep21/healthcheck_nginx_upstreams
> - only http 1.0 support, no http 1.1 support
> - doesn't seem to be maintained anymore, latest version 2+ years old
>
> - https://github.com/yaoweibin/nginx_upstream_check_module
> - only supports http backends, so health checks must be over http as well, not over https
> - if actual app is on 443/https, cannot configure separate port 80 for health checks over http
> - only "ssl" health check possible for https backends
>
>
> Any suggestions? Or should I start hacking and improving the existing plugins..
> Thanks!

This functionality is currently available in our commercial version:
http://nginx.com/products/

The documentation is here:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#health_check
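For completeness, the commercial health_check directive can gate backends on
the status page body, which is what was asked for; a sketch under the
assumption that the status page contains the word "running" when the
application is fully up (hostnames are illustrative):

```nginx
upstream app_backend {
    zone app_backend 64k;              # shared memory zone, required
    server backend1.example.com:443;
    server backend2.example.com:443;
}

# A backend counts as healthy only when the status URL returns 200
# and the response body matches the pattern.
match app_ok {
    status 200;
    body ~ "running";
}

server {
    location / {
        proxy_pass https://app_backend;
        health_check uri=/foo/server_status match=app_ok;
    }
}
```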


From nginx-forum at nginx.us Thu Oct 31 14:33:33 2013
From: nginx-forum at nginx.us (j0nes2k)
Date: Thu, 31 Oct 2013 10:33:33 -0400
Subject: SSI working on Apache backend, but not on gunicorn backend
Message-ID: <5b3b04d88d8fd7865a9808de26ec9f75.NginxMailingListEnglish@forum.nginx.org>

Hello,

I have nginx in front of an Apache server and a gunicorn server for
different parts of my website. I am using the SSI module in nginx to display
a snippet in every page. The websites include a snippet in this form:
<!--# include virtual="/mysnippet.txt" -->

For static pages served by nginx everything is working fine, the same goes
for the Apache-generated pages - the SSI include is evaluated and the
snippet is filled. However for requests to my gunicorn backend running a
Python app in Django, the SSI include does not get evaluated.

Here is the relevant part of the nginx config:

location /cgi-bin/script.pl {
    ssi on;
    proxy_pass http://default_backend/cgi-bin/script.pl;
    include sites-available/aspects/proxy-default.conf;
}

location /directory/ {
    ssi on;
    limit_req zone=directory nodelay burst=3;
    proxy_pass http://django_backend/directory/;
    include sites-available/aspects/proxy-default.conf;
}

Backends:

upstream django_backend {
    server dynamic.mydomain.com:8000 max_fails=5 fail_timeout=10s;
}

upstream default_backend {
    server dynamic.mydomain.com:80;
    server dynamic2.mydomain.com:80;
}

proxy_default.conf:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;


What is the cause for this behaviour? How can I get SSI includes working for
my pages generated on gunicorn? How can I debug this further?

Thank you for your help!

Best regards,

Jonas

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244299,244299#msg-244299


From mdounin at mdounin.ru Thu Oct 31 16:34:11 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 31 Oct 2013 20:34:11 +0400
Subject: SSI working on Apache backend, but not on gunicorn backend
In-Reply-To: <5b3b04d88d8fd7865a9808de26ec9f75.NginxMailingListEnglish@forum.nginx.org>
References: <5b3b04d88d8fd7865a9808de26ec9f75.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131031163411.GA95765@mdounin.ru>

Hello!

On Thu, Oct 31, 2013 at 10:33:33AM -0400, j0nes2k wrote:

> Hello,
>
> I have nginx in front of an Apache server and a gunicorn server for
> different parts of my website. I am using the SSI module in nginx to display
> a snippet in every page. The websites include a snippet in this form:
> <!--# include virtual="/mysnippet.txt" -->
>
> For static pages served by nginx everything is working fine, the same goes
> for the Apache-generated pages - the SSI include is evaluated and the
> snippet is filled. However for requests to my gunicorn backend running a
> Python app in Django, the SSI include does not get evaluated.

[...]

> What is the cause for this behaviour? How can I get SSI includes working for
> my pages generated on gunicorn? How can I debug this further?

Possible reasons, in no particular order:

- Content-Type of responses returned by gunicorn isn't listed in
ssi_types in your nginx config.

- Responses returned by gunicorn are compressed (use
"Content-Encoding: gzip" or alike).

Further debugging can be done e.g. using a debug log, see
http://nginx.org/en/docs/debugging_log.html.
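Both possible causes can be addressed on the nginx side; a sketch against the
/directory/ location from the original post (the ssi_types value is an
assumption about what gunicorn returns):

```nginx
location /directory/ {
    ssi on;

    # If gunicorn's Content-Type isn't text/html, it must be listed
    # here; text/html is processed by default.
    ssi_types text/html;

    # Keep the backend from gzip-compressing the response: nginx
    # cannot scan a compressed body for SSI directives.
    proxy_set_header Accept-Encoding "";

    proxy_pass http://django_backend/directory/;
}
```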

--
Maxim Dounin
http://nginx.org/en/donation.html


From nginx-forum at nginx.us Thu Oct 31 23:55:15 2013
From: nginx-forum at nginx.us (nehay2j)
Date: Thu, 31 Oct 2013 19:55:15 -0400
Subject: proxy_pass not passing to dynamic $host
Message-ID: <4f1e450f6c1c3300600eb28fea87877a.NginxMailingListEnglish@forum.nginx.org>

Hi,

I need to proxy_pass to a host name passed in the URL, and rewrite the URL as
well. Since the host name is different with each request, I cannot provide an
upstream for it. Below is the nginx configuration I am using, but it doesn't
do the proxy pass and returns a 404 error. The hostname resembles ec2...com.

location ~* ^(/ec2..*)$ {
    # try_files $uri $uri/index.html;
    # rewrite ^(/ec2..*)$ https://example.com:8080/test last;
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://$1:8080/test;
}


Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244308,244308#msg-244308
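Two likely problems with the configuration above, offered as hedged guesses:
the capture ^(/ec2..*)$ includes the leading slash, so proxy_pass http://$1
produces an invalid host name; and when proxy_pass contains a variable, nginx
resolves the host at request time and therefore needs a resolver directive.
A sketch (the nameserver address is an assumption):

```nginx
location ~* ^/(ec2[^/]*)$ {
    # A variable in proxy_pass means nginx cannot pre-resolve the
    # host at startup; a resolver is required for runtime DNS lookups.
    resolver 8.8.8.8;

    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://$1:8080/test;   # $1 now excludes the leading slash
}
```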

