<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>

<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>troubleshooting</title>
<meta name="viewport" content="width=device-width, initial-scale=1" />

<link rel="stylesheet" href="../../style.css" type="text/css" />

<link rel="stylesheet" href="../../local.css" type="text/css" />










</head>
<body>

<div class="page">

<div class="pageheader">
<div class="header">
<span>
<span class="parentlinks">

<a href="../../index.html">ikiwiki</a>/ 

<a href="../../plugins.html">plugins</a>/ 

<a href="../openid.html">openid</a>/ 

</span>
<span class="title">
troubleshooting

</span>
</span>



</div>









</div>





<div id="pagebody">

<div id="content" role="main">
<p><strong>TL;DR</strong></p>

<div class="toc">
<ol>
	<li class="L1"><a href="#index1h1">An odyssey through lots of things that have to be right before OpenID works</a>
	<ol>
		<li class="L2"><a href="#index1h2">no_identity_server: Could not determine ID provider from URL.</a>
		<ol>
			<li class="L3"><a href="#index1h3">various possible causes ...</a>
			</li>
			<li class="L3"><a href="#index2h3">make a luckier setting of useragent ?!</a>
			<ol>
				<li class="L4"><a href="#index1h4">culprit was an Atomicorp ModSecurity rule</a>
				</li>
			</ol>
			</li>
		</ol>
		</li>
		<li class="L2"><a href="#index2h2">Error: OpenID failure: naive_verify_failed_network: Could not contact ID provider to verify response.</a>
		<ol>
			<li class="L3"><a href="#index3h3">set PERL_NET_HTTPS_SSL_SOCKET_CLASS appropriately</a>
			</li>
			<li class="L3"><a href="#index4h3">undo change that broke PERL_NET_HTTPS_SSL_SOCKET_CLASS</a>
			</li>
		</ol>
		</li>
		<li class="L2"><a href="#index3h2">Still naive_verify_failed_network, new improved reason</a>
		<ol>
			<li class="L3"><a href="#index5h3">ensure OpenSSL, Net::SSLeay, IO::Socket::SSL new enough for SNI</a>
			</li>
			<li class="L3"><a href="#index6h3">Local OpenSSL installation will need certs to trust</a>
			</li>
		</ol>
		</li>
		<li class="L2"><a href="#index4h2">Still certificate verify failed</a>
		<ol>
			<li class="L3"><a href="#index7h3">ensure that LWPx::ParanoidAgent passes server name to SSL layer for SNI</a>
			</li>
		</ol>
		</li>
	</ol>
	</li>
	<li class="L1"><a href="#index2h1">Success!!</a>
	</li>
</ol>
</div>

<h1><a name="index1h1"></a>An odyssey through lots of things that have to be right before OpenID works</h1>

<p>Having just (at last) made an ikiwiki installation accept my
OpenID, I have learned many of the things that may have to be checked
when getting the <a href="../openid.html">openid</a> plugin to work. (These are probably
the reasons why <a href="/">ikiwiki.info</a> itself won't accept my OpenID!)</p>

<p>First, let me describe my OpenID setup a bit (and explain why it makes a good
stress test for the OpenID plugin :).</p>

<p>I'm using my personal home page URL as my OpenID. My page lives at
a shared-hosting service I have hired. It contains links that delegate
my OpenID processing to <a href="https://indieauth.com">indieauth.com</a>.</p>

<p>IndieAuth, in turn, uses
<a href="http://microformats.org/wiki/RelMeAuth">rel-me authentication</a> to find
an <a href="http://microformats.org/wiki/OAuth">OAuth</a> provider that can authenticate
me. (At present, I am using <a href="http://github.com">github</a> for that, which
is an OAuth provider but not an OpenID provider, so the gatewaying provided
by IndieAuth solves that problem.) As far as ikiwiki is concerned,
IndieAuth is my OpenID provider; the details beyond that are transparent.</p>

<p>So, what were the various issues I had to sort out before my first successful
login with the <a href="../openid.html">openid</a> plugin?</p>

<h2><a name="index1h2"></a>no_identity_server: Could not determine ID provider from URL.</h2>

<p>This is the message <a href="/">ikiwiki.info</a> shows as soon as I enter my home URL
as an OpenID. It is also the first one I got on my own ikiwiki installation.</p>

<h3><a name="index1h3"></a>various possible causes ...</h3>

<p>There could be lots of causes. Maybe:</p>

<ul>
<li>the offered OpenID is an <code>https:</code> URL and there is an issue in checking
the certificate, so the page can't be retrieved?</li>
<li>the page can be retrieved, but it isn't well-formed HTML and the library
can't parse it for the needed OpenID links?</li>
<li>...?</li>
</ul>

<h3><a name="index2h3"></a>make a luckier setting of useragent ?!</h3>

<p>In my case, it was none of the above. It turns out my shared-hosting provider
has a rule that refuses requests with <code>User-Agent: libwww-perl/6.03</code> (!!).
This is the sort of problem that's really hard to anticipate or plan around.
I could fix it (<em>for this case!</em>) by changing <code>useragent:</code> in <code>ikiwiki.setup</code>
to a different string that my goofy provider lets through.</p>
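
<p>For the record, that change is a one-liner. In the Perl-format
<code>ikiwiki.setup</code> it looks roughly like this (a sketch: the string shown is an
arbitrary choice, not a recommendation, and the rest of the file is elided):</p>

<pre><code># ikiwiki.setup (Perl format) -- only the relevant line shown
use IkiWiki::Setup::Standard {
    # ... the rest of the existing configuration ...

    # any string the provider's filters happen to let through
    useragent =&gt; 'ikiwiki/3.20190228',
}
</code></pre>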

<p><strong>Recommendation:</strong> set <code>useragent:</code> in <code>ikiwiki.setup</code> to some
unlikely-to-be-blacklisted value. I can't guess what the best
unlikely-to-be-blacklisted value is; if there is one, it's probably the
next one all the rude bots will be using anyway, and some goofy provider
like mine will blacklist it.</p>

<blockquote>
  <p>If your shared hosting provider is going to randomly break functionality,
  I would suggest "voting with your wallet" and taking your business to
  one that does not.</p>
  
  <p>In principle we could set the default UA (if <code>&#036;config{useragent}</code> is
  unspecified) to <code>IkiWiki/3.20140915</code>, or <code>IkiWiki/3.20140915 libwww-perl/6.03</code>
  (which would be the "most correct" option AIUI), or some such.
  That might work, or might get randomly blacklisted too, depending on the
  whims of shared hosting providers. If you can't trust your provider to
  behave helpfully then there isn't much we can do about it.</p>
  
  <p>Blocking requests according to UA seems fundamentally flawed, since
  I'm fairly sure no hosting provider can afford to blacklist UAs that
  claim to be, for instance, Firefox or Chrome. I wouldn't want
  to patch IkiWiki to claim to be an interactive browser by default,
  but malicious script authors will have no such qualms, so I would
  argue that your provider's strategy is already doomed... --<span class="createlink">smcv</span></p>
  
  <blockquote>
    <p>I agree, and I'll ask them to fix it (and probably refer them to this page).
    One reason they still have my business is that their customer service has
    been notably good; I always get a response from a human on the first try,
    and on the first or second try from a human who understands what I'm saying
and is able to fix it, with only a few exceptions over the years. I've dealt
with organizations that are not like that....</p>
    
    <p>But I included the note here because I'm sure if <em>they're</em> doing it, there's
    probably some nonzero number of other hosting providers where it's also
    happening, so a person setting up OpenID and being baffled by this failure
    needs to know to check for it. Also, while the world of user-agent strings
    can't have anything but relatively luckier and unluckier choices, maybe
<code>libwww-perl</code> is an especially unlucky one?</p>
    
    <blockquote>
      <p>Yippee! <em>My</em> provider found their offending <code>mod_security</code> rule and took it out,
      so now <a href="/">ikiwiki.info</a> accepts my OpenID. I'm still not sure it wouldn't be
      worthwhile to change the useragent default.... -- Chap</p>
    </blockquote>
  </blockquote>
</blockquote>

<h4><a name="index1h4"></a>culprit was an Atomicorp ModSecurity rule</h4>

<p>Further followup: my provider is using <a href="https://www.modsecurity.org/">ModSecurity</a>
with a ruleset commercially supplied by <a href="https://www.atomicorp.com/products/modsecurity.html">Atomicorp</a>,
which seems to be where this rule came from. They've turned the rule off for <em>my account</em>.
I followed up on my ticket with them, suggesting they at least think about turning it off
more systemwide (without waiting for other customers to have bizarre problems that are
hard to troubleshoot), or opening a conversation with Atomicorp about whether such a rule
is really a good idea. Of course, while they were very responsive about turning it off
<em>for me</em>, it's much iffier whether they'll take my advice any farther than that.</p>

<p>So, this may crop up for anybody with a provider that uses Atomicorp ModSecurity rules.</p>

<p>The ruleset produces a log message saying "turn this rule off if you use libwww-perl", which
just goes to show whoever wrote that message wasn't thinking about what breaks what. It would
have to be "turn this rule off if any of <em>your</em> customers might ever need to use or depend on
an app or service <em>hosted anywhere else</em> that <em>could</em> have been implemented using libwww-perl,
over which you and your customer have no knowledge or control."</p>

<p>Sigh. -- Chap</p>

<blockquote>
  <p>Thanks for the pointer. It seems the open-source ruleset blacklists libwww-perl by default
  too... this seems very misguided but whatever. I've changed our default User-Agent to
  <code>ikiwiki/3.20141012</code> (or whatever the version is). If we get further UA-blacklisting
  problems I'm very tempted to go for <code>Mozilla/5.0 (but not really)</code> as the
  next try. --<span class="createlink">smcv</span></p>
</blockquote>

<h2><a name="index2h2"></a>Error: OpenID failure: naive_verify_failed_network: Could not contact ID provider to verify response.</h2>

<p>Again, this could have various causes. It was helpful to bump the debug level
and get some logging, to see:</p>

<pre><code>500 Can't connect to indieauth.com:443 (Net::SSL from Crypt-SSLeay can't
verify hostnames; either install IO::Socket::SSL or turn off verification
by setting the PERL_LWP_SSL_VERIFY_HOSTNAME environment variable to 0)
</code></pre>
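
<p>(One way to get that sort of detail, sketched here as a starting point rather
than a recipe, is to turn on ikiwiki's own logging in <code>ikiwiki.setup</code> and then
watch syslog or the web server's error log while attempting a login:)</p>

<pre><code># ikiwiki.setup (Perl format) -- logging-related lines only
use IkiWiki::Setup::Standard {
    # ... existing configuration ...
    verbose =&gt; 1,   # chattier output from ikiwiki
    syslog  =&gt; 1,   # send it to syslog rather than stderr
}
</code></pre>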

<p>I don't belong to the camp that solves every verification problem by turning
verification off, so this meant finding out how to get verification to be done.
It turns out there are two different Perl modules that can be used for SSL:</p>

<ul>
<li><code>IO::Socket::SSL</code> (verifies hostnames)</li>
<li><code>Net::SSL</code> (<em>does not</em> verify hostnames)</li>
</ul>

<p>Both were installed on my hosted server. How was Perl deciding which one
to use?</p>
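
<p>One quick way to see the answer on a given host (a sketch, assuming the
installed libwww-perl is recent enough to record its choice in
<code>&#036;Net::HTTPS::SSL_SOCKET_CLASS</code>, as current versions do):</p>

<pre><code>#!/usr/bin/perl
# Sketch: report which SSL backend Net::HTTPS (the class LWP uses for
# https connections) has selected on this system.
use strict;
use warnings;
use Net::HTTPS;

# Net::HTTPS records its decision in this package variable at load time.
print "SSL socket class: &#036;Net::HTTPS::SSL_SOCKET_CLASS\n";
</code></pre>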

<h3><a name="index3h3"></a>set <code>PERL_NET_HTTPS_SSL_SOCKET_CLASS</code> appropriately</h3>

<p>It turns out
<a href="https://rt.cpan.org/Public/Bug/Display.html?id=71599">there's an environment variable</a>.
So just set <code>PERL_NET_HTTPS_SSL_SOCKET_CLASS</code> to <code>IO::Socket::SSL</code> and the
right module gets used, right?</p>

<p><a href="https://github.com/csirtgadgets/LWPx-ParanoidAgent/commit/fed6f7d7df8619df0754e8883cfad2ac15703a38#diff-2">Wrong</a>.
That change was made to <code>ParanoidAgent.pm</code> back in November 2013 because of an
unrelated <a href="https://github.com/csirtgadgets/LWPx-ParanoidAgent/issues/4">bug</a>
in <code>IO::Socket::SSL</code>. Essentially, <em>hmm, something goes wrong in
<code>IO::Socket::SSL</code> when reading certain large documents, so we'll fix it by
forcing the use of <code>Net::SSL</code> instead (the one that never verifies hostnames!),
no matter what the admin has set <code>PERL_NET_HTTPS_SSL_SOCKET_CLASS</code> to!</em></p>
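
<p>(For reference, when nothing is overriding it, the variable works roughly as
below. It has to be in the environment before <code>Net::HTTPS</code> is first loaded, so
in a CGI context that means the web server's or wrapper's environment rather
than a shell profile. A sketch:)</p>

<pre><code>#!/usr/bin/perl
# Sketch: PERL_NET_HTTPS_SSL_SOCKET_CLASS is consulted only when Net::HTTPS
# first loads, so set it before that happens.
use strict;
use warnings;

BEGIN { &#036;ENV{PERL_NET_HTTPS_SSL_SOCKET_CLASS} = 'IO::Socket::SSL' }

require Net::HTTPS;
print "chosen backend: &#036;Net::HTTPS::SSL_SOCKET_CLASS\n";   # IO::Socket::SSL
</code></pre>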

<h3><a name="index4h3"></a>undo change that broke <code>PERL_NET_HTTPS_SSL_SOCKET_CLASS</code></h3>

<p>Plenty of <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=738493">comments</a>
quickly appeared about how good an idea that wasn't, and it was corrected in
June 2014 with <a href="https://github.com/csirtgadgets/LWPx-ParanoidAgent/commit/a92ed8f45834a6167ff62d3e7330bb066b307a35">one commit</a>
to fix the original reading-long-documents issue in <code>IO::Socket::SSL</code> and
<a href="https://github.com/csirtgadgets/LWPx-ParanoidAgent/commit/815c691ad5554a219769a90ca5f4001ae22a4019">another commit</a>
that reverts the forcing of <code>Net::SSL</code> no matter how the environment is set.</p>

<p>Unfortunately, there isn't a release on CPAN yet that includes those two
commits, but they are only a few lines to edit into your own locally-installed
module.</p>

<blockquote>
  <p>To be clear, these are patches to <a href="http://search.cpan.org/search?mode=dist&amp;query=LWPx%3A%3AParanoidAgent">LWPx::ParanoidAgent</a>.
  Debian's <code>liblwpx-paranoidagent-perl (&gt;= 1.10-3)</code> appears to
  have those two patches. --<span class="createlink">smcv</span></p>
  
  <p>Irrelevant to this ikiwiki instance, perhaps relevant to others:
  I've added these patches to <a href="http://www.pkgsrc.org">pkgsrc</a>'s
  <a href="http://pkgsrc.se/www/p5-LWPx-ParanoidAgent">www/p5-LWPx-ParanoidAgent</a> and they'll be included in the
  soon-to-be-cut 2014Q3 branch. --<span class="createlink">schmonz</span></p>
</blockquote>

<h2><a name="index3h2"></a>Still naive_verify_failed_network, new improved reason</h2>

<pre><code>500 Can't connect to indieauth.com:443 (SSL connect attempt failed
with unknown error error:14090086:SSL
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed)
</code></pre>

<p>Yay, at least it's trying to verify! Now why can't it verify IndieAuth's
certificate?</p>

<p><a href="https://tools.ietf.org/html/rfc6066#section-3">Here's why</a>. As it turns out,
<a href="https://indieauth.com/">indieauth.com</a> is itself a virtual host on a shared
server. If you naively try</p>

<pre><code>openssl s_client -connect indieauth.com:443
</code></pre>

<p>you get back a certificate for <a href="https://indieweb.org/">indieweb.org</a>
instead, so the hostname won't verify. If you explicitly indicate what server
name you're connecting to:</p>

<pre><code>openssl s_client -connect indieauth.com:443 -servername indieauth.com
</code></pre>

<p>then, magically, the correct certificate comes back.</p>
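
<p>The same experiment can be run through the Perl SSL stack the plugin will
actually use (a sketch; it assumes the SNI-capable module versions discussed
in the next section, and it only inspects the certificate rather than
verifying it):</p>

<pre><code>#!/usr/bin/perl
# Sketch: connect to indieauth.com with and without SNI and report which
# certificate the server hands back in each case.
use strict;
use warnings;
use IO::Socket::SSL;

for my &#036;sni ('', 'indieauth.com') {        # '' disables SNI, like plain s_client
    my &#036;sock = IO::Socket::SSL-&gt;new(
        PeerAddr        =&gt; 'indieauth.com:443',
        SSL_hostname    =&gt; &#036;sni,              # explicit server name for SNI
        SSL_verify_mode =&gt; SSL_VERIFY_NONE,   # just looking, not verifying yet
    ) or die "connect failed: &#036;SSL_ERROR";
    printf "SNI=%-15s certificate CN=%s\n",
        &#036;sni eq '' ? '(none)' : &#036;sni,
        &#036;sock-&gt;peer_certificate('commonName');
}
</code></pre>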

<h3><a name="index5h3"></a>ensure <code>OpenSSL</code>, <code>Net::SSLeay</code>, <code>IO::Socket::SSL</code> new enough for SNI</h3>

<p>If your <code>openssl</code> doesn't recognize the <code>-servername</code> option, it is too old
to do SNI, and a newer version needs to be built and installed. In fact,
even though SNI support was reportedly backported into OpenSSL 0.9.8f,
<code>IO::Socket::SSL</code> will not use it unless OpenSSL is
<a href="http://search.cpan.org/~sullr/IO-Socket-SSL-1.998/lib/IO/Socket/SSL.pod#SNI_Support">1.0 or higher</a>.</p>

<p>Then a recent <code>Net::SSLeay</code> Perl module needs to be built and linked against the new OpenSSL.</p>
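
<p>A quick way to check whether the pieces on a given host are new enough is to
ask them directly (a sketch):</p>

<pre><code>#!/usr/bin/perl
# Sketch: report the versions of the SSL stack and whether client-side SNI
# is actually available through it.
use strict;
use warnings;
use Net::SSLeay;
use IO::Socket::SSL;

printf "Net::SSLeay     %s\n", &#036;Net::SSLeay::VERSION;
printf "IO::Socket::SSL %s\n", &#036;IO::Socket::SSL::VERSION;
printf "OpenSSL         %s\n", Net::SSLeay::SSLeay_version(0);
printf "client SNI      %s\n",
    IO::Socket::SSL-&gt;can_client_sni() ? 'available' : 'NOT available';
</code></pre>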

<blockquote>
  <p>I would tend to be somewhat concerned about the update status and security
  of a shared hosting platform that is still on an OpenSSL major version from
  pre-2010 - it might be fine, because it might be RHEL or some similarly
  change-averse distribution backporting security fixes to ye olde branch,
  but equally it might be as bad as it seems at first glance.
  "Let the buyer beware", I think... --<span class="createlink">smcv</span></p>
  
  <blockquote>
    <p>As far as I can tell, this particular provider <em>is</em> on Red Hat (EL 5).
I can't tell conclusively because I'm in what appears to be a CloudLinux container when I log in,
and certain parts of the environment (like <code>rpm</code>) are hidden from me. But everything
I <em>can</em> see looks like several RHEL5 boxen I know and love.</p>
  </blockquote>
</blockquote>

<h3><a name="index6h3"></a>Local OpenSSL installation will need certs to trust</h3>

<p>Bear in mind that the OpenSSL distribution doesn't come with a collection
of trusted issuer certs. If a newer version is built and installed locally
(say, on a shared server where the system locations can't be written), it will
need to be given a directory of trusted issuer certs, say by linking to the
system-provided ones. However, a change to the certificate hash algorithm used
for the symlinks in that directory was <a href="http://www.cilogon.org/openssl1">reportedly</a>
made with OpenSSL 1.0.0. So if the system-provided trusted certificate directory
was set up for an earlier OpenSSL version, all the certificates in it will be
fine but the hash symlinks will be wrong. That can be fixed by linking only the
named certificate files from the system directory into the newly-installed one,
and then running the new version of <code>c_rehash</code> there.</p>
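
<p>Once the directory is populated and rehashed, a quick connection that insists
on verification is an easy way to confirm the new stack trusts it (a sketch;
the <code>SSL_ca_path</code> value is a hypothetical placeholder for wherever the local
installation keeps its certs):</p>

<pre><code>#!/usr/bin/perl
# Sketch: verify a live certificate against a locally-maintained CA directory.
use strict;
use warnings;
use IO::Socket::SSL;

my &#036;sock = IO::Socket::SSL-&gt;new(
    PeerAddr        =&gt; 'indieauth.com:443',
    SSL_hostname    =&gt; 'indieauth.com',
    SSL_verify_mode =&gt; SSL_VERIFY_PEER,
    SSL_ca_path     =&gt; '/home/me/local/openssl/certs',   # hypothetical path
) or die "verification failed: &#036;SSL_ERROR";

print "certificate verified OK\n";
</code></pre>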

<h2><a name="index4h2"></a>Still certificate verify failed</h2>

<p>Using <a href="https://tools.ietf.org/html/rfc6066#section-3">SNI</a>-supporting versions
of <code>IO::Socket::SSL</code>, <code>Net::SSLeay</code>, and <code>OpenSSL</code> doesn't do any good if an
upper layer hasn't passed down the name of the host being connected to, which
the SSL layer needs in order to send it in the SNI extension.</p>

<h3><a name="index7h3"></a>ensure that <code>LWPx::ParanoidAgent</code> passes server name to SSL layer for SNI</h3>

<p>That was fixed in <code>LWPx::ParanoidAgent</code> with
<a href="https://github.com/csirtgadgets/LWPx-ParanoidAgent/commit/df6df19ccdeeb717c709cccb011af35d3713f546">this commit</a>,
which needs to be backported by hand if it hasn't made it into a CPAN release
yet.</p>

<blockquote>
  <p>Also in Debian's <code>liblwpx-paranoidagent-perl (&gt;= 1.10-3)</code>, for the record.
  --<span class="createlink">smcv</span></p>
  
  <p>And now in pkgsrc's <code>www/p5-LWPx-ParanoidAgent</code>, FWIW. --<span class="createlink">schmonz</span></p>
</blockquote>

<p>Only that still doesn't end the story, because that hand didn't know what
<a href="https://github.com/noxxi/p5-io-socket-ssl/commit/4f83a3cd85458bd2141f0a9f22f787174d51d587#diff-1">this hand</a>
was doing. What good is passing the name in
<code>PeerHost</code> if the SSL code looks in <code>PeerAddr</code> first ... and then, if that
doesn't match a regex for a hostname, decides you didn't supply one at all,
without even looking at <code>PeerHost</code>?</p>

<p>Happily, it is possible to pass a key that <em>explicitly</em> supplies the
server name for SNI:</p>

<pre><code>--- LWPx/Protocol/http_paranoid.pm    2014-09-08 03:33:00.000000000 -0400
+++ LWPx/Protocol/http_paranoid.pm    2014-09-08 03:33:27.000000000 -0400
@@ -73,6 +73,7 @@
        close(&#036;el);
         &#036;sock = &#036;self-&gt;socket_class-&gt;new(PeerAddr =&gt; &#036;addr,
                                          PeerHost =&gt; &#036;host,
+                                         SSL_hostname =&gt; &#036;host,
                                          PeerPort =&gt; &#036;port,
                                          Proto    =&gt; 'tcp',
                                          Timeout  =&gt; &#036;conn_timeout,
</code></pre>

<p>... not submitted upstream yet, so needs to be applied by hand.</p>

<blockquote>
  <p>I've <a href="https://bugs.debian.org/761635">reported this to Debian</a>
  (which is where ikiwiki.info's supporting packages come from).
  Please report it upstream too, if the Debian maintainer doesn't
  get there first. --<span class="createlink">smcv</span></p>
  
  <p>Applied in pkgsrc. I haven't attempted to conduct before-and-after
  test odysseys, but here's hoping your travails save others some
  time and effort. --<span class="createlink">schmonz</span></p>
  
  <p>Reported upstream as <a href="https://github.com/csirtgadgets/LWPx-ParanoidAgent/issues/14">LWPx-ParanoidAgent#14</a>
  <em>and</em> <a href="https://github.com/noxxi/p5-io-socket-ssl/issues/16">IO-Socket-SSL#16</a>. -- Chap</p>
</blockquote>

<h1><a name="index2h1"></a>Success!!</h1>

<p>And with that, ladies and gents, I got my first successful OpenID login!
I'm pretty sure that if the same fixes can be applied to
<a href="/">ikiwiki.info</a> itself, a wider range of OpenID logins (like mine, for
example <img src="../../smileys/smile.png" alt=":)" />) will work here too.</p>

<p>-- Chap</p>

</div>







</div>

<div id="footer" class="pagefooter" role="contentinfo">

<div id="pageinfo">






<div id="backlinks">
Links:

<a href="../openid.html">openid</a>


</div>






<div class="pagedate">
Last edited <span class="date">Tue Feb 26 23:01:54 2019</span>
<!-- Created <span class="date">Tue Feb 26 23:01:54 2019</span> -->
</div>

</div>


<!-- from ikiwiki -->
</div>

</div>

</body>
</html>