Hi Daniel,

On Wed, Dec 14, 2016 at 6:36 PM, Daniel Wagner <wagi@monom.org> wrote:
Hi Shrikant,

On 12/09/2016 02:37 PM, Shrikant Bobade wrote:
connmand gets stuck waiting in getrandom
(_rnd_get_system_entropy_getrandom) when built with gnutls enabled (the default).

Hmm, getrandom is part of gnutls? Maybe post the stack trace; that would help to understand the situation better.

using connman with gnutls
Reading symbols from /usr/sbin/connmand...Reading symbols from /usr/sbin/.debug/connmand...done.
(gdb) r -d -n
Starting program: /usr/sbin/connmand -d -n
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/libthread_db.so.1".
Program received signal SIGINT, Interrupt.
0x76c3193c in syscall () from /lib/libc.so.6
(gdb) bt
#0  0x76c3193c in syscall () from /lib/libc.so.6
#1  0x76e272a8 in force_getrandom (flags=0, buflen=<optimized out>, buf=<optimized out>) at ../../../gnutls-3.5.3/lib/nettle/rnd-linux.c:80
#2  _rnd_get_system_entropy_getrandom (_rnd=<optimized out>, size=<optimized out>) at ../../../gnutls-3.5.3/lib/nettle/rnd-linux.c:98
#3  0x76e24344 in do_device_source (init=init@entry=1, event=event@entry=0x7efffcdc, ctx=0x76e62a38 <rnd_ctx>) at ../../../gnutls-3.5.3/lib/nettle/rnd.c:132
#4  0x76e244ac in wrap_nettle_rnd_init (ctx=<optimized out>) at ../../../gnutls-3.5.3/lib/nettle/rnd.c:234
#5  0x76d72a28 in _gnutls_rnd_init () at ../../gnutls-3.5.3/lib/random.c:49
#6  0x76d64dfc in _gnutls_global_init (constructor=constructor@entry=1) at ../../gnutls-3.5.3/lib/global.c:307
#7  0x76d3d948 in lib_init () at ../../gnutls-3.5.3/lib/global.c:504
#8  0x76fdf2dc in call_init.part () from /lib/ld-linux-armhf.so.3
#9  0x76fdf438 in _dl_init () from /lib/ld-linux-armhf.so.3
#10 0x76fcfac4 in _dl_start_user () from /lib/ld-linux-armhf.so.3
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

So I tried to increase the entropy with the help of /dev/urandom, and with a
sufficient entropy count (~3k+) I observed that the getrandom call completed.
Is this known behaviour, or has anyone else experienced it?

Haven't seen this problem before, but that doesn't mean it doesn't exist.

:~# rngd -r /dev/urandom -o

:~# cat /proc/sys/kernel/random/entropy_avail

Please advise: are we supposed to use only /dev/random, and not /dev/urandom,
for security reasons? Any thoughts on other ways to deal with little or no
entropy being available from /dev/random?

I had to read up on random and urandom. I also checked whether any of our calls to __connman_util_get_random() is a problem (see below [1]).
It looks like our code base is using urandom correctly, so it comes down to
how we use gnutls. I wonder how this could be a problem when assigning IP addresses. Could you paste a complete log?

Thanks for the consolidated urandom details you shared; agreed that we can use urandom.

The particular delay in assigning IP addresses showed up as the getrandom hang:
connman.service made multiple attempts, and during those attempts only minimal entropy (ranging from 10 to 30) was available, which was not enough to get past getrandom.

Once 3-4k entropy is available, we no longer face this issue.


Thanks for the response.



** src/config.c:                   __connman_util_get_random(&rand);

   generate_random_string(rstr, 11);
   vfile = g_strdup_printf("service_mutable_%s.config", rstr);

** src/dhcpv6.c:   __connman_util_get_random(&rand);

        /* Initial timeout, RFC 3315, 18.1.5 */
        delay = rand % 1000;

RFC 3315 chapter 18.1.5
  The first Confirm message from the client on the interface MUST be
  delayed by a random amount of time between 0 and CNF_MAX_DELAY.

** src/dhcpv6.c:   __connman_util_get_random(&rand);

        /* Initial timeout, RFC 3315, 17.1.2 */
        delay = rand % 1000;

RFC 3315 chapter 17.1.2
  The first Solicit message from the client on the interface MUST be
  delayed by a random amount of time between 0 and SOL_MAX_DELAY.  In
  the case of a Solicit message transmitted when DHCP is initiated by
  IPv6 Neighbor Discovery, the delay gives the amount of time to wait
  after IPv6 Neighbor Discovery causes the client to invoke the
  stateful address autoconfiguration protocol (see section 5.5.3 of RFC
  2462).  This random delay desynchronizes clients which start at the
  same time (for example, after a power outage).

** src/dhcpv6.c:   __connman_util_get_random(&rand);

        static guint compute_random(guint val)
        {
                uint64_t rand;

                __connman_util_get_random(&rand);

                return val - val / 10 +
                        ((guint) rand % (2 * 1000)) * val / 10 / 1000;
        }

        /* Calculate a random delay, RFC 3315 chapter 14 */
        /* RT and MRT are milliseconds */
        static guint calc_delay(guint RT, guint MRT)
        {
                if (MRT && (RT > MRT / 2))
                        RT = compute_random(MRT);
                else
                        RT += compute_random(RT);

                return RT;
        }

RFC 3315 chapter 14
  Each of the computations of a new RT include a randomization factor
  (RAND), which is a random number chosen with a uniform distribution
  between -0.1 and +0.1.  The randomization factor is included to
  minimize synchronization of messages transmitted by DHCP clients.

  The algorithm for choosing a random number does not need to be
  cryptographically sound.  The algorithm SHOULD produce a different
  sequence of random numbers from each invocation of the DHCP client.

** src/dnsproxy.c: __connman_util_get_random(&rand);

        req->dstid = get_id();
        req->altid = get_id();

RFC 5625 chapter 6.1
  It has been standard guidance for many years that each DNS query
  should use a randomly generated Query ID.  However, many proxies have
  been observed picking sequential Query IDs for successive requests.

  It is strongly RECOMMENDED that DNS proxies follow the relevant
  recommendations in [RFC5452], particularly those in Section 9.2
  relating to randomisation of Query IDs and source ports.  This also
applies to source port selection within any NAT function.

RFC 5452 chapter 9.2. Extending the Q-ID Space by Using Ports and Addresses

   Resolver implementations MUST:

   o  Use an unpredictable source port for outgoing queries from the
      range of available ports (53, or 1024 and above) that is as large
      as possible and practicable;

   o  Use multiple different source ports simultaneously in case of
      multiple outstanding queries;

   o  Use an unpredictable query ID for outgoing queries, utilizing the
      full range available (0-65535).

   Resolvers that have multiple IP addresses SHOULD use them in an
   unpredictable manner for outgoing queries.

   Resolver implementations SHOULD provide means to avoid usage of
   certain ports.

   Resolvers SHOULD favor authoritative nameservers with which a trust
   relation has been established; stub-resolvers SHOULD be able to use
   Transaction Signature (TSIG) ([RFC2845]) or IPsec ([RFC4301]) when
   communicating with their recursive resolver.

   In case a cryptographic verification of response validity is
   available (TSIG, SIG(0)), resolver implementations MAY waive above
   rules, and rely on this guarantee instead.

   Proper unpredictability can be achieved by employing a high quality
   (pseudo-)random generator, as described in [RFC4086].

RFC 4086 chapter 7.  Randomness Generation Examples and Standards
   Several public standards and widely deployed examples are now in
   place for the generation of keys or other cryptographically random
   quantities.  Some, in section 7.1, include an entropy source.
   Others, described in section 7.2, provide the pseudo-random number
   strong-sequence generator but assume the input of a random seed or
   input from a source of entropy.

RFC 4086 chapter 7.1.2.  The /dev/random Device
   Several versions of the UNIX operating system provide a kernel-
   resident random number generator.  Some of these generators use
   events captured by the Kernel during normal system operation.

   For example, on some versions of Linux, the generator consists of a
   random pool of 512 bytes represented as 128 words of 4 bytes each.
   When an event occurs, such as a disk drive interrupt, the time of the
   event is XOR'ed into the pool, and the pool is stirred via a
   primitive polynomial of degree 128.  The pool itself is treated as a
   ring buffer, with new data being XOR'ed (after stirring with the
   polynomial) across the entire pool.

   Each call that adds entropy to the pool estimates the amount of
   likely true entropy the input contains.  The pool itself contains an
   accumulator that estimates the total over all entropy of the pool.

   Input events come from several sources, as listed below.
   Unfortunately, for server machines without human operators, the first
   and third are not available, and entropy may be added slowly in that case.

   1. Keyboard interrupts.  The time of the interrupt and the scan code
      are added to the pool.  This in effect adds entropy from the human
      operator by measuring inter-keystroke arrival times.

   2. Disk completion and other interrupts.  A system being used by a
      person will likely have a hard-to-predict pattern of disk
      accesses.  (But not all disk drivers support capturing this timing
      information with sufficient accuracy to be useful.)

   3. Mouse motion.  The timing and mouse position are added in.

   When random bytes are required, the pool is hashed with SHA-1 [SHA*]
   to yield the returned bytes of randomness.  If more bytes are
   required than the output of SHA-1 (20 bytes), then the hashed output
   is stirred back into the pool and a new hash is performed to obtain
   the next 20 bytes.  As bytes are removed from the pool, the estimate
   of entropy is correspondingly decremented.

   To ensure a reasonably random pool upon system startup, the standard
   startup and shutdown scripts save the pool to a disk file at shutdown
   and read this file at system startup.

   There are two user-exported interfaces. /dev/random returns bytes
   from the pool but blocks when the estimated entropy drops to zero.
   As entropy is added to the pool from events, more data becomes
   available via /dev/random.  Random data obtained from such a
   /dev/random device is suitable for key generation for long term keys,
   if enough random bits are in the pool or are added in a reasonable
   amount of time.

   /dev/urandom works like /dev/random; however, it provides data even
   when the entropy estimate for the random pool drops to zero.  This
   may be adequate for session keys or for other key generation tasks
   for which blocking to await more random bits is not acceptable.  The
   risk of continuing to take data even when the pool's entropy estimate
   is small in that past output may be computable from current output,
   provided that an attacker can reverse SHA-1.  Given that SHA-1 is
   designed to be non-invertible, this is a reasonable risk.

   To obtain random numbers under Linux, Solaris, or other UNIX systems
   equipped with code as described above, all an application has to do
   is open either /dev/random or /dev/urandom and read the desired
   number of bytes.

   (The Linux Random device was written by Theodore Ts'o.  It was based
   loosely on the random number generator in PGP 2.X and PGP 3.0 (aka
   PGP 5.0).)

man page on urandom:

       A read from the /dev/urandom device will not  block  waiting
       for  more  entropy.   If  there is not sufficient entropy, a
       pseudorandom  number  generator  is  used  to   create   the
       requested  bytes.   As  a  result, in this case the returned
       values  are  theoretically  vulnerable  to  a  cryptographic
       attack  on  the algorithms used by the driver.  Knowledge of
       how to do this is not available in the current  unclassified
       literature,  but  it  is theoretically possible that such an
       attack may exist.  If this is a concern in your application,
       use  /dev/random  instead.   O_NONBLOCK  has  no effect when
       opening /dev/urandom.  When calling read(2) for  the  device
       /dev/urandom,  signals  will  not be handled until after the
       requested random bytes have been generated.