[ofw] [PATCH] ibbus [take 2] - IOC rescan on demand instead of 30 sec intervals.

Smith, Stan stan.smith at intel.com
Mon Jun 13 18:06:11 PDT 2011


Hello all,

Specifically, perform an IOC sweep only when

1) requested (QUERY_DEVICE_RELATIONS for device 'IB Bus') or
2) a PORT_ACTIVE pnp event occurs.

The registry key 'IocPollInterval' value definitions have been expanded (a sketch of the mapping follows the list):

0 == no IOC sweeping/rescan.
1 == IOC sweep on demand: a QUERY_DEVICE_RELATIONS for device 'IB Bus' ('devcon rescan') or a PORT_ACTIVE pnp event (generally when device relations are invalidated).
>1 == IOC sweep every 'IocPollInterval' milliseconds (current behavior).
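For reference, a minimal sketch of how the three value ranges map to sweep-timer behavior. g_ioc_poll_interval mirrors the global in core/al/kernel/al_ioc_pnp.c below; next_sweep_delay_ms() is a hypothetical helper written for illustration only, not part of the patch:

#include <stdint.h>

/* Mirrors the global in core/al/kernel/al_ioc_pnp.c. */
extern uint32_t g_ioc_poll_interval;

/* Hypothetical helper: milliseconds until the next IOC sweep; 0 == do not
 * re-arm the sweep timer. resweeps_pending models the reSweep batch counter
 * this patch adds for on-demand mode.
 */
static uint32_t
next_sweep_delay_ms( int32_t resweeps_pending )
{
	if( g_ioc_poll_interval == 0 )		/* no IOC sweeping at all */
		return 0;
	if( g_ioc_poll_interval == 1 )		/* on demand: sweep only while a
						 * requested batch is draining */
		return (resweeps_pending > 0) ? (10 * 1000) : 0;
	return g_ioc_poll_interval;		/* periodic sweep every N ms */
}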

Tests were performed against the following Linux SRP target: scst-1.0.1.1 on RHEL 5.3 with OFED-1.5.1, exporting vdisks.

Testing consisted of loading SRP & IOU drivers, or installing SRP drivers, on each Windows system prior to the SRP target coming online; as expected, systems without SRP drivers loaded did not see the SRP targets.
Two Server 2008 R2 systems were used: one with the current 30-second IOC sweep, the other with IocPollInterval == 1 (sweep on demand).

OpenSM 3.3.9 was running on a separate Svr 2008 R2 (x86) system.

On each Windows system, the Computer Management --> Storage Manager --> Disk Manager view was opened.

The Linux SRP targets were then started, exporting vdisks /dev/sdb1, /dev/sdb2 & /dev/sdb3.

The 30-second-sweep system (SS) reported all 3 of the expected SRP targets within 10 seconds; the IOC-sweep-on-demand (SOD) system did not register the 3 SRP targets until a device rescan was forced via 'devcon.exe rescan'.

SRP target functionality (reading & writing files into Windows mounted filesystems) was verified using the SOD system.

The Linux SRP targets were taken offline at the Linux box.

The sweeping Windows system reported the SRP targets had been removed within a few seconds, while the SOD system continued to display the SRP targets.
Once an IOC rescan was forced on the SOD system (devcon.exe rescan), the SRP targets were no longer displayed.

Initial IOC sweep on demand experiments demonstrate the feasibility of the code changes.
More testing needs to take place using more and different fabric IOCs, to which I do not have access; I will continue SRP target experiments.

Rebooting a Windows system with SRP disks online, and again with the disks offline-and-recognized, demonstrated no failures.

At this juncture, I would recommend the code changes be committed, as the original periodic IOC sweep functionality is still available (IocPollInterval > 1).

Furthermore, the default IocPollInterval should be set to 1 (sweep on demand).

The following patches incorporate the two previously submitted IBBUS patches for passing p_bfi instead of the ca_guid.

IOC resweeps on demand are now done in batches: 3 sweeps total, with a 10-second wait between resweeps (see the toy model below).
The Diskmgmt.msc view shows disk device configuration changes faster with rescan-on-demand (after a devcon rescan) than with continuous sweeping.
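As a compile-and-run toy model of the batching (the constants mirror the patch; gp_ioc_pnp->reSweep and the complib timer are reduced to a plain counter and printed delays, so this is illustrative only):

#include <stdio.h>

#define IOC_RESWEEPS		3		/* sweeps queued per rescan request */
#define IOC_RESWEEP_WAIT	(10 * 1000)	/* ms between sweeps of a batch */

/* Toy stand-in for gp_ioc_pnp->reSweep. */
static int resweep_credits = 0;

/* Models ioc_pnp_request_ioc_rescan(): queue a batch of sweeps. */
static void request_ioc_rescan( void )
{
	resweep_credits += IOC_RESWEEPS;
	printf( "rescan requested: %d sweep(s) pending\n", resweep_credits );
}

/* Models the sweep-completion path: consume one credit and re-arm
 * the timer while credits remain. */
static void sweep_completed( void )
{
	if( resweep_credits > 0 && --resweep_credits > 0 )
		printf( "re-arm timer: next sweep in %d ms\n", IOC_RESWEEP_WAIT );
	else
		printf( "batch drained: timer idle\n" );
}

int main( void )
{
	request_ioc_rescan();
	for( int i = 0; i < IOC_RESWEEPS; i++ )
		sweep_completed();
	return 0;
}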
IRP pending/resume was not implemented because
  1) it would require too much duplication of cl_pnp() code, since the IRP would be completed on a different thread outside of complib PNP;
  2) the additional complexity outweighs the gain - when would one resume? After the 1st IOC rescan, the 2nd, ...? Device relations are a snapshot in time.

The port manager struct now contains a count of the PNP PORT_ACTIVE ports for this HCA; it is used to prevent IOC rescans before a port becomes ACTIVE (condensed sketch below).
port_mgr_get_bus_relations() & iou_mgr_get_bus_relations() both lost the code for ca_guid == 0, as the caller skips the call when ca_guid == 0.
Code simplification: the local p_port_mgr and p_iou_mgr variables were replaced with p_bfi->p_port_mgr and p_bfi->p_iou_mgr.
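Condensed, the rescan gate added in the bus_pnp.c hunk below reads roughly as follows; the struct here is a trimmed stand-in for the bus_filter_t fields consulted, declared only so the sketch compiles on its own:

#include <stdint.h>
#include <string.h>

extern uint32_t g_ioc_poll_interval;
void ioc_pnp_request_ioc_rescan( void );

/* Trimmed stand-in for the bus_filter_t fields the gate consults. */
struct bus_filter_sketch {
	int32_t		active_ports;	/* PNP PORT_ACTIVE count for this HCA */
	const char	*identity;	/* cl_ext.vfptr_pnp_po->identity */
};

/* Request an IOC rescan only when on-demand mode is set, at least one
 * port is ACTIVE, and the relations query targets the 'IB Bus' device. */
static void
maybe_request_ioc_rescan( const struct bus_filter_sketch *p_bfi )
{
	if( g_ioc_poll_interval == 1 &&
		p_bfi->active_ports > 0 &&
		p_bfi->identity != NULL &&
		strcmp( p_bfi->identity, "IB Bus" ) == 0 )
	{
		ioc_pnp_request_ioc_rescan();
	}
}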


--- core/bus/kernel/bus_port_mgr.h	Mon Jun 13 17:17:28 2011
+++ core/bus/kernel/bus_port_mgr.h	Mon Jun 13 16:58:34 2011
@@ -37,6 +37,7 @@
 #include <iba/ib_al.h>
 #include <complib/cl_mutex.h>
 #include <complib/cl_obj.h>
+#include <complib/cl_atomic.h>
 
 
 /* Global load service */
@@ -50,6 +51,9 @@
 	/* Pointer vector of child IPoIB port PDOs. */
 	cl_qlist_t					port_list;
 
+	/* count of PNP_PORT_ACTIVE Ports */
+	atomic32_t					active_ports;
+
 }	port_mgr_t;
 
 
@@ -57,12 +61,12 @@
 
 ib_api_status_t
 create_port_mgr(
-	IN				struct _bus_filter_instance	*p_bfi );
+	IN		struct _bus_filter_instance	*p_bfi );
 
 
 NTSTATUS
 port_mgr_get_bus_relations(
-	IN		const	net64_t						ca_guid,
-	IN				IRP* const					p_irp );
+	IN		struct _bus_filter_instance	*p_bfi,
+	IN		IRP* const					p_irp );
 
 #endif

--- core/bus/kernel/bus_port_mgr.c	Mon Jun 13 17:15:10 2011
+++ core/bus/kernel/bus_port_mgr.c	Mon Jun 13 17:00:25 2011
@@ -489,10 +489,13 @@
 	IN				ib_pnp_rec_t*				p_pnp_rec )
 {
 	ib_api_status_t		status=IB_SUCCESS;
+	port_pnp_ctx_t		*p_ctx;
+	bus_filter_t		*p_bfi;
 
 	BUS_ENTER( BUS_DBG_PNP );
 
 	CL_ASSERT( p_pnp_rec );
+	p_ctx = p_pnp_rec->context;
 
 	switch( p_pnp_rec->pnp_event )
 	{
@@ -501,11 +504,28 @@
 		break;
 
 	case IB_PNP_PORT_REMOVE:
+		CL_ASSERT( p_ctx );
+		if (p_ctx)
+		{
+			p_bfi = p_ctx->p_bus_filter;
+			CL_ASSERT( p_bfi );
+			if (p_bfi->p_port_mgr->active_ports > 0)
+				cl_atomic_dec( &p_bfi->p_port_mgr->active_ports );
+		}
 		port_mgr_port_remove( (ib_pnp_port_rec_t*)p_pnp_rec );
 		break;
 
+	case IB_PNP_PORT_ACTIVE:
+		if (p_ctx)
+		{
+			p_bfi = p_ctx->p_bus_filter;
+			CL_ASSERT( p_bfi );
+			cl_atomic_inc( &p_bfi->p_port_mgr->active_ports );
+		}
+		break;
+
 	default:
-		XBUS_PRINT( BUS_DBG_PNP, ("Unhandled PNP Event %s\n",
+		BUS_PRINT( BUS_DBG_PNP, ("Ignored PNP Event %s\n",
 					ib_get_pnp_event_str(p_pnp_rec->pnp_event) ));
 		break;
 	}
@@ -520,57 +540,25 @@
  */
 NTSTATUS
 port_mgr_get_bus_relations(
-	IN		const	net64_t						ca_guid,
+	IN				bus_filter_t*				p_bfi,
 	IN				IRP* const					p_irp )
 {
 	NTSTATUS			status;
-	bus_filter_t		*p_bfi;
-	port_mgr_t			*p_port_mgr;
-	DEVICE_RELATIONS	*p_rel;
 
 	BUS_ENTER( BUS_DBG_PNP );
 
-	BUS_PRINT(BUS_DBG_PNP, ("CA_guid %I64x\n",ca_guid));
+	BUS_PRINT(BUS_DBG_PNP, ("CA_guid %I64x\n",p_bfi->ca_guid));
 
-	/* special case guid == 0 - walk all bus filter instances */
-	if ( ca_guid == 0ULL ) {
-		BUS_PRINT(BUS_DBG_PNP, ("CA_guid 0\n"));
-		for(p_bfi=g_bus_filters; p_bfi < &g_bus_filters[MAX_BUS_FILTERS]; p_bfi++) {
-			p_port_mgr = p_bfi->p_port_mgr;
-			if ( !p_port_mgr )
-				continue;
-			cl_mutex_acquire( &p_port_mgr->pdo_mutex );
-			status = bus_get_relations( &p_port_mgr->port_list,
-										p_bfi->ca_guid,
-										p_irp );
-			cl_mutex_release( &p_port_mgr->pdo_mutex );
-		}
-
-		p_rel = (DEVICE_RELATIONS*)p_irp->IoStatus.Information;
-		if ( p_rel ) {
-			BUS_PRINT(BUS_DBG_PNP, ("CA_guid 0 Reports %d\n", p_rel->Count));
-		}
-		BUS_EXIT( BUS_DBG_PNP );
-		return STATUS_SUCCESS;
-	}
-
-	p_bfi = get_bfi_by_ca_guid(ca_guid);
-	if (p_bfi == NULL) {
-		BUS_PRINT(BUS_DBG_PNP,
-			("Null *p_bfi from ca_guid %I64x\n",ca_guid));
-		BUS_EXIT( BUS_DBG_PNP );
-		return STATUS_NO_SUCH_DEVICE;
-	}
-	p_port_mgr = p_bfi->p_port_mgr;
+	CL_ASSERT( p_bfi->ca_guid );
 
 	BUS_PRINT(BUS_DBG_PNP, ("%s for ca_guid %I64x port_mgr %p\n",
-							p_bfi->whoami, ca_guid, p_port_mgr) );
-	if (!p_port_mgr)
+							p_bfi->whoami, p_bfi->ca_guid, p_bfi->p_port_mgr) );
+	if (!p_bfi->p_port_mgr)
 		return STATUS_NO_SUCH_DEVICE;
 
-	cl_mutex_acquire( &p_port_mgr->pdo_mutex );
-	status = bus_get_relations( &p_port_mgr->port_list, ca_guid, p_irp );
-	cl_mutex_release( &p_port_mgr->pdo_mutex );
+	cl_mutex_acquire( &p_bfi->p_port_mgr->pdo_mutex );
+	status = bus_get_relations( &p_bfi->p_port_mgr->port_list, p_bfi->ca_guid, p_irp );
+	cl_mutex_release( &p_bfi->p_port_mgr->pdo_mutex );
 
 	BUS_EXIT( BUS_DBG_PNP );
 	return STATUS_SUCCESS;
@@ -1168,10 +1156,10 @@
 	port_mgr_t		*p_port_mgr;
 	bus_filter_t	*p_bfi;
 	port_pnp_ctx_t	*p_ctx = p_pnp_rec->pnp_rec.context;
-	cl_list_item_t		*p_list_item;
-	bus_port_ext_t		*p_port_ext;
-	bus_pdo_ext_t		*p_pdo_ext;
-	cl_qlist_t*	   		p_pdo_list;
+	cl_list_item_t	*p_list_item;
+	bus_port_ext_t	*p_port_ext;
+	bus_pdo_ext_t	*p_pdo_ext;
+	cl_qlist_t		*p_pdo_list;
 
 	BUS_ENTER( BUS_DBG_PNP );
 

--- core/bus/kernel/bus_pnp.c	Mon Jun 13 17:13:43 2011
+++ core/bus/kernel/bus_pnp.c	Mon Jun 13 17:03:21 2011
@@ -72,6 +72,9 @@
 extern UNICODE_STRING	g_CDO_dev_name, g_CDO_dos_name;
 
 
+void
+ioc_pnp_request_ioc_rescan(void);
+
 static NTSTATUS
 fdo_start(
 	IN					DEVICE_OBJECT* const	p_dev_obj,
@@ -816,13 +819,23 @@
 		waitLoop++;
 		if(waitLoop>50) break;
 	}
+
 	if ( p_bfi->ca_guid != 0ULL )
 	{
-		status = port_mgr_get_bus_relations( p_bfi->ca_guid, p_irp );
+		if ( g_ioc_poll_interval == 1 && p_bfi->p_port_mgr->active_ports &&
+			p_bfi->p_bus_ext && p_bfi->p_bus_ext->cl_ext.vfptr_pnp_po->identity &&
+			strcmp(p_bfi->p_bus_ext->cl_ext.vfptr_pnp_po->identity, "IB Bus") == 0 )
+		{
+			BUS_PRINT(BUS_DBG_PNP, ("**** device '%s' requesting IOC rescan\n",
+					p_bfi->p_bus_ext->cl_ext.vfptr_pnp_po->identity) );
+			ioc_pnp_request_ioc_rescan();
+		}
+
+		status = port_mgr_get_bus_relations( p_bfi, p_irp );
 		if( status == STATUS_SUCCESS || 
 			status == STATUS_NO_SUCH_DEVICE )
 		{
-			status = iou_mgr_get_bus_relations( p_bfi->ca_guid, p_irp );
+			status = iou_mgr_get_bus_relations( p_bfi, p_irp );
 		}
 		if( status == STATUS_NO_SUCH_DEVICE )
 			status = STATUS_SUCCESS;

--- core/bus/kernel/bus_iou_mgr.h	Mon Jun 13 17:12:40 2011
+++ core/bus/kernel/bus_iou_mgr.h	Mon Jun 13 17:13:10 2011
@@ -61,7 +61,7 @@
 
 NTSTATUS
 iou_mgr_get_bus_relations(
-	IN		const	net64_t						ca_guid,
-	IN				IRP* const					p_irp );
+	IN			struct _bus_filter_instance*	p_bfi,
+	IN			IRP* const						p_irp );
 
 #endif

--- core/bus/kernel/bus_iou_mgr.c	Mon Jun 13 17:11:50 2011
+++ core/bus/kernel/bus_iou_mgr.c	Mon Jun 13 17:05:13 2011
@@ -526,52 +526,25 @@
  */
 NTSTATUS
 iou_mgr_get_bus_relations(
-	IN		const	net64_t						ca_guid,
+	IN				bus_filter_t*				p_bfi,
 	IN				IRP* const					p_irp )
 {
 	NTSTATUS			status;
-	bus_filter_t		*p_bfi;
-	iou_mgr_t			*p_iou_mgr;
-	DEVICE_RELATIONS	*p_rel;
 
 	BUS_ENTER( BUS_DBG_PNP );
 
-	BUS_PRINT(BUS_DBG_PNP, ("CA_guid %I64x\n",ca_guid));
+	BUS_PRINT(BUS_DBG_PNP, ("CA_guid %I64x\n",p_bfi->ca_guid));
 
-	/* special case guid == 0 - walk all bus filter instances */
-	if ( ca_guid == 0ULL ) {
-		for(p_bfi=g_bus_filters; p_bfi < &g_bus_filters[MAX_BUS_FILTERS]; p_bfi++) {
-			p_iou_mgr = p_bfi->p_iou_mgr;
-			if ( !p_iou_mgr )
-				continue;
-			cl_mutex_acquire( &p_iou_mgr->pdo_mutex );
-			status = bus_get_relations( &p_iou_mgr->iou_list, ca_guid, p_irp );
-			cl_mutex_release( &p_iou_mgr->pdo_mutex );
-		}
-		p_rel = (DEVICE_RELATIONS*)p_irp->IoStatus.Information;
-		if ( p_rel ) {
-			BUS_PRINT(BUS_DBG_PNP, ("CA_guid 0 Reports %d relations\n", p_rel->Count));
-		}
-		BUS_EXIT( BUS_DBG_PNP );
-		return STATUS_SUCCESS;
-	}
-
-	p_bfi = get_bfi_by_ca_guid(ca_guid);
-	if (p_bfi == NULL) {
-		BUS_TRACE_EXIT(BUS_DBG_PNP,
-								("NULL p_bfi from ca_guid %I64x ?\n",ca_guid));
-		return STATUS_UNSUCCESSFUL;
-	}
-	p_iou_mgr = p_bfi->p_iou_mgr;
+	CL_ASSERT( p_bfi->ca_guid );
 
 	BUS_PRINT(BUS_DBG_PNP, ("%s for ca_guid %I64x iou_mgr %p\n",
-							p_bfi->whoami, ca_guid, p_iou_mgr) );
-	if (!p_iou_mgr)
+							p_bfi->whoami, p_bfi->ca_guid, p_bfi->p_iou_mgr) );
+	if (!p_bfi->p_iou_mgr)
 		return STATUS_NO_SUCH_DEVICE;
 
-	cl_mutex_acquire( &p_iou_mgr->pdo_mutex );
-	status = bus_get_relations( &p_iou_mgr->iou_list, ca_guid, p_irp );
-	cl_mutex_release( &p_iou_mgr->pdo_mutex );
+	cl_mutex_acquire( &p_bfi->p_iou_mgr->pdo_mutex );
+	status = bus_get_relations( &p_bfi->p_iou_mgr->iou_list, p_bfi->ca_guid, p_irp );
+	cl_mutex_release( &p_bfi->p_iou_mgr->pdo_mutex );
 
 	BUS_EXIT( BUS_DBG_PNP );
 	return status;

--- core/al/kernel/al_ioc_pnp.c	Mon Jun 13 17:09:03 2011
+++ core/al/kernel/al_ioc_pnp.c	Mon Jun 13 16:57:40 2011
@@ -97,10 +97,11 @@
  * progress towards.
  */
 
-
 /* Number of entries in the various pools to grow by. */
 #define IOC_PNP_POOL_GROW	(10)
 
+#define IOC_RESWEEPS 3
+#define IOC_RESWEEP_WAIT (10 * 1000) // time to wait between resweeps in milliseconds
 
 /* IOC PnP Manager structure. */
 typedef struct _ioc_pnp_mgr
@@ -126,6 +127,7 @@
 	cl_fmap_t				sweep_map;	/* Map of IOUs from sweep results. */
 	cl_timer_t				sweep_timer;/* Timer to trigger sweep. */
 	atomic32_t				query_cnt;	/* Number of sweep results outstanding. */
+	atomic32_t				reSweep;	/* Number of IOC resweeps per batch. */
 
 }	ioc_pnp_mgr_t;
 
@@ -294,7 +296,6 @@
 	cl_async_proc_item_t	async_item;
 	sweep_state_t			state;
 	ioc_pnp_svc_t			*p_svc;
-	atomic32_t				query_cnt;
 	cl_fmap_t				iou_map;
 
 }	ioc_sweep_results_t;
@@ -313,8 +314,12 @@
 ioc_pnp_mgr_t	*gp_ioc_pnp = NULL;
 uint32_t		g_ioc_query_timeout = 250;
 uint32_t		g_ioc_query_retries = 4;
-uint32_t		g_ioc_poll_interval = 30000;
-
+uint32_t		g_ioc_poll_interval = 1;
+					/* 0 == no IOC polling
+					 * 1 == IOC poll on demand (IB_PNP_SM_CHANGE, IB_PNP_PORT_ACTIVE,
+					 *			QUERY_DEVICE_RELATIONS for device 'IB Bus')
+					 * > 1 == poll interval in millisecond units.
+					 */
 
 
 /******************************************************************************
@@ -775,6 +780,8 @@
 			("cl_timer_init failed with %#x\n", cl_status) );
 		return ib_convert_cl_status( cl_status );
 	}
+	if ( g_ioc_poll_interval == 1 )
+		p_ioc_mgr->reSweep = 0;
 
 	status = init_al_obj( &p_ioc_mgr->obj, p_ioc_mgr, TRUE,
 		__destroying_ioc_pnp, NULL, __free_ioc_pnp );
@@ -803,6 +810,7 @@
 
 	/* Stop the timer. */
 	cl_timer_stop( &gp_ioc_pnp->sweep_timer );
+	gp_ioc_pnp->reSweep = 0;
 
 	if( gp_ioc_pnp->h_pnp )
 	{
@@ -1204,6 +1212,34 @@
 }
 
 
+void
+ioc_pnp_request_ioc_rescan(void)
+{
+	ib_api_status_t	status;
+
+	AL_ENTER( AL_DBG_PNP );
+
+	CL_ASSERT( g_ioc_poll_interval == 1 );
+	CL_ASSERT( gp_ioc_pnp );
+
+	/* continue IOC sweeping or start a new series of sweeps? */
+	cl_atomic_add( &gp_ioc_pnp->reSweep, IOC_RESWEEPS );
+	if ( !gp_ioc_pnp->query_cnt )
+	{
+		status = cl_timer_start( &gp_ioc_pnp->sweep_timer, 3 );
+		CL_ASSERT( status == CL_SUCCESS );
+	}
+	AL_EXIT( AL_DBG_PNP );
+}
+
+
+static const char *
+__ib_get_pnp_event_str( ib_pnp_event_t event )
+{
+	return ib_get_pnp_event_str( event );
+}
+
+
 /*
  * PnP callback for port event notifications.
  */
@@ -1218,7 +1254,7 @@
 
 	AL_PRINT( TRACE_LEVEL_INFORMATION, AL_DBG_PNP,
 		("p_pnp_rec->pnp_event = 0x%x (%s)\n",
-		p_pnp_rec->pnp_event, ib_get_pnp_event_str( p_pnp_rec->pnp_event )) );
+			p_pnp_rec->pnp_event, __ib_get_pnp_event_str(p_pnp_rec->pnp_event)) );
 
 	switch( p_pnp_rec->pnp_event )
 	{
@@ -1257,8 +1293,21 @@
 		((ioc_pnp_svc_t*)p_pnp_rec->context)->obj.pfn_destroy(
 			&((ioc_pnp_svc_t*)p_pnp_rec->context)->obj, NULL );
 		p_pnp_rec->context = NULL;
+		break;
+
+	case IB_PNP_IOU_ADD:
+	case IB_PNP_IOU_REMOVE:
+	case IB_PNP_IOC_ADD:
+	case IB_PNP_IOC_REMOVE:
+	case IB_PNP_IOC_PATH_ADD:
+	case IB_PNP_IOC_PATH_REMOVE:
+		AL_PRINT( TRACE_LEVEL_ERROR, AL_DBG_PNP, ("!Handled PNP Event %s\n",
+			__ib_get_pnp_event_str(p_pnp_rec->pnp_event)) );
+		break;
 
 	default:
+		AL_PRINT( TRACE_LEVEL_ERROR, AL_DBG_ERROR, ("Ignored PNP Event %s\n",
+			__ib_get_pnp_event_str(p_pnp_rec->pnp_event)) );
 		break;	/* Ignore other PNP events. */
 	}
 
@@ -1494,6 +1543,10 @@
 		if( status != IB_SUCCESS )
 			cl_atomic_dec( &p_mgr->query_cnt );
 	}
+
+	if ( g_ioc_poll_interval == 1 && p_mgr->reSweep > 0 )
+		cl_atomic_dec( &p_mgr->reSweep );
+
 	/* Release the reference we took and see if we're done sweeping. */
 	if( !cl_atomic_dec( &p_mgr->query_cnt ) )
 		cl_async_proc_queue( gp_async_pnp_mgr, &p_mgr->async_item );
@@ -2606,6 +2659,7 @@
 {
 	cl_status_t		status;
 	cl_fmap_t		old_ious, new_ious;
+	uint32_t		interval=0;
 
 	AL_ENTER( AL_DBG_PNP );
 
@@ -2630,11 +2684,19 @@
 	__remove_ious( &old_ious );
 	CL_ASSERT( !cl_fmap_count( &old_ious ) );
 
-	/* Reset the sweep timer. */
-	if( g_ioc_poll_interval )
+	/* Reset the sweep timer.
+	 * 0 == No IOC polling.
+	 * 1 == IOC poll on demand.
+	 * > 1 == IOC poll every g_ioc_poll_interval milliseconds.
+	 */
+	if( g_ioc_poll_interval == 1 && gp_ioc_pnp->reSweep > 0 )
+		interval = IOC_RESWEEP_WAIT;
+	else if( g_ioc_poll_interval > 1 )
+		interval = g_ioc_poll_interval;
+
+	if( interval > 0 )
 	{
-		status = cl_timer_start(
-			&gp_ioc_pnp->sweep_timer, g_ioc_poll_interval );
+		status = cl_timer_start( &gp_ioc_pnp->sweep_timer, interval );
 		CL_ASSERT( status == CL_SUCCESS );
 	}
 
@@ -3045,8 +3107,7 @@
 	else
 	{
 		/* Report the IOU to all clients registered for IOU events. */
-		cl_qlist_find_from_head( &gp_ioc_pnp->iou_reg_list,
-			__notify_users, &event );
+		cl_qlist_find_from_head( &gp_ioc_pnp->iou_reg_list, __notify_users, &event );
 
 		/* Report IOCs - this will in turn report the paths. */
 		__add_iocs( p_iou, &p_iou->ioc_map, NULL );

--- hw/mlx4/kernel/hca/mlx4_hca.inx	Mon Jun 13 17:48:47 2011
+++ hw/mlx4/kernel/hca/mlx4_hca.inx	Mon Jun 13 17:48:31 2011
@@ -296,7 +296,11 @@
 HKR,"Parameters","SmiPollInterval",%REG_DWORD_NO_CLOBBER%,20000
 HKR,"Parameters","IocQueryTimeout",%REG_DWORD_NO_CLOBBER%,250
 HKR,"Parameters","IocQueryRetries",%REG_DWORD_NO_CLOBBER%,4
-HKR,"Parameters","IocPollInterval",%REG_DWORD_NO_CLOBBER%,30000
+
+; IocPollInterval: 0 == no ioc poll, 1 == poll on demand (device rescan)
+;   (> 1) poll every x milliseconds, 30000 (30 secs) previous default.
+HKR,"Parameters","IocPollInterval",%REG_DWORD%,1
+
 HKR,"Parameters","DebugFlags",%REG_DWORD%,0x80000000
 HKR,"Parameters","ReportPortNIC",%REG_DWORD%,1
 
--- hw/mthca/kernel/mthca.inx	Mon Jun 13 17:52:03 2011
+++ hw/mthca/kernel/mthca.inx	Mon Jun 13 17:51:52 2011
@@ -297,7 +297,11 @@
 HKR,"Parameters","SmiPollInterval",%REG_DWORD_NO_CLOBBER%,20000
 HKR,"Parameters","IocQueryTimeout",%REG_DWORD_NO_CLOBBER%,250
 HKR,"Parameters","IocQueryRetries",%REG_DWORD_NO_CLOBBER%,4
-HKR,"Parameters","IocPollInterval",%REG_DWORD_NO_CLOBBER%,30000
+
+; IocPollInterval: 0 == no ioc poll, 1 == poll on demand (device rescan)
+;   (> 1) poll every x milliseconds, 30000 (30 secs) previous default.
+HKR,"Parameters","IocPollInterval",%REG_DWORD%,1
+
 HKR,"Parameters","DebugFlags",%REG_DWORD%,0x80000000
 HKR,"Parameters","ReportPortNIC",%REG_DWORD%,1
 



