First, my problem. I am attempting to set up a two-node SQL Server 2014 failover cluster instance. I have successfully installed the default SQL instance onto both nodes and am able to pass the role between the two without issue. The problem is that when a node is restarted, it gets stuck during boot. The C$ admin share is accessible and we can ping the server without difficulty, but within Failover Cluster Manager the node is stuck 'Joining...', or it shows as 'Up' while the server is not accessible via RDP. If it does allow an RDP session, the session is not fully initialized (missing Start menu, taskbar, etc. - as if explorer.exe isn't running, although it does appear in Task Manager) and I have to use hotkey combinations to bring anything up.

This occurs on both nodes. If the node that currently owns the SQL Server role is rebooted without draining roles (bear in mind I'm deliberately trying to break things to make sure this will be reliable in production), then the WSFC itself is not accessible on the other node once it finishes coming up. (If one server gets stuck booting, rebooting the opposing server allows the original server to boot.)

I am at a loss to explain this issue. I can provide additional details, but I'm unsure precisely what would cause this type of behavior. My current thinking is that it has to be a conflict over access to the CSV resources: while the C$ share on a node stuck booting is accessible, and the ClusterStorage folder is accessible and shows the full list of my mount points, attempting to open one of the CSV drives hangs.

Additionally, I have been able to bring both nodes up by moving the disk that the SQL instance resides on back to the node that is stuck booting. (Example: Node A is up, Node B is rebooting. Node A has the SQL role running, and Disk 1 is owned by Node A. Failover Cluster Manager lists Node B as 'Up' after the reboot, but I cannot RDP to it or open the Disk 1 ClusterStorage directory. Move ownership of Disk 1 to Node B, and it finishes booting.) Bear in mind, I've only gotten this to work once or twice.

If I remove the SQL role (uninstall the SQL database engine, etc. from both servers), then I can freely restart the servers and they boot just fine.

I cannot find any online resources that contradict my assumption that rebooting a WSFC node should not prevent it from immediately coming back up (and the behavior without the SQL role reinforces my belief that this should be acceptable). I also cannot locate any articles on the specific behavior I'm seeing. Can anyone provide a reason why this is occurring?

[b]Environment:[/b]
2 x node WSFC (both nodes are identical physical boxes): Node A & Node B, both running Windows Server 2012 R2 at the same patch level
2 x SQL Server 2014 SP1 Enterprise (installed on both nodes), using the Advanced > Prepare Cluster Installation / Complete Cluster Installation wizards
3 x CSV disks; the current SQL instance is installed on Disk 1
1 x quorum disk, set as Disk Witness in the quorum configuration

I apologize if my explanation is not as detailed as necessary; please point out any areas of my description I can flesh out, and I'll provide further details.
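For reference, the manual workaround I described (moving ownership of the CSV that hosts the SQL instance over to the stuck node) is what I've been doing through Failover Cluster Manager; the same steps can be sketched with the FailoverClusters PowerShell module. The disk and node names below are placeholders for my environment:

```shell
# Run from an elevated PowerShell session on the working node.
# Requires the FailoverClusters module (part of the Failover Clustering tools).
Import-Module FailoverClusters

# Check the state of both nodes and all CSVs
Get-ClusterNode
Get-ClusterSharedVolume

# Move the CSV hosting the SQL instance to the stuck node
# ("Cluster Disk 1" and "NodeB" stand in for my actual names)
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "NodeB"

# Collect the cluster log from all nodes to look for CSV errors around boot
Get-ClusterLog -Destination C:\Temp -TimeSpan 60
```

The `Get-ClusterLog` output is where I've been looking for CSV-related errors during the hung boots, though so far nothing has jumped out at me.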